GRAPHS, ALGORITHMS, AND OPTIMIZATION
Second Edition

DISCRETE MATHEMATICS AND ITS APPLICATIONS
R. B. J. T. Allenby and Alan Slomson, How to Count: An Introduction to Combinatorics,
Third Edition
Craig P. Bauer, Secret History: The Story of Cryptology
Jürgen Bierbrauer, Introduction to Coding Theory, Second Edition
Katalin Bimbó, Combinatory Logic: Pure, Applied and Typed
Katalin Bimbó, Proof Theory: Sequent Calculi and Related Formalisms
Donald Bindner and Martin Erickson, A Student’s Guide to the Study, Practice, and Tools of
Modern Mathematics
Francine Blanchet-Sadri, Algorithmic Combinatorics on Partial Words
Miklós Bóna, Combinatorics of Permutations, Second Edition
Miklós Bóna, Handbook of Enumerative Combinatorics
Miklós Bóna, Introduction to Enumerative and Analytic Combinatorics, Second Edition
Jason I. Brown, Discrete Structures and Their Interactions
Richard A. Brualdi and Dragoš Cvetković, A Combinatorial Approach to Matrix Theory and Its
Applications
Kun-Mao Chao and Bang Ye Wu, Spanning Trees and Optimization Problems
Charalambos A. Charalambides, Enumerative Combinatorics
Gary Chartrand and Ping Zhang, Chromatic Graph Theory
Henri Cohen, Gerhard Frey, et al., Handbook of Elliptic and Hyperelliptic Curve Cryptography
Charles J. Colbourn and Jeffrey H. Dinitz, Handbook of Combinatorial Designs, Second Edition
Abhijit Das, Computational Number Theory
Matthias Dehmer and Frank Emmert-Streib, Quantitative Graph Theory:
Mathematical Foundations and Applications
Martin Erickson, Pearls of Discrete Mathematics
Martin Erickson and Anthony Vazzana, Introduction to Number Theory
Steven Furino, Ying Miao, and Jianxing Yin, Frames and Resolvable Designs: Uses,
Constructions, and Existence
Mark S. Gockenbach, Finite-Dimensional Linear Algebra
Randy Goldberg and Lance Riek, A Practical Handbook of Speech Coders
Jacob E. Goodman and Joseph O’Rourke, Handbook of Discrete and Computational Geometry,
Second Edition
Jonathan L. Gross, Combinatorial Methods with Computer Applications
Jonathan L. Gross and Jay Yellen, Graph Theory and Its Applications, Second Edition
Jonathan L. Gross, Jay Yellen, and Ping Zhang, Handbook of Graph Theory, Second Edition
David S. Gunderson, Handbook of Mathematical Induction: Theory and Applications
Richard Hammack, Wilfried Imrich, and Sandi Klavžar, Handbook of Product Graphs,
Second Edition
Darrel R. Hankerson, Greg A. Harris, and Peter D. Johnson, Introduction to Information Theory
and Data Compression, Second Edition
Darel W. Hardy, Fred Richman, and Carol L. Walker, Applied Algebra: Codes, Ciphers, and
Discrete Algorithms, Second Edition
Daryl D. Harms, Miroslav Kraetzl, Charles J. Colbourn, and John S. Devitt, Network Reliability:
Experiments with a Symbolic Algebra Environment
Silvia Heubach and Toufik Mansour, Combinatorics of Compositions and Words
Leslie Hogben, Handbook of Linear Algebra, Second Edition
Derek F. Holt with Bettina Eick and Eamonn A. O’Brien, Handbook of Computational Group Theory
David M. Jackson and Terry I. Visentin, An Atlas of Smaller Maps in Orientable and
Nonorientable Surfaces
Richard E. Klima, Neil P. Sigmon, and Ernest L. Stitzinger, Applications of Abstract Algebra
with Maple™ and MATLAB®, Second Edition
Richard E. Klima and Neil P. Sigmon, Cryptology: Classical and Modern with Maplets
Patrick Knupp and Kambiz Salari, Verification of Computer Codes in Computational Science
and Engineering
William L. Kocay and Donald L. Kreher, Graphs, Algorithms, and Optimization, Second Edition
Donald L. Kreher and Douglas R. Stinson, Combinatorial Algorithms: Generation, Enumeration,
and Search
Hang T. Lau, A Java Library of Graph Algorithms and Optimization
C. C. Lindner and C. A. Rodger, Design Theory, Second Edition
San Ling, Huaxiong Wang, and Chaoping Xing, Algebraic Curves in Cryptography
Nicholas A. Loehr, Bijective Combinatorics
Toufik Mansour, Combinatorics of Set Partitions
Toufik Mansour and Matthias Schork, Commutation Relations, Normal Ordering, and Stirling
Numbers
Alasdair McAndrew, Introduction to Cryptography with Open-Source Software
Elliott Mendelson, Introduction to Mathematical Logic, Fifth Edition
Alfred J. Menezes, Paul C. van Oorschot, and Scott A. Vanstone, Handbook of Applied
Cryptography
Stig F. Mjølsnes, A Multidisciplinary Introduction to Information Security
Jason J. Molitierno, Applications of Combinatorial Matrix Theory to Laplacian Matrices of Graphs
Richard A. Mollin, Advanced Number Theory with Applications
Richard A. Mollin, Algebraic Number Theory, Second Edition
Richard A. Mollin, Codes: The Guide to Secrecy from Ancient to Modern Times
Richard A. Mollin, Fundamental Number Theory with Applications, Second Edition
Richard A. Mollin, An Introduction to Cryptography, Second Edition
Richard A. Mollin, Quadratics
Richard A. Mollin, RSA and Public-Key Cryptography
Carlos J. Moreno and Samuel S. Wagstaff, Jr., Sums of Squares of Integers
Gary L. Mullen and Daniel Panario, Handbook of Finite Fields
Goutam Paul and Subhamoy Maitra, RC4 Stream Cipher and Its Variants
Dingyi Pei, Authentication Codes and Combinatorial Designs
Kenneth H. Rosen, Handbook of Discrete and Combinatorial Mathematics
Yongtang Shi, Matthias Dehmer, Xueliang Li, and Ivan Gutman, Graph Polynomials
Douglas R. Shier and K.T. Wallenius, Applied Mathematical Modeling: A Multidisciplinary
Approach
Alexander Stanoyevitch, Introduction to Cryptography with Mathematical Foundations and
Computer Implementations
Jörn Steuding, Diophantine Analysis
Douglas R. Stinson, Cryptography: Theory and Practice, Third Edition
Roberto Tamassia, Handbook of Graph Drawing and Visualization
Roberto Togneri and Christopher J. deSilva, Fundamentals of Information Theory and Coding
Design
W. D. Wallis, Introduction to Combinatorial Designs, Second Edition
W. D. Wallis and J. C. George, Introduction to Combinatorics
Jiacun Wang, Handbook of Finite State Based Models and Applications
Lawrence C. Washington, Elliptic Curves: Number Theory and Cryptography, Second Edition
DISCRETE MATHEMATICS AND ITS APPLICATIONS
GRAPHS,
ALGORITHMS,
AND OPTIMIZATION
Second Edition
William L. Kocay
University of Manitoba
Winnipeg, Canada
Donald L. Kreher
Michigan Technological University
Houghton, USA
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
© 2017 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business
This book contains information obtained from authentic and highly regarded sources. Reasonable
efforts have been made to publish reliable data and information, but the author and publisher cannot
assume responsibility for the validity of all materials or the consequences of their use. The authors and
publishers have attempted to trace the copyright holders of all material reproduced in this publication
and apologize to copyright holders if permission to publish in this form has not been obtained. If any
copyright material has not been acknowledged please write and let us know so we may rectify in any
future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced,
transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or
hereafter invented, including photocopying, microfilming, and recording, or in any information
storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access
www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc.
(CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization
that provides licenses and registration for a variety of users. For organizations that have been granted
a photocopy license by the CCC, a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are
used only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
and the CRC Press Web site at
http://www.crcpress.com
The authors would like to take this opportunity to express their appreciation and
gratitude to the following people who have had a very significant effect on their
mathematical development:
Adrian Bondy, Earl Kramer, Spyros Magliveras, Ron Read, and Ralph Stanton.
Preface xvii
3 Subgraphs 45
3.1 Counting subgraphs 45
3.1.1 Möbius inversion 46
3.1.2 Counting triangles 49
3.2 Multiplying subgraph counts 50
3.3 Mixed subgraphs 52
3.4 Graph reconstruction 53
3.4.1 Nash-Williams’ lemma 54
Exercises 56
3.5 Notes 56
7 Connectivity 125
7.1 Introduction 125
Exercises 127
7.2 Blocks 128
7.3 Finding the blocks of a graph 131
Exercises 132
7.4 The depth-first search 134
7.4.1 Complexity 140
Exercises 140
7.5 Sections and modules 141
Exercises 144
7.6 Notes 144
12 Digraphs 251
12.1 Introduction 251
12.2 Activity graphs, critical paths 251
12.3 Topological order 253
Exercises 256
12.4 Strong components 256
Exercises 257
12.4.1 An application to fabrics 262
Exercises 263
12.5 Tournaments 264
12.5.1 Modules 265
Exercises 266
12.6 2-Satisfiability 266
Exercises 269
Bibliography 527
Index 539
Preface
Our objective in writing this book is to present the theory of graphs from an
algorithmic viewpoint. We present graph theory in a rigorous but informal style
and cover most of the main areas of graph theory. The ideas of surface topology are
presented from an intuitive point of view. We have also included a discussion of
linear programming that emphasizes problems in graph theory. The text is suitable for
students in computer science or mathematics programs.
Graph theory is a rich source of problems and techniques for programming and
data structure development, as well as for the theory of computing, including NP-
completeness and polynomial reduction.
This book could be used as a textbook for a third- or fourth-year course on graph
algorithms with programming content, or for a more advanced course at the fourth
year or graduate level. It could be used in a course in which the programming
language is any major programming language (e.g., C, C++, Java). The
algorithms are presented in a generic style and are not dependent on any particular
programming language.
The text could also be used for a sequence of courses like “Graph Algorithms I”
and “Graph Algorithms II”. The courses offered would depend on the selection of
chapters included. A typical course will begin with Chapters 1, 2, 4, and 5. At this
point, a number of options are available.
A possible first course would consist of Chapters 1, 2, 4, 5, 7, 10, 11, 12, 13,
and 14, and a first course stressing optimization would consist of Chapters 1, 2, 3,
5, 10, 11, 12, 17, 18, and 19. Experience indicates that the students consider these
substantial courses. One or two chapters could be omitted for a lighter course.
We would like to thank the many people who provided encouragement while
we wrote this book, pointed out typos and errors, and gave useful suggestions. In
particular, we would like to convey our thanks to Ben Li and John van Rees of the
University of Manitoba for proofreading some chapters.
William Kocay
Donald L. Kreher
August, 2004
William Kocay
Donald L. Kreher
August, 2016
William Kocay obtained his Ph.D. in Combinatorics and Optimization from the
University of Waterloo in 1979. He is currently a member of the Computer Sci-
ence Department, and an adjunct member of the Mathematics Department, at the
University of Manitoba, and a member of St. Paul’s College, a college affiliated
with the University of Manitoba. He has published numerous research papers,
mostly in graph theory and algorithms for graphs. He was managing editor of the
mathematics journal Ars Combinatoria from 1988 to 1997. He is currently on
the editorial board of that journal. He has had extensive experience developing
software for graph theory and related mathematical structures.
Donald L. Kreher obtained his Ph.D. from the University of Nebraska in 1984.
He has held academic positions at Rochester Institute of Technology and the
University of Wyoming. He is currently a University Professor of Mathematical
Sciences at Michigan Technological University, where he teaches and conducts
research in combinatorics and combinatorial algorithms. He has published numerous
research papers and is a co-author of the internationally acclaimed text
“Combinatorial Algorithms: Generation, Enumeration, and Search”, CRC Press,
1999. He serves on the editorial boards of two journals.
Professor Kreher is the sole recipient of the 1995 Marshall Hall Medal, awarded
by the Institute of Combinatorics and its Applications.
1
Graphs and Their Complements
1.1 Introduction
The diagram in Figure 1.1 illustrates a graph. It is called the graph of the cube. The
edges of the geometric cube correspond to the line segments connecting the nodes in
the graph, and the nodes correspond to the corners of the cube where the edges meet.
They are the vertices of the cube.
FIGURE 1.1
The graph of a cube

FIGURE 1.2
The graph of the cube
DEFINITION 1.1: A simple graph G consists of a vertex set V(G) and an edge
set E(G), where each edge is a pair {u, v} of vertices u, v ∈ V(G).
We denote the set of all pairs of a set V by $\binom{V}{2}$. Then E(G) ⊆ $\binom{V(G)}{2}$. In the
example of the cube, V(G) = {0, 1, 2, 3, 4, 5, 6, 7}, and E(G) = {01, 13, 23, 02, 45,
57, 67, 46, 15, 37, 26, 04}, where we have used the shorthand notation uv to stand
for the pair {u, v}. If u, v ∈ V(G), then u −→ v means that u is joined to v by
an edge. We say that u and v are adjacent. We use this notation to remind us of the
linked list data structure that we will use to store a graph in the computer. Similarly,
u ↛ v means that u is not joined to v. We can also express these relations by
writing uv ∈ E(G) or uv ∉ E(G), respectively. Note that in a simple graph if
u −→ v, then v −→ u. If u is adjacent to each of u1, u2, . . . , uk, then we write
u −→ {u1, u2, . . . , uk}.
These graphs are called simple graphs because each pair u, v of vertices is joined
by at most one edge. Sometimes we need to allow several edges to join the same pair
of vertices. Such a graph is also called a multigraph. An edge can then no longer be
defined as a pair of vertices, (or the multiple edges would not be distinct), but to each
edge there still corresponds a pair {u, v}. We can express this formally by saying that
a graph G consists of a vertex set V(G), an edge set E(G), and a correspondence
ψ : E(G) → $\binom{V(G)}{2}$. Given an edge e ∈ E(G), ψ(e) is a pair {u, v} which are
the endpoints of e. Different edges can then have the same endpoints. We shall use
simple graphs most of the time, which is why we prefer the simpler definition, but
many of the theorems and techniques will apply to multigraphs as well.
This definition can be further extended to graphs with loops as well. A loop is an
edge in which both endpoints are equal. We can include this in the general definition
of a graph by making the mapping ψ : E(G) → $\binom{V(G)}{2}$ ∪ V(G). An edge e ∈ E(G)
for which ψ(e) = u ∈ V (G) defines a loop. Figure 1.3(a) shows a graph with
multiple edges and loops. However, we shall use simple graphs most of the time, so
that an edge will be considered to be a pair of vertices.
FIGURE 1.3
A multigraph (a) and a digraph (b)
A directed graph or digraph has edges which are ordered pairs (u, v) rather than
unordered pairs {u, v}. In this case an edge is also called an arc. The direction of an
edge is indicated by an arrow in diagrams, as in Figure 1.3(b).
The number of vertices of a graph G is denoted |G|. It is called the order of G.
The number of edges is ε(G). If G is simple, then obviously ε(G) ≤ $\binom{|G|}{2}$, because
E(G) ⊆ $\binom{V(G)}{2}$. We shall often use node or point as synonyms for vertex.
Many graphs have special names. The complete graph Kn is a simple graph with
|Kn| = n and ε = $\binom{n}{2}$. The empty graph K̄n is a graph with |K̄n| = n and ε = 0.
K̄n is the complement of Kn.
FIGURE 1.4
The complete graph K5
FIGURE 1.5
A graph and its complement
Figure 1.6 shows another graph and its complement. Notice that in this case,
when Ḡ is redrawn, it looks identical to G.
In a certain sense, this G and Ḡ are the same graph. They are not equal, because
E(G) ≠ E(Ḡ), but it is clear that they have the same structure. If two graphs have
the same structure, then they can only differ in the names of the vertices. Therefore,
we can rename the vertices of one to make it exactly equal to the other graph. In the
FIGURE 1.6
Another graph and its complement
Figure 1.6 example, we can rename the vertices of Ḡ by the mapping θ given by

    k:     1 2 3 4 5
    θ(k):  1 3 5 2 4

then θ(Ḡ) would equal G. This kind of equivalence of graphs is known as isomorphism.
Observe that a one-to-one mapping θ of the vertices of a graph G can be
extended to a mapping of the edges of G by defining θ({u, v}) = {θ(u), θ(v)}.
DEFINITION 1.3: Let G and H be simple graphs. G and H are isomorphic if
there is a one-to-one correspondence θ : V(G) → V(H) such that θ(E(G)) =
E(H), where θ(E(G)) = {θ(uv) : uv ∈ E(G)}.
We write G ≅ H to denote isomorphism. If G ≅ H, then uv ∈ E(G) if and
only if θ(uv) ∈ E(H). One way to determine whether G ≅ H is to try and redraw
G so as to make it look identical to H. We can then read off the mapping θ from the
diagram. However, this is limited to small graphs. For example, the two graphs G and
H shown in Figure 1.7 are isomorphic, because the drawing of G can be transformed
into H by first moving vertex 2 to the bottom of the diagram, and then moving vertex
5 to the top. Comparing the two diagrams then gives the mapping

    k:     1 2 3 4 5 6
    θ(k):  6 4 2 5 1 3

as an isomorphism.
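To make Definition 1.3 concrete, here is a minimal Python sketch (not from the text) that checks whether a given one-to-one correspondence θ is an isomorphism between two simple graphs stored as edge lists. The example graphs are a 4-cycle and a relabelled copy of it, chosen here for illustration; they are not the graphs of Figure 1.7.

# A minimal sketch (illustration only): testing whether a vertex bijection
# theta maps the edge set of G exactly onto the edge set of H.

def is_isomorphism(theta, edges_G, edges_H):
    # Store each edge {u, v} as a frozenset so that uv and vu are the same edge.
    EH = {frozenset(e) for e in edges_H}
    mapped = {frozenset({theta[u], theta[v]}) for (u, v) in edges_G}
    return mapped == EH

# A 4-cycle on {1, 2, 3, 4} and the same cycle with its vertices relabelled.
G_edges = [(1, 2), (2, 3), (3, 4), (4, 1)]
H_edges = [(1, 3), (3, 2), (2, 4), (4, 1)]
theta = {1: 1, 2: 3, 3: 2, 4: 4}

print(is_isomorphism(theta, G_edges, H_edges))   # True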
It is usually more difficult to determine when two graphs G and H are not iso-
morphic than to find an isomorphism when they are isomorphic. One way is to find
a portion of G that cannot be part of H. For example, the graph H of Figure 1.7 is
not isomorphic to the graph of the prism, which is illustrated in Figure 1.8, because
the prism contains a triangle, whereas H has no triangle. A subgraph of a graph G is
a graph K such that V (K) ⊆ V (G) and E(K) ⊆ E(G). If θ : G → H is a possible
isomorphism, then θ(K) will be a subgraph of H which is isomorphic to K.
A subgraph K is an induced subgraph if for every u, v ∈ V (K) ⊆ V (G), uv ∈
E(K) if and only if uv ∈ E(G). That is, we choose a subset U ⊆ V (G) and all
FIGURE 1.7
Two isomorphic graphs
edges uv with both endpoints in U . We can also form an edge subgraph or partial
subgraph by choosing a subset of E(G) as the edges of a subgraph K. Then V (K)
will be all vertices which are an endpoint of some edge of K.
FIGURE 1.8
The graph of the prism
The degree of a vertex u ∈ V(G) is DEG(u), the number of edges which contain
u. If k = DEG(u) and u −→ {u1, u2, . . . , uk}, then θ(u) −→ {θ(u1), θ(u2),
. . . , θ(uk)}, so that DEG(u) = DEG(θ(u)). Therefore a necessary condition for G
and H to be isomorphic is that they have the same set of degrees. The examples of
Figures 1.7 and 1.8 show that this is not a sufficient condition.
In Figure 1.6, we saw an example of a graph G that is isomorphic to its comple-
ment. There are many such graphs.
DEFINITION 1.4: A simple graph G is self-complementary if G ≅ Ḡ.
TABLE 1.1
Graphs up to 10 vertices
n No. graphs
1 1
2 2
3 4
4 11
5 34
6 156
7 1,044
8 12,346
9 247,688
10 12,005,188
If G is self-complementary, then ε(G) = ½$\binom{|G|}{2}$ = |G|(|G| − 1)/4, so that 4
divides |G|(|G| − 1). But |G| and |G| − 1 are consecutive integers, so that one of
them is odd. Therefore |G| ≡ 0 (mod 4) or |G| ≡ 1 (mod 4).
So possible orders for self-complementary graphs are 4, 5, 8, 9, 12, 13, . . ., 4k,
4k + 1, etc.
Exercises
1.1.1 The four graphs on three vertices in Figure 1.9 have 0, 1, 2, and 3 edges,
respectively. Every graph on three vertices is isomorphic to one of these
four. Thus, there are exactly four different isomorphism types of graphs
on three vertices.
G1 G2 G3 G4
FIGURE 1.9
Four graphs on three vertices
Find all the different isomorphism types of graph on 4 vertices (there are
11 of them). Hint: Adding an edge to a graph with ε = m, gives a graph
with ε = m + 1. Every graph with ε = m + 1 can be obtained in this
way. Table 1.1 shows the number of isomorphism types of graphs up to
10 vertices.
1.1.2 Determine whether the two graphs shown in Figure 1.10 are isomorphic
to each other or not. If they are isomorphic, find an explicit isomorphism.
FIGURE 1.10
Two graphs on eight vertices
1.1.3 Determine whether the three graphs shown in Figure 1.11 are isomorphic
to each other or not. If they are isomorphic, find explicit isomorphisms.
FIGURE 1.11
Three graphs on 10 vertices
1.1.8 If θ is any permutation of {1, 2, . . . , n}, then it depends only on the cycle
structure of θ whether it can be used as a complementing permutation.
Discover what condition this cycle structure must satisfy, and prove it
both necessary and sufficient for θ to be a complementing permutation.
Theorem 1.2. For a simple graph G, Σ_{u∈V(G)} DEG(u) = 2ε(G).
Proof. An edge uv has two endpoints. Therefore each edge will be counted twice in
the summation, once for u and once for v.
We use δ(G) to denote the minimum degree of G; that is, δ(G) = MIN{DEG(u) |
u ∈ V(G)}. ∆(G) denotes the maximum degree of G. By Theorem 1.2, the average
degree equals 2ε/|G|, so that δ ≤ 2ε/|G| ≤ ∆.
Corollary 1.3. The number of vertices of odd degree is even.
Proof. Divide V(G) into V_odd = {u | DEG(u) is odd} and V_even = {u | DEG(u) is
even}. Then

    2ε = Σ_{u∈V_odd} DEG(u) + Σ_{u∈V_even} DEG(u).

Clearly 2ε and Σ_{u∈V_even} DEG(u) are both even. Therefore, so is Σ_{u∈V_odd} DEG(u),
which means that |V_odd| is even.
DEFINITION 1.5: A graph G is a regular graph if all vertices have the same
degree. G is k-regular if it is regular, of degree k.
For example, the graph of the cube (Figure 1.1) is 3-regular.
Lemma 1.4. If G is simple and |G| ≥ 2, then there are always two vertices of the
same degree.
Proof. In a simple graph, the maximum degree ∆ ≤ |G| − 1. If all degrees were
different, then they would be 0, 1, 2, . . . , |G| − 1. But degree 0 and degree |G| − 1
are mutually exclusive. Therefore there must be two vertices of the same degree.
Let V(G) = {u1, u2, . . . , un}. The degree sequence of G is

    DEG(G) = (DEG(u1), DEG(u2), . . . , DEG(un)).
Sometimes it is useful to construct a graph with a given degree sequence. For ex-
ample, can there be a simple graph with five vertices whose degrees are (4, 3, 3, 2, 1)?
Because there are three vertices of odd degree, Corollary 1.3 tells us that there is no
such graph. We say that a sequence
D = (d1 , d2 , . . . , dn ),
is graphic if
d1 ≥ d2 ≥ · · · ≥ dn ,
and there is a simple graph G with DEG(G) = D. So (2, 2, 2, 1) and (4, 3, 3, 2, 1)
are not graphic, whereas (2, 2, 1, 1), (4, 3, 2, 2, 1), and (2, 2, 2, 2, 2, 2, 2) clearly are.
For example, (7, 6, 5, 4, 3, 3, 2) is not graphic; any graph G with this degree
sequence would have ∆(G) = |G| = 7, which is not possible in a simple graph. Similarly,
(6, 6, 5, 4, 3, 3, 1) is not graphic; here we have ∆(G) = 6, |G| = 7 and δ(G) = 1.
But because two vertices have degree |G| − 1 = 6, it is not possible to have a vertex
of degree one in a simple graph with this degree sequence.
When is a sequence graphic? We want a construction which will find a graph G
with DEG(G) = D, if the sequence D is graphic.
One way is to join up vertices arbitrarily. This does not always work, because
we can get stuck, even if the sequence is graphic. The following algorithm always
produces a graph G with DEG(G) = D, if D is graphic.
procedure GRAPHGEN(D)
    Create vertices u1, u2, . . . , un
    comment: upon completion, ui will have degree D[i]
    graphic ← false        “assume not graphic”
    i ← 1
    while D[i] > 0
        do  k ← D[i]
            if there are at least k vertices with DEG > 0
                then  join ui to the k vertices of largest degree
                      decrease each of these degrees by 1
                      D[i] ← 0
                      comment: vertex ui is now completely joined
                else  exit        “ui cannot be joined”
            i ← i + 1
    graphic ← true
For example, given the sequence

    D = (3, 3, 3, 3, 3, 3),

the first vertex will be joined to the three vertices of largest degree, which will then
reduce the sequence to (∗, 3, 3, 2, 2, 2), because the vertex marked by an asterisk is
now completely joined, and three others have had their degree reduced by 1. At the
next stage, the first remaining vertex will be joined to the three vertices of largest
degree, giving a new sequence (∗, ∗, 2, 2, 1, 1). Two vertices are now completely
joined. At the next step, the first remaining vertex will be joined to two vertices,
leaving (∗, ∗, ∗, 1, 1, 0). The next step joins the two remaining vertices with degree
one, leaving a sequence (∗, ∗, ∗, ∗, 0, 0) of zeroes, which we know to be graphic.
In general, given the sequence
D = (d1 , d2 , . . . , dn ),
where
d1 ≥ d2 ≥ · · · ≥ dn ,
the vertex of degree d1 is joined to the d1 vertices of largest degree. This leaves the
numbers
d2 − 1, d3 − 1, . . . , d_{d1+1} − 1, d_{d1+2}, . . . , dn,
in some order. If we rearrange them into descending order, we get the reduced se-
quence D′ . Write
D′ = (d′2, d′3, . . . , d′n),
where the first vertex u1 has been deleted. We now do the same calculation, using D′
in place of D. Eventually, after joining all the vertices according to their degree, we
either get a graph G with DEG(G) = D or else at some stage, it is impossible to join
some vertex ui .
An excellent data structure for representing the graph G for this problem is to
have an adjacency list for each vertex v ∈ V (G). The adjacency list for a vertex
v ∈ V (G) is a linked list of the vertices adjacent to v. Thus it is a data structure in
which the vertices adjacent to v are arranged in a linear order. A node x in a linked
list has two fields: data⟨x⟩ and next⟨x⟩.

    x:  [ u | • ]
       data⟨x⟩ next⟨x⟩

Given a node x in the list, data⟨x⟩ is the data associated with x and next⟨x⟩ points to
the successor of x in the list, or next⟨x⟩ = NIL if x has no successor. We can insert
data u into the list pointed to by L with procedure LISTINSERT(), and the first node
on list L can be removed with procedure LISTREMOVEFIRST().
Graphs and Their Complements 11
procedure LISTINSERT(L, u)
    x ← NEWNODE()
    data⟨x⟩ ← u
    next⟨x⟩ ← L
    L ← x
We use an array AdjList [·] of linked lists to store the graph. For each vertex v ∈
V (G), AdjList [v] points to the head of the adjacency lists for v. This data structure
is illustrated in Figure 1.12.
FIGURE 1.12
Adjacency lists of a graph:
    AdjList[1]: 2 → 4
    AdjList[2]: 1 → 3 → 4
    AdjList[3]: 2 → 4
    AdjList[4]: 1 → 2 → 3
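For comparison, a minimal Python sketch (an illustration only, not the book's linked-node implementation) that stores the same adjacency lists using built-in lists might look as follows; the graph entered below is the one of Figure 1.12.

# A minimal sketch: adjacency lists kept as Python lists indexed by vertex.
# Index 0 is unused so that vertices are numbered 1..n, as in the text.

def empty_graph(n):
    return [[] for _ in range(n + 1)]

def add_edge(adj, u, v):
    # An edge uv is recorded in both adjacency lists.
    adj[u].append(v)
    adj[v].append(u)

adj = empty_graph(4)
for (u, v) in [(1, 2), (1, 4), (2, 3), (2, 4), (3, 4)]:
    add_edge(adj, u, v)

for v in range(1, 5):
    print(v, adj[v])        # e.g.  1 [2, 4]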
We can use another array of linked lists, Pts [k], being a linked list of the vertices
ui whose degree-to-be di = k. With this data structure, Algorithm 1.2.1 can be
written as follows:
This program is illustrated in Figure 1.13 for the sequence D = (4, 4, 2, 2, 2, 2),
where n = 6. The diagram shows the linked lists before vertex 1 is joined to vertices
2, 3, 4, and 5, and the new configuration after joining. Care must be used in transfer-
ring the vertices v from Pts [j] to Pts [j − 1], because we do not want to join u to v
more than once. The purpose of the list Pts [0] is to collect vertices which have been
transferred from Pts [1] after having been joined to u. The degrees d1 , d2 , . . . , dn
D = (4, 4, 2, 2, 2, 2)
    Pts[6]: ×
    Pts[5]: ×
    Pts[4]: 1 → 2 → ×
    Pts[3]: ×
    Pts[2]: 3 → 4 → 5 → 6 → ×
    Pts[1]: ×
    Pts[0]: ×
(a)

D = (∗, 3, 1, 1, 1, 2)
    Pts[6]: ×
    Pts[5]: ×
    Pts[4]: ×
    Pts[3]: 2 → ×
    Pts[2]: 6 → ×
    Pts[1]: 3 → 4 → 5 → ×
    Pts[0]: ×
(b)

FIGURE 1.13
The linked lists Pts[k]. (a) Before 1 is joined to 2, 3, 4, and 5. (b) After 1 is joined to
2, 3, 4, and 5.
need not necessarily be in descending order for the program to work, because the
points are placed in the lists Pts [k] according to their degree, thereby sorting them
into buckets. Upon completion of the algorithm vertex k will have degree dk . How-
ever, when this algorithm is done by hand, it is much more convenient to begin with
a sorted list of degrees; for example, D = (4, 3, 3, 3, 2, 2, 2, 2, 1), where n = 9. We
begin with vertex u1 , which is to have degree four. It will be joined to the vertices
u2 , u3 , and u4 , all of degree three, and to one of u5 , u6 , u7 , and u8 , which have de-
gree two. In order to keep the list of degrees sorted, we choose u8 . We then have
u1 −→ {u2 , u3 , u4 , u8 }, and D is reduced to (∗, 2, 2, 2, 2, 2, 2, 1, 1). We then choose
u2 and join it to u6 and u7, thereby further reducing D to (∗, ∗, 2, 2, 2, 1, 1, 1, 1).
Continuing in this way, we obtain a graph G.
In general, when constructing G by hand, when uk is to be joined to one of ui
and uj , where di = dj and i < j, then join uk to uj before ui , in order to keep D
sorted in descending order.
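A minimal Python sketch of the greedy construction just described is given below. It is offered only as an illustration of the idea, and the function name is chosen here for convenience; it does not use the bucket lists Pts[k], so it is simpler but slower than Algorithm 1.2.1. It returns an edge list when the sequence is graphic, and None otherwise.

# A minimal sketch (illustration only) of the greedy construction: repeatedly
# join the vertex of largest remaining degree to the vertices of next-largest
# remaining degree, re-sorting after each step.

def graph_gen(degrees):
    # Pair each degree with a vertex label 1..n.
    remaining = sorted(((d, v) for v, d in enumerate(degrees, start=1)),
                       reverse=True)
    edges = []
    while remaining and remaining[0][0] > 0:
        d, u = remaining[0]
        rest = remaining[1:]
        if d > len(rest) or rest[d - 1][0] == 0:
            return None                          # u cannot be joined
        for i in range(d):                       # join u to the d largest degrees
            di, v = rest[i]
            edges.append((u, v))
            rest[i] = (di - 1, v)
        remaining = sorted(rest, reverse=True)   # u is now completely joined
    return edges

print(graph_gen([3, 3, 3, 3, 3, 3]))     # nine edges of a 3-regular graph on 6 vertices
print(graph_gen([4, 3, 3, 2, 1]))        # None: this sequence is not graphic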
We still need to prove that Algorithm 1.2.1 works. It accepts a possible degree
sequence
D = (d1, d2, . . . , dn),
and joins u1 to the d1 vertices of largest remaining degree. It then reduces D to new
sequence
D′ = (d′2, d′3, . . . , d′n).
arranged in descending order. Create a new vertex u1 and join it to vertices of degree
d2 − 1, d3 − 1, . . . , dd1 +1 − 1.
Then DEG(u1) = d1. Call the new graph G. Clearly the degree sequence of G is
D = (d1 , d2 , . . . , dn ).
Therefore D is graphic.
Now suppose D is graphic. Then there is a graph G with degree sequence
D = (d1 , d2 , . . . , dn ),
FIGURE 1.14
Vertices adjacent to u1
Therefore we know the algorithm will terminate with the correct answer, because
it reduces D to D′ . So we have an algorithmic test to check whether D is graphic
and to generate a graph whenever one exists.
FIGURE 1.15
The vertices V1 of largest degree and the remaining vertices V2
Suppose that there are ε1 edges within V1 and ε2 edges from V1 to V2. Then
Σ_{i=1}^k di = 2ε1 + ε2, because each edge within V1 is counted twice in the sum, once
for each endpoint, but edges between V1 and V2 are counted once only. Now ε1 ≤
$\binom{k}{2}$, because V1 can induce a complete subgraph at most. Each vertex v ∈ V2 can be
joined to at most k vertices in V1, because |V1| = k, but v can be joined to at most
DEG(v) vertices in V1, if DEG(v) < k. Therefore ε2, the number of edges between
V1 and V2, is at most Σ_{v∈V2} MIN(k, DEG(v)), which equals Σ_{i=k+1}^n MIN(k, di).
This now gives Σ_{i=1}^k di = 2ε1 + ε2 ≤ k(k − 1) + Σ_{i=k+1}^n MIN(k, di).
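For reference, a minimal Python sketch (not from the text) that checks the inequality above for every k, together with the parity requirement of Corollary 1.3, is given below; the sequence D is assumed to be sorted in nonincreasing order, as in the theorem discussed here.

# A minimal sketch: check, for every k, that the sum of the k largest degrees
# is at most k(k - 1) + the sum of MIN(k, d_i) over the remaining degrees, and
# that the degree sum is even.

def satisfies_conditions(D):
    n = len(D)
    if sum(D) % 2 != 0:
        return False
    for k in range(1, n + 1):
        lhs = sum(D[:k])
        rhs = k * (k - 1) + sum(min(k, d) for d in D[k:])
        if lhs > rhs:
            return False
    return True

print(satisfies_conditions([3, 3, 3, 3, 3, 3]))      # True
print(satisfies_conditions([7, 6, 5, 4, 3, 3, 2]))   # False (see the earlier example)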
Various proofs of the converse are available. Interesting proofs can be found
in the books by HARARY [80] or BERGE [14]. Here we outline a proof by
CHOUDUM [33]. The proof is by induction on S = Σ_{i=1}^n di. If S = 2, it is clear that
the result is true. Without loss of generality, we can assume that dn ≥ 1. Let t be the
smallest integer such that dt > dt+1, if there is one. Otherwise, if all di are equal,
we take t = n − 1.
Construct D′ = (d′1, . . . , d′n) from D as follows: if i ≠ t and i ≠ n, then d′i = di;
if i = t or i = n, then d′i = di − 1. That is, we are looking for a graph with degree
sequence D in which vertex t is adjacent to vertex n. Then S′ = S − 2. If we can
verify that D′ satisfies the conditions of the theorem, with corresponding graph G′,
we can then construct a graph G with degree sequence D.
Consider Sk = Σ_{i=1}^k di and S′k = Σ_{i=1}^k d′i. Let Tk = k(k − 1) +
Σ_{i=k+1}^n MIN(di, k), and similarly for T′k. If k ≥ t, then S′k = Sk − 1 ≤ Tk − 1 ≤ T′k.
Thus the conditions of the theorem are satisfied when k ≥ t. If k < t there are
several cases to consider. Note that when k < t, d1 = d2 = . . . = dk, so that
S′k = Sk = kdk.
If k < t and dk < k, then S′k = kdk ≤ k(k − 1) ≤ T′k.
If k < t and dk = k, then S′k = kdk = k² = k(k − 1) + k. Now d′_{k+1} is
either k or k − 1. Therefore when i > k, MIN(d′i, k) = d′i. If d′_{k+1} = k, then
S′k = k(k − 1) + d′_{k+1} ≤ T′k. Otherwise d′_{k+1} = k − 1, and S′k = k(k − 1) + d′_{k+1} + 1.
If d′_{k+2} ≥ 1, we obtain S′k ≤ T′k. Otherwise n = k + 2 and t = n − 1, so that dn = 1,
giving S = (n − 2)² + (n − 2) + 1, which must be even, so that n is odd. But then
all degrees are odd, which is impossible.
If k < t and dk > k and dn > k, then MIN(di, k) = MIN(d′i, k) = k when i > k.
Then S′k = kdk = Sk ≤ Tk = T′k.
If k < t and dk > k and dn ≤ k, let r be the first integer such that dr ≤ k.
Exercises
1.2.1 Prove Theorem 1.2 for arbitrary graphs. That is, prove
Theorem 1.7. For any graph G we have

    Σ_{u∈V(G)} DEG(u) + ℓ = 2ε(G).
1.3 Analysis
Let us estimate the number of steps that Algorithm 1.2.1 performs. Consider the loop
structure
for k ← n downto 1
    do while Pts[k] ≠ NIL
        do · · ·
The for-loop performs n iterations. For many of these iterations, the contents of
the while-loop will not be executed, because Pts [k] will be NIL. When the contents
of the loop are executed, vertex u of degree-to-be k will be joined to k vertices. This
means that k edges will be added to the adjacency lists of the graph G being con-
structed. This takes 2k steps, because an edge uv must be added to both GraphAdj [u]
and GraphAdj [v]. It also makes DEG(u) = k. When edge uv is added, v will be
transferred from Pts [j] to Pts [j − 1], requiring an additional k steps. Once u has been
joined, it is removed from the list. Write ε = ½ Σ_i di, the number of edges of G
when D is graphic. Then, in all, the combination for-while-loop will perform exactly
2ε steps adding edges to the graph and a further ε steps transferring vertices to other
lists, plus n steps for the n iterations of the for-loop. This gives a total of 3ε + n steps
for the for-while-loop. The other work that the algorithm performs is to create and
initialize the lists Pts [·], which takes 2n steps altogether. So we can say that in total,
the algorithm performs 3ε + 3n steps.
Now it is obvious that each of these “steps” is composed of many other smaller
steps, for there are various comparisons and assignments in the algorithm which we
have not explicitly taken account of (they are subsumed into the steps we have ex-
plicitly counted). Furthermore, when compiled into assembly language, each step
will be replaced by many smaller steps. Assembly language is in turn executed by
the microprogramming of a computer, and eventually we come down to logic gates,
flip-flops, and registers. Because of this fact, and because each computer has its own
architecture and machine characteristics, it is customary to ignore the constant coef-
ficients of the graph parameters ε and n, and to say that the algorithm has order ε+n,
which is denoted by O(ε + n), pronounced “big Oh of ε + n”. A formal definition
is provided by Definition 1.6. Even though the actual running time of a given algo-
rithm depends on the architecture of the machine it is run on, the programmer can
often make a reasonable estimate of the number of steps of some constant size (e.g.,
counting one assignment, comparison, addition, multiplication, etc. as one step), and
thereby obtain a formula like 3ε + 3n. Such an algorithm will obviously be superior
to one which takes 15ε + 12n steps of similar size. Because of this fact, we shall try
to obtain formulas of this form whenever possible, as well as expressing the result in
a form like O(ε + n).
The complexity of an algorithm is the number of steps it must perform, in the
worst possible case. That is, it is an upper bound on the number of steps. Because
the size of each step is an unknown constant, formulas like 5n2 /6 and 25n2 are both
expressed as O(n2 ). We now give a formal definition of this notation.
DEFINITION 1.6: Suppose f : Z+ → R and g : Z+ → R. We say that f (n) is
O(g(n)) provided that there exist constants c > 0 and n0 ≥ 0 such that 0 ≤ f (n) ≤
c · g(n) for all n ≥ n0 .
In other words, f (n) is O(g(n)) provided that f (n) is bounded above by a con-
stant factor times g(n) for large enough n. For example, the function 5n³ + 2n + 1
is O(n³), because for all n ≥ 1, we have

    5n³ + 2n + 1 ≤ 5n³ + 2n³ + n³ = 8n³.
Exercises
1.3.1 Show that if G is a simple graph with n vertices and ε edges, then log ε =
O(log n).
1.3.2 Consider the following statements which count the number of edges in a
graph, whose adjacency matrix is Adj .
Edges ← 0
for u ← 1 to n − 1
do for v ← u + 1 to n
do if Adj [u, v] = 1
then Edges ← Edges + 1
Calculate the number of steps the algorithm performs. Then calculate the
number of steps required by the following statements in which the graph
is stored in adjacency lists:
Edges ← 0
for u ← 1 to n − 1
do for each v −→ u
do if u < v
then Edges ← Edges + 1
What purpose does the condition u < v fulfill, and how can it be avoided?
1.3.3 Use induction to prove that the following formulas hold:
    (a) 1 + 2 + 3 + · · · + n = $\binom{n+1}{2}$
    (b) $\binom{2}{2}$ + $\binom{3}{2}$ + $\binom{4}{2}$ + · · · + $\binom{n}{2}$ = $\binom{n+1}{3}$
    (c) $\binom{t}{t}$ + $\binom{t+1}{t}$ + $\binom{t+2}{t}$ + · · · + $\binom{n}{t}$ = $\binom{n+1}{t+1}$
1.3.4 Show that 3n² + 12n = O(n²); that is, find constants A and N such that
3n² + 12n ≤ An² whenever n ≥ N.
1.3.5 Show that log(n + 1) = O(log n), where the logarithm is to base 2.
1.3.6 Use the answer to the previous question to prove that
1.3.7 Prove that if f1 (n) and f2 (n) are both O(g(n)), then f1 (n) + f2 (n) is
O(g(n)).
1.3.8 Prove that if f1 (n) is O(g1 (n)) and f2 (n) is O(g2 (n)), then f1 (n) f2 (n)
is O(g1 (n) g2 (n)).
1.4 Notes
Some good general books on graph theory are BERGE [14], BOLLOBÁS [20],
BONDY and MURTY [23], CHARTRAND and LESNIAK [31], CHARTRAND and
OELLERMANN [32], DIESTEL [44], GOULD [73], and WEST [189]. A very readable
introductory book is TRUDEAU [172]. GIBBONS [66] is an excellent treatment
of graph algorithms. A good book discussing the analysis of algorithms is PURDOM
and BROWN [138]. AHO, HOPCROFT, and ULLMAN [1], SEDGEWICK [157] and
WEISS [188] are all excellent treatments of data structures and algorithm analysis.
2
Paths and Walks
2.1 Introduction
Let u and v be vertices of a simple graph G. A path P from u to v is a sequence of
vertices u0 , u1 , . . . , uk such that u = u0 , v = uk , ui −→ ui+1 , and all the ui are
distinct vertices. The length of a path P is ℓ(P ), the number of edges it uses. In this
example, ℓ(P ) = k, and P is called a uv-path of length k. A uv-path of length 4 is
illustrated in Figure 2.1, with dashed edges.
A cycle C is a sequence of vertices u0 , u1 , . . . , uk forming a u0 uk -path, such that
uk −→ u0 . The length of C is ℓ(C), the number of edges that it uses. In this case,
ℓ(C) = k + 1.
A uv-path P connects u to v. The set of all vertices connected to any vertex
u forms a subgraph Cu , the connected component of G containing u. It will often
be the case that Cu contains all of G, in which case G is a connected graph. ω(G)
denotes the number of distinct connected components of G. The graph of Figure 2.1
is disconnected, with ω = 3.
FIGURE 2.1
A graph with three components
There are several ways of finding the connected components of a graph G. One
way to find the sets Cu for a graph G is as follows:
procedure COMPONENTS(G)
    for each u ∈ V(G)
        do initialize Cu to contain only u
    for each u ∈ V(G)
        do for each v −→ u
            do if Cu ≠ Cv then MERGE(Cu, Cv)
The assignment
CompPtr [u] ← theRep
is called path compression. It ensures that the next time CompPtr (u) is computed,
the representative will be found more quickly. The algorithm COMPONENTS() can
now be written as follows:
n ← |G|
for u ← 1 to n
    do CompPtr [u] ← −1
for u ← 1 to n
    do for each v −→ u
        do  uRep ← COMPREP(u)
            vRep ← COMPREP(v)
            if uRep ≠ vRep
                then MERGE(uRep, vRep)

Here MERGE(uRep, vRep) can be performed with either the assignment
CompPtr [vRep] ← uRep or the assignment CompPtr [uRep] ← vRep.
The best one to choose is that which merges the smaller component onto the larger.
We can determine the size of each component by making use of the negative values
of CompPtr [uRep ] and CompPtr [vRep ]. Initially, CompPtr [u] = −1, indicating a
component of size one.
When Cu and Cv are merged, the new component representative (either uRep
or vRep ) has its CompPtr [·] assigned equal to −(uSize + vSize ). The component
pointers can be illustrated graphically. They are shown in Figure 2.2 as arrows. The
merge operation is indicated by the dashed line.
FIGURE 2.2
Component representatives
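A minimal Python sketch of the same idea (component pointers with path compression, always merging the smaller component onto the larger) is given below. It is an illustration of the technique rather than a transcription of the book's procedures, and the small example graph at the end is chosen here for illustration.

# A minimal sketch: comp_ptr[u] is negative (minus the component size) when u
# is a representative, and otherwise points toward the representative.

def comp_rep(comp_ptr, u):
    root = u
    while comp_ptr[root] >= 0:           # follow pointers to the representative
        root = comp_ptr[root]
    while comp_ptr[u] >= 0:              # path compression on the way back
        comp_ptr[u], u = root, comp_ptr[u]
    return root

def merge(comp_ptr, u_rep, v_rep):
    # Merge the smaller component onto the larger one.
    if comp_ptr[u_rep] > comp_ptr[v_rep]:        # u's component is the smaller
        u_rep, v_rep = v_rep, u_rep
    comp_ptr[u_rep] += comp_ptr[v_rep]           # new size, stored negatively
    comp_ptr[v_rep] = u_rep

def components(n, edges):
    comp_ptr = [-1] * (n + 1)            # vertices 1..n, each in its own component
    for (u, v) in edges:
        u_rep, v_rep = comp_rep(comp_ptr, u), comp_rep(comp_ptr, v)
        if u_rep != v_rep:
            merge(comp_ptr, u_rep, v_rep)
    return comp_ptr

ptr = components(6, [(1, 2), (2, 3), (4, 5)])
print(sum(1 for x in ptr[1:] if x < 0))  # 3 components, i.e. omega(G) = 3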
2.2 Complexity
The components algorithm is very efficient. The for-loop which initializes the
CompPtr array requires n steps. If adjacency lists are used to store G, then the total
number of times that the body of the main loop is executed is

    Σ_{u∈V(G)} DEG(u) = 2ε.
Thus COMPREP() is called 4ε times. How many times is MERGE() called? At each
merge, two existing components are replaced by one, so that at most n−1 merges can
take place. Each merge can be performed using four assignments and a comparison.
It takes n steps to initialize the CompPtr array. Thus the total number of steps is
about 6n + 4ε·(number of steps per call to COMPREP()). The number of steps each
call to COMPREP() requires depends on the depth of the trees which represent the
components. The depth is changed by path compression, and by merging. It is proved
in AHO, HOPCROFT, and ULLMAN [1], that if there are a total of n points involved,
the number of steps required is O(α(n)), where α(n) is the inverse of the function
A(n), defined recursively as follows.

    A(1) = 1
    A(k) = 2^{A(k−1)}
Exercises
2.2.1 Assuming the data structures described in Section 2.1, program the
COMPONENTS() algorithm, merging the smaller component onto the
larger. Include an integer variable NComps which contains the current
number of components. Upon completion, its value will equal ω(G).
2.2.2 Algorithm 2.1.1 computes the connected components Cu using the array
CompPtr . If we now want to print the vertices of each distinct Cu , it
cannot be done very efficiently. Show how to use linked lists so that for
each component, a list of the vertices it contains is available. Rewrite the
MERGE() procedure to include this. Is the complexity thereby affected?
2.2.3 In the Algorithm 2.1.1 procedure, the for-loop
for u ← 1 to n do
2.3 Walks
Paths do not contain repeated vertices or edges. A walk in G is any sequence of ver-
tices u0 , u1 , . . . , uk such that ui −→ ui+1 . Thus, in a walk, edges and vertices may
be repeated. Walks are important because of their connection with the adjacency ma-
trix of a graph. Let A be the adjacency matrix of G, where V (G) = {u1 , u2 , . . . , un },
such that row and column i of A correspond to vertex ui .
Theorem 2.1. Entry [i, j] of A^k is the number of walks of length k from vertex ui to
uj.
Proof. By induction on k. When k = 1, there is a walk of length 1 from ui to uj if
and only if ui −→ uj, in which case entry A[i, j] = 1. Assume it's true whenever
k ≤ t and consider A^{t+1}. Let W be a ui uj-walk of length t + 1, where t ≥ 2. If ul
is the vertex before uj on W, then W can be written as (W′, ul, uj), where W′ is
a ui ul-walk of length t. Furthermore, every ui ul-walk of length t gives a ui uj-walk
of length t + 1 whenever ul −→ uj. Therefore the number of ui uj-walks of length
t + 1 is

    Σ_l (the number of ui ul-walks of length t)·A[l, j].

But the number of ui ul-walks of length t is A^t[i, l], so that the number of ui uj-walks
of length t + 1 is

    Σ_{l=1}^n A^t[i, l]·A[l, j],

which equals A^{t+1}[i, j]. Therefore the result is true when k = t + 1. By induction,
it's true for all values of k.
Notice that this result is also true for multigraphs, where now A[i, j] is the num-
ber of edges joining ui to uj . For multigraphs, a walk W must be specified by giving
the sequence of edges traversed, as well as the sequence of vertices, because there
can be more than one edge joining the same pair of vertices.
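As a quick illustration of Theorem 2.1, the following Python sketch (not from the text) computes A^k by repeated matrix multiplication and reads off a walk count; the example adjacency matrix is that of a triangle, chosen here for illustration.

# A minimal sketch: entry [i][j] of A^k counts the walks of length k from
# vertex i to vertex j.

def mat_mult(A, B):
    n = len(A)
    return [[sum(A[i][l] * B[l][j] for l in range(n)) for j in range(n)]
            for i in range(n)]

def mat_power(A, k):
    n = len(A)
    result = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # identity
    for _ in range(k):
        result = mat_mult(result, A)
    return result

# Adjacency matrix of a triangle (a cycle of length 3) on vertices 0, 1, 2.
A = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]

A3 = mat_power(A, 3)
print(A3[0][0])   # 2: the two closed walks of length 3 at vertex 0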
Exercises
2.3.1 Show that A²[i, j] equals the number of ui uj-paths of length 2, if i ≠ j,
and that A²[i, i] = DEG(ui).
2.3.2 Show that A³[i, i] equals twice the number of triangles containing vertex ui.
Find a similar interpretation of A³[i, j], when i ≠ j. (A triangle is a cycle
of length 3.)
2.3.3 A^k contains the number of walks of length k connecting any two vertices.
Multiply A^k by x^k, the k-th power of a variable x, and sum over k, to get
the matrix power series I + Ax + A²x² + A³x³ + · · · , where I is the
identity matrix. The sum of this power series is a matrix whose ij-th entry
FIGURE 2.3
Compute the number of walks in this graph
Given a vertex u, one way of computing DIST(u, v), for all v, is to use a breadth-
first search (BFS), as is done in procedure BFS().
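A minimal Python sketch of the breadth-first search idea (computing DIST(u, v) for every v reachable from u, using adjacency lists and a first-in, first-out queue) is given below; it is an illustration of the technique rather than the book's procedure BFS(). The adjacency lists entered are those of the graph in Figure 1.12.

# A minimal sketch of BFS: dist[v] is DIST(u, v), the length of a shortest
# uv-path, or -1 if v is not connected to u.

from collections import deque

def bfs_distances(adj, u):
    n = len(adj) - 1                     # vertices are numbered 1..n
    dist = [-1] * (n + 1)
    dist[u] = 0
    queue = deque([u])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if dist[y] == -1:            # y is seen for the first time
                dist[y] = dist[x] + 1
                queue.append(y)
    return dist

adj = [[], [2, 4], [1, 3, 4], [2, 4], [1, 2, 3]]
print(bfs_distances(adj, 1))             # [-1, 0, 1, 2, 1]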