ALGORITHM DESIGN
Foundations, Analysis, and Internet Examples
Copyright © 1999, 2000, 2003, 2004, 2005, 2006 by John Wiley & Sons, Inc. All rights reserved.
Authorized reprint by Wiley India (P.) Ltd., 4435/7, Ansari Road, Daryaganj, New Delhi 110002.
Reprint: 2011
ISBN: 978-81-265-0986-7
To my children, Paul, Anna, and Jack
- Michael T. Goodrich
To Isabel
- Roberto Tamassia
Preface
This book is designed to provide a comprehensive introduction to the design and analysis of computer algorithms and data structures. In terms of the computer science and computer engineering curricula, we have written this book to be primarily focused on the Junior-Senior level Algorithms (CS7) course, which is taught as a first-year graduate course in some schools.
Topics

The topics covered in this book are taken from a broad spectrum of discrete algorithm design and analysis, including the following:

- Visual justifications (that is, picture proofs), which make mathematical arguments more understandable for students, appealing to visual learners. An example of visual justifications is our analysis of bottom-up heap construction. This topic has traditionally been difficult for students to understand; hence, it has been time consuming for instructors to explain. The included visual proof is intuitive, rigorous, and quick.

- Algorithmic design patterns, which provide general techniques for designing and implementing algorithms. Examples include divide-and-conquer, dynamic programming, the decorator pattern, and the template method pattern.

- Use of randomization, which takes advantage of random choices in an algorithm to simplify its design and analysis. Such usage replaces complex average-case analysis of sophisticated data structures with intuitive analysis of simple data structures and algorithms. Examples include skip lists, randomized quick-sort, randomized quick-select, and randomized primality testing.

- Internet algorithmics topics, which either motivate traditional algorithmic topics from a new Internet viewpoint or highlight new algorithms that are derived from Internet applications. Examples include information retrieval, Web crawling, packet routing, Web auction algorithms, and Web caching algorithms. We have found that motivating algorithms topics by their Internet applications significantly improves student interest in the study of algorithms.
This book is also structured to allow the instructor a great deal of freedom in how to organize and present the material. Likewise, the dependence between chapters is rather flexible, allowing the instructor to customize an algorithms course to highlight the topics that he or she feels are most important. We have extensively discussed Internet algorithmics topics, which should prove quite interesting to students. In addition, we have included examples of Internet applications of traditional algorithms topics in several places as well.
We show in Table 0.1 how this book could be used for a traditional Introduction to Algorithms (CS7) course, albeit with some new topics motivated from the Internet.

Ch.   Topics                                    Option
1     Algorithm analysis                        Experimental analysis
2     Data structures                           Heap Java example
3     Searching                                 Include one of §3.2-3.5
4     Sorting                                   In-place quick-sort
5     Algorithmic techniques                    The FFT
6     Graph algorithms                          DFS Java example
7     Weighted graphs                           Dijkstra Java example
8     Matching and flow                         Include at end of course
9     Text processing (at least one section)    Tries
12    Computational geometry                    Include at end of course
13    NP-completeness                           Backtracking
14    Frameworks (at least one)                 Include at end of course

Table 0.1: Example syllabus for a traditional Introduction to Algorithms (CS7) course.

In Table 0.2, we show how the book could instead support a course with a stronger Internet-algorithmics emphasis.

Ch.   Topics                                    Option
1     Algorithm analysis                        Experimental analysis
2     Data structures (inc. hashing)            Quickly review
3     Searching (inc. §3.5, skip lists)         Search tree Java example
4     Sorting                                   In-place quick-sort
5     Algorithmic techniques                    The FFT
6     Graph algorithms                          DFS Java example
7     Weighted graphs                           Skip one MST alg.
8     Matching and flow                         Matching algorithms
9     Text processing                           Pattern matching
10    Security & cryptography                   Java examples
11    Network algorithms                        Multi-casting
13    NP-completeness                           Include at end of course
14    Frameworks (at least two)                 Include at end of course

Table 0.2: Example syllabus for a course with an Internet-algorithmics emphasis.
Prerequisites

We have written this book assuming that the reader comes to it with certain knowledge. In particular, we assume that the reader has a basic understanding of elementary data structures, such as arrays and linked lists, and is at least vaguely familiar with a high-level programming language, such as C, C++, or Java. Even so, all algorithms are described in a high-level "pseudo-code," and specific programming language constructs are only used in the optional Java implementation example sections.

In terms of mathematical background, we assume the reader is familiar with topics from first-year college mathematics, including exponents, logarithms, summations, limits, and elementary probability. Even so, we review most of these facts in Chapter 1, including exponents, logarithms, and summations, and we give a summary of other useful mathematical facts, including elementary probability, in Appendix A.
Part I: Fundamental Tools 1

1 Algorithm Analysis 3
  1.1 Methodologies for Analyzing Algorithms 5
  1.2 Asymptotic Notation 13
  1.3 A Quick Mathematical Review 21
  1.4 Case Studies in Algorithm Analysis 31
  1.5 Amortization 34
  1.6 Experimentation 42
  1.7 Exercises 47

13 NP-Completeness 591
  13.1 P and NP 593
  13.2 NP-Completeness 599
  13.3 Important NP-Complete Problems 603
  13.4 Approximation Algorithms 618
  13.5 Backtracking and Branch-and-Bound 627
  13.6 Exercises 638
Acknowledgments

There are a number of individuals who have helped us with the contents of this book. Specifically, we thank Jeff Achter, Ryan Baker, Devin Borland, Ulrik Brandes, Stina Bridgeman, Robert Cohen, David Emory, David Ginat, Natasha Gelfand, Mark Handy, Benoît Hudson, Jeremy Mullendore, Daniel Polivy, John Schultz, Andrew Schwerin, Michael Shin, Galina Shubina, and Luca Vismara.

We are grateful to all our former teaching assistants who helped us in developing exercises, programming assignments, and algorithm animation systems. There have been a number of friends and colleagues whose comments have led to improvements in the text. We are particularly thankful to Karen Goodrich, Art Moorshead, and Scott Smith for their insightful comments. We are also truly indebted to the anonymous outside reviewers for their detailed comments and constructive criticism, which were extremely useful.

We are grateful to our editors, Paul Crockett and Bill Zobrist, for their enthusiastic support of this project. The production team at Wiley has been great. Many thanks go to the people who helped us with the book development, including Susannah Barr, Katherine Hepburn, Bonnie Kubat, Sharon Prendergast, Marc Ranger, Jeri Warner, and Jennifer Welter.

This manuscript was prepared primarily with LaTeX for the text and Adobe FrameMaker® and Visio® for the figures. The LGrind system was used to format Java code fragments into LaTeX. The CVS version control system enabled smooth coordination of our (sometimes concurrent) file editing.

Finally, we would like to warmly thank Isabel Cruz, Karen Goodrich, Giuseppe Di Battista, Franco Preparata, Ioannis Tollis, and our parents for providing advice, encouragement, and support at various stages of the preparation of this book. We also thank them for reminding us that there are things in life beyond writing books.

Michael T. Goodrich
Roberto Tamassia
Part I
Fundamental Tools

Chapter 1
Algorithm Analysis
Contents
1.1 Methodologies for Analyzing Algorithms 5
    1.1.1 Pseudo-Code 7
    1.1.2 The Random Access Machine (RAM) Model 9
1.3.1 Summations 21
1.5 Amortization 34
    1.5.1 Amortization Techniques 36
    1.5.2 Analyzing an Extendable Array Implementation 39
In this book, we are interested in the design of "good" algorithms and data structures. Simply put, an algorithm is a step-by-step procedure for performing some task in a finite amount of time, and a data structure is a systematic way of organizing and accessing data. These concepts are central to computing, but to be able to classify some algorithms and data structures as "good," we must have precise ways of analyzing them.

The primary analysis tool we will use in this book involves characterizing the running times of algorithms and data structure operations, with space usage also being of interest. Running time is a natural measure of "goodness," since time is a precious resource. But focusing on running time as a primary measure of goodness implies that we will need to use at least a little mathematics to describe running times and compare algorithms.

We begin this chapter by describing the basic framework needed for analyzing algorithms, which includes the language for describing algorithms, the computational model that language is intended for, and the main factors we count when considering running time. We also include a brief discussion of how recursive algorithms are analyzed. In Section 1.2, we present the main notation we use to characterize running times, the so-called "big-Oh" notation. These tools comprise the main theoretical tools for designing and analyzing algorithms.

In Section 1.3, we take a short break from our development of the framework for algorithm analysis to review some important mathematical facts, including discussions of summations, logarithms, proof techniques, and basic probability. Given this background and our notation for algorithm analysis, we present some case studies on theoretical algorithm analysis in Section 1.4. We follow these examples in Section 1.5 by presenting an interesting analysis technique, known as amortization, which allows us to account for the group behavior of many individual operations. Finally, in Section 1.6, we conclude the chapter by discussing an important and practical analysis technique: experimentation. We discuss both the main principles of a good experimental framework as well as techniques for summarizing and characterizing data from an experimental analysis.
1.1 Methodologies for Analyzing Algorithms

[Figure 1.1 (two scatter plots of running time t in milliseconds versus input size n): Results of an experimental study on the running time of an algorithm. A dot with coordinates (n, t) indicates that on an input of size n, the running time of the algorithm is t milliseconds (ms). (a) The algorithm executed on a fast computer; (b) the algorithm executed on a slow computer.]
While experimental studies of running times are useful, experimentation has three major limitations:

- Experiments can be done only on a limited set of test inputs, and care must be taken to make sure these are representative.
- It is difficult to compare the efficiency of two algorithms unless experiments on their running times have been performed in the same hardware and software environments.
- It is necessary to implement and execute an algorithm in order to study its running time experimentally.
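For instance, an experiment like the one summarized in Figure 1.1 can be set up along the following lines. This Java sketch is ours, not the book's: the class name, the stand-in algorithm, and the size range are all illustrative choices.

import java.util.Random;

public class TimingExperiment {
    // Stand-in algorithm to be measured: a simple linear scan.
    static int scanMax(int[] A) {
        int max = A[0];
        for (int x : A)
            if (x > max) max = x;
        return max;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        // Double the input size each round and record wall-clock time,
        // producing (n, t) pairs like the dots in Figure 1.1.
        for (int n = 1_000; n <= 1_024_000; n *= 2) {
            int[] A = rnd.ints(n).toArray();
            long start = System.nanoTime();
            scanMax(A);
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(n + "\t" + elapsedMs + " ms");
        }
    }
}

Each printed line corresponds to one dot in a plot like Figure 1.1; running the same program on two machines would produce the figure's two panels.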
The analytic approach we take in this book aims at associating with each algorithm a function f(n) that characterizes the running time of the algorithm in terms of the input size n. Typical functions that will be encountered include n and n². For example, we will write statements of the type "Algorithm A runs in time proportional to n," meaning that if we were to perform experiments, we would find that the actual running time of algorithm A on any input of size n never exceeds cn, where c is a constant that depends on the hardware and software environment used in the experiment. Given two algorithms A and B, where A runs in time proportional to n and B runs in time proportional to n², we will prefer A to B, since the function n grows at a smaller rate than the function n².

We are now ready to "roll up our sleeves" and start developing our methodology for algorithm analysis. There are several components to this methodology, including the following:
1.1.1 Pseudo-Code

Programmers are often asked to describe algorithms in a way that is intended for human eyes only. Such descriptions are not computer programs, but are more structured than usual prose. They also facilitate the high-level analysis of a data structure or algorithm. We call these descriptions pseudo-code.

An Example of Pseudo-Code

The array-maximum problem is the simple problem of finding the maximum element in an array A storing n integers. To solve this problem, we can use an algorithm called arrayMax, which scans through the elements of A using a for loop. The pseudo-code description of algorithm arrayMax is shown in Algorithm 1.2.

Algorithm arrayMax(A, n):
    Input: An array A storing n ≥ 1 integers.
    Output: The maximum element in A.

    currentMax ← A[0]
    for i ← 1 to n − 1 do
        if currentMax < A[i] then
            currentMax ← A[i]
    return currentMax

Note that the pseudo-code is more compact than an equivalent actual software code fragment would be. In addition, the pseudo-code is easier to read and understand.
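For concreteness, here is one possible Java rendering of arrayMax. The sketch is ours (the class name and test values are our own choices), in the spirit of the book's optional Java example sections rather than a reproduction of them:

public class ArrayMaxExample {
    // Java rendering of Algorithm 1.2: scan A once, remembering the
    // largest element seen so far. Assumes A holds n >= 1 integers.
    public static int arrayMax(int[] A, int n) {
        int currentMax = A[0];
        for (int i = 1; i <= n - 1; i++)
            if (currentMax < A[i])
                currentMax = A[i];
        return currentMax;
    }

    public static void main(String[] args) {
        int[] A = {12, 5, 31, 7};
        System.out.println(arrayMax(A, A.length)); // prints 31
    }
}

The translation is mechanical: the pseudo-code's assignment arrow becomes =, and the for-loop bounds carry over directly.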
What Is Pseudo-Code?

When we write pseudo-code, we must keep in mind that we are writing for a human reader, not a computer. Thus, we should strive to communicate high-level ideas, not low-level implementation details. At the same time, we should not gloss over important steps. Like many forms of human communication, finding the right balance is an important skill that is refined through practice.

Now that we have developed a high-level way of describing algorithms, let us next discuss how we can analytically characterize algorithms written in pseudo-code.
Summing up the operation counts, the total number t(n) of primitive operations executed by arrayMax is at least

    2 + 1 + n + 4(n − 1) + 1 = 5n

and at most

    2 + 1 + n + 6(n − 1) + 1 = 7n − 2.

The best case (t(n) = 5n) occurs when A[0] is the maximum element, so that variable currentMax is never reassigned. The worst case (t(n) = 7n − 2) occurs when the elements are sorted in increasing order, so that variable currentMax is reassigned at each iteration of the for loop.
Like the arrayMax method, an algorithm may run faster on some inputs than it does on others. In such cases we may wish to express the running time of such an algorithm as an average taken over all possible inputs. Although such an average-case analysis would often be valuable, it is typically quite challenging. It requires us to define a probability distribution on the set of inputs, which is typically a difficult task. Figure 1.3 schematically shows how, depending on the input distribution, the running time of an algorithm can be anywhere between the worst-case time and the best-case time. For example, what if inputs are really only of types "A" or "D"? An average-case analysis also typically requires that we calculate expected running times based on a given input distribution. Such an analysis often requires heavy mathematics and probability theory.

Therefore, except for experimental studies or the analysis of algorithms that are themselves randomized, we will, for the remainder of this book, typically characterize running times in terms of the worst case. We say, for example, that algorithm arrayMax executes t(n) = 7n − 2 primitive operations in the worst case, meaning that the maximum number of primitive operations executed by the algorithm, taken over all inputs of size n, is 7n − 2.

This type of analysis is much easier than an average-case analysis, as it does not require probability theory; it just requires the ability to identify the worst-case input, which is often straightforward. In addition, taking a worst-case approach can actually lead to better algorithms. Making the standard of success that of having an algorithm perform well in the worst case necessarily requires that it perform well on every input. That is, designing for the worst case can lead to stronger algorithmic "muscles," much like a track star who always practices by running uphill.
[Figure 1.3 (bar chart of running times over input instances A through D, with the best-case, average-case, and worst-case times marked): The difference between best-case and worst-case time. Each bar represents the running time of some algorithm on a different possible input.]
Iteration is not the only interesting way of solving a problem. Another useful technique, which is employed by many algorithms, is to use recursion. In this technique, we define a procedure P that is allowed to make calls to itself as a subroutine, provided those calls to P are for solving subproblems of smaller size. The subroutine calls to P on smaller instances are called "recursive calls." A recursive procedure should always define a base case, which is small enough that the algorithm can solve it directly without using recursion.

We give a recursive solution to the array maximum problem in Algorithm 1.4. This algorithm first checks if the array contains just a single item, which in this case must be the maximum; hence, in this simple base case we can immediately solve the problem. Otherwise, the algorithm recursively computes the maximum of the first n − 1 elements in the array and then returns the maximum of this value and the last element in the array.
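As a concrete illustration of this recursive structure, the following Java sketch is one possible rendering of the recursiveMax idea (ours, not the book's Algorithm 1.4):

public class RecursiveMaxExample {
    // Returns the maximum of the first n elements of A, assuming n >= 1.
    public static int recursiveMax(int[] A, int n) {
        if (n == 1)
            return A[0]; // base case: a single element is the maximum
        // Recur on the first n-1 elements, then compare with the last one.
        return Math.max(recursiveMax(A, n - 1), A[n - 1]);
    }
}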
As with this example, recursive algorithms are often quite elegant. Analyzing the running time of a recursive algorithm takes a bit of additional work, however. In particular, to analyze such a running time, we use a recurrence equation, which defines mathematical statements that the running time of a recursive algorithm must satisfy. We introduce a function T(n) that denotes the running time of the algorithm on an input of size n, and we write equations that T(n) must satisfy. For example, we can characterize the running time, T(n), of the recursiveMax algorithm as

    T(n) = 3                if n = 1
    T(n) = T(n − 1) + 7     otherwise,

assuming that we count each comparison, array reference, recursive call, max calculation, or return as a single primitive operation. Ideally, we would like to characterize a recurrence equation like that above in closed form, where no references to the function T appear on the right-hand side. For the recursiveMax algorithm, it isn't too hard to see that a closed form would be T(n) = 7(n − 1) + 3 = 7n − 4. In general, determining closed form solutions to recurrence equations can be much more challenging than this, and we study some specific examples of recurrence equations in Chapter 4, when we study some sorting and selection algorithms. We study methods for solving recurrence equations of a general form in Section 5.2.
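To see where this closed form comes from, one can unroll the recurrence until the base case is reached, a derivation the text leaves implicit:

    T(n) = T(n − 1) + 7
         = T(n − 2) + 2·7
           ⋮
         = T(1) + (n − 1)·7
         = 3 + 7(n − 1)
         = 7n − 4.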
[Figure 1.5 (plot of f(n) and c·g(n) versus input size, with the crossover point n₀ marked): Illustrating the "big-Oh" notation. The function f(n) is O(g(n)), for f(n) ≤ c·g(n) when n ≥ n₀.]
The big-Oh notation allows us to say that a function of n is "less than or equal to" another function (by the inequality "≤" in the definition), up to a constant factor (by the constant c in the definition) and in the asymptotic sense as n grows toward infinity (by the statement "n ≥ n₀" in the definition).

The big-Oh notation is used widely to characterize running times and space bounds in terms of some parameter n, which varies from problem to problem, but is usually defined as an intuitive notion of the "size" of the problem. For example, if we are interested in finding the largest element in an array of integers (see arrayMax, given in Algorithm 1.2), it would be most natural to let n denote the number of elements of the array. For example, we can write the following precise statement on the running time of algorithm arrayMax from Algorithm 1.2.

Theorem 1.2: The running time of algorithm arrayMax for computing the maximum element in an array of n integers is O(n).

Proof: As shown in Section 1.1.3, the number of primitive operations executed by algorithm arrayMax is at most 7n − 2. We may therefore apply the big-Oh definition with c = 7 and n₀ = 1 and conclude that the running time of algorithm arrayMax is O(n).
Let us consider a few additional examples that illustrate the big-Oh notation.

Example 1.3: 20n³ + 10n log n + 5 is O(n³).

Proof: 20n³ + 10n log n + 5 ≤ 35n³, for n ≥ 1.

In fact, any polynomial a_k n^k + a_{k−1} n^{k−1} + ··· + a_0 will always be O(n^k).

Example 1.4: 3 log n + log log n is O(log n).

Some functions appear often in the analysis of algorithms and data structures, and we often use special terms to refer to them. Table 1.6 shows some terms commonly used in algorithm analysis.
Such a statement would mean that there are constants c > 0 and n₀ ≥ 1 such that f(n) ≤ g(n) + c·h(n) for n ≥ n₀. As in this example, we may sometimes wish to give the exact leading term in an asymptotic characterization. In that case, we would say that "f(n) is g(n) + O(h(n))," where h(n) grows slower than g(n). For example, we could say that 2n log n + 4n + 10√n is 2n log n + O(n).

This example shows that lower-order terms are not dominant in establishing lower bounds with the big-Omega notation. Thus, as the next example sums up, lower-order terms are not dominant in the big-Theta notation either.

Example 1.10: 3 log n + log log n is Θ(log n).

Proof: This follows from Examples 1.4 and 1.9.
There are also some ways of saying that one function is strictly less than or strictly greater than another asymptotically, but these are not used as often as the big-Oh, big-Omega, and big-Theta. Nevertheless, for the sake of completeness, we give their definitions as well.

Let f(n) and g(n) be functions mapping integers to real numbers. We say that f(n) is o(g(n)) (pronounced "f(n) is little-oh of g(n)") if, for any constant c > 0, there is a constant n₀ > 0 such that f(n) ≤ c·g(n) for n ≥ n₀. Likewise, we say that f(n) is ω(g(n)) (pronounced "f(n) is little-omega of g(n)") if g(n) is o(f(n)), that is, if, for any constant c > 0, there is a constant n₀ > 0 such that g(n) ≤ c·f(n) for n ≥ n₀. Intuitively, o(·) is analogous to "less than" in an asymptotic sense, and ω(·) is analogous to "greater than" in an asymptotic sense.
For example, the function f(n) = 12n² + 6n is o(n³) and ω(n), as the following proof shows.

Proof: Let us first show that f(n) is o(n³). Let c > 0 be any constant. If we take n₀ = (12 + 6)/c, then, for n ≥ n₀, we have

    c·n³ ≥ 12n² + 6n² ≥ 12n² + 6n.

Thus, f(n) is o(n³).

To show that f(n) is ω(n), let c > 0 again be any constant. If we take n₀ = c/12, then, for n ≥ n₀, we have

    12n² + 6n ≥ 12n² ≥ c·n.

Thus, f(n) is ω(n).
For the reader familiar with limits, we note that f(n) is o(g(n)) if and only if

    lim_{n→∞} f(n)/g(n) = 0,

provided this limit exists. The main difference between the little-oh and big-Oh notions is that f(n) is O(g(n)) if there exist constants c > 0 and n₀ ≥ 1 such that f(n) ≤ c·g(n), for n ≥ n₀; whereas f(n) is o(g(n)) if for all constants c > 0 there is a constant n₀ such that f(n) ≤ c·g(n), for n ≥ n₀. Intuitively, f(n) is o(g(n)) if f(n) becomes insignificant compared to g(n) as n grows toward infinity. As previously mentioned, asymptotic notation is useful because it allows us to concentrate on the main factor determining a function's growth.
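As a quick numerical illustration of this limit view, the following sketch (ours; the function f is taken from the example above) prints the ratio f(n)/n³ for growing n and shows it shrinking toward 0:

public class LittleOhDemo {
    public static void main(String[] args) {
        // f(n) = 12n^2 + 6n is o(n^3): the ratio f(n)/n^3 tends to 0.
        for (long n = 10; n <= 1_000_000; n *= 100) {
            double ratio = (12.0 * n * n + 6.0 * n) / ((double) n * n * n);
            System.out.println("n = " + n + "\tf(n)/n^3 = " + ratio);
        }
        // Prints roughly 1.27, 0.012, and 0.00012 for n = 10, 1000, 100000.
    }
}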
To summarize, the asymptotic notations of big-Oh, big-Omega, and big-Theta, as well as little-oh and little-omega, provide a convenient language for us to analyze data structures and algorithms. As mentioned earlier, these notations provide convenience because they let us concentrate on the "big picture" rather than low-level details.
[Table 1.7: Maximum size of a problem that can be solved in one second, one minute, and one hour, for various running times measured in microseconds.]

The importance of good algorithm design goes beyond just what can be solved effectively on a given computer, however. As shown in Table 1.8, even if we achieve a dramatic speedup in hardware, we still cannot overcome the handicap of an asymptotically slow algorithm. This table shows the new maximum problem size achievable for any fixed amount of time, assuming algorithms with the given running times are now run on a computer 256 times faster than the previous one.

    Running Time    New Maximum Problem Size
    n               256m
    n²              16m
    2ⁿ              m + 8

Table 1.8: Increase in the maximum size of a problem that can be solved in a certain fixed amount of time, by using a computer that is 256 times faster than the previous one, for various running times of the algorithm. Each entry is given as a function of m, the previous maximum problem size.
    log n,  log² n,  √n,  n,  n log n,  n²,  n³,  2ⁿ

Table 1.9: An ordered list of simple functions. Note that, using common terminology, one of the above functions is logarithmic, two are polylogarithmic, three are sublinear, one is linear, one is quadratic, one is cubic, and one is exponential.
In Table 1.10, we illustrate the difference in the growth rate of all but one of the functions shown in Table 1.9.

    n:        2      4
    log n     1      2
    √n        1.41   2
    n         2      4
    n log n   2      8
    n²        4      16
    n³        8      64
    2ⁿ        4      16

Table 1.10: Growth of the functions from Table 1.9 (all except log² n), shown here for n = 2 and n = 4.
1.3.1 Summations

A notation that appears again and again in the analysis of data structures and algorithms is the summation, which is defined as

    Σ_{i=a}^{b} f(i) = f(a) + f(a+1) + f(a+2) + ··· + f(b).

Summations arise in data structure and algorithm analysis because the running times of loops naturally give rise to summations. For example, a summation that often arises in data structure and algorithm analysis is the geometric summation.

Theorem 1.12: For any integer n ≥ 0 and any real number a with 0 < a ≠ 1, consider the summation

    Σ_{i=0}^{n} aⁱ = 1 + a + a² + ··· + aⁿ.

This summation equals (a^{n+1} − 1)/(a − 1).

Summations as shown in Theorem 1.12 are called geometric summations, because each term is geometrically larger than the previous one if a > 1. That is, the terms in such a geometric summation exhibit exponential growth. For example, everyone working in computing should know that

    1 + 2 + 4 + 8 + ··· + 2^{n−1} = 2ⁿ − 1,

for this is the largest integer that can be represented in binary notation using n bits.
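As a quick sanity check of Theorem 1.12 (our sketch; the particular values a = 3 and n = 10 are arbitrary), one can compare the term-by-term sum against the closed form:

public class GeometricSum {
    public static void main(String[] args) {
        double a = 3.0;
        int n = 10;
        double sum = 0.0;
        for (int i = 0; i <= n; i++)
            sum += Math.pow(a, i); // 1 + a + a^2 + ... + a^n
        double closedForm = (Math.pow(a, n + 1) - 1) / (a - 1);
        System.out.println(sum + " = " + closedForm); // 88573.0 = 88573.0
    }
}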
Another summation that arises in several contexts is

    Σ_{i=1}^{n} i = 1 + 2 + 3 + ··· + (n−2) + (n−1) + n.

This summation often arises in the analysis of loops in cases where the number of operations performed inside the loop increases by a fixed, constant amount with each iteration. This summation also has an interesting history. In 1787, a German elementary schoolteacher decided to keep his 9- and 10-year-old pupils occupied with the task of adding up all the numbers from 1 to 100. But almost immediately after giving this assignment, one of the children claimed to have the answer: 5,050.

That elementary school student was none other than Karl Gauss, who would grow up to be one of the greatest mathematicians of the 19th century. It is widely suspected that young Gauss derived the answer to his teacher's assignment using the following identity.
Theorem 1.13: For any integer n ≥ 1, we have Σ_{i=1}^{n} i = n(n+1)/2.
Proof: We give two "visual" justifications of Theorem 1.13 in Figure 1.11, both of which are based on computing the area of a collection of rectangles representing the numbers 1 through n. In Figure 1.11a we draw a big triangle over an ordering of the rectangles, noting that the area of the rectangles is the same as that of the big triangle (n²/2) plus that of n small triangles, each of area 1/2. In Figure 1.11b, which applies when n is even, we note that 1 plus n is n + 1, as is 2 plus n − 1, 3 plus n − 2, and so on. There are n/2 such pairings.
[Figure 1.11 (two panels, (a) and (b)): Visual justifications of Theorem 1.13. Both illustrations visualize the identity in terms of the total area covered by n unit-width rectangles with heights 1, 2, ..., n. In (a) the rectangles are shown to cover a big triangle of area n²/2 (base n and height n) plus n small triangles of area 1/2 each (base 1 and height 1). In (b), which applies only when n is even, the rectangles are shown to cover a big rectangle of base n/2 and height n + 1.]
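In symbols, the pairing argument of part (b) amounts to the following computation for even n:

    Σ_{i=1}^{n} i = (1 + n) + (2 + (n−1)) + ··· + (n/2 + (n/2 + 1))
                  = (n/2)(n + 1) = n(n+1)/2.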
Theorem 1.14: Given real numbers a > 0, b > 1, c > 0, and d > 1, we have:

1. log_b(ac) = log_b a + log_b c
2. log_b(a/c) = log_b a − log_b c
3. log_b(a^c) = c·log_b a
4. log_b a = (log_d a)/(log_d b)
5. b^{log_d a} = a^{log_d b}
6. (b^a)^c = b^{ac}
7. b^a · b^c = b^{a+c}
8. b^a / b^c = b^{a−c}
Also, as a notational shorthand, we use logᶜ n to denote the function (log n)ᶜ, and we use log log n to denote log(log n). Rather than show how we could derive each of the above identities, which all follow from the definitions of logarithms and exponents, let us instead illustrate these identities with a few examples of their usefulness.

Example 1.15: We illustrate some interesting cases when the base of a logarithm or exponent is 2. The rules cited refer to Theorem 1.14.
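For instance, rules 4 and 5 are easy to check numerically. The following Java sketch is ours (the helper name logBase is hypothetical; it simply applies the change-of-base rule using the natural logarithm as the common base):

public class LogRules {
    // log_b(a) via rule 4 (change of base), using the natural log as base.
    static double logBase(double b, double a) {
        return Math.log(a) / Math.log(b);
    }

    public static void main(String[] args) {
        double a = 8, b = 2, d = 4;
        System.out.println(logBase(b, a));              // 3.0, since 2^3 = 8
        // Rule 5: b^(log_d a) = a^(log_d b); both lines print ~2.828.
        System.out.println(Math.pow(b, logBase(d, a)));
        System.out.println(Math.pow(a, logBase(d, b)));
    }
}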