ALGORITHM DESIGN
Foundations, Analysis, and Internet Examples

MICHAEL T. GOODRICH | ROBERTO TAMASSIA


ALGORITHM DESIGN
Foundations, Analysis and Internet Examples
2nd Edition

Copyright © 1999, 2000, 2003, 2004, 2005, 2006, by John Wiley & Sons Inc. All rights reserved.

Authorized reprint by Wiley India (P.) Ltd., 4435/7, Ansari Road, Daryaganj, New Delhi 110002.

All rights reserved. AUTHORIZED REPRINT OF THE EDITION PUBLISHED BY JOHN WILEY & SONS, INC., U.K. No part of this book may be reproduced in any form without the written permission of the publisher.

Bicentennial logo designer: Richard J. Pacifico

Limits of Liability/Disclaimer of Warranty: The publisher and the author make no representation or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation warranties of fitness for a particular purpose. No warranty may be created or extended by sales or promotional materials. The advice and strategies contained herein may not be suitable for every situation. This work is sold with the understanding that the publisher is not engaged in rendering legal, accounting, or other professional services. If professional assistance is required, the services of a competent professional person should be sought. Neither the publisher nor the author shall be liable for damages arising herefrom. The fact that an organization or Website is referred to in this work as a citation and/or a potential source of further information does not mean that the author or the publisher endorses the information the organization or Website may provide or recommendations it may make. Further, readers should be aware that Internet Websites listed in this work may have changed or disappeared between when this work was written and when it is read.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books. For more information about Wiley products, visit our website at www.wiley.com.

Reprint: 2011

Printed at: Ashim Printline, Sahibabad

ISBN: 978-81-265-0986-7
To my children, Paul, Anna, and Jack
- Michael T. Goodrich

To Isabel
- Roberto Tamassia
Preface
This book is designed to provide a comprehensive introduction to the design and
analysis of computer algorithms and data structures. In terms of the computer
science and computer engineering curricula, we have written this book to be
primarily focused on the Junior-Senior level Algorithms (CS7) course, which is
taught as a first-year graduate course in some schools.

Topics
The topics covered in this book are taken from a broad spectrum of discrete
algorithm design and analysis, including the following:

* Design and analysis of algorithms, including asymptotic notation, worst-case
  analysis, amortization, randomization, and experimental analysis
* Algorithmic design patterns, including the greedy method, divide-and-conquer,
  dynamic programming, backtracking, and branch-and-bound
* Algorithmic frameworks, including NP-completeness, approximation algorithms,
  on-line algorithms, external-memory algorithms, distributed algorithms, and
  parallel algorithms
* Data structures, including lists, vectors, trees, priority queues, AVL trees,
  2-4 trees, red-black trees, splay trees, B-trees, hash tables, skip lists,
  and union-find trees
* Combinatorial algorithms, including heap-sort, quick-sort, merge-sort,
  selection, parallel list ranking, and parallel sorting
* Graph algorithms, including traversals (DFS and BFS), topological sorting,
  shortest paths (all-pairs and single-source), minimum spanning tree, maximum
  flow, minimum-cost flow, and matching
* Geometric algorithms, including range searching, convex hulls, segment
  intersection, and closest pairs
* Numerical algorithms, including integer, matrix, and polynomial
  multiplication, the Fast Fourier Transform (FFT), extended Euclid's
  algorithm, modular exponentiation, and primality testing
* Internet algorithms, including packet routing, multicasting, leader election,
  encryption, digital signatures, text pattern matching, information retrieval,
  data compression, Web caching, and Web auctions

About the Authors


Professors Goodrich and Tamassia are well-recognized researchers in data
structures and algorithms, having published many papers in this field, with
applications to Internet computing, information visualization, geographic
information systems, and computer security. They have an extensive record of
research collaboration and have served as principal investigators in several
joint projects sponsored by the National Science Foundation, the Army Research
Office, and the Defense Advanced Research Projects Agency. They are also active
in educational technology research, with special emphasis on algorithm
visualization systems and infrastructure support for distance learning.

Michael Goodrich received his Ph.D. in Computer Science from Purdue University
in 1987. He is currently a professor in the Department of Information and
Computer Science at the University of California, Irvine. Prior to this service
he was Professor of Computer Science at Johns Hopkins University, and director
of the Hopkins Center for Algorithm Engineering. He is an editor for the
International Journal of Computational Geometry & Applications, Journal of
Computer and System Sciences, and Journal of Graph Algorithms and Applications.

Roberto Tamassia received his Ph.D. in Electrical and Computer Engineering from
the University of Illinois at Urbana-Champaign in 1988. He is currently a
professor in the Department of Computer Science and the director of the Center
for Geometric Computing at Brown University. He is an editor for Computational
Geometry: Theory and Applications and the Journal of Graph Algorithms and
Applications, and he has previously served on the editorial board of IEEE
Transactions on Computers.

In addition to their research accomplishments, the authors also have extensive
experience in the classroom. For example, Dr. Goodrich has taught data
structures and algorithms courses since 1987, including Data Structures as a
freshman-sophomore level course and Introduction to Algorithms as an upper
level course. He has earned several teaching awards in this capacity. His
teaching style is to involve the students in lively interactive classroom
sessions that bring out the intuition and insights behind data structuring and
algorithmic techniques, as well as in formulating solutions whose analysis is
mathematically rigorous. Dr. Tamassia has taught Data Structures and Algorithms
as an introductory freshman-level course since 1988. He has also attracted many
students (including several undergraduates) to his advanced course on
Computational Geometry, which is a popular graduate-level CS course at Brown.
One thing that has set his teaching style apart is his effective use of
interactive hypermedia presentations, continuing the tradition of Brown's
"electronic classroom." The carefully designed Web pages of the courses taught
by Dr. Tamassia have been used as reference material by students and
professionals worldwide.

For the Instructor


This book is intended primarily as a textbook for a Junior-Senior Algorithms
(CS7) course, which is also taught as a first-year graduate course in some
schools. This book contains many exercises, which are divided between
reinforcement exercises, creativity exercises, and implementation projects.
Certain aspects of this book were specifically designed with the instructor in
mind, including:

* Visual justifications (that is, picture proofs), which make mathematical
  arguments more understandable for students, appealing to visual learners. An
  example of visual justifications is our analysis of bottom-up heap
  construction. This topic has traditionally been difficult for students to
  understand; hence, time consuming for instructors to explain. The included
  visual proof is intuitive, rigorous, and quick.
* Algorithmic design patterns, which provide general techniques for designing
  and implementing algorithms. Examples include divide-and-conquer, dynamic
  programming, the decorator pattern, and the template method pattern.
* Use of randomization, which takes advantage of random choices in an
  algorithm to simplify its design and analysis. Such usage replaces complex
  average-case analysis of sophisticated data structures with intuitive
  analysis of simple data structures and algorithms. Examples include skip
  lists, randomized quick-sort, randomized quick-select, and randomized
  primality testing.
* Internet algorithmics topics, which either motivate traditional algorithmic
  topics from a new Internet viewpoint or highlight new algorithms that are
  derived from Internet applications. Examples include information retrieval,
  Web crawling, packet routing, Web auction algorithms, and Web caching
  algorithms. We have found that motivating algorithms topics by their
  Internet applications significantly improves student interest in the study
  of algorithms.
* Java implementation examples, which cover software design methods,
  object-oriented implementation issues, and experimental analysis of
  algorithms. These implementation examples, provided in separate sections of
  various chapters, are optional, so that instructors can either cover them in
  their lectures, assign them as additional reading, or skip them altogether.

This book is also structured to allow the instructor a great deal of freedom
in how to organize and present the material. Likewise, the dependence between
chapters is rather flexible, allowing the instructor to customize an
algorithms course to highlight the topics that he or she feels are most
important. We have extensively discussed Internet Algorithmics topics, which
should prove quite interesting to students. In addition, we have included
examples of Internet applications of traditional algorithms topics in several
places as well.
We show in Table 0.1 how this book could be used for a traditional
Introduction to Algorithms (CS7) course, albeit with some new topics motivated
from the Internet.

Ch. | Topics                                 | Option
 1  | Algorithm analysis                     | Experimental analysis
 2  | Data structures                        | Heap Java example
 3  | Searching                              | Include one of § 3.2-3.5
 4  | Sorting                                | In-place quick-sort
 5  | Algorithmic techniques                 | The FFT
 6  | Graph algorithms                       | DFS Java example
 7  | Weighted graphs                        | Dijkstra Java example
 8  | Matching and flow                      | Include at end of course
 9  | Text processing (at least one section) | Tries
12  | Computational geometry                 | Include at end of course
13  | NP-completeness                        | Backtracking
14  | Frameworks (at least one)              | Include at end of course

Table 0.1: Example syllabus schedule for a traditional Introduction to
Algorithms (CS7) course, including optional choices for each chapter.
This book can also be used for a specialized Internet Algorithmics course,
which reviews some traditional algorithms topics, but in a new
Internet-motivated light, while also covering new algorithmic topics that are
derived from Internet applications. We show in Table 0.2 how this book could
be used for such a course.

Ch. | Topics                             | Option
 1  | Algorithm analysis                 | Experimental analysis
 2  | Data structures (inc. hashing)     | Quickly review
 3  | Searching (inc. § 3.5, skip lists) | Search tree Java example
 4  | Sorting                            | In-place quick-sort
 5  | Algorithmic techniques             | The FFT
 6  | Graph algorithms                   | DFS Java example
 7  | Weighted graphs                    | Skip one MST alg.
 8  | Matching and flow                  | Matching algorithms
 9  | Text processing                    | Pattern matching
10  | Security & cryptography            | Java examples
11  | Network algorithms                 | Multicasting
13  | NP-completeness                    | Include at end of course
14  | Frameworks (at least two)          | Include at end of course

Table 0.2: Example syllabus schedule for an Internet Algorithmics course,
including optional choices for each chapter.
Of course, other options are also possible, including a course that is a
mixture of a traditional Introduction to Algorithms (CS7) course and an
Internet Algorithmics course. We do not belabor this point, however, leaving
such creative arrangements to the interested instructor.

Web Added-Value Education


This book comes accompanied by an extensive Web site:

http://www.wiley.com/college/goodrich

This Web site includes an extensive collection of educational aids that
augment the topics of this book. Specifically for students we include:

* Presentation handouts (four-per-page format) for most topics in this book
* A database of hints on selected assignments, indexed by problem number
* Interactive applets that animate fundamental data structures and algorithms
* Source code for the Java examples in this book

We feel that the hint server should be of particular interest, particularly
for creativity problems that can be quite challenging for some students.

For instructors using this book, there is a dedicated portion of the Web site
just for them, which includes the following additional teaching aids:

* Solutions to selected exercises in this book
* A database of additional exercises and their solutions
* Presentations (one-per-page format) for most topics covered in this book

Readers interested in the implementation of algorithms and data structures can
download JDSL, the Data Structures Library in Java, from

http://www.jdsl.org/

Prerequisites
We have written this book assuming that the reader comes to it with certain
knowledge. In particular, we assume that the reader has a basic understanding
of elementary data structures, such as arrays and linked lists, and is at
least vaguely familiar with a high-level programming language, such as C, C++,
or Java. Even so, all algorithms are described in a high-level "pseudo-code,"
and specific programming language constructs are only used in the optional
Java implementation example sections.

In terms of mathematical background, we assume the reader is familiar with
topics from first-year college mathematics, including exponents, logarithms,
summations, limits, and elementary probability. Even so, we review most of
these facts in Chapter 1, including exponents, logarithms, and summations, and
we give a summary of other useful mathematical facts, including elementary
probability, in Appendix A.
Contents

Part I  Fundamental Tools  1

1  Algorithm Analysis  3
   1.1  Methodologies for Analyzing Algorithms  5
   1.2  Asymptotic Notation  13
   1.3  A Quick Mathematical Review  21
   1.4  Case Studies in Algorithm Analysis  31
   1.5  Amortization  34
   1.6  Experimentation  42
   1.7  Exercises  47

2  Basic Data Structures  55
   2.1  Stacks and Queues  57
   2.2  Vectors, Lists, and Sequences  65
   2.3  Trees  75
   2.4  Priority Queues and Heaps  94
   2.5  Dictionaries and Hash Tables  114
   2.6  Java Example: Heap  128
   2.7  Exercises  131

3  Search Trees and Skip Lists  139
   3.1  Ordered Dictionaries and Binary Search Trees  141
   3.2  AVL Trees  152
   3.3  Bounded-Depth Search Trees  159
   3.4  Splay Trees  185
   3.5  Skip Lists  195
   3.6  Java Example: AVL and Red-Black Trees  202
   3.7  Exercises  212

4  Sorting, Sets, and Selection  217
   4.1  Merge-Sort  219
   4.2  The Set Abstract Data Type  225
   4.3  Quick-Sort  235
   4.4  A Lower Bound on Comparison-Based Sorting  239
   4.5  Bucket-Sort and Radix-Sort  241
   4.6  Comparison of Sorting Algorithms  244
   4.7  Selection  245
   4.8  Java Example: In-Place Quick-Sort  248
   4.9  Exercises  251

5  Fundamental Techniques  257
   5.1  The Greedy Method  259
   5.2  Divide-and-Conquer  263
   5.3  Dynamic Programming  274
   5.4  Exercises  282

Part II  Graph Algorithms  285

6  Graphs  287
   6.1  The Graph Abstract Data Type  289
   6.2  Data Structures for Graphs  296
   6.3  Graph Traversal  303
   6.4  Directed Graphs  316
   6.5  Java Example: Depth-First Search  329
   6.6  Exercises  335

7  Weighted Graphs  339
   7.1  Single-Source Shortest Paths  341
   7.2  All-Pairs Shortest Paths  354
   7.3  Minimum Spanning Trees  360
   7.4  Java Example: Dijkstra's Algorithm  373
   7.5  Exercises  376

8  Network Flow and Matching  381
   8.1  Flows and Cuts  383
   8.2  Maximum Flow  387
   8.3  Maximum Bipartite Matching  396
   8.4  Minimum-Cost Flow  398
   8.5  Java Example: Minimum-Cost Flow  405
   8.6  Exercises  412

Part III  Internet Algorithmics  415

9  Text Processing  417
   9.1  Strings and Pattern Matching Algorithms  419
   9.2  Tries  429
   9.3  Text Compression
   9.4  Text Similarity Testing  443
   9.5  Exercises

10  Number Theory and Cryptography  451
   10.1  Fundamental Algorithms Involving Numbers  453
   10.2  Cryptographic Computations  471
   10.3  Information Security Algorithms and Protocols  481
   10.4  The Fast Fourier Transform  488
   10.5  Java Example: FFT  500
   10.6  Exercises  508

11  Network Algorithms  511
   11.1  Complexity Measures and Models
   11.2  Fundamental Distributed Algorithms  517
   11.3  Broadcast and Unicast Routing  530
   11.4  Multicast Routing  535
   11.5  Exercises  541

Part IV  Additional Topics  545

12  Computational Geometry  547
   12.1  Range Trees  549
   12.2  Priority Search Trees
   12.3  Quadtrees and k-D Trees
   12.4  The Plane Sweep Technique
   12.5  Convex Hulls
   12.6  Java Example: Convex Hull
   12.7  Exercises  587

13  NP-Completeness  591
   13.1  P and NP  593
   13.2  NP-Completeness  599
   13.3  Important NP-Complete Problems  603
   13.4  Approximation Algorithms  618
   13.5  Backtracking and Branch-and-Bound  627
   13.6  Exercises  638

14  Algorithmic Frameworks  643
   14.1  External-Memory Algorithms  645
   14.2  Parallel Algorithms  657
   14.3  Online Algorithms  667
   14.4  Exercises  680

A  Useful Mathematical Facts  685

Bibliography  689
Index  698

Acknowledgments

There are a number of individuals who have helped us with the contents of this
book. Specifically, we thank Jeff Achter, Ryan Baker, Devin Borland, Ulrik
Brandes, Stina Bridgeman, Robert Cohen, David Emory, David Ginat, Natasha
Gelfand, Mark Handy, Benoît Hudson, Jeremy Mullendore, Daniel Polivy, John
Schultz, Andrew Schwerin, Michael Shin, Galina Shubina, and Luca Vismara.

We are grateful to all our former teaching assistants who helped us in
developing exercises, programming assignments, and algorithm animation
systems. There have been a number of friends and colleagues whose comments
have led to improvements in the text. We are particularly thankful to Karen
Goodrich, Art Moorshead, and Scott Smith for their insightful comments. We are
also truly indebted to the anonymous outside reviewers for their detailed
comments and constructive criticism, which were extremely useful.

We are grateful to our editors, Paul Crockett and Bill Zobrist, for their
enthusiastic support of this project. The production team at Wiley has been
great. Many thanks go to people who helped us with the book development,
including Susannah Barr, Katherine Hepburn, Bonnie Kubat, Sharon Prendergast,
Marc Ranger, Jeri Warner, and Jennifer Welter.

This manuscript was prepared primarily with LaTeX for the text and Adobe
FrameMaker® and Visio® for the figures. The LGrind system was used to format
Java code fragments into LaTeX. The CVS version control system enabled smooth
coordination of our (sometimes concurrent) file editing.

Finally, we would like to warmly thank Isabel Cruz, Karen Goodrich, Giuseppe
Di Battista, Franco Preparata, Ioannis Tollis, and our parents for providing
advice, encouragement, and support at various stages of the preparation of
this book. We also thank them for reminding us that there are things in life
beyond writing books.

Michael T. Goodrich
Roberto Tamassia
Part I
Fundamental Tools

Chapter 1
Algorithm Analysis

Contents
1.1  Methodologies for Analyzing Algorithms  5
     1.1.1  Pseudo-Code  7
     1.1.2  The Random Access Machine (RAM) Model  9
     1.1.3  Counting Primitive Operations  10
     1.1.4  Analyzing Recursive Algorithms  12
1.2  Asymptotic Notation  13
     1.2.1  The "Big-Oh" Notation  13
     1.2.2  "Relatives" of the Big-Oh  16
     1.2.3  The Importance of Asymptotics  19
1.3  A Quick Mathematical Review  21
     1.3.1  Summations  21
     1.3.2  Logarithms and Exponents  23
     1.3.3  Simple Justification Techniques  24
     1.3.4  Basic Probability  28
1.4  Case Studies in Algorithm Analysis  31
     1.4.1  A Quadratic-Time Prefix Averages Algorithm  32
     1.4.2  A Linear-Time Prefix Averages Algorithm  33
1.5  Amortization  34
     1.5.1  Amortization Techniques  36
     1.5.2  Analyzing an Extendable Array Implementation  39
1.6  Experimentation  42
     1.6.1  Experimental Setup  42
     1.6.2  Data Analysis and Visualization  45
1.7  Exercises  47


Chapter 1. Algorithm Analysis

In a classic story, the famous mathematician Archimedes was asked to determine
if a golden crown commissioned by the king was indeed pure gold, and not part
silver, as an informant had claimed. Archimedes discovered a way to determine
this while stepping into a (Greek) bath. He noted that water spilled out of
the bath in proportion to the amount of him that went in. Realizing the
implications of this fact, he immediately got out of the bath and ran naked
through the city shouting, "Eureka, eureka!," for he had discovered an
analysis tool (displacement), which, when combined with a simple scale, could
determine if the king's new crown was good or not. This discovery was
unfortunate for the goldsmith, however, for when Archimedes did his analysis,
the crown displaced more water than an equal-weight lump of pure gold,
indicating that the crown was not, in fact, pure gold.

In this book, we are interested in the design of "good" algorithms and data
structures. Simply put, an algorithm is a step-by-step procedure for
performing some task in a finite amount of time, and a data structure is a
systematic way of organizing and accessing data. These concepts are central to
computing, but to be able to classify some algorithms and data structures as
"good," we must have precise ways of analyzing them.

The primary analysis tool we will use in this book involves characterizing the
running times of algorithms and data structure operations, with space usage
also being of interest. Running time is a natural measure of "goodness," since
time is a precious resource. But focusing on running time as a primary measure
of goodness implies that we will need to use at least a little mathematics to
describe running times and compare algorithms.

We begin this chapter by describing the basic framework needed for analyzing
algorithms, which includes the language for describing algorithms, the
computational model that language is intended for, and the main factors we
count when considering running time. We also include a brief discussion of how
recursive algorithms are analyzed. In Section 1.2, we present the main
notation we use to characterize running times, the so-called "big-Oh"
notation. These tools comprise the main theoretical tools for designing and
analyzing algorithms.

In Section 1.3, we take a short break from our development of the framework
for algorithm analysis to review some important mathematical facts, including
discussions of summations, logarithms, proof techniques, and basic
probability. Given this background and our notation for algorithm analysis, we
present some case studies on theoretical algorithm analysis in Section 1.4. We
follow these examples in Section 1.5 by presenting an interesting analysis
technique, known as amortization, which allows us to account for the group
behavior of many individual operations. Finally, in Section 1.6, we conclude
the chapter by discussing an important and practical analysis technique:
experimentation. We discuss both the main principles of a good experimental
framework as well as techniques for summarizing and characterizing data from
an experimental analysis.

1.1 Methodologies for Analyzing Algorithms


The running time of an algorithm or data structure operation typically depends
on a number of factors, so what should be the proper way of measuring it? If
an algorithm has been implemented, we can study its running time by executing
it on various test inputs and recording the actual time spent in each
execution. Such measurements can be taken in an accurate manner by using
system calls that are built into the language or operating system for which
the algorithm is written. In general, we are interested in determining the
dependency of the running time on the size of the input. In order to determine
this, we can perform several experiments on many different test inputs of
various sizes. We can then visualize the results of such experiments by
plotting the performance of each run of the algorithm as a point with
x-coordinate equal to the input size, n, and y-coordinate equal to the running
time, t. (See Figure 1.1.) To be meaningful, this analysis requires that we
choose good sample inputs and test enough of them to be able to make sound
statistical claims about the algorithm, which is an approach we discuss in
more detail in Section 1.6.
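The measurement loop described above can be sketched in a few lines of Java.
This is a hypothetical harness, not code from this book: `System.nanoTime` is
the built-in timer system call mentioned above, and `runAlgorithm` is a
stand-in for whatever algorithm is being measured (here, a simple array scan).

```java
import java.util.Random;

public class TimingHarness {
    // Stand-in for the algorithm under study: a simple linear scan.
    static long runAlgorithm(int[] input) {
        long sum = 0;
        for (int x : input) sum += x;
        return sum;
    }

    public static void main(String[] args) {
        Random rand = new Random(42); // fixed seed for repeatable inputs
        // Try a range of input sizes n and print one (n, t) data point each,
        // suitable for plotting as in Figure 1.1.
        for (int n = 10_000; n <= 100_000; n += 10_000) {
            int[] input = rand.ints(n).toArray();
            long start = System.nanoTime();
            runAlgorithm(input);
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(n + " " + elapsedMs);
        }
    }
}
```

In practice one would run each size several times and average, since a single
measurement is noisy; that refinement is omitted here for brevity.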

In general, the running time of an algorithm or data structure method
increases with the input size, although it may also vary for distinct inputs
of the same size. Also, the running time is affected by the hardware
environment (processor, clock rate, memory, disk, etc.) and software
environment (operating system, programming language, compiler, interpreter,
etc.) in which the algorithm is implemented, compiled, and executed. All other
factors being equal, the running time of the same algorithm on the same input
data will be smaller if the computer has, say, a much faster processor or if
the implementation is done in a program compiled into native machine code
instead of an interpreted implementation run on a virtual machine.
[Figure 1.1 shows two scatter plots of running time t (ms) versus input size
n, with n up to 100 and t up to 60 ms.]

Figure 1.1: Results of an experimental study on the running time of an
algorithm. A dot with coordinates (n, t) indicates that on an input of size n,
the running time of the algorithm is t milliseconds (ms). (a) The algorithm
executed on a fast computer; (b) the algorithm executed on a slow computer.

Requirements for a General Analysis Methodology


Experimental studies on running times are useful, as we explore in Section
1.6, but they have some limitations:

* Experiments can be done only on a limited set of test inputs, and care must
  be taken to make sure these are representative.
* It is difficult to compare the efficiency of two algorithms unless
  experiments on their running times have been performed in the same hardware
  and software environments.
* It is necessary to implement and execute an algorithm in order to study its
  running time experimentally.

Thus, while experimentation has an important role to play in algorithm
analysis, it alone is not sufficient. Therefore, in addition to
experimentation, we desire an analytic framework that:

* Takes into account all possible inputs
* Allows us to evaluate the relative efficiency of any two algorithms in a way
  that is independent from the hardware and software environment
* Can be performed by studying a high-level description of the algorithm
  without actually implementing it or running experiments on it.

This methodology aims at associating with each algorithm a function f(n) that
characterizes the running time of the algorithm in terms of the input size n.
Typical functions that will be encountered include n and n². For example, we
will write statements of the type "Algorithm A runs in time proportional to
n," meaning that if we were to perform experiments, we would find that the
actual running time of algorithm A on any input of size n never exceeds cn,
where c is a constant that depends on the hardware and software environment
used in the experiment. Given two algorithms A and B, where A runs in time
proportional to n and B runs in time proportional to n², we will prefer A to
B, since the function n grows at a smaller rate than the function n².
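The preference for A over B can be made concrete with a small sketch (the
constants here are invented for illustration): even if A's constant c is 100
times larger than B's, the cost c·n eventually falls below c'·n² once n
exceeds the ratio of the constants.

```java
public class GrowthDemo {
    public static void main(String[] args) {
        long cA = 100; // assumed constant for A, which runs in cA * n
        long cB = 1;   // assumed constant for B, which runs in cB * n^2
        for (int n = 1; n <= 1000; n *= 10) {
            long costA = cA * n;
            long costB = cB * (long) n * n;
            System.out.println("n=" + n + "  A: " + costA + "  B: " + costB);
        }
        // Despite A's 100x larger constant, A is cheaper for every n > 100.
    }
}
```

This is exactly why asymptotic growth rate, not the constant factor, drives
the comparison between algorithms.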
We are now ready to "roll up our sleeves" and start developing our methodology
for algorithm analysis. There are several components to this methodology,
including the following:

* A language for describing algorithms
* A computational model that algorithms execute within
* A metric for measuring algorithm running time
* An approach for characterizing running times, including those for recursive
  algorithms.

We describe these components in more detail in the remainder of this section.



1.1.1 Pseudo-Code

Programmers are often asked to describe algorithms in a way that is intended
for human eyes only. Such descriptions are not computer programs, but are more
structured than usual prose. They also facilitate the high-level analysis of a
data structure or algorithm. We call these descriptions pseudo-code.

An Example of Pseudo-Code

The array-maximum problem is the simple problem of finding the maximum
element in an array A storing n integers. To solve this problem, we can use an
algorithm called arrayMax, which scans through the elements of A using a for
loop. The pseudo-code description of algorithm arrayMax is shown in Algorithm
1.2.

Algorithm arrayMax(A, n):
    Input: An array A storing n ≥ 1 integers.
    Output: The maximum element in A.
    currentMax ← A[0]
    for i ← 1 to n − 1 do
        if currentMax < A[i] then
            currentMax ← A[i]
    return currentMax

Algorithm 1.2: Algorithm arrayMax.

Note that the pseudo-code is more compact than an equivalent actual software
code fragment would be. In addition, the pseudo-code is easier to read and under-
stand.

Using Pseudo-Code to Prove Algorithm Correctness

By inspecting the pseudo-code, we can argue about the correctness of algorithm
arrayMax with a simple argument. Variable currentMax starts out being equal to
the first element of A. We claim that at the beginning of the ith iteration of the loop,
currentMax is equal to the maximum of the first i elements in A. Since we compare
currentMax to A[i] in iteration i, if this claim is true before this iteration, it will be
true after it for i + 1 (which is the next value of counter i). Thus, after n - 1 itera-
tions, currentMax will equal the maximum element in A. As with this example, we
want our pseudo-code descriptions to always be detailed enough to fully justify the
correctness of the algorithm they describe, while being simple enough for human
readers to understand.
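The invariant argument above translates directly into code. The following Python sketch (the function name and the assert statement are our own additions, not part of the book's pseudo-code) checks the claimed invariant at the top of every iteration:

```python
def array_max(A):
    """Return the maximum element of a nonempty list A (arrayMax of Algorithm 1.2)."""
    current_max = A[0]
    for i in range(1, len(A)):
        # Loop invariant: current_max equals the maximum of the first i elements.
        assert current_max == max(A[:i])
        if current_max < A[i]:
            current_max = A[i]
    return current_max
```

Removing the assert leaves exactly the algorithm of the pseudo-code; keeping it makes the correctness claim executable on any test input.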
Chapter 1. Algorithm Analysis

What Is Pseudo-Code?

Pseudo-code is a mixture of natural language and high-level programming con-
structs that describe the main ideas behind a generic implementation of a data
structure or algorithm. There really is no precise definition of the pseudo-code lan-
guage, however, because of its reliance on natural language. At the same time, to
help achieve clarity, pseudo-code mixes natural language with standard program-
ming language constructs. The programming language constructs we choose are
those consistent with modern high-level languages such as C, C++, and Java. These
constructs include the following:

- Expressions: We use standard mathematical symbols to express numeric and Boolean expressions. We use the left arrow sign (<-) as the assignment operator in assignment statements (equivalent to the = operator in C, C++, and Java) and we use the equal sign (=) as the equality relation in Boolean expressions (equivalent to the "==" relation in C, C++, and Java).
- Method declarations: Algorithm name(param1, param2, ...) declares a new method "name" and its parameters.
- Decision structures: if condition then true-actions [else false-actions]. We use indentation to indicate what actions should be included in the true-actions and false-actions.
- While-loops: while condition do actions. We use indentation to indicate what actions should be included in the loop actions.
- Repeat-loops: repeat actions until condition. We use indentation to indicate what actions should be included in the loop actions.
- For-loops: for variable-increment-definition do actions. We use indentation to indicate what actions should be included among the loop actions.
- Array indexing: A[i] represents the ith cell in the array A. The cells of an n-celled array A are indexed from A[0] to A[n - 1] (consistent with C, C++, and Java).
- Method calls: object.method(args) (object is optional if it is understood).
- Method returns: return value. This operation returns the value specified to the method that calls this one.

When we write pseudo-code, we must keep in mind that we are writing for a
human reader, not a computer. Thus, we should strive to communicate high-level
ideas, not low-level implementation details. At the same time, we should not gloss
over important steps. Like many forms of human communication, finding the right
balance is an important skill that is refined through practice.
Now that we have developed a high-level way of describing algorithms, let us
next discuss how we can analytically characterize algorithms written in pseudo-
code.

1.1.2 The Random Access Machine (RAM) Model


As we noted above, experimental analysis is valuable, but it has its limitations. If
we wish to analyze a particular algorithm without performing experiments on its
running time, we can take the following more analytic approach directly on the
high-level code or pseudo-code. We define a set of high-level primitive operations
that are largely independent from the programming language used and can be iden-
tified also in the pseudo-code. Primitive operations include the following:

- Assigning a value to a variable.
- Calling a method.
- Performing an arithmetic operation (for example, adding two numbers).
- Comparing two numbers.
- Indexing into an array.
- Following an object reference.
- Returning from a method.

Specifically, a primitive operation corresponds to a low-level instruction with an
execution time that depends on the hardware and software environment but is, nev-
ertheless, constant. Instead of trying to determine the specific execution time of
each primitive operation, we will simply count how many primitive operations are
executed, and use this number t as a high-level estimate of the running time of the
algorithm. This operation count will correlate to an actual running time in a spe-
cific hardware and software environment, for each primitive operation corresponds
to a constant-time instruction, and there are only a fixed number of primitive opera-
tions. The implicit assumption in this approach is that the running times of different
primitive operations will be fairly similar. Thus, the number, t, of primitive opera-
tions an algorithm performs will be proportional to the actual running time of that
algorithm.

RAM Machine Model Definition


This approach of simply counting primitive operations gives rise to a computational
model called the Random Access Machine (RAM). This model, which should not
be confused with "random access memory," views a computer simply as a CPU
connected to a bank of memory cells. Each memory cell stores a word, which can
be a number, a character string, or an address, that is, the value of a base type. The
term "random access" refers to the ability of the CPU to access an arbitrary memory
cell with one primitive operation. To keep the model simple, we do not place
any specific limits on the size of numbers that can be stored in words of memory.
We assume the CPU in the RAM model can perform any primitive operation in
a constant number of steps, which do not depend on the size of the input. Thus,
an accurate bound on the number of primitive operations an algorithm performs
corresponds directly to the running time of that algorithm in the RAM model.

1.1.3 Counting Primitive Operations


We now show how to count the number of primitive operations executed by an al-
gorithm, using as an example algorithm arrayMax, whose pseudo-code was given
back in Algorithm 1.2. We do this analysis by focusing on each step of the algo-
rithm and counting the primitive operations that it takes, taking into consideration
that some operations are repeated, because they are enclosed in the body of a loop.

- Initializing the variable currentMax to A[0] corresponds to two primitive operations (indexing into an array and assigning a value to a variable) and is executed only once at the beginning of the algorithm. Thus, it contributes two units to the count.
- At the beginning of the for loop, counter i is initialized to 1. This action corresponds to executing one primitive operation (assigning a value to a variable).
- Before entering the body of the for loop, condition i < n is verified. This action corresponds to executing one primitive instruction (comparing two numbers). Since counter i starts at 1 and is incremented by 1 at the end of each iteration of the loop, the comparison i < n is performed n times. Thus, it contributes n units to the count.
- The body of the for loop is executed n - 1 times (for values 1, 2, ..., n - 1 of the counter). At each iteration, A[i] is compared with currentMax (two primitive operations, indexing and comparing), A[i] is possibly assigned to currentMax (two primitive operations, indexing and assigning), and the counter i is incremented (two primitive operations, summing and assigning). Hence, at each iteration of the loop, either four or six primitive operations are performed, depending on whether A[i] <= currentMax or A[i] > currentMax. Therefore, the body of the loop contributes between 4(n - 1) and 6(n - 1) units to the count.
- Returning the value of variable currentMax corresponds to one primitive operation, and is executed only once.

To summarize, the number of primitive operations t(n) executed by algorithm ar-
rayMax is at least

    2 + 1 + n + 4(n - 1) + 1 = 5n

and at most

    2 + 1 + n + 6(n - 1) + 1 = 7n - 2.

The best case (t(n) = 5n) occurs when A[0] is the maximum element, so that vari-
able currentMax is never reassigned. The worst case (t(n) = 7n - 2) occurs when
the elements are sorted in increasing order, so that variable currentMax is reas-
signed at each iteration of the for loop.

Average-Case and Worst-Case Analysis

Like the arrayMax method, an algorithm may run faster on some inputs than it does
on others. In such cases we may wish to express the running time of such an algo-
rithm as an average taken over all possible inputs. Although such an average-case
analysis would often be valuable, it is typically quite challenging. It requires us to
define a probability distribution on the set of inputs, which is typically a difficult
task. Figure 1.3 schematically shows how, depending on the input distribution, the
running time of an algorithm can be anywhere between the worst-case time and the
best-case time. For example, what if inputs are really only of types "A" or "D"?
An average-case analysis also typically requires that we calculate expected run-
ning times based on a given input distribution. Such an analysis often requires
heavy mathematics and probability theory.
Therefore, except for experimental studies or the analysis of algorithms that are
themselves randomized, we will, for the remainder of this book, typically charac-
terize running times in terms of the worst case. We say, for example, that algorithm
arrayMax executes t(n) = 7n - 2 primitive operations in the worst case, meaning
that the maximum number of primitive operations executed by the algorithm, taken
over all inputs of size n, is 7n - 2.
This type of analysis is much easier than an average-case analysis, as it does
not require probability theory; it just requires the ability to identify the worst-case
input, which is often straightforward. In addition, taking a worst-case approach can
actually lead to better algorithms. Making the standard of success that of having an
algorithm perform well in the worst case necessarily requires that it perform well on
every input. That is, designing for the worst case can lead to stronger algorithmic
"muscles," much like a track star who always practices by running up hill.

[Bar chart omitted: running times over input instances A through D, with dashed horizontal lines marking the worst-case time, an average-case time, and the best-case time.]

Figure 1.3: The difference between best-case and worst-case time. Each bar repre-
sents the running time of some algorithm on a different possible input.

1.1.4 Analyzing Recursive Algorithms

Iteration is not the only interesting way of solving a problem. Another useful tech-
nique, which is employed by many algorithms, is to use recursion. In this tech-
nique, we define a procedure P that is allowed to make calls to itself as a subrou-
tine, provided those calls to P are for solving subproblems of smaller size. The
subroutine calls to P on smaller instances are called "recursive calls." A recur-
sive procedure should always define a base case, which is small enough that the
algorithm can solve it directly without using recursion.
We give a recursive solution to the array maximum problem in Algorithm 1.4.
This algorithm first checks if the array contains just a single item, which in this case
must be the maximum; hence, in this simple base case we can immediately solve
the problem. Otherwise, the algorithm recursively computes the maximum of the
first n - 1 elements in the array and then returns the maximum of this value and the
last element in the array.
As with this example, recursive algorithms are often quite elegant. Analyzing
the running time of a recursive algorithm takes a bit of additional work, however.
In particular, to analyze such a running time, we use a recurrence equation, which
defines mathematical statements that the running time of a recursive algorithm must
satisfy. We introduce a function T(n) that denotes the running time of the algorithm
on an input of size n, and we write equations that T(n) must satisfy. For example,
we can characterize the running time, T(n), of the recursiveMax algorithm as

    T(n) = 3              if n = 1
    T(n) = T(n - 1) + 7   otherwise,

assuming that we count each comparison, array reference, recursive call, max cal-
culation, or return as a single primitive operation. Ideally, we would like to char-
acterize a recurrence equation like that above in closed form, where no references
to the function T appear on the righthand side. For the recursiveMax algorithm,
it isn't too hard to see that a closed form would be T(n) = 7(n - 1) + 3 = 7n - 4.
In general, determining closed form solutions to recurrence equations can be much
more challenging than this, and we study some specific examples of recurrence
equations in Chapter 4, when we study some sorting and selection algorithms. We
study methods for solving recurrence equations of a general form in Section 5.2.

Algorithm recursiveMax(A, n):
    Input: An array A storing n >= 1 integers.
    Output: The maximum element in A.
    if n = 1 then
        return A[0]
    return max{recursiveMax(A, n - 1), A[n - 1]}

Algorithm 1.4: Algorithm recursiveMax.
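Algorithm 1.4 translates almost line for line into Python. The function below is our own transcription, not the book's:

```python
def recursive_max(A, n):
    """Return the maximum of the first n >= 1 elements of A (recursiveMax)."""
    if n == 1:
        return A[0]                              # base case: a single element
    # Recur on the first n - 1 elements, then fold in the last one.
    return max(recursive_max(A, n - 1), A[n - 1])
```

Note that each call spawns exactly one recursive call on an input one smaller, which is what produces the recurrence T(n) = T(n - 1) + 7 discussed above (and, in Python, also means very large n would hit the interpreter's recursion limit).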

1.2 Asymptotic Notation


We have clearly gone into laborious detail for evaluating the running time of such
a simple algorithm as arrayMax and its recursive cousin, recursiveMax. Such an
approach would clearly prove cumbersome if we had to perform it for more compli-
cated algorithms. In general, each step in a pseudo-code description and each state-
ment in a high-level language implementation corresponds to a small number of
primitive operations that does not depend on the input size. Thus, we can perform
a simplified analysis that estimates the number of primitive operations executed up
to a constant factor, by counting the steps of the pseudo-code or the statements of
the high-level language executed. Fortunately, there is a notation that allows us to
characterize the main factors affecting an algorithm's running time without going
into all the details of exactly how many primitive operations are performed for each
constant-time set of instructions.

1.2.1 The "Big-Oh" Notation

Let f(n) and g(n) be functions mapping nonnegative integers to real numbers. We
say that f(n) is O(g(n)) if there is a real constant c > 0 and an integer constant
n0 >= 1 such that f(n) <= c·g(n) for every integer n >= n0. This definition is often
referred to as the "big-Oh" notation, for it is sometimes pronounced as "f(n) is big-
Oh of g(n)." Alternatively, we can also say "f(n) is order g(n)." (This definition
is illustrated in Figure 1.5.)

[Plot omitted: f(n) and c·g(n) versus the input size n, with f(n) falling below c·g(n) for all n >= n0.]

Figure 1.5: Illustrating the "big-Oh" notation. The function f(n) is O(g(n)), for
f(n) <= c·g(n) when n >= n0.

Example 1.1: 7n - 2 is O(n).

Proof: By the big-Oh definition, we need to find a real constant c > 0 and an
integer constant n0 >= 1 such that 7n - 2 <= cn for every integer n >= n0. It is easy to
see that a possible choice is c = 7 and n0 = 1. Indeed, this is one of infinitely many
choices available because any real number greater than or equal to 7 will work for
c, and any integer greater than or equal to 1 will work for n0.
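The witness pair in Example 1.1 can be spot-checked mechanically. This small Python check is our own addition; it cannot replace the proof (which covers all n), but it confirms the inequality over a large range:

```python
# Witness constants chosen in Example 1.1.
c, n0 = 7, 1

# 7n - 2 <= c*n must hold for every n >= n0; check a large prefix of the integers.
assert all(7 * n - 2 <= c * n for n in range(n0, 10**6))

# Any real c >= 7 also works, e.g. c = 8 (with the same n0):
assert all(7 * n - 2 <= 8 * n for n in range(1, 10**6))
```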

The big-Oh notation allows us to say that a function of n is "less than or equal
to" another function (by the inequality "<=" in the definition), up to a constant factor
(by the constant c in the definition) and in the asymptotic sense as n grows toward
infinity (by the statement "n >= n0" in the definition).
The big-Oh notation is used widely to characterize running times and space
bounds in terms of some parameter n, which varies from problem to problem, but
is usually defined as an intuitive notion of the "size" of the problem. For example, if
we are interested in finding the largest element in an array of integers (see arrayMax
given in Algorithm 1.2), it would be most natural to let n denote the number of
elements of the array. For example, we can write the following precise statement
on the running time of algorithm arrayMax from Algorithm 1.2.

Theorem 1.2: The running time of algorithm arrayMax for computing the maxi-
mum element in an array of n integers is O(n).

Proof: As shown in Section 1.1.3, the number of primitive operations executed
by algorithm arrayMax is at most 7n - 2. We may therefore apply the big-Oh
definition with c = 7 and n0 = 1 and conclude that the running time of algorithm
arrayMax is O(n).

Let us consider a few additional examples that illustrate the big-Oh notation.

Example 1.3: 20n^3 + 10n log n + 5 is O(n^3).

Proof: 20n^3 + 10n log n + 5 <= 35n^3, for n >= 1.

In fact, any polynomial a_k n^k + a_{k-1} n^{k-1} + ... + a_0 will always be O(n^k).

Example 1.4: 3 log n + log log n is O(log n).

Proof: 3 log n + log log n <= 4 log n, for n >= 2. Note that log log n is not even
defined for n = 1. That is why we use n >= 2.

Example 1.5: 2^100 is O(1).

Proof: 2^100 <= 2^100 · 1, for n >= 1. Note that variable n does not appear in the
inequality, since we are dealing with constant-valued functions.

Example 1.6: 5/n is O(1/n).

Proof: 5/n <= 5(1/n), for n >= 1 (even though this is actually a decreasing func-
tion).
In general, we should use the big-Oh notation to characterize a function as
closely as possible. While it is true that f(n) = 4n^3 + 3n^{4/3} is O(n^5), it is more
accurate to say that f(n) is O(n^3). Consider, by way of analogy, a scenario where
a hungry traveler driving along a long country road happens upon a local farmer
walking home from a market. If the traveler asks the farmer how much longer he
must drive before he can find some food, it may be truthful for the farmer to say,
"certainly no longer than 12 hours," but it is much more accurate (and helpful) for
him to say, "you can find a market just a few minutes' drive up this road."
Instead of always applying the big-Oh definition directly to obtain a big-Oh
characterization, we can use the following rules to simplify notation.
Theorem 1.7: Let d(n), e(n), f(n), and g(n) be functions mapping nonnegative
integers to nonnegative reals. Then

1. If d(n) is O(f(n)), then ad(n) is O(f(n)), for any constant a > 0.
2. If d(n) is O(f(n)) and e(n) is O(g(n)), then d(n) + e(n) is O(f(n) + g(n)).
3. If d(n) is O(f(n)) and e(n) is O(g(n)), then d(n)e(n) is O(f(n)g(n)).
4. If d(n) is O(f(n)) and f(n) is O(g(n)), then d(n) is O(g(n)).
5. If f(n) is a polynomial of degree d (that is, f(n) = a_0 + a_1 n + ... + a_d n^d), then f(n) is O(n^d).
6. n^x is O(a^n) for any fixed x > 0 and a > 1.
7. log n^x is O(log n) for any fixed x > 0.
8. log^x n is O(n^y) for any fixed constants x > 0 and y > 0.
It is considered poor taste to include constant factors and lower order terms in
the big-Oh notation. For example, it is not fashionable to say that the function 2n^2
is O(4n^2 + 6n log n), although this is completely correct. We should strive instead
to describe the function in the big-Oh in simplest terms.

Example 1.8: 2n^3 + 4n^2 log n is O(n^3).

Proof: We can apply the rules of Theorem 1.7 as follows:

- log n is O(n) (Rule 8).
- 4n^2 log n is O(4n^3) (Rule 3).
- 2n^3 + 4n^2 log n is O(2n^3 + 4n^3) (Rule 2).
- 2n^3 + 4n^3 is O(n^3) (Rule 5 or Rule 1).
- 2n^3 + 4n^2 log n is O(n^3) (Rule 4).

Some functions appear often in the analysis of algorithms and data structures,
and we often use special terms to refer to them. Table 1.6 shows some terms com-
monly used in algorithm analysis.

    logarithmic | linear | quadratic | polynomial       | exponential
    O(log n)    | O(n)   | O(n^2)    | O(n^k) (k >= 1)  | O(a^n) (a > 1)

Table 1.6: Terminology for classes of functions.

Using the Big-Oh Notation

It is considered poor taste, in general, to say "f(n) <= O(g(n))," since the big-Oh
already denotes the "less-than-or-equal-to" concept. Likewise, although common,
it is not completely correct to say "f(n) = O(g(n))" (with the usual understanding
of the "=" relation), and it is actually incorrect to say "f(n) >= O(g(n))" or "f(n) >
O(g(n))." It is best to say "f(n) is O(g(n))." For the more mathematically inclined,
it is also correct to say,

    "f(n) ∈ O(g(n)),"

for the big-Oh notation is, technically speaking, denoting a whole collection of
functions.
Even with this interpretation, there is considerable freedom in how we can use
arithmetic operations with the big-Oh notation, provided the connection to the def-
inition of the big-Oh is clear. For instance, we can say,

    "f(n) is g(n) + O(h(n)),"

which would mean that there are constants c > 0 and n0 >= 1 such that f(n) <=
g(n) + c·h(n) for n >= n0. As in this example, we may sometimes wish to give the
exact leading term in an asymptotic characterization. In that case, we would say
that "f(n) is g(n) + O(h(n))," where h(n) grows slower than g(n). For example,
we could say that 2n log n + 4n + 10√n is 2n log n + O(n).

1.2.2 "Relatives" of the Big-Oh

Just as the big-Oh notation provides an asymptotic way of saying that a function
is "less than or equal to" another function, there are other notations that provide
asymptotic ways of making other types of comparisons.

Big-Omega and Big-Theta

Let f(n) and g(n) be functions mapping integers to real numbers. We say that f(n)
is Ω(g(n)) (pronounced "f(n) is big-Omega of g(n)") if g(n) is O(f(n)); that is,
there is a real constant c > 0 and an integer constant n0 >= 1 such that f(n) >= c·g(n),
for n >= n0. This definition allows us to say asymptotically that one function is
greater than or equal to another, up to a constant factor. Likewise, we say that f(n)
is Θ(g(n)) (pronounced "f(n) is big-Theta of g(n)") if f(n) is O(g(n)) and f(n) is
Ω(g(n)); that is, there are real constants c' > 0 and c'' > 0, and an integer constant
n0 >= 1 such that c'g(n) <= f(n) <= c''g(n), for n >= n0.
The big-Theta allows us to say that two functions are asymptotically equal, up
to a constant factor. We consider some examples of these notations below.

Example 1.9: 3 log n + log log n is Ω(log n).

Proof: 3 log n + log log n >= 3 log n, for n >= 2.

This example shows that lower order terms are not dominant in establishing
lower bounds with the big-Omega notation. Thus, as the next example sums up,
lower order terms are not dominant in the big-Theta notation either.

Example 1.10: 3 log n + log log n is Θ(log n).

Proof: This follows from Examples 1.4 and 1.9.

Some Words of Caution

A few words of caution about asymptotic notation are in order at this point. First,
note that the use of the big-Oh and related notations can be somewhat misleading
should the constant factors they "hide" be very large. For example, while it is true
that the function 10^100 n is O(n), if this is the running time of an algorithm being
compared to one whose running time is 10n log n, we should prefer the Θ(n log n)
time algorithm, even though the linear-time algorithm is asymptotically faster. This
preference is because the constant factor, 10^100, which is called "one googol," is
believed by many astronomers to be an upper bound on the number of atoms in
the observable universe. So we are unlikely to ever have a real-world problem that
has this number as its input size. Thus, even when using the big-Oh notation, we
should at least be somewhat mindful of the constant factors and lower order terms
we are "hiding."
The above observation raises the issue of what constitutes a "fast" algorithm.
Generally speaking, any algorithm running in O(n log n) time (with a reasonable
constant factor) should be considered efficient. Even an O(n^2) time method may be
fast enough in some contexts, that is, when n is small. But an algorithm running in
O(2^n) time should never be considered efficient. This fact is illustrated by a famous
story about the inventor of the game of chess. He asked only that his king pay him
1 grain of rice for the first square on the board, 2 grains for the second, 4 grains
for the third, 8 for the fourth, and so on. But try to imagine the sight of 2^63 grains
stacked on the last square! In fact, this number cannot even be represented as a
standard long integer in most programming languages.
Therefore, if we must draw a line between efficient and inefficient algorithms,
it is natural to make this distinction be that between those algorithms running in
polynomial time and those requiring exponential time. That is, make the distinction
between algorithms with a running time that is O(n^k), for some constant k >= 1, and
those with a running time that is O(c^n), for some constant c > 1. Like so many
notions we have discussed in this section, this too should be taken with a "grain of
salt," for an algorithm running in O(n^100) time should probably not be considered
"efficient." Even so, the distinction between polynomial-time and exponential-time
algorithms is considered a robust measure of tractability.

"Distant Cousins" of the Big-Oh: Little-Oh and Little-Omega

There are also some ways of saying that one function is strictly less than or strictly
greater than another asymptotically, but these are not used as often as the big-Oh,
big-Omega, and big-Theta. Nevertheless, for the sake of completeness, we give
their definitions as well.
Let f(n) and g(n) be functions mapping integers to real numbers. We say that
f(n) is o(g(n)) (pronounced "f(n) is little-oh of g(n)") if, for any constant c > 0,
there is a constant n0 > 0 such that f(n) <= c·g(n) for n >= n0. Likewise, we say that
f(n) is ω(g(n)) (pronounced "f(n) is little-omega of g(n)") if g(n) is o(f(n)), that
is, if, for any constant c > 0, there is a constant n0 > 0 such that g(n) <= c·f(n) for
n >= n0. Intuitively, o(·) is analogous to "less than" in an asymptotic sense, and ω(·)
is analogous to "greater than" in an asymptotic sense.

Example 1.11: The function f(n) = 12n^2 + 6n is o(n^3) and ω(n).

Proof: Let us first show that f(n) is o(n^3). Let c > 0 be any constant. If we take
n0 = (12 + 6)/c, then, for n >= n0, we have

    cn^3 >= 12n^2 + 6n^2 >= 12n^2 + 6n.

Thus, f(n) is o(n^3).
To show that f(n) is ω(n), let c > 0 again be any constant. If we take n0 = c/12,
then, for n >= n0, we have

    12n^2 + 6n >= 12n^2 >= cn.

Thus, f(n) is ω(n).
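Numerically, the two claims of Example 1.11 show up as ratio behavior: f(n)/n^3 shrinks toward 0, while f(n)/n grows without bound. A quick check of ours:

```python
def f(n):
    """The function of Example 1.11."""
    return 12 * n**2 + 6 * n

# f(n)/n^3 -> 0: the ratio strictly decreases as n grows by factors of 10.
ratios_cubed = [f(n) / n**3 for n in (10, 100, 1000, 10000)]
assert all(a > b for a, b in zip(ratios_cubed, ratios_cubed[1:]))

# f(n)/n -> infinity: the ratio strictly increases over the same values of n.
ratios_linear = [f(n) / n for n in (10, 100, 1000, 10000)]
assert all(a < b for a, b in zip(ratios_linear, ratios_linear[1:]))
```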

For the reader familiar with limits, we note that f(n) is o(g(n)) if and only if

    lim_{n -> ∞} f(n)/g(n) = 0,

provided this limit exists. The main difference between the little-oh and big-Oh
notions is that f(n) is O(g(n)) if there exist constants c > 0 and n0 >= 1 such that
f(n) <= c·g(n), for n >= n0; whereas f(n) is o(g(n)) if for all constants c > 0 there is
a constant n0 such that f(n) <= c·g(n), for n >= n0. Intuitively, f(n) is o(g(n)) if f(n)
becomes insignificant compared to g(n) as n grows toward infinity. As previously
mentioned, asymptotic notation is useful because it allows us to concentrate on the
main factor determining a function's growth.
To summarize, the asymptotic notations of big-Oh, big-Omega, and big-Theta,
as well as little-oh and little-omega, provide a convenient language for us to analyze
data structures and algorithms. As mentioned earlier, these notations provide con-
venience because they let us concentrate on the "big picture" rather than low-level
details.

1.2.3 The Importance of Asymptotics


Asymptotic notation has many important benefits, which might not be immediately
obvious. Specifically, we illustrate one important aspect of the asymptotic view-
point in Table 1.7. This table explores the maximum size allowed for an input
instance for various running times to be solved in 1 second, 1 minute, and 1 hour,
assuming each operation can be processed in 1 microsecond (1 μs). It also shows
the importance of algorithm design, because an algorithm with an asymptotically
slow running time (for example, one that is O(n^2)) is beaten in the long run by
an algorithm with an asymptotically faster running time (for example, one that is
O(n log n)), even if the constant factor for the faster algorithm is worse.

    Running        Maximum Problem Size (n)
    Time           1 second    1 minute    1 hour
    400n           2,500       150,000     9,000,000
    20n⌈log n⌉     4,096       166,666     7,826,087
    2n^2           707         5,477       42,426
    n^4            31          88          244
    2^n            19          25          31

Table 1.7: Maximum size of a problem that can be solved in one second, one
minute, and one hour, for various running times measured in microseconds.
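The entries in Table 1.7 can be regenerated by searching for the largest n whose operation count fits within the time budget. The helper below is our own sketch (a doubling search followed by bisection), assuming 10^6 operations per second:

```python
def max_problem_size(time_fn, budget_ops):
    """Largest n with time_fn(n) <= budget_ops, via doubling then bisection."""
    n = 1
    while time_fn(2 * n) <= budget_ops:   # double until 2n overshoots the budget
        n *= 2
    lo, hi = n, 2 * n                      # the answer lies in [lo, hi)
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if time_fn(mid) <= budget_ops:
            lo = mid
        else:
            hi = mid
    return lo

one_second = 10**6                         # 1 microsecond per operation

# Reproduce three "1 second" entries of Table 1.7.
assert max_problem_size(lambda n: 400 * n, one_second) == 2500
assert max_problem_size(lambda n: 2 * n**2, one_second) == 707
assert max_problem_size(lambda n: 2**n, one_second) == 19
```

The same helper with a budget of 60 × 10^6 or 3600 × 10^6 reproduces the 1-minute and 1-hour columns for these rows.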

The importance of good algorithm design goes beyond just what can be solved
effectively on a given computer, however. As shown in Table 1.8, even if we
achieve a dramatic speedup in hardware, we still cannot overcome the handicap
of an asymptotically slow algorithm. This table shows the new maximum problem
size achievable for any fixed amount of time, assuming algorithms with the given
running times are now run on a computer 256 times faster than the previous one.

    Running        New Maximum
    Time           Problem Size
    400n           256m
    20n⌈log n⌉     approx. 256((log m)/(7 + log m))m
    2n^2           16m
    n^4            4m
    2^n            m + 8

Table 1.8: Increase in the maximum size of a problem that can be solved in a certain
fixed amount of time, by using a computer that is 256 times faster than the previous
one, for various running times of the algorithm. Each entry is given as a function
of m, the previous maximum problem size.

Ordering Functions by Their Growth Rates

Suppose two algorithms solving the same problem are available: an algorithm A,
which has a running time of Θ(n), and an algorithm B, which has a running time
of Θ(n^2). Which one is better? The little-oh notation says that n is o(n^2), which
implies that algorithm A is asymptotically better than algorithm B, although for a
given (small) value of n, it is possible for algorithm B to have lower running time
than algorithm A. Still, in the long run, as shown in the above tables, the benefits
of algorithm A over algorithm B will become clear.
In general, we can use the little-oh notation to order classes of functions by
asymptotic growth rate. In Table 1.9, we show a list of functions ordered by in-
creasing growth rate, that is, if a function f(n) precedes a function g(n) in the list,
then f(n) is o(g(n)).

    Functions Ordered by Growth Rate
    log n
    log^2 n
    √n
    n
    n log n
    n^2
    n^3
    2^n

Table 1.9: An ordered list of simple functions. Note that, using common terminol-
ogy, one of the above functions is logarithmic, two are polylogarithmic, three are
sublinear, one is linear, one is quadratic, one is cubic, and one is exponential.

In Table 1.10, we illustrate the difference in the growth rate of all but one of the
functions shown in Table 1.9.

    n       log n   √n     n       n log n   n^2          n^3              2^n
    2       1       1.4    2       2         4            8                4
    4       2       2      4       8         16           64               16
    8       3       2.8    8       24        64           512              256
    16      4       4      16      64        256          4,096            65,536
    32      5       5.7    32      160       1,024        32,768           4,294,967,296
    64      6       8      64      384       4,096        262,144          1.84 × 10^19
    128     7       11     128     896       16,384       2,097,152        3.40 × 10^38
    256     8       16     256     2,048     65,536       16,777,216       1.15 × 10^77
    512     9       23     512     4,608     262,144      134,217,728      1.34 × 10^154
    1,024   10      32     1,024   10,240    1,048,576    1,073,741,824    1.79 × 10^308

Table 1.10: Growth of several functions.



1.3 A Quick Mathematical Review


In this section, we briefly review some of the fundamental concepts from discrete
mathematics that will arise in several of our discussions. In addition to these fun-
damental concepts, Appendix A includes a list of other useful mathematical facts
that apply in the context of data structure and algorithm analysis.

1.3.1 Summations
A notation that appears again and again in the analysis of data structures and algo-
rithms is the summation, which is defined as

    Σ_{i=a}^{b} f(i) = f(a) + f(a+1) + f(a+2) + ... + f(b).

Summations arise in data structure and algorithm analysis because the running
times of loops naturally give rise to summations. For example, a summation that
often arises in data structure and algorithm analysis is the geometric summation.

Theorem 1.12: For any integer n ≥ 0 and any real number 0 < a ≠ 1, consider

    Σ_{i=0}^{n} a^i = 1 + a + a^2 + ... + a^n

(remembering that a^0 = 1 if a > 0). This summation is equal to

    (a^(n+1) - 1) / (a - 1).

Summations as shown in Theorem 1.12 are called geometric summations, be-
cause each term is geometrically larger than the previous one if a > 1. That is, the
terms in such a geometric summation exhibit exponential growth. For example,
everyone working in computing should know that

    1 + 2 + 4 + 8 + ... + 2^(n-1) = 2^n - 1,

for this is the largest integer that can be represented in binary notation using n bits.
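A quick numerical check of Theorem 1.12 (our sketch, not part of the text): the closed form is compared with term-by-term summation, including the special case a = 2 mentioned above.

```python
def geometric_sum(a, n):
    """Closed form of 1 + a + a^2 + ... + a^n for a != 1 (Theorem 1.12)."""
    return (a ** (n + 1) - 1) / (a - 1)

# Compare against direct term-by-term summation for a few values.
for a in [2, 3, 0.5]:
    for n in [0, 1, 5, 10]:
        direct = sum(a ** i for i in range(n + 1))
        assert abs(geometric_sum(a, n) - direct) < 1e-9

# The special case a = 2: 1 + 2 + 4 + ... + 2^(n-1) = 2^n - 1,
# the largest integer representable in binary using n bits.
assert sum(2 ** i for i in range(8)) == 2 ** 8 - 1
```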
Another summation that arises in several contexts is

    Σ_{i=1}^{n} i = 1 + 2 + 3 + ... + (n-2) + (n-1) + n.

This summation often arises in the analysis of loops in cases where the number of
operations performed inside the loop increases by a fixed, constant amount with
each iteration. This summation also has an interesting history. In 1787, a German
elementary schoolteacher decided to keep his 9- and 10-year-old pupils occupied
with the task of adding up all the numbers from 1 to 100. But almost immediately
after giving this assignment, one of the children claimed to have the answer: 5,050.

That elementary school student was none other than Karl Gauss, who would
grow up to be one of the greatest mathematicians of the 19th century. It is widely
suspected that young Gauss derived the answer to his teacher's assignment using
the following identity.

Theorem 1.13: For any integer n ≥ 1, we have

    Σ_{i=1}^{n} i = n(n+1)/2.

Proof: We give two "visual" justifications of Theorem 1.13 in Figure 1.11, both
of which are based on computing the area of a collection of rectangles representing
the numbers 1 through n. In Figure 1.11a we draw a big triangle over an ordering
of the rectangles, noting that the area of the rectangles is the same as that of the
big triangle (n^2/2) plus that of n small triangles, each of area 1/2. In Figure 1.11b,
which applies when n is even, we note that 1 plus n is n+1, as is 2 plus n-1, 3
plus n-2, and so on. There are n/2 such pairings.


Figure 1.11: Visual justifications of Theorem 1.13. Both illustrations visualize the
identity in terms of the total area covered by n unit-width rectangles with heights
1, 2, ..., n. In (a) the rectangles are shown to cover a big triangle of area n^2/2 (base
n and height n) plus n small triangles of area 1/2 each (base 1 and height 1). In
(b), which applies only when n is even, the rectangles are shown to cover a big
rectangle of base n/2 and height n + 1.
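Theorem 1.13 is easy to confirm by machine. This sketch (ours, not the book's) checks the closed form against a direct loop, including the n = 100 case from Gauss's classroom.

```python
def gauss_sum(n):
    """Closed form of 1 + 2 + ... + n (Theorem 1.13)."""
    return n * (n + 1) // 2

# Gauss's classroom assignment: the numbers 1 through 100.
assert gauss_sum(100) == 5050

# Agreement with direct summation for a range of n.
for n in range(1, 200):
    assert gauss_sum(n) == sum(range(1, n + 1))
```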

1.3.2 Logarithms and Exponents


One of the interesting and sometimes even surprising aspects of the analysis of data
structures and algorithms is the ubiquitous presence of logarithms and exponents,
where we say

    log_b a = c   if   a = b^c.

As is the custom in the computing literature, we omit writing the base b of the
logarithm when b = 2. For example, log 1024 = 10.
There are a number of important rules for logarithms and exponents, including
the following:
Theorem 1.14: Let a, b, and c be positive real numbers. We have:

1. log_b ac = log_b a + log_b c
2. log_b (a/c) = log_b a - log_b c
3. log_b a^c = c log_b a
4. log_b a = (log_c a) / (log_c b)
5. b^(log_c a) = a^(log_c b)
6. (b^a)^c = b^(ac)
7. b^a b^c = b^(a+c)
8. b^a / b^c = b^(a-c)

Also, as a notational shorthand, we use log^c n to denote the function (log n)^c
and we use log log n to denote log(log n). Rather than show how we could derive
each of the above identities, which all follow from the definition of logarithms and
exponents, let us instead illustrate these identities with a few examples of their
usefulness.

Example 1.15: We illustrate some interesting cases when the base of a logarithm
or exponent is 2. The rules cited refer to Theorem 1.14.

    log(2n log n) = 1 + log n + log log n, by rule 1 (twice)
    log(n/2) = log n - log 2 = log n - 1, by rule 2
    log √n = log n^(1/2) = (log n)/2, by rule 3
    log log √n = log((log n)/2) = log log n - 1, by rules 2 and 3
    log_4 n = (log n)/(log 4) = (log n)/2, by rule 4
    log 2^n = n, by rule 3
    2^(log n) = n, by rule 5
    2^(2 log n) = (2^(log n))^2 = n^2, by rules 5 and 6
    4^n = (2^2)^n = 2^(2n), by rule 6
    n^2 · 2^(3 log n) = n^2 · n^3 = n^5, by rules 5, 6, and 7
    4^n / 2^n = 2^(2n) / 2^n = 2^(2n-n) = 2^n, by rules 6 and 8
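The identities in Example 1.15 can be spot-checked numerically. In this sketch (ours, following the text's convention that log with no written base means base 2), a few of the rules are verified for sample values of n:

```python
import math

log = math.log2  # the book's convention: log with no base means base 2

for n in [4, 16, 64, 1024]:
    # log(2n log n) = 1 + log n + log log n, by rule 1 (twice)
    assert math.isclose(log(2 * n * log(n)), 1 + log(n) + log(log(n)))
    # log(n/2) = log n - 1, by rule 2
    assert math.isclose(log(n / 2), log(n) - 1)
    # log sqrt(n) = (log n)/2, by rule 3
    assert math.isclose(log(math.sqrt(n)), log(n) / 2)
    # log_4 n = (log n)/2, by rule 4
    assert math.isclose(math.log(n, 4), log(n) / 2)
    # 2^(log n) = n, by rule 5
    assert math.isclose(2 ** log(n), n)
    # n^2 * 2^(3 log n) = n^5, by rules 5, 6, and 7
    assert math.isclose(n ** 2 * 2 ** (3 * log(n)), n ** 5)
```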
CHAPTER II.

THE PNEUMATIC TRANSIT COMPANY AND THE FIRST PNEUMATIC
TUBES FOR THE TRANSPORTATION OF UNITED STATES MAIL.

Organization.—Early in the year 1892 several Philadelphia
gentlemen organized a corporation and obtained a charter in the
State of New Jersey to construct, lay, and operate pneumatic tubes
for the transmission of United States mail, packages, merchandise,
messages, etc., within the States of New Jersey and Pennsylvania.
The corporation was styled the Pneumatic Transit Company. Mr.
William J. Kelly was elected president, and the company is still under
his management.
WM. J. KELLY,
President of the Pneumatic Transit Co.
Fig. 4.
Fig. 5.
SIX-INCH PNEUMATIC TUBES IN PROCESS OF BORING AT THE
SHOP OF A. FALKENAU, PHILADELPHIA, PA.

Aim and Object of the Company.—When the Pneumatic Transit
Company was formed, it was the aim and object of its promoters to
construct an extensive system of underground tubes in the City of
Philadelphia which would serve, first, for the rapid transmission of
mail, second, for the quick delivery of merchandise from the large
retail stores, third, for the transmission of telegrams or messages
within the city limits, and, fourth, to conduct a general local express
business with greater speed than can be done in any other manner.
To accomplish this result sub-stations were to be located six or eight
blocks apart throughout a large portion of the city, and a central
station was to be established in the centre of the business section.
Stations were also to be established in the more important retail
stores and large office buildings, and all of the stations were to be
connected by tubes forming one large system.
For the transmission of mail it was planned to connect the main
post-office with the sub-post-offices by tubes of a size large enough
to carry all of the first-class and most of the other classes of mail
matter. The sub-post-offices would be divided into groups, all of the
offices in one group being connected to the same line, which would
terminate at the main post-office. Most of the business would be
between the main and individual sub-offices; in addition to this there
would be some local mail sent between the sub-offices which, for
offices in the same group, could be despatched directly without
passing through the main office. The advantages to be gained by the
use of these tubes over the present wagon service are very
apparent. It places all the sub-post-offices in almost instant
communication with the main office and with each other.
It was a part of the general plan to lay tubes from the main post-
office to the railway stations, thereby hastening the despatch and
receipt of mails to and from the trains.
It was expected that the bulk of the business would consist in the
delivery of parcels from the retail stores to the private houses in the
residence sections of the city. Of course it would not be practicable
to lay a tube to each house, but with a station not more than four or
five blocks away, the parcels would be sent through the tube to the
nearest station, and then delivered by messengers to the houses
with a minimum loss of time. Ladies could do their shopping and find
their purchases at home when they returned.
The same tubes used for parcel delivery would also be used for a
district messenger service. With numerous public stations in
convenient locations, all the advantages of the European system
would be realized in the quick despatch of letters and telegrams.
Every one knows how much time is consumed by district messenger-
boys in the delivery of messages, especially when they have to go
long distances, and no argument is required to show that this time
would be very much reduced by the use of pneumatic tubes, besides
prompt delivery would be made much more certain.
The tubes of this system were to be six or eight inches in
diameter, with a few small tubes in localities where the message
service is very heavy.
Without going more into detail, such were in brief the plans of the
promoters of this new company; but before launching such an
enterprise, involving a large amount of capital, there were many
engineering and mechanical problems to be solved. It was not
simply a question of obtaining tubes and laying them in the streets,
but ways and means for operating them must be devised. Up to this
time only small tubes had been used for the transmission of
telegrams, messages, cash, and other light objects. Now it was
proposed to transmit heavy and bulky material. There was no
experience for a guide.
The Clay-Lieb Patents.—The Pneumatic Transit Company at this
time turned to the Electro-Pneumatic Transit Company, of New
Jersey, a national company that had been in existence since 1886,
and which claimed to own valuable patents, for the ways and means
to carry out its new enterprise. The patents were those of Henry
Clay and Charles A. Lieb, and the rights to use them in the State of
Pennsylvania were procured by the Pneumatic Transit Company,
under a contract entered into between the two companies. The
patents claimed to cover a practical working system by which a large
number of stations could be connected to a system of main and
branch tubes, with electrically-operated switches at the junctions of
the branches with the main lines. Any person who gives the subject
a little thought will at once see the advantages of such a system if it
could be made to operate. Up to the present time only single- or
double-line tubes have been used, without branches. In the
European systems, frequently several stations are located along a
line, but the carriers must stop at each station, be examined, and if
they are destined for another station, they must be redespatched.
The cash systems used in many of our large stores have
independent tubes running from the central cashier’s desk to each
station about the store. It is plain to be seen that, if several of these
stations could be connected by branches to a main tube, a large
amount of tubing would be saved—a most desirable result. The
advantages of such a system would be still greater for long lines of
tube laid under the pavements, extending to stations located in
different parts of a large city. It was such a result that the patents of
Clay and Lieb aimed to accomplish.
In order to demonstrate the practicability of the system, the
Electro-Pneumatic Transit Company had constructed in the basement
of the Mills Building, on Broad Street, New York, a short line of small
brass tubing, about two or three inches in diameter, with one
branch, thus connecting three stations together. The tube was very
short, probably not more than two hundred feet in length. The air-
pressure required was very slight, probably not more than an ounce
or two, being supplied by a small blower run by an electric motor.
At the junction of the branch and the main tube was located a
switch that could be moved across the main tube and so deflect the
approaching carrier into the branch. This switch was moved by an
electro-magnet, or solenoid, that could be excited by pressing a
button at the station from which the carrier was sent. When the
carrier passed into the branch tube it set the switch back into its
normal position, so that a second carrier, following the first, would
pass along the main tube, unless the switch was again moved by
pressing the button at the sending station.
This tube in the Mills Building worked well, but it was of a size
only suited to the transmission of cash in a store or other similar
service. It could not be said, because this tube worked well, that a
larger and longer tube with numerous branches would work equally
well. In fact, there are several reasons why such a tube would not
operate satisfactorily. The method of operating the switches was
impracticable. Suppose the branch tube had been located two miles
away from the sending station and that it would take a carrier four
minutes to travel from the sending station to the junction of the
branch tube. Again, suppose that we have just despatched a carrier
destined for a station on the main line beyond the junction, and that
we wish to despatch the second carrier to be switched off into the
branch tube, we must wait at least four minutes, until the first
carrier has passed the junction, before we can press the button and
set the switch for the second carrier which may be on its way. How
are we to know when the first carrier has passed the junction, and
when the second will arrive there, in order that we may throw the
switch at the proper time? Must we hold our watch and time each
carrier? It is plain that this is not practical. I take this as an
illustration of the impracticability of the Clay-Lieb System as
constructed in the Mills Building when extended to practical
dimensions. I will not describe the mechanism and details of the
system, which are ingenious, but will say in passing that the
automatic sluice-gates, which work very well in a three-inch tube
with carriers weighing an ounce or two and air-pressures of only a
few ounces per square inch, would be useless and could not be
made to operate in a six-inch tube with carriers weighing from eight
to twenty-five pounds and an air-pressure of from five to twenty-five
pounds per square inch. For further information the reader is
referred to the patents of Clay and Lieb.
Franchises and First Government Contract.—In the spring of
1892 an ordinance was passed by Common and Select Councils, and
signed by the Mayor of the City of Philadelphia, permitting the
Pneumatic Transit Company to lay pneumatic tubes in the streets of
that city. At the time this franchise was granted negotiations were in
progress with the post-office department, in Washington, for the
construction of a six-inch pneumatic tube, connecting the East
Chestnut Street sub-post-office, at Third and Chestnut Streets, with
the main post-office, at Ninth and Chestnut Streets, for the
transmission of mail. This sub-post-office was selected because more
mail passes through it daily than any other sub-office in the city, it
being located near the centre of the banking district. Negotiations
were delayed by various causes, so that the contract with the
government was not signed until October, 1892.
Search for Tubes.—It was at this time that the writer was first
employed by the Pneumatic Transit Company, as engineer, to
superintend the construction of this line. The company commenced
at once to carry out its contract with the United States government,
both the post-office department and the company being very
desirous of having the work completed before winter. The time was
very short for such an undertaking, but wrought-iron tubes had
already been ordered of a well-known firm who manufacture pipe
and tubing of all kinds. After waiting four or five weeks the first lot
of tubes were finished, but upon inspection it was found that they
were not sufficiently accurate and smooth on the interior to permit
of their being used for the purpose intended. The next thing that
suggested itself was seamless drawn brass tubes. While they would
be very expensive, the process of manufacture makes them
eminently suited for the purpose, but they could not be obtained in
time. A city ordinance prohibits the opening of the streets of
Philadelphia during the winter months except in extreme cases.
Accurate tubes must be had, and had quickly. It then occurred to the
writer that it might be possible to bore a sufficient quantity of
ordinary cast-iron water-pipe and fit the ends accurately together to
answer our purpose. Inquiry was made at nearly all the machine-
shops in the city, to ascertain how many boring-machines could be
put upon this work of boring nearly six thousand feet of six-inch
pipe. It was found impossible to get the work done in time, if it was
to be done in the usual manner of boring with a rigid bar. At last a
man was found in Mr. A. Falkenau, engineer and machinist, who was
prepared to contract for the construction of twelve special boring-
machines and to bore all the tubing required. Suffice it to say, that
the machines were built, and about six thousand feet of tubes were
bored, between November 8 and December 31.
Fig. 6.
PIPE BORING APPARATUS.


Method of Manufacturing Tubes.—The process of boring was
novel in some respects, and might be termed reaming rather than
boring. Figs. 4 and 5 show the interior of the shop and the twelve
machines. Fig. 6 is a drawing of one of the machines. A long flexible
bar rotated the cutter-head, which was pulled through the tube, in
distinction from being pushed. In order to give the feeding motion, a
screw was attached to the cutter-head and extended through the
tube in advance of it. The feed-screw was drawn forward by a nut
attached to a hand-wheel located at the opposite end of the tube
from which the boring began. Since it was not necessary that the
tubes should be perfectly straight, a method of this kind was
permissible, in which the cutters could be allowed to follow the
cored axis of the tube. Air from a Sturtevant blower was forced
through the tubes during the process of boring, for the double
purpose of clearing the chips from the cutters and keeping them
cool. After the tubes were reamed, each piece had to be placed in a
lathe, have a counter-bore turned in the bottom of the bell, and
have the other end squared off and turned for a short distance on
the outside to fit the counter-bore of the next section.
Laying and Opening the Tubes for Traffic.—The first tubes were
laid about the middle of November, but December 1 came before the
work was completed and special permission had to be obtained from
the city to carry on the work after that date. All work was suspended
during the holidays in order not to interfere with the holiday trade of
the stores on Chestnut Street. Severe frosts prevailed at that season,
so that when the work was begun again, after the holidays, bonfires
had to be built in the streets to thaw out the ground in order to take
up the paving-stones and dig the trench for the tubes. Several times
the trench was filled with snow by unusually heavy storms.
Notwithstanding all these delays and annoyances, the work was
pushed forward, when a less determined company would have given
it up, and was finally completed. A formal opening took place on
February 17, 1893, when Hon. John Wanamaker, then Postmaster-
General, sent through the tube the first carrier, containing a Bible
wrapped in the American flag.
It was certainly a credit to the Pneumatic Transit Company and its
managers that they were able to complete this first line of tubes so
quickly and successfully under such trying circumstances. The tubes
have been in successful operation from the opening until the present
time, a period of nearly four years, and the repairs that have been
made during that time have not required its stoppage for more than
a few hours.
In the summer of 1895 the sub-post-office was removed from
Chestnut Street to the basement of the Bourse (see Fig. 7). This
required the abandonment of a few feet of tube on Chestnut Street
and the laying of a slightly greater amount on Fourth Street, thus
increasing the total length of the tubes a little. Wrought-iron tube,
coated with some alloy, probably composed largely of tin or zinc,
was used for this extension. The wrought-iron tube is not as good as
the bored cast iron.
Fig. 7.
BOURSE BUILDING, PHILADELPHIA.
Fig. 8.
PNEUMATIC TUBES SUSPENDED IN THE BASEMENT OF THE MAIN
POST-OFFICE.

Description of the Tubes, Method of Laying, etc.—This
Chestnut Street line consists of two tubes, one for despatching
carriers from and the other to the main post-office. The distance
between the two stations is two thousand nine hundred and
seventy-four feet, requiring five thousand nine hundred and forty-
eight feet of tube. The inside diameter of the tube is six and one-
eighth inches, and it was made in sections each about eleven feet
long, with “bells” cast upon one end, in order to join the sections
with lead and oakum, calked in the usual manner of making joints in
water- and gas-pipes, with this exception, that at the bottom of the
bell a counter-bore was turned to receive the finished end of the
next section. By thus machining the ends of each section of tube
and having them fit accurately together, male and female, a
practically continuous tube was formed with no shoulders upon the
interior to obstruct the smooth passage of the carriers. Joints made
in this way possess another great advantage over flanged and bolted
joints, in that they are slightly yielding without leaking, and so allow
for expansion and contraction due to changes of temperature. Each
joint takes care of the expansion and contraction of its section,
which is very slight, but if all were added together would amount to
a very large movement. Another advantage of the “bell” joint is that
it permits slight bends to be made in the line of tube without
requiring special bent sections. Where short bends had to be made,
at street corners, in entering buildings, and other similar places,
brass tubes were used, bent to a radius of not less than six feet, or
about twelve times the diameter of the tube. (One of the brass
bends may be seen in Fig. 10.) The bends were made of seamless
tubing, bent to the desired form and radius in a hydraulic machine.
To prevent them from being flattened in the process of bending,
they were filled with resin, which was afterwards melted out.
Flanges were screwed and soldered to the ends of the bent brass
tubes, and they were bolted to special flanged sections of the iron
tube.
The tubes were laid in the trench and supported by having the
ground firmly tamped about them. Usually one tube was laid above
the other, with an iron bracket between, but frequently this
arrangement had to be departed from in order to avoid obstructions,
such as gas- and water-pipes, sewers, man-holes, etc. The depth of
the tubes below the pavement varied from two to six feet, and in
one place, in order to pass under a sewer, the extreme depth of
thirteen feet was reached. At the street crossings it was frequently
difficult to find sufficient space to lay the tubes. At the intersection
of Fifth and Chestnut Streets a six-inch water-main had to be cut
and a bend put in. A seven-strand electric cable, used for
telephoning and signalling, was laid upon the top of one of the
tubes, protected by a strip of “vulcanized wood,” grooved to fit over
the cable. The cable and protecting strip of wood were fastened to
the tube by wrought-iron straps and bolts.
The tubes enter the main post-office on the Chestnut Street side,
through one of the windows, and are suspended from the ceiling
along the corridor in the basement for a distance of nearly two
hundred feet. Fig. 8 shows the tubes thus suspended. They
terminate upon the ground floor about the centre of the building,
and near the cancelling machines.

Fig. 9.
DUPLEX AIR-COMPRESSOR IN THE BASEMENT OF THE MAIN POST-
OFFICE.
Fig. 10.
TANKS AND TUBE IN THE BASEMENT OF THE MAIN POST-OFFICE.

Air-Compressor—Method of Circulating the Air.—The current
of air that operates the tubes is supplied by a duplex air-compressor
located in the basement of the main post-office. This machine is
shown in Fig. 9, and requires no detailed description, as it does not
differ materially from air-compressors used for other purposes. The
stroke is twenty-four inches, the diameter of the steam-cylinders ten
inches, and the air-cylinders eighteen inches. The air-cylinders are
double acting, with poppet-valves, and have a closed suction. The
speed of the machine varies slightly, being controlled by a pressure-
regulator that maintains a practically constant pressure in the tank
that feeds the tube. The engines develop a little over thirty horse-
power under normal conditions. The pressure of the air as it leaves
the compressor is usually six or seven pounds per square inch.
Compressing the air heats it to about 156° F., but this is not
sufficient to require water-jackets about the air-cylinders. From the
compressor the air flows to a tank, shown on the right in Fig. 10,
where any oil or dirt contained in the air is deposited. The principal
purpose of the tank is, however, to form a cushion to reduce the
pulsations in the air caused by the periodic discharge from the
cylinders of the compressors, and make the current in the tube more
steady. From this tank the air flows to the sending apparatus on the
ground floor of the post-office and thence through the outgoing tube
to the sub-post-office. At the sub-post-office, after flowing through
the receiving and sending apparatus, it enters the return tube and
flows back to the main office, passing through the receiving
apparatus there and then to a tank in the basement,—the left tank
in Fig. 10. The air-compressor draws its supply from this tank, so
that the air is used over and over again. This return tank has an
opening to the atmosphere, which allows air to enter and make up
for any leakage or escape at the sending and receiving apparatus,
thereby maintaining the atmospheric pressure in the discharge end
of the tube and in the suction of the compressor. The tank serves to
catch any moisture and dirt that come out of the tube. Fig. 11 is a
diagram showing the direction and course of the air-current. It will
be noticed that both the out-going and return tube are operated by
pressure, in distinction from exhaust. The air is forced around the
circuit by the air-compressor. There is no exhausting from the return
tube. The pressure of the air when it enters the tube at the main
post-office is, say, seven pounds per square inch; when it arrives at
the sub-post-office the pressure is about three and three-quarters
pounds, and when it gets back to the main office and enters the
return tank, the pressure is zero or atmospheric. Thus it will be seen
that the pressure becomes less and less as the air flows along the
tube. This is not the pressure that moves the carriers, but the
pressure of the air in the tube, a pressure that exists when there are
no carriers in the tube. It is the pressure that would be indicated if
you should drill a hole into the tube and attach a gauge.

Fig. 11.

Terminal Apparatus.—When the construction of this line was
begun, it was the intention of the Pneumatic Transit Company to use
the apparatus of the Electro-Pneumatic Transit Company, at both
stations, for sending and receiving carriers, and so-called working-
drawings were obtained for this purpose. The sending apparatus was
constructed according to the designs furnished, but, upon
examination of the drawings of the receiving apparatus, it was so
apparent that it would not work as intended that it was never
constructed.
The writer was asked to design an automatic receiver to stop the
carriers without shock upon their arrival at the stations, and to
deliver them upon a table without appreciable escape of air,—
something that would answer the requirements of the present plant.
Fig. 12.
TRANSMITTER.—PHILA.
SENDING APPARATUS.

The Sender.—The sending apparatus is for the purpose of enabling
the operator to place a carrier in the tube without allowing the air to
escape. In other words, it is a means of despatching carriers. The
apparatus for this purpose, already referred to, is simply a valve. A
side view and section of it are shown in Fig. 12. Fig. 15 is a view of
the apparatus in the main post-office. The sending apparatus is seen
on the left. Fig. 13 is a view of the sub-post-office apparatus, and
here a man may be seen in the act of despatching a carrier.
Referring to the section, Fig. 12, it will be seen that the sending
apparatus consists of a short section of tube supported on trunnions
and enclosed in a circular box. Normally this short section of tube
stands in line with the main tube, and the air-current passes directly
through it. It is shown in this position in the figure. When a carrier is
to be despatched, this short section of tube is rotated by a handle
until one end comes into coincidence with an opening in the side of
the box. In this position the air flows through the box around the
movable tube. A carrier can then be placed in the short section of
tube and be rotated by the handle into line with the main tube. The
carrier will then be carried along with the current of air. A circular
plate covers the opening in the box where the carrier is inserted
when the sending apparatus is closed.
At the sub-post-office this sending apparatus is placed in a
horizontal position, but its operation is the same.

Fig. 13.
RECEIVING AND SENDING APPARATUS IN THE SUB-POST-OFFICE.
Fig. 14.
APPARATUS AT SUB-STATION—PHILA.



Fig. 15.
TERMINALS OF THE TUBE IN THE MAIN POST-OFFICE.

Sub-Post-Office Receiver.—We have already explained that the
air-pressure in the tube at the sub-post-office is about three and
three-quarters pounds per square inch. With such a pressure we
cannot open the tube to allow the carriers to come out. They must
be received in a chamber that can be closed to the tube after the
arrival of a carrier and then opened to the atmosphere. Furthermore,
this chamber must act as an air-cushion to check the momentum of
the carriers. Fig. 13 shows the sub-post-office apparatus when a
carrier is being delivered from the receiving apparatus, or, as we will
name it for convenience, the receiver. Fig. 14 is a drawing of the
same apparatus, partly in section, that shows more clearly its
method of operation. This drawing shows the sending apparatus in a
different position from Fig. 13, but that is immaterial. The receiver
consists of a movable section of tube, about twice the length of a
carrier, closed at one end, supported upon trunnions, and normally
in a position to form a continuation of the main tube from which the
carriers are received. When a carrier arrives it runs directly into the
receiver, which being closed at the end forms an air-cushion that
stops the carrier without shock or injury. Just before reaching the
receiving chamber the current of air passes out through slots in the
walls of the tube into a jacket that conducts it to the sending
apparatus, as shown in Fig. 14. At the closed end of the receiving
chamber, or air-cushion, is a relief valve, normally held closed by a
spring. As the carrier compresses the air in front of it, this valve
opens and allows some of the air to escape, which prevents the
carrier from rebounding into the tube. Under the outer end of the
receiving chamber is a vertical cylinder, E, Fig. 14, supported upon
the base-plate containing a piston. The piston of this cylinder is
connected by a piston- and connecting-rod to the receiving chamber.
When air is admitted to the cylinder under the piston, the latter rises
and tilts the receiving chamber to an angle of about forty degrees,
which allows the carrier to slide out. The receiving chamber carries a
circular plate, C, that covers the end of the main tube when it is
tilted. A small piston slide-valve, F, located near the trunnion of the
receiving chamber, controls the admission and discharge of air to
and from the cylinder E, upon the arrival of a carrier. When a carrier
arrives and compresses the air in the air-cushion or receiving
chamber, a small portion of this compressed air is forced through
pipe G, to a small cylinder containing a piston and located just above
the piston slide-valve F. The increased pressure acting on the piston
moves it downward, and it in turn moves the slide-valve F. Thus it
will be seen that the stopping of the carrier causes the receiving
chamber to be tilted and the carrier slides out on to an inclined
platform, K. This platform is hinged at one end, and supported at
the angle seen in the figure by a counterweight. When a carrier rests
upon it, the weight of the carrier is sufficient to bear it down into a
horizontal position; in this position the carrier rolls off on to a table
or shelf. The platform, K, is connected by rods, bell-cranks, etc., to
the piston slide-valve, so that when it is swung downward by the
weight of a carrier, the slide-valve is moved upward into its normal
position, and this causes the receiving chamber to tilt back into a
horizontal position ready to receive the next carrier. The time that
elapses from the arrival of a carrier until the receiving chamber has
returned to its horizontal position is not more than three or four
seconds. Nothing could operate in a more satisfactory manner.
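The receiving cycle just described is, in effect, a short sequence of states triggered by the carrier itself. As a sketch only — the class and state names below are invented here for clarity, and the real mechanism is of course pneumatic, not programmatic — the sequence might be modeled as:

```python
# Illustrative model of the sub-office receiver cycle described above.
# All names are hypothetical; they do not come from the original text.

class Receiver:
    """Models the sequence: arrival -> tilt -> discharge -> reset."""

    def __init__(self):
        self.state = "HORIZONTAL"   # chamber aligned with the main tube
        self.log = []

    def carrier_arrives(self):
        # The carrier compresses air in the closed chamber (air-cushion);
        # the relief valve vents some of it to prevent rebounding.
        assert self.state == "HORIZONTAL"
        self.log.append("air-cushion stops carrier; relief valve vents")
        # Compressed air, led through pipe G, drives slide-valve F down,
        # admitting air under the piston of cylinder E.
        self.state = "TILTED"       # chamber tilts to about 40 degrees
        self.log.append("slide-valve F shifts; chamber tilts 40 degrees")

    def carrier_slides_out(self):
        assert self.state == "TILTED"
        # The carrier drops onto hinged platform K, which swings down
        # under its weight and, through rods and bell-cranks, resets
        # slide-valve F, tilting the chamber back.
        self.log.append("carrier drops onto platform K; valve F resets")
        self.state = "HORIZONTAL"   # ready for the next carrier

r = Receiver()
r.carrier_arrives()
r.carrier_slides_out()
print(r.state)  # back to HORIZONTAL, within three or four seconds
```

The essential point the sketch captures is that no attendant intervenes: the carrier's own momentum starts the cycle, and its own weight on platform K completes it.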
Main Post-Office Receiver.—At the main post-office we have a
receiver of a different type. It will be remembered that the pressure
in the return tube at the main post-office is nearly down to zero or
atmospheric, so that we can open the tube to allow the carriers to
pass out without noise or an annoying blast of air. Figs. 15 and 16
show the main-office apparatus, and Fig. 17 is a drawing of the
same. Here the receiver consists of a section of tube closed by a
sluice-gate, located at B, Fig. 17. The air-current passes out through
slots in the tube into a branch pipe leading to the return tank in the
basement. These slots are located about four feet back of the sluice-
gate, so that the portion of the tube between the slots and the
sluice-gate forms an air-cushion to check the momentum of the
carriers. The sluice-gate is raised and lowered by a piston moving in
a cylinder located just above the gate. The movement of this piston
is controlled by a piston slide-valve in a manner similar to the
apparatus at the sub-post-office. Air for operating the piston is
conveyed through the pipe D, Fig. 17, from the pipe leading from the
air-compressor to the sending apparatus. This air is at about seven
pounds pressure per square inch.

Fig. 16.
RECEIVING APPARATUS AT THE MAIN POST-OFFICE.
Fig. 17.
APPARATUS AT THE MAIN OFFICE—PHILA.


When a carrier arrives, after passing the slots that allow the air-
current to flow into the branch pipe, it compresses the air in front of
it against the gate. This compression checks its momentum, and it
comes gradually to rest. The air compressed between the carrier and
the sluice-gate operates to move the piston slide-valve, thereby
admitting air to the gate cylinder under the piston, which rises,
carrying with it the sluice-gate. The tube is now open to the
atmosphere, and there is just sufficient pressure in the tube to push
the carrier out on to a table arranged to receive it. As the carrier
passes out of the tube it lifts a finger out of its path. This finger is
located at E, Fig. 17, and when it is lifted by the passing carrier it
moves the piston slide-valve, and the sluice-gate is closed. A valve is
located in the branch-pipe that conducts the air to the return tank in
the basement. If the pressure in the tube is not sufficient to push
the carrier out on to the table, this valve is partially closed, thereby
increasing the pressure to a desired amount.
Fig. 18.
CARRIER.
Fig. 19.
CARRIER.

The Carrier.—We have frequently spoken of the carrier, which
contains the mail and other parcels that are transported from one
office to the other. In Fig. 13, showing the sub-post-office apparatus,
we see one of these carriers being despatched by the attendant and
another being delivered from the tube. In Fig. 15 several carriers
may be seen standing on the floor. Fig. 18 shows a carrier with the
lid open, ready to receive a charge of mail, and Fig. 19 shows the
same closed, ready for despatching. The construction of the carrier
is shown by the drawing, Fig. 20. The body of the carrier is steel,
about one-thirty-second of an inch in thickness. It is made from a
flat sheet, bent into a cylinder, riveted, and soldered. The length
outside is eighteen inches, and the inside diameter is five and one-
quarter inches. The front end is made of a convex disk of steel,
stamped in the desired form, and secured to the body of the carrier
by rivets, with the convex side inward. It is necessary to have a
buffer upon the front end of the carrier to protect it from blows that
it might receive, and this buffer is made by filling the concave side of
the front head with felt, held in place by a disk of leather and a
central bolt. The leather disk is made of two pieces, riveted together,
with a steel washer between. The steel washer is attached to the
head of the bolt. The carrier is supported in the tube on two
bearing-rings, located on the body of the carrier a short distance
from each end. The location of these rings is so chosen that it
permits a carrier of maximum length to pass through a bend in the
tube of minimum radius without becoming wedged. This is a very
important feature in the construction of carriers, but does not
appear to have been utilized in other systems.
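The constraint behind the ring placement can be sketched geometrically. Treat the carrier as a rigid body whose two bearing-rings ride the tube's centerline; in a bend, the straight body then stands off the curved centerline at midspan by the sagitta of the chord between the rings, and that offset must not exceed the radial clearance between body and bore. This ignores the outward swing of the ends overhanging the rings, and the ring span and tube bore below are illustrative assumptions — only the carrier's eighteen-inch length and five-and-one-quarter-inch diameter come from the text.

```python
def min_bend_radius(ring_span_in, bore_in, carrier_dia_in):
    """Smallest centerline bend radius (inches) a rigid carrier can
    negotiate, on the simplifying assumption that its two bearing-rings
    ride the tube centerline and its straight body must clear the bore
    at midspan.  The midspan offset of a chord of span s from an arc of
    radius R is the sagitta h = R - sqrt(R^2 - (s/2)^2); setting h equal
    to the radial clearance c and solving exactly for R gives
    R = s^2 / (8 c) + c / 2."""
    clearance = (bore_in - carrier_dia_in) / 2.0
    return ring_span_in ** 2 / (8.0 * clearance) + clearance / 2.0

# Carrier diameter from the text; ring span and bore are assumed values.
print(min_bend_radius(ring_span_in=14.0, bore_in=6.125,
                      carrier_dia_in=5.25))  # 56.21875 inches
```

Since the required radius grows with the square of the ring span, moving the rings in from the ends shortens the effective span and lets the same eighteen-inch carrier round a much sharper bend — presumably the point the author credits this system with exploiting.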

Fig. 20.
MAIL CARRIER.—PHILA.


The bearing-rings are made of fibrous woven material, especially
prepared, and held in place by being clamped between two metal
rings, one of which is riveted to the body of the carrier. Of course
these rings wear out and have to be replaced occasionally, but their
usual life is about one thousand miles. The rear end of the carrier is
closed by a hinged lid and secured by a special lock. The lock
consists of three radial bolts that pass through the body of the
carrier and the rim of the lid. These bolts are thrown by three cams,
attached to a short shaft that passes through the lid and has a
handle or lever attached to it upon the outside of the lid. This cam-
shaft is located out of the geometrical centre of the lid in such a
position that when the lever or handle is swung around in the
unlocked position, it projects beyond the periphery of the lid, and in
this position the carrier will not enter the tube. When the lid is
closed and locked, the lever lies across the lid in the position shown
in Fig. 19, and when the carrier is in the tube it cannot become
unlocked, for the lever cannot swing around without coming in
contact with the wall of the tube. This insures against the possibility
of the carriers opening during transit through the tube. The empty
carriers weigh about nine pounds, and when filled with mail, from
twelve to fifteen pounds. They have a capacity for two hundred
ordinary letters, packed in the usual manner.
Operation of the Tubes.—The tubes are kept in constant
operation during the day, and six days of the week. The air-
compressor is started at nine o’clock in the morning and runs until
seven in the evening, except during the noon hour, the air flowing in
a constant steady current through the tubes. When a carrier is
placed in the tube it is carried along in the current without
appreciably affecting the load on the compressor. Carriers may be
despatched at six-second intervals, and when they are despatched
thus frequently at each office, there will be eighteen carriers in the
tube at the same time. If ten carriers per minute are despatched
from each office, and each carrier contains two hundred letters, the
tube has a carrying capacity of two hundred and forty thousand
letters per hour, which is far beyond the requirements of this office.
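The arithmetic behind these figures is worth making explicit. My reading — not stated outright in the text — is that the 240,000 total sums the despatches of both offices, and that the eighteen carriers in transit divide nine to each direction:

```python
# Worked arithmetic for the capacity figures quoted above.
letters_per_carrier = 200
carriers_per_minute = 10               # one despatch every six seconds

per_office_per_hour = carriers_per_minute * 60 * letters_per_carrier
total_per_hour = 2 * per_office_per_hour   # both offices despatching
print(per_office_per_hour, total_per_hour)  # 120000 240000

# Eighteen carriers in the tube at once, taken as nine from each end
# at six-second spacing, implies a one-way transit time of about:
transit_seconds = 9 * 6
print(transit_seconds)  # 54
```

At roughly five hundred carriers a day per office, the plant was thus running at a small fraction of one per cent of its theoretical hourly capacity.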
About five hundred carriers a day are despatched from each office.
This varies considerably on different days and at different seasons of
the year. Experience has taught that a certain period of time should
elapse between the despatching of carriers, in order that they may
not come in contact with each other, and that the receivers may
have time to act. With the present plant this period is made about
six seconds. In order to make it impossible for carriers to be
despatched more frequently than this, time-locks are attached to the
sending apparatus. One of these locks may be seen in Fig. 13,
connected to the handle of the sending apparatus. It is so arranged
that when a carrier is despatched a weight is raised and allowed to
fall, carrying with it a piston in a cylinder filled with oil. While the
weight is rising and falling the sending apparatus is locked, but
becomes unlocked when the weight is all the way down. A by-pass
