
SYSTEM MODELING
and SIMULATION
An Introduction
Frank L. Severance, Ph.D.
Professor of Electrical and Computer Engineering
Western Michigan University

JOHN WILEY & SONS, LTD


Chichester • New York • Weinheim • Brisbane • Singapore • Toronto
Copyright © 2001 by John Wiley & Sons Ltd
Baffins Lane, Chichester,
West Sussex, PO19 1UD, England

National 01243 779777


International (+44) 1243 779777

e-mail (for orders and customer service enquiries): cs-books@wiley.co.uk

Visit our Home Page on http://www.wiley.co.uk

All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in
any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except under
the terms of the Copyright, Designs and Patents Act 1988 or under the terms of a licence issued by the Copyright
Licensing Agency, 90 Tottenham Court Road, London W1P 9HE, UK, without the permission in writing of the
Publisher.

Other Wiley Editorial Offices


John Wiley & Sons, Inc., 605 Third Avenue,
New York, NY 10158–0012, USA
Wiley-VCH Verlag GmbH, Pappelallee 3,
D-69469 Weinheim, Germany
John Wiley and Sons Australia, Ltd, 33 Park Road, Milton,
Queensland 4064, Australia
John Wiley & Sons (Asia) Pte Ltd, 2 Clementi Loop #02-01,
Jin Xing Distripark, Singapore 129809
John Wiley & Sons (Canada) Ltd, 22 Worcester Road,
Rexdale, Ontario, M9W 1L1, Canada

Library of Congress Cataloging-in-Publication Data


Severance, Frank L.
System modeling and simulation: an introduction / Frank L. Severance.
p. cm.
Includes bibliographical references and index.
ISBN 0-471-49694-4
1. System theory. I. Title.

Q295.S48 2001
003'.3-dc21

British Library Cataloguing in Publication Data


A catalogue record for this book is available from the British Library
ISBN 0-471-49694-4
Typeset in Times Roman by Techset Composition Limited, Salisbury, Wiltshire
Printed and bound in Great Britain by Biddles Ltd, Guildford and King's Lynn
This book is printed on acid-free paper responsibly manufactured from sustainable forestry in which at least two
trees are planted for each one used for paper production.
Contents

Preface ix

1 DESCRIBING SYSTEMS 1

1.1 The Nature of Systems 1


1.2 Event-Driven Models 10
1.3 Characterizing Systems 13
1.4 Simulation Diagrams 15
1.5 The Systems Approach 24

2 DYNAMICAL SYSTEMS 32

2.1 Initial-Value Problems 32


• Euler's Method 34
• Taylor's Method 38
• Runge-Kutta Methods 40
• Adaptive Runge-Kutta Methods 42
2.2 Higher-Order Systems 44
2.3 Autonomous Dynamic Systems 46
2.4 Multiple-Time-Based Systems 57
2.5 Handling Empirical Data 67

3 STOCHASTIC GENERATORS 84

3.1 Uniformly Distributed Random Numbers 84


3.2 Statistical Properties of U[0,1] Generators 88
3.3 Generation of Non-Uniform Random Variates 92
• Formula Method 92
• Rejection Method 94
• Convolution Method 98
3.4 Generation of Arbitrary Random Variates 101
3.5 Random Processes 107
3.6 Characterizing Random Processes 110
3.7 Generating Random Processes 118
• Episodic Random Processes 119
• Telegraph Processes 119
• Regular Random Processes 123
3.8 Random Walks 123
3.9 White Noise 127

4 SPATIAL DISTRIBUTIONS 141

4.1 Sampled Systems 141


4.2 Spatial Systems 151
4.3 Finite-Difference Formulae 160
4.4 Partial Differential Equations 168
4.5 Finite Differences for Partial Derivatives 172
4.6 Constraint Propagation 177

5 STOCHASTIC DATA REPRESENTATION 184

5.1 Random Process Models 184


5.2 Moving-Average (MA) Processes 193
5.3 Autoregressive (AR) Processes 198
5.4 Big-Z Notation 206
5.5 Autoregressive Moving-Average (ARMA) Models 209
5.6 Additive Noise 214

6 MODELING TIME-DRIVEN SYSTEMS 224

6.1 Modeling Input Signals 225


6.2 Nomenclature 231
6.3 Discrete Delays 239
6.4 Distributed Delays 243
6.5 System Integration 250
6.6 Linear Systems 257
6.7 Motion Control Models 264
6.8 Numerical Experimentation 268

7 EXOGENOUS SIGNALS AND EVENTS 282

7.1 Disturbance Signals 283


7.2 State Machines 287
7.3 Petri Nets 293
7.4 Analysis of Petri Nets 305
7.5 System Encapsulation 317

8 MARKOV PROCESSES 334

8.1 Probabilistic Systems 334


8.2 Discrete-Time Markov Processes 336
8.3 Random Walks 346
8.4 Poisson Processes 353
• Properties of the Poisson Process 357
8.5 The Exponential Distribution 360
8.6 Simulating a Poisson Process 363
8.7 Continuous-Time Markov Processes 365

9 EVENT-DRIVEN MODELS 380

9.1 Simulation Diagrams 380


9.2 Queuing Theory 391
9.3 M/M/1 Queues 395

9.4 Simulating Queuing Systems 403


9.5 Finite-Capacity Queues 405
9.6 Multiple Servers 410
9.7 M/M/c Queues 415

10 SYSTEM OPTIMIZATION 426

10.1 System Identification 426


10.2 Non-Derivative Methods 435
10.3 Sequential Searches 439
10.4 Golden Ratio Search 442
10.5 Alpha/Beta Trackers 445
10.6 Multidimensional Optimization 452
10.7 Non-Derivative Methods for Multidimensional Optimization 454
10.8 Modeling and Simulation Methodology 468

APPENDICES 476

A Stochastic Methodology 477


• Determining the Density Function 477
• Estimation of Parameters 483
• Goodness of Fit 486
B Popular Discrete Probability Distributions 490
• Discrete Uniform 490
• Binomial 491
• Geometric 492
• Poisson 492
C Popular Continuous Probability Distributions 493
• Continuous Uniform 493
• Gamma 493
• Exponential 494
• Chi-Square 494
• m-Erlang 495
• Gaussian 495
• Beta 496
D The Gamma Function 497
E The Gaussian Distribution Function 498
F The Chi-Square Distribution Function 499

INDEX 500
PREFACE

It is unlikely that this book would have been written 100 years ago. Even though there
was a considerable amount of modeling going on at the time and the concept of signals
was well understood, invariably the models that were used to describe systems tended to be
simplified (usually assuming a linear response mechanism), with deterministic inputs.
Systems were often considered in isolation, and the inter-relationships between them were
ignored. Typically, the solutions to these idealized problems were highly mathematical and
of limited value, but little else was possible at the time. However, since system linearity
and deterministic signals are rather unrealistic restrictions, in this text we shall strive for
more. The basic reason that we can accomplish more nowadays is that we have special help
from the digital computer. This wonderful machine enables us to solve complicated
problems quickly and accurately with a reasonable amount of precision.
For instance, consider a fairly elementary view of the so-called carbon cycle with the
causal diagram shown on the next page. Every child in school knows the importance of
this cycle of life and understands it at the conceptual level. However, "the devil is in the
details", as they say. Without a rigorous understanding of the quantitative (mathematical)
stimulus/response relationships, it will be impossible to actually use this system in any
practical sense. For instance, is global warming a fact or a fiction? Only accurate modeling
followed by realistic simulation will be able to answer that question.
It is evident from the diagram that animal respiration, plant respiration, and plant and
animal decay all contribute to the carbon dioxide in the atmosphere. Photosynthesis affects
the number of plants, which in turn affects the number of animals. Clearly, there are
feedback loops, so that as one increases, the other decreases, which affects the first, and so
on. Thus, even a qualitative model seems meaningful and might even lead to a degree of
understanding at a superficial level of analysis. However, the apparent simplicity of the
diagram is misleading. It is rare that the input—output relationship is simple, and usually
each signal has a set of difference or differential equations that model the behavior. Also,
the system input is usually non-deterministic, so it must be described by a random process.
Even so, if these were linear relationships, there is a great body of theory by which closed-
form mathematical solutions could, in principle, be derived.
We should be so lucky! Realistic systems are usually nonlinear, and realistic signals
are noisy. Engineered systems especially are often discrete rather than continuous. They
are often sampled so that time itself is discrete or mixed, leading to a system with multiple
time bases. While this reality is nothing new and people have known this for some time, it
is only recently that the computer could be employed to the degree necessary to perform
the required simulations. This allows us to achieve realistic modeling so that predictable
simulations can be performed to analyze existing systems and engineer new ones to a
degree that classical theory was incapable of.
It is the philosophy of this text that no specific software package be espoused or used.
The idea is that students should be developers of new tools rather than simply users of
existing ones. All efforts are aimed at understanding of first principles rather than simply
finding an answer. The use of a Basic-like pseudocode affords straightforward implementation
of the many procedures and algorithms given throughout the text using any standard
procedural language such as C or Basic. Also, all algorithms are given in detail, and
operational programs are available on the book's Website in Visual Basic.
This book forms the basis of a first course in System Modeling and Simulation in
which the principles of time-driven and event-driven models are both emphasized. It is
suitable for the standard senior/first-year graduate course in simulation and modeling that
is popular in so many modern university science and engineering programs. There is ample
material for either a single-semester course of 4 credits emphasizing simulation and
modeling techniques or two 3-credit courses where the text is supplemented with
methodological material. If two semesters are available, a major project integrating the
key course concepts is especially effective. If less time is available, it will likely be that a
choice is necessary - either event-driven or time-driven models. An effective 3-credit
course stressing event-driven models can be formed by using Chapter 1, the first half of
Chapter 3, and Chapters 7-9, along with methodological issues and a project. If
time-driven models are to be emphasized, Chapters 1-6 and 10 will handle both deterministic
and non-deterministic input signals. If it is possible to ignore stochastic signals and Petri
nets, a course in both time-driven and event-driven models is possible by using Chapters 1
and 2, the first half of Chapter 3, Chapter 4, and Chapters 8-10.

ACKNOWLEDGEMENTS

As with any project of this nature, many acknowledgments are in order. My students
have been patient with a text in progress. Without their suggestions, corrections, and
solutions to problems and examples, this book would have been impossible. Even more
importantly, without their impatience, I would never have finished. I thank my students,
one and all!

Frank L. Severance
Kalamazoo, Michigan
CHAPTER 1

Describing Systems

1.1 THE NATURE OF SYSTEMS


The word "system" is one that everyone claims to understand - whether it is a
physiologist examining the human circulatory system, an engineer designing a transporta-
tion system, or a pundant playing the political system. All claim to know what systems are,
how they work, and how to explain their corner of life. Unfortunately, the term system
often means different things to different people, and this results in confusion and problems.
Still there are commonalities. People who are "system thinkers" usually expect that
systems are (1) based on a set of cause—effect relationships that can be (2) decomposed
into subsystems and (3) applied over a restricted application domain. Each of these three
expectations require some explanation.
Causes in systems nomenclature are usually referred to as inputs, and effects as
outputs. The system approach assumes that all observed outputs are functions only of the
system inputs. In practice, this is too strong a statement, since a ubiquitous background
noise is often present as well. This, combined with the fact that we rarely, if ever, know
everything about any system, means that the observed output is more often a function of
the inputs and so-called white noise. From a scientific point of view, this means that there
is always more to discover. From an engineering point of view, this means that proposed
designs need to rely on models that are less than ideal. Whether the system model is
adequate depends on its function. Regardless of this, any model is rarely perfect in the
sense of exactness.
There are two basic means by which systems are designed: top-down and bottom-up.
In top-down design, one begins with highly abstract modules and progressively decom-
poses these down to an atomic level. Just the opposite occurs in bottom-up design. Here
the designer begins with indivisible atoms and builds ever more abstract structures until
the entire system is defined. Regardless of the approach, the abstract structures encapsulate
lower-level modules. Of course there is an underlying philosophical problem here. Do
atomic elements really exist or are we doomed to forever incorporate white background
noise into our models and call them good enough? At a practical level, this presents no
problem, but in the quest for total understanding no atomic-level decomposition for any
physically real system has ever been achieved!
The power of the systems approach and its wide acceptance are due primarily to the
fact that it works. Engineering practice, combined with the large number of mathematically
powerful tools, has made it a mainstay of science, commerce, and (many believe) western
culture in general. Unfortunately, this need for practical results comes at a price. The price
is that universal truth, just like atomic truth, is not achievable. There is always a restricted
range or zone over which the system model is functional, while outside this application
domain the model fails. For instance, even the most elegant model of a human being's
circulatory system is doomed to failure after death. Similarly, a control system in an
automobile going at 25 miles per hour is going to perform differently than one going at
100 miles per hour. This problem can be solved by treating each zone separately. Still there
is a continuity problem at the zone interfaces, and, in principle, there needs to be an infinite
number of zones. Again, good results make for acceptance, even though there is no
universal theory.
Therefore, we shall start at the beginning, and at the fundamental question about just
what constitutes a system. In forming a definition, it is first necessary to realize that
systems are human creations. Nature is actually monolithic, and it is we, as human beings,
who either view various natural components as systems or design our own mechanisms to
be engineered systems. We usually view a system as a "black box", as illustrated in Figure
1.1. It is apparent from this diagram that a system is an entity completely isolated from its
environment except for an entry point called the input and an exit point called the output.

FIGURE 1.1 System block diagram.

More specifically, we list the following system properties.
P1. All environmental influences on a system can be reduced to a vector of m real
variables that vary with time, x(t) = [x1(t), . . . , xm(t)]. In general, x(t) is called the
input, and the components xi(t) are input signals.
P2. All system effects can be summarized by a vector of n real variables that vary
with time, z(t) = [z1(t), . . . , zn(t)]. In general, z(t) is called the output, and the
components zi(t) are output signals.
P3. If the output signals are algebraic functions of only the current input, the system
is said to be of zeroth order, since there can be no system dynamics. Accordingly,
there is a state vector y(t) = [y1(t), . . . , yp(t)], and the system can be written as
two algebraic equations involving the input, state, and output:

y(t) = f1(x(t)),
z(t) = f2(x(t), y(t)),     (1.1)

for suitable functions f1 and f2. Since the state y(t) is given explicitly, an
equivalent algebraic input-output relationship can be found. That is, for a suitable
function g,

z(t) = g(x(t)).     (1.2)

P4. If the output signal depends dynamically on the input, there must also be system
memory. For instance, suppose that the system samples a signal every
t = 0, 1, 2, . . . seconds and that the output z(t) depends on the inputs x(t - 1)
and x(t - 2). It follows that there must be two memory elements present in order
to recall x(t - 1) and x(t - 2) as needed. Each such implied memory element
increases the number of system state variables by one. Thus, the state and output
equations comparable to Equations (1.1) and (1.2) are dynamic in that f1 and f2
now depend on time delays, advances, derivatives, and integrals. This is
illustrated diagrammatically in Figure 1.2.

FIGURE 1.2 Feedback system.



FIGURE 1.3 Electrical circuit as a SISO system, Example 1.1.

EXAMPLE 1.1
Consider the electrical resistive network shown in Figure 1.3, where the system is
driven by an external voltage source vs(t). The output is taken as the voltage vR(t)
across the second resistor R2.
Since there is a Single Input and a Single Output, this system is called SISO.
The input-output identification with physical variables gives

x(t) = vs(t),
z(t) = vR(t).     (1.3)

Since the network is a simple voltage divider circuit, the input-output relationship
is clearly not dynamic, and is therefore of order zero:

z(t) = R2 x(t)/(R1 + R2).     (1.4)

In order to find the state and output equations, it is first necessary to define the
state variable. For instance, one might simply choose the state to be the output,
y(t) = z(t). Or, choosing the current as the state variable, i.e., y(t) = i(t), the state
equation is y(t) = x(t)/(R1 + R2) and the output equation is z(t) = R2y(t).
Clearly, the state is not unique, and it is therefore usually chosen to be
intuitively meaningful to the problem at hand. O

EXAMPLE 1.2
Consider the resistor-capacitor network shown in Figure 1.4. Since the capacitor
is an energy storage element, the equations describing the system are dynamic.
As in Example 1.1, let us take the input to be the source voltage vs(t) and the
output as the voltage across the capacitor, vc(t). Thus, Equations (1.3) still hold.
Also, elementary physics gives RC dvc/dt + vc = vs. By defining the state to be
the output, the state and output relationships corresponding to Equations (1.1) are

ẏ(t) = [x(t) - y(t)]/RC,
z(t) = y(t).     (1.5)

FIGURE 1.4 Electrical RC circuit as a first-order SISO system.

As will be customary throughout this text, dotted variables denote time derivatives.
Thus, ẏ(t) = dy(t)/dt.
Electrical circuits form wonderful systems in the technical sense, since their voltage-
current effects are confined to the wire and components carrying the charge. The effects of
electrical and magnetic radiation on the environment can often be ignored, and all system
properties are satisfied. However, we must be careful! Current traveling through a wire
does affect the environment, especially at high frequencies. This is the basis of antenna
operation. Accordingly, a new model would need to be made. Again, the input-output
signals are based on abstractions over a certain range of operations.
One of the most popular system applications is that of control. Here we wish to cause
a subsystem, which we call a plant, to behave in some prescribed manner. In order to do
this, we design a controller subsystem to interpret desirable goals in the form of a
reference signal into plant inputs. This construction, shown in Figure 1.5, is called an
open-loop control system.
Of course, there is usually more input to the plant than just that provided by the
controller. Environmental influences in the form of noise or a more overt signal usually
cause the output to deviate from the desired response. In order to counteract this, an
explicit feedback loop is often used so that the controller can make decisions on the basis
of the reference input and the actual state of the plant. This situation, shown in Figure 1.6,
is called a feedback control or closed-loop control system.
The design of feedback control systems is a major engineering activity and is a
discipline in its own right. Therefore, we leave this to control engineers so that we can
concentrate on the activity at hand: modeling and simulating system behavior. Actually, we

FIGURE 1.5 An open-loop control system.



FIGURE 1.6 A closed-loop control system.

will still need to analyze control systems, but we will usually just assume that others have
already designed the controllers.
The electrical circuit systems described in Examples 1.1 and 1.2 are cases of
continuous time-driven models. Time-driven models are those in which the input is
specified for all values of time. In this specific case, time t is continuous, since the
differential equation can be solved to give an explicit expression for the output:

vc(t) = vc(t0)e^[-(t-t0)/RC] + (1/RC) ∫(t0 to t) e^[-(t-λ)/RC] vs(λ) dλ,     (1.6)

where vc(t0) is the initial voltage across the capacitor at time t = t0. Thus, as time
"marches on", successive output values can be found by simply applying Equation (1.6).
In many systems, time actually seems to march as if to a drum; system events occur
only at regular time intervals. In these so-called discrete-time-based systems, the only
times of interest are tk = t0 + hk for k = 0, 1, . . . . As k takes on successive non-negative
integer values, tk begins at initial time t0 and the system signal remains unchanged until h
units later, when the next drum beat occurs. The constant length of the time interval,
tk+1 - tk = h, is the step size, or sampling interval.
The input signal at the critical event times is now x(tk) = x(t0 + hk). However, for
convenience, we write this as x(tk) = x(k), in which the functional form of the function x is
not the same. Even so, we consider the variables t and k as meaning "continuous time"
and "discrete time", respectively. The context should remove any ambiguity.

EXAMPLE 1.3
Consider a continuous signal x(t) = cos(πt), which is defined only at discrete
times tk = 3 + k/2. Clearly the interval length is h = 1/2 and the initial time is
t0 = 3. Also,

x(tk) = cos[π(3 + k/2)] = -cos(πk/2)
      = 0 for k odd, and (-1)^(k/2+1) for k even.     (1.7)

Thus, we write

x(k) = 0 for k odd, and (-1)^(k/2+1) for k even,     (1.8)

and observe the significant differences between the discrete form of x(k) given in
Equation (1.8) and the original continuous form x(t) = cos(πt). O
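As a quick check of this result, the following Python sketch samples the continuous form
at tk = 3 + k/2 and compares it with Equation (1.8); the range of k is arbitrary.

import math

# Sample x(t) = cos(pi*t) at tk = 3 + k/2 and compare with Equation (1.8).
for k in range(8):
    tk = 3 + 0.5 * k
    sampled = math.cos(math.pi * tk)                    # continuous form at tk
    predicted = 0 if k % 2 == 1 else (-1) ** (k // 2 + 1)
    print(k, round(sampled, 10), predicted)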

EXAMPLE 1.4

Consider a factory conveyor system in which boxes arrive at the rate of one box
every 10 seconds. Each box is one of the following weights: 5, 10, or 15 kg.
However, there are twice as many 5 kg boxes and 15 kg boxes as 10 kg boxes. A
graphic of this system is given in Figure 1.7. How do we model and simulate this?
Solution
From the description, the weight distribution of the boxes is

w        Pr[W = w]
5          0.4
10         0.2
15         0.4
Total      1.0

FIGURE 1.7 A deterministic conveyor system, Example 1.4.



where W is a "weight" random variable that can take on one of the three discrete
values W ∈ {5, 10, 15}. The notation Pr[W = w] is read "the probability that the
random variable W is w". The set {5, 10, 15} is called the sample space of W, and
is the set of all possible weights.
According to the description, these boxes arrive every 10 seconds, so
t = 10k gives the continuous time measured in successive k-values, assuming
the initial time is zero. However, how do we describe the system output? The
problem statement was rather vague on this point. Should it be the number of
boxes that have arrived up to time t? Perhaps, but this is rather uninteresting.
Figure 1.8 graphs N(t) = number of boxes that have arrived up to and including
time t as a function of time t.
A more interesting problem would be the weight of the boxes as they arrive.
Unlike N(t), the weight is a non-deterministic variable, and we can only hope to
simulate the behavior of this variable W(k) = weight of the kth event. This can be
accomplished by using the RND function, which is a hypothetical random number
generator that provides uniformly distributed random variates such that
0 < RND < 1.

FIGURE 1.8 State N(t) for constant inter-arrival times.

The following routine provides output w(1), w(2), . . . , w(n), which is the weight
of the first n boxes:

for k=1 to n
    r=10*RND
    if r<4 then w(k)=5
    if 4<=r<6 then w(k)=10
    if r>=6 then w(k)=15
next k

While this routine simulates the action, useful results must be statistically
analyzed since the input was non-deterministic in nature. After a sufficiently
large simulation run, frequencies of each type of box arrival can be tallied and
graphed. Figure 1.9 shows a run of 100 simulated arrivals compared against the
ideal as defined by the problem statement. O
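A runnable Python rendering of this routine is sketched below, with random.random()
standing in for RND; the run length n = 100 matches the experiment of Figure 1.9.

import random

# RND becomes random.random(); tally the simulated weight frequencies
# against the ideal 0.4 / 0.2 / 0.4 split of the problem statement.
n = 100
counts = {5: 0, 10: 0, 15: 0}
for k in range(n):
    r = 10 * random.random()
    if r < 4:
        w = 5
    elif r < 6:
        w = 10
    else:
        w = 15
    counts[w] += 1
for w, ideal in [(5, 0.4), (10, 0.2), (15, 0.4)]:
    print(w, "kg: simulated", counts[w] / n, " ideal", ideal)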

The system described in Example 1.4 has just a single state variable. By
knowing w(k), we know all there is to know about the system. For instance, we need
not know any history in order to compute the output since only the kth signal is important.
To see the significance of the state concept as exemplified by memory, consider one further
example.

FIGURE 1.9 Histogram.



EXAMPLE 1.5

Consider a factory system with two conveyors: one brings in boxes as described
in Example 1.4, but this time arriving boxes are placed on a short conveyor that
holds exactly 3 boxes. Arriving boxes displace those on the conveyor, which
presumably just fall off! In this case, we do not care about the individual box
weight; rather, we care about the total weight on the conveyor of interest. Thus,
the input x(k) in this example can be simulated by the output of Example 1.4.
However, in order to determine the output z(k), the system must remember the
two previous inputs as well. Characterize this system.
Solution
Since the system input is a random variable, the output must be non-deterministic
as well. At the same time, after the conveyor has been loaded, two of the three
spots are known. Thus,

z(k) = x(1) for k = 1,
       x(1) + x(2) for k = 2,
       x(k) + x(k-1) + x(k-2) for k > 2.     (1.9)

Mathematically, Equation (1.9) is a second-order difference equation, z(k) =
x(k) + x(k-1) + x(k-2), subject to the two initial conditions z(1) = x(1) and
z(2) = x(1) + x(2). This corresponds to the two memory elements required. A
complete simulation of this model is shown in Listing 1.1. The first four
statements inside the loop are identical to the single-conveyor system described
in Example 1.4. The last three statements describe the second conveyor of this
example. O

for k=1 to n
    r=10*RND
    if r<4 then x(k)=5
    if 4<=r<6 then x(k)=10
    if r>=6 then x(k)=15
    if k=1 then z(k)=x(1)
    if k=2 then z(k)=x(1)+x(2)
    if k>2 then z(k)=x(k)+x(k-1)+x(k-2)
next k
LISTING 1.1 Simulation of the two-conveyor system, Example 1.5.
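For comparison, a Python sketch of Listing 1.1 follows; the run length n = 10 is
arbitrary, and list indices start at 1 to match the pseudocode.

import random

# The second conveyor's load z(k) is a sliding sum of the last three box
# weights, with the initial conditions z(1) = x(1) and z(2) = x(1) + x(2).
n = 10
x, z = [None], [None]              # dummy slot 0 so indices run from 1
for k in range(1, n + 1):
    r = 10 * random.random()
    x.append(5 if r < 4 else (10 if r < 6 else 15))
    if k == 1:
        z.append(x[1])
    elif k == 2:
        z.append(x[1] + x[2])
    else:
        z.append(x[k] + x[k - 1] + x[k - 2])
print(list(zip(x[1:], z[1:])))     # (box weight, conveyor load) pairs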

""^ EVENT-DRIVEN MODELS


Examples 1.1, 1.2, and 1.5 of the previous section described time-driven
systems. This can be seen from the program sketch given in Listing 1.1, in which
successive k-values in the for-next loop compute the response to equally spaced time
events. In fact, the program structure is analogous to a microprocessor-based polling
system where the computer endlessly loops, asking for a response in each cycle. This is in
contrast to an interrupt approach, where the microprocessor goes about its business and
only responds to events via interrupts. These interrupt-like programs create so-called
event-driven models.
In an event-driven model, the system remains dormant except at irregularly
scheduled events. For instance, in modeling the use of keyboard and mouse
input devices on a computer, a user manipulates each device on an irregular basis. The
time between successive events k and k + 1, tk+1 - tk, is called the inter-arrival time.
Unlike the case of time-driven models, where this difference is constant, here this
difference generally varies and is non-deterministic. Thus, event-driven models and
simulations are often based on stochastic methods.
In models of this type, there are two especially interesting questions to be asked. First, we
might ask how many events n occur in a fixed interval, say [0, t]. For a great number of
problems, this number depends only on the length of the interval. That is, the expected
number of events over the interval [0, t] is the same as the number expected on [τ, τ + t]
for any τ >= 0. When this is true, the probability distribution thus defined is said to be
stationary. Specifically, the answer is in the form of a probability statement:
Pn(t) = "probability there are n events during interval [0, t]", where n = 0, 1, 2, . . . is
the sample space. Since the sample space is countably infinite, Pn(t) is a discrete
probability mass function.
A closely related question concerns the expected inter-event time. Even
though the inter-event time is not constant, its statistical description is often known a
priori. Denoting the inter-event time by the random variable T = tk+1 - tk, it should be
clear that T is continuous. Thus, we define a probability density function fT(t) rather than
a probability mass function as we would for a discrete random variable. Recall that
continuous random variables are more easily specified by their distribution function
FT(t) = Pr[T <= t]. The density function follows immediately, since fT(t) = dFT(t)/dt.
The probability distributions defined by Pn(t) and fT(t) are most important, and will be
considered in detail in the next section. But first let us see how straightforward
event-based models are to simulate. The problem is to calculate a sequence
of event times tk = "time at which the kth event occurs". For simplicity, let us simply
assume that the inter-arrival times are uniformly distributed over [0, 1). In other words, we
can use RND for our random intervals. Note that the statement tk+1 - tk = RND is
equivalent to tk - tk-1 = RND. It follows that a code sequence to produce n random event
times is

t0=0
for k=1 to n
    tk=tk-1+RND
next k

From the sequence thus generated, answers to the Pn(t) and FT(t) questions can be
found.
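As an illustration, the Python sketch below estimates FT(t) empirically from simulated
RND inter-arrival times; since RND is uniform on [0, 1), the exact answer there is simply
FT(t) = t, which makes the check easy. The sample size is an arbitrary choice.

import random

# Estimate F_T(t) = Pr[T <= t] from simulated inter-event times; for a
# uniform [0, 1) inter-arrival time the exact value is F_T(t) = t.
samples = [random.random() for _ in range(10000)]
for t in (0.25, 0.5, 0.75):
    est = sum(1 for T in samples if T <= t) / len(samples)
    print("F_T(", t, ") estimated:", round(est, 3), " exact:", t)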

EXAMPLE 1.6

Consider a conveyor system similar to the one described in Example 1.4. This
time, however, all boxes are the same, but they arrive with random (in the
RND sense, for now) inter-arrival times. Let us investigate the number
of boxes N(t) that have arrived up to and including time t. Our results should
resemble those shown graphically in Figure 1.10.
Solution
The simulation input is the set of critical event times discussed earlier. The output
is the function N(t), which is the number of events up to and including time t.
But, in order to compute the number of events at time t, it is necessary to check
whether t falls within the interval [tk, tk+1). If it does, N(t) = k; otherwise, the next
interval must be checked. Listing 1.2 shows code producing the required pairs t, N from
which the graph in Figure 1.10 can be created. Notice that while the input times
are continuous, the event times are discrete. Also, the output N(t) is a monotonically
increasing function taking on non-negative integral values. O

Modeling and simulating system behavior is fundamentally a statistical problem.


Whether by Heisenberg's uncertainty principle on the nanoscale or simply by experimental
error, the deterministic formulae given in traditional texts do not precisely predict practice.

FIGURE 1.10 State N(t) for random inter-arrival times.



t0=0
for k=1 to n
    tk=tk-1+RND
next k
for t=0 to tn step h
    for k=1 to n
        if tk-1<=t<tk then N=k
    next k
    print t, N
next t
LISTING 1.2 Simulation for Example 1.6.
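A Python rendering of Listing 1.2 is sketched below. It counts events directly rather
than scanning intervals, which sidesteps the bookkeeping of the inner loop; n and h are
arbitrary choices.

import random

# Generate n random event times, then scan continuous time in steps of h,
# reporting N(t) = number of events up to and including time t.
n, h = 5, 0.1
times = [0.0]                          # t0 = 0
for k in range(1, n + 1):
    times.append(times[-1] + random.random())

t = 0.0
while t <= times[-1]:
    N = sum(1 for tk in times[1:] if tk <= t)
    print(round(t, 1), N)
    t += h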

Even more importantly, the inputs that drive these systems must often be characterized by
their fundamentally stochastic nature. However, even though an input may appear random,
more often than not its distribution is a random process with well-defined statistical
parameters. Thus, we use a probabilistic approach.
For example, one of the central problems encountered in telephony is that of message
traffic. Engineers need to know the state of the telephone communications network (that is,
the number of callers being serviced) at all times. Over a period of time, callers initiate
service (enter the system) and hang up (depart the system). The number of callers in the
system (arrivals minus departures) describes the state. Thus, if the state is known,
questions such as "over a period of time, just how many phone calls can be expected?"
and "how many messages will be terminated in so many minutes?" can be addressed. The
answer to these questions is not precisely predictable, but, at the same time, average or
expected values can be determined. Thus, while not deterministic, the state is statistically
remarkably regular, and this holds regardless of message rate. That is, even though an operator
in New York City will experience far more traffic than one in Kalamazoo, Michigan,
relatively speaking their probability distributions are nearly identical.

1.3 CHARACTERIZING SYSTEMS
Models are characterized by their system behavior and the type of input accepted by
the system. Once both the input and the system behavior are known, the output can be found.
This is known as the analysis problem. For instance, a model might be a simple function
machine that doubles the input and adds one as follows:

f: z=2x+l

Such a model, which simply algebraically transforms the input, is a zeroth-order, time-
independent system, since there are no states and the formula relating input and output is
independent of time. Regardless of what the input type is and regardless of the historical
record, the transformation f is always the same.

Since the above system is a zeroth-order model, a natural question is just what
constitutes a first- or second-order model. Higher-order models all implicitly involve time.
For instance, the input-output relationship defined by z(k) + 4z(k-1) = x(k) characterizes
a first-order discrete system, because the output at any discrete time k depends
not only on the input but on the output at the previous time as well. Thus, there is an
implied memory or system state. Similarly, the input-output relationship defined by
z(k) + 4z(k-1) + 3z(k-2) = x(k) is a second-order system, since the history required is
two epochs.
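A minimal Python sketch of this second-order system makes the role of the initial states
explicit. Note that this particular system happens to be unstable (its characteristic roots
are -1 and -3), so, as the next paragraph discusses, its initial conditions never become
irrelevant.

# Simulate z(k) + 4z(k-1) + 3z(k-2) = x(k): the recursion cannot start
# until the two initial states z(0) and z(1) are supplied a priori.
def simulate(x, z0, z1):
    z = [z0, z1]
    for k in range(2, len(x)):
        z.append(x[k] - 4 * z[k - 1] - 3 * z[k - 2])
    return z

print(simulate([1.0] * 8, z0=0.0, z1=0.0))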
There are two subtle problems raised in defining higher-order systems like this. First,
if it is necessary to always know the history, how does one start? That is, the output z(n) for
an nth-order system is easy to find if one already knows z(0), z(1), . . . , z(n - 1).
Therefore, these initial states need to be given a priori. This is a good news-bad news sort of
question. It turns out that if the system is linear and stable, the initial conditions become
irrelevant in the long run or steady state. Of course, the bad news is that not all systems are
linear or stable; indeed, sensitivity to initial conditions tends to be a hallmark of nonlinear
systems, which may even be chaotic.
The other subtlety is the nature of time. The discrete-time structure implied by the
variable k above is akin to that of a drum beating out a regular rhythm. In this case, time
"starts" at k = 0 and just keeps going and going until it "ends" at time k = n. This is a
useful concept for models such as a game of chess where z(k) represents the state of a
match on the kth move. Time k has little or no relationship to chronological time t. In
contrast, real or chronological time tends to be continuous, and is most familiar to us as
humans in that we age over time and time seems to be infinitely divisible. The variable t is
commonly used to denote chronological time, and dynamical systems describing contin-
uous-time phenomena are represented by differential equations rather than the difference
equations of the discrete-time case. In this case, the differential equation
z + 4z + 3z = x(t) is a second-order system, because we recognize that it requires two
initial conditions to define a unique solution. In general, an nth-order linear differential
equation will define an nth-order system, and there will be n states, each of which will
require an initial condition z(0), z(0), . . . , z(n–1)(0).
Whether by algebraic, difference, or differential equations, continuous and discrete
models as described above are called regular, since time marches on as to a drum beat.
Assuming that the inter-event time is a constant δ time units for each cycle, the frequency of
the beat is f = 1/δ cycles per time unit. In the limiting case of continuous time, this
frequency is infinite and the inter-event time interval is zero. However, not all models are
regular.
As we have seen earlier, some models are defined by their inter-event (often
called inter-arrival) times. Such systems lie dormant between beats, and only change
state on receipt of a new event; thus they are event-driven models. Event-driven models
are characterized by difference equations involving time rather than output variables.
By denoting the time at which the kth event occurs by tk, the system defined by
tk+1 - tk = k² has ever-increasing inter-event intervals. Similarly, the system defined
by tk+1 - tk = 2 is regular, since the inter-event time is constant, and the system
defined by tk+1 - tk = RND, where RND is a random number uniformly distributed on
the interval [0, 1], is stochastic. Stochastic systems are especially rich, and will be
considered in detail later.

'"^ SIMULATION DIAGRAMS


As in mathematics generally, equations give precise meaning to a model's definition,
but a conceptual drawing is often useful to convey the underlying intent and motivation.
Since most people find graphical descriptions intuitively pleasing, it is often helpful to
describe systems graphically rather than by equations. The biggest problem is that the
terms system and model are so very broad that no single diagram can describe them all.
Even so, models in this text for the most part can be systematically defined using
intuitively useful structures, thus making simulation diagrams a most attractive approach.
System diagrams have two basic entities: signals - represented by directed line
segments - and transformations - represented as boxes, circles, or other geometric shapes.
In general, signals connect the boxes, which in turn produce new signals in a meaningful
way. By defining each signal and each transform carefully, one hopes to uniquely define
the system model. By unique it is meant that the state variables of the model match the
state variables of the underlying system, since the drawing per se will never be unique.
However, if the states, signals, and transforms coincide, the schematic is said to be well
posed and the system and model are isomorphic. Of course obtaining the schematic is
largely an art form, and one should never underestimate the difficulty of this problem.
Even so, once accomplished, the rest of the process is largely mechanical and simulations
can be straightforward.
There are several different types of signals. It is always important to keep precise track
of which signal type is under discussion, since many system models are heterogeneous.
Perhaps the primary distinction is whether a signal is an across variable or a through
variable. An across signal has the same value at all points not separated by a transformer.
In contrast, a through signal has the same value at all points not separated by a node. Thus,
an across variable is just a value of the signal produced by an input or transformer. It can
be thought of as akin to voltage in an electrical circuit where every point on the same or
connected arcs (physically, wires) has the same signal value.
A through signal takes an entirely different view. Rather than thinking of a function at
a given time, we envision various events occurring within the system. For instance, the
conveyor in Example 1.4 would be such a system, and the times at which the boxes enter
constitute the signal. Here the signal description is given by a formula for tk, which is the
time at which event k occurs. That is, given an event number, the time at which that
particular event occurs is returned. A through signal can be visualized as the messages in a
communications link where each message flows through various conduits. As messages
come to a junction, they go to only one of several direction choices. So, rather than the
same signal throughout the wire as in an across signal, a through signal has a conservation
principle: the number of messages leaving a junction equals the number of messages
entering the junction.
An across signal represents a common value (of voltage), while a through signal
represents an amount (of messages). Notice how the through description is the inversion of
the across description:

across signal: given time, return level


through signal: given a level, return time

All this is illustrated in Figure 1.11. Within connected arcs, an across signal is always
the same. For instance, in Figure 1.11(a), even though the paths split, the values along each
path are the same and x(t) = y(t) = z(t). In contrast, in Figure 1.11(b), as the through
signal tk comes to the junction, some of the messages (or people, boxes, events, or
whatever) go up and the rest go down. This requires some sort of distributor D to select
which object goes where. There are a number of choices here, but in the end the number
that enter a node is the sum of those going up and those going down. However, our
description gives times, not objects. Thus we consider the explicit listing of each time
sequence, [tk] = [t1, t2, . . . , tm+n], [rk] = [r1, r2, . . . , rm], and [sk] = [s1, s2, . . . , sn],
where each sequence is listed in ascending order by convention. It follows from this that
[tk] = [rk] ∪ [sk], where the operation ∪ is called the merge union.

EXAMPLE 1.7
Consider the system below, in which two through signals are combined at a
collector junction C. Find and graph each signal as a function of chronological
time.

[rk] = [0.2, 2.4, 6.1]

[sk] = [1.5, 4.1, 5.5, 8.6]

Solution
The merge union of the sequences rk and sk is [tk] = [0.2, 1.5, 2.4, 4.1, 5.5, 6.1, 8.6].
Each graph is shown in Figures 1.12-1.14. Notice that each graph is actually an
integral function of time, k = k(t), and that there is no direct relationship between
tk, which is simply the kth component of the vector [tk], and t, which is continuous
time. O
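Because the time sequences are kept in ascending order, the merge union is just an
ordered interleaving. A quick Python check of this example using the standard
heapq.merge:

import heapq

# The merge union of the (already sorted) through-signal time sequences.
r = [0.2, 2.4, 6.1]
s = [1.5, 4.1, 5.5, 8.6]
t = list(heapq.merge(r, s))
print(t)   # [0.2, 1.5, 2.4, 4.1, 5.5, 6.1, 8.6]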

The names across and through signals are motivated by voltage and current as
encountered in electrical circuits. Voltage is a potential difference between two points. As
such, any two points in the same wire not separated by a component (such as a resistor or
capacitor) are electrically equivalent. Thus, within the same or connected arcs, a voltage
signal is the same. From an electrical perspective, the way by which voltage is measured is
across an element between two points. This contrasts with current, which is a through
signal. Current is an absolute (charge per unit time) rather than a relative difference, and
thus conservation principles apply. It is much like water in a plumbing system: what goes
in must come out. In measuring current, it is necessary to measure not between two points
but at a single point. Thus we speak of current through a point or, in the more general
sense, a through signal.

FIGURE 1.11 Across (a) versus through (b) signals.
The nature of a dynamic system is that it will evolve over time, and, therefore,
interesting results tend to be state-versus-time graphs. In cases where the signal is discrete,
some correlation is also made with continuous time as well. For instance, in studying
queuing systems in which customers line up at a service counter, the input signals are often
characterized by inter-arrival times. As an example, the mathematical statement
tk - tk-1 = RND can be interpreted as "the inter-arrival time between two successive
events is a random number uniformly distributed on the interval [0, 1]". A useful
simulation to this model will be able to find N(t), the number of events that have occurred
up to time t. How to go from a statement regarding signals in one form to another is
important.

FIGURE 1.12 Graph of [rk], Example 1.7.

FIGURE 1.13 Graph of [sk], Example 1.7.

FIGURE 1.14 Graph of [tk], Example 1.7.

Two other important classes of contrasting signals are those that use continuous time
versus those that use discrete time. Continuous time is the time that we as human beings
are used to. Given any two time instants t1 and t2, we can always conceptualize another time
instant t in between: t1 < t < t2. In this so-called chronological time, even time concepts
such as t = √2 seconds are not unreasonable. However, not all systems are dynamic with
respect to chronological time. Biological systems are dynamic in that they change over
time, but different organisms see their physiological times quite differently. For that matter,
not all systems are even dynamic. In modeling a chess game, the only time that makes
sense is the number of moves into the game. How long it takes to decide on the move is
rather irrelevant. This, of course, is an example of discrete time. There are consecutive
time instants, and in between any two consecutive moments there is nothing.
Just as there is a relationship between chronological time and physiological time, there
are often relationships between continuous time and discrete time. In an engineered system
involving microprocessors that acquire, process, and control signals, it is common to
sample the continuous signal presented by the world. This sampled signal is stored and
processed by a computer using the sampled value. Presumably, the cost of losing
information is compensated by the power and versatility of the computer in processing.
However, if control is required, the discrete signal must be converted back into a
continuous signal and put back into the continuous world from which it came. This
process is called desampling. Sampled signals are called regular discrete signals, since
their instances are chronologically similar to a drum beat - they are periodic and
predictable.
In contrast to regularly sampled signals, there are event-driven discrete signals where
the signal itself is predictable, but the time at which it occurs is not. For example, the
signal N(t) that is "the number of customers waiting in line for tickets to a football game"
has a predictable state (consecutive integers as arrivals come), but the times at which
arrivals occur is random. Assuming that the times of consecutive arrivals are tk,
N(tk+1) = N(tk) + 1. However, tk+1 - tk = RND is not a regular sampled signal, since
the next continuous time of occurrence is not predictable. Even so, from the system's point
of view, nothing happens between event occurrences, so it is both discrete and regular.
Here the event instances occur at discretized time k, but the exact time of the next
occurrence is not predictable, even though we know that the next event will bump up the
event counter by one. In short, sampling only makes sense from the external continuous
time's view. From an internal view, it is just discrete time, and there is no worry about any
reference to an external view.
There are other contrasting signal types as well, the most notable being deterministic
versus random signals. Random signals are extremely important, since it is impossible to
model all aspects of a system. Unless we know all there is to know about every process that
impinges on a system, there will be some cause-effect relationships that are unaccounted
for. The logical means of dealing with this uncertainty is by incorporating some random
error into the system. Of course, this isn't as trivial as it might seem at first. Suppose we
incorporate a set of effects into our model, but we know there are still others unaccounted
for. Rather than search endlessly for the remaining effects (as if there were only a finite set
anyway!), we statistically analyze the residuals and simply deal with them as averages
using a set of simulation runs. In this way, the effects that we do know will be validated
and our model is useful, even if it is only for a limited reality. Of course, all models are
only of limited value, since our knowledge is of limited extent. In any case, random signals
and statistical methods are essential to good system modeling.
As signals meander throughout a modeling schematic, they encounter different
components, which act as transformers. There are a number of specific transformers,
but in general there are only four basic types, each of which will be developed further in
this text. They are algebraic, memory, type converter, and data flow. If we were to be
extremely pedantic, all transformers could be developed axiomatically. That is, beginning


with only an adder and subtracter, all algebraic transformers could be produced. By
including a memory unit in our list of primitives, the memory types could all be created.
This could continue up the ladder until all transformer types are characterized. This will
not be done here; rather, we will build them throughout the text as necessary.
Consider the generic transformer shown in Figure 1.15. Note that there can be
several inputs, but there is only a single output. If a multiple-output unit is required,
it can always be created by combining several single-output transformers within a
single module.

FIGURE 1.15 Generic data flow transform block.
The simplest of these is the algebraic transformer, which essentially acts like a
function machine: it changes the input into an output according to some well-defined
formula that depends on only the input and perhaps time. For example,
z = x1 + 3x2 - x3 + sin x4 is a time-independent, nonlinear algebraic transform, since the
formula is obviously nonlinear and there is no explicit mention of time in the equation.
Usually the formulas aren't so involved. More typical algebraic transforms include
summation and multiplication blocks.
One step up from the algebraic block are blocks that require memory. These are
called memory transformers, since they apply the historical record and must therefore
also require state descriptions. This permits difference and differential equations, along
with integrators and integral equations. For instance, the transform z(k) = z(k-1) +
2x1(k) + 2x3(k-2) + sin k is a linear, time-dependent, second-order memory transform,
since the output depends on previous values of both the input and output, along
with a nonlinear reference to discrete time k.
There are a number of very useful memory transformers, including delays, accumulators,
and filters. In their purest form, there are only a memory unit and algebraic
transformers from which to build the larger memory transformer family. These, plus
ubiquitous time, can make the set much richer.

EXAMPLE 1.8
Consider the transformer defined by the following input-output difference
equation: z(k) = z(k-1) + 2x1(k) + 2x3(k-2) + sin k. Create a system
representation using only memory units, algebraic transformers, and a
ubiquitous time reference.
Solution
In the spirit of academic exercises, it is always possible to create a larger module
from an encapsulation of primitive elements such as +, *, and M (memory). Of
course, if there is memory, there must also be initial conditions so as to begin the
simulation. Such a diagram is given in Figure 1.16. O

FIGURE 1.16 Encapsulation of primitive elements.
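A Python sketch of this transformer makes the memory bookkeeping explicit: one cell
recalls z(k-1) and two more recall the delayed input x3, all initialized to zero here as an
assumed initial condition; the input sequences are arbitrary.

import math

# One memory cell for z(k-1) and two for the delayed input x3(k-2),
# plus the ubiquitous time reference k (all memories start at zero).
def transformer(x1, x3):
    z_prev = 0.0                    # memory: z(k-1)
    x3_d1 = x3_d2 = 0.0             # memories: x3(k-1) and x3(k-2)
    out = []
    for k in range(len(x1)):
        z = z_prev + 2 * x1[k] + 2 * x3_d2 + math.sin(k)
        out.append(z)
        z_prev, x3_d2, x3_d1 = z, x3_d1, x3[k]   # shift the memories
    return out

print(transformer(x1=[1, 0, 1, 0], x3=[2, 2, 2, 2]))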

There is no primitive to explicitly handle derivatives and integrals. Even so, since they
arise so often, it is customary to handle them by encapsulated modules. Later, we will
show difference equation techniques with which to handle these as well.
Type converter transformers change continuous signals to discrete and vice versa. This
is usually done by means of sampling and desampling, but there are also other means.
Regardless of technique, the important thing is to know what type of signal is being used at
each stage of the model. Purely continuous and purely discrete systems are the exception -
most systems are hybrid.
The sampled signal is relatively straightforward. The value of a continuous signal is
sampled at times tk = hk + t0 and retained as a function of discrete time k. These
sampled values can then be manipulated using a computer for integral k. On the other
hand, in desampling there are many options. The simplest is the so-called zero-order hold
(ZOH), where the sampled value is simply held constant until the next sample is taken. For
speedy real-time computing systems where sampling frequencies are on the order of

FIGURE 1.16 Encapsulation of primitive elements.


22 Chapter 1: Describing Systems

t I
A

FIGURE 1.17 Sampled and desampled signals.

But, for very long sampling intervals (equivalently,
very low sampling frequencies), some means of anticipating the continuous signal value
is useful. For instance, the Dow Jones Industrial Averages are posted on a daily basis, but it
would be of obvious benefit to estimate the trading behavior throughout the day as well.
Thinking of the opening Dow posting as the sampled signal, the desampler would be a
formula for estimating the actual average for any time of day. This is illustrated in Figure
1.17, where we see a continuous signal x(t) entering a sampler. The sampler transforms x(t)
into the sampled signal x(k). The sampled signal enters a desampler, creating another
continuous signal x*(t), which is only approximately x(t). Even though the signals are
given the same name, the continuous, sampled, and desampled signals are not defined by
functions of the same form. For example, recall Example 1.3.
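
As a concrete sketch of sampling followed by zero-order-hold desampling, consider the following Python fragment (the particular signal, step size, and start time are assumptions made for illustration only):

    import math

    h, t0 = 0.5, 0.0                    # sampling interval and start time (assumed)
    def x(t): return 5*math.sin(2*t)    # a continuous signal (assumed)

    # sampler: x(k) = x(tk) at tk = h*k + t0
    samples = [x(h*k + t0) for k in range(10)]

    # zero-order hold: x*(t) holds the most recent sample until the next one
    def zoh(t):
        k = int((t - t0) // h)          # index of the most recent sample time
        return samples[k]

    print(samples[3], zoh(1.7))         # x(3) and the held value at t = 1.7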
The final transformation type is a data flow. Data flow transformations are usually
used in event-driven rather than time-driven systems. Thinking of signals as discrete
messages rather than continuous functional values, signal arcs are more like pipelines
carrying each message to one destination or another. Data flow transformations usually
execute flow control. In this model, messages might queue up in a transformation box and
form a first-in-first-out (FIFO) queue or maybe a last-in-first-out (LIFO) stack. Typically, a
data flow box only allows a message to exit after it has received an authorization signal
from each of its inputs. In this way, models can be created to demonstrate the reasoned ebb
and flow of messages throughout the system.
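
As a sketch of this flow-control idea, the following Python fragment models a data flow box that holds messages in a FIFO queue and releases one only when every input line has authorized it (the two-input setup and the message format are illustrative assumptions):

    from collections import deque

    class DataFlowBox:
        def __init__(self, n_inputs):
            self.queue = deque()            # FIFO message buffer
            self.auth = [False]*n_inputs    # authorization state of each input

        def receive(self, msg):
            self.queue.append(msg)          # messages wait in arrival order

        def authorize(self, i):
            self.auth[i] = True

        def step(self):
            # release the oldest message only if all inputs have authorized
            if self.queue and all(self.auth):
                self.auth = [False]*len(self.auth)
                return self.queue.popleft()
            return None

    box = DataFlowBox(2)
    box.receive("m1"); box.receive("m2")
    box.authorize(0); box.authorize(1)
    print(box.step())    # releases "m1"
    print(box.step())    # None: a fresh authorization is needed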
The simplest data flow transforms occur in state machines. These systems are akin to
having a single message in an event-driven system; where the message resides corresponds
to the system state. This is best illustrated by an example.

EXAMPLE 1.9
Model a 2-bit binary counter using a finite state machine. If the input is x = 0,
then the counter stops counting, and if x = 1, then the counter continues on from
where it last left off. The output of the counter should produce the sequence 3, 1,
5, 2, 3, 1, 5, 2, . . . . Since there are four different output values, there are four
different state values too. Clearly, there are only two input values. Model this state
machine using bit vectors.
Solution
Since there are at most 2² = 4 different states, this is a 2-bit binary counter. A
suitable block diagram is shown in Figure 1.18. The state is shown in binary form
as the vector y = (y1, y2). The input is x and the output is the variable z. Each of
the variables x, y1, and y2 has two values, 0 and 1. Output z is not shown as a bit
vector, but takes on one of the values {1, 2, 3, 5}.
The actual working of the system is best understood using a transition
diagram, as shown in Figure 1.19. In this diagram, states are shown as vertical
lines, with a different output associated with each one. Transitions from one state
to another are represented by horizontal arrows, each labeled with the input required
to achieve it. In this case, the counter progresses through the states in binary form: 00,
01, 10, 11, 00, etc. However, the output that is actually observable is 3, 1, 5, 2, 3,
etc., as required. This is a state machine, since for the same input (1) there are
different outputs depending on which state the system is in.
In general, there is not simply one input, but a sequence of inputs x(k). For
instance, if the input string is x = [1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1] and the initial state
is y = 10, the output string is z = [5, 2, 2, 2, 3, 3, 1, 1, 5, 2, 3]. If the output z is
also required to be a bit vector, z = [z1, z2, z3] can be used. O
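
The following Python sketch reproduces this trace. Treating x = 0 as holding the last output (repeating it while counting is stopped) is one reading consistent with the string given above; the encoding itself is illustrative:

    outputs = {0: 3, 1: 1, 2: 5, 3: 2}      # state index -> observable output

    def counter(xs, state, last=None):
        z = []
        for x in xs:
            if x == 1:
                last = outputs[state]       # emit the current state's output
                state = (state + 1) % 4     # and advance through 00, 01, 10, 11
            z.append(last)                  # x = 0: hold the previous output
        return z

    x = [1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1]
    print(counter(x, 2))    # initial state y = 10 is index 2
    # -> [5, 2, 2, 2, 3, 3, 1, 1, 5, 2, 3]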

State machines are important, since their response to an input can vary. Unlike
function machines, which use the same formula under all circumstances, state machine
responses depend on both the input and the current state. The archetypical finite state
machine (FSM) is the digital computer. It is a state machine, since its output depends on
the memory configuration as well as user initiative. The FSM is finite since there are a
finite number of memory cells in any computer. Engineers design and program using state
machine methods, which will be explored later in this text.

FIGURE 1.18 Block diagram of the state machine, with input x, output z, and state (y1, y2).



FIGURE 1.19 Transition diagram for the state machine: the states 00, 01, 10, and 11 have outputs z = 3, 1, 5, and 2, respectively; an input of x = 1 advances the machine to the next state, while x = 0 leaves it where it is.

However, state machines are restrictive in the sense that there can be only one "token"
in the system at a time. By token we mean a generic entity such as a process in a
multitasking operating system or a message in a communication network. Describing
systems with multiple, often autonomous, tokens requires a more general structure than
an FSM diagram. This is especially true when the tokens need inter-process as
well as intra-process communication. Modeling and simulation of such systems with
parallel processes require devices called Petri nets. These will also be discussed later in
the text.

1.5 THE SYSTEMS APPROACH


Every system has three basic components: input, output, and the system description. If
any two of these are specified, the remaining one follows. Each possibility results in a
different problem (analysis, design, and management), each with a different view. We list
these possibilities below.

Specified entities     Unspecified entity     Name

input, system          output                 analysis
input, output          system                 design
system, output         input                  control

A scientist will tend to use analysis. From this perspective, the system is part of nature
and is given a priori; it only remains to discover just what the system actually is. By
bombarding the system (sometimes literally, as in particle physics) with a number of
different inputs and analyzing the results, it is hoped that the system description will reveal
itself. Supposedly, after much study and many trials, the scientist gets an idea that can be
described as a mathematical formulation. Using the scientific method, a hypothesis is
conjectured for a specific input. If the output matches that predicted by the system model,
our scientist gets to publish a paper and perhaps win a grant; otherwise, he is relegated to
more testing, a revision of the mathematical model, and another hypothesis.
An engineer takes a different view. For him, inputs and outputs are basically known
from engineering judgement, past practice, and specifications. It remains for him to design
a system that produces the desired output when a given input is presented. Of course,
designing such a thing mathematically is one thing, but creating it physically using non-ideal
components adds a number of constraints as well. For instance, an electrical rather than
mechanical system might be required. If a system can be designed so that the input-output
pairs are produced and the constraints are met, he gets a raise. Otherwise, another line of
work might be in order.
A manager takes another view. Whether he manages people, a computer network, or
a natural resource system, the system is already in place. Also, the output is either
specified directly (a number of units will be manufactured in a certain period of time) or
indirectly (maximize profit and minimize costs). It is the manager's duty to take these
edicts and provide inputs in such a way as to achieve the required ends. If he is unable to
satisfy his goals, the system might need adjusting, but this is a design function. He is only
in charge of marshaling resources to achieve the requirements.
Each of these views is correct, and in fact there are books written on each of them.
However, most are discipline-specific and lack generality. Therefore, the system scientist
will address specific problems with each view in mind. After mastering the basic
mathematics and system tools described in this text, it is only natural to look to the literature
addressing each problem. For instance, the system identification problem studies how best
to "discover" a correct system description. This includes both the mathematical form and
parameter values of the description.
System optimization, on the other hand, assumes that the mathematical form is
known, but strives to find the parameter values so that a given objective function is
optimized. In practice, system designers have to know not only how to design systems but
also how to identify them and do so in an optimal manner. Thus, even though design is but
one facet, designers are usually well versed in all systems aspects, including analysis and
management as well as design. Each of these views will be investigated throughout this
book.

BIBLIOGRAPHY
Andrews, J. G. and R. R. McLone, Mathematical Modeling. Butterworth, 1971.
Aris, R., Mathematical Modeling, Vol. VI. Academic Press, 1999.
Aris, R., Mathematical Modeling Techniques. Dover, 1994.
Close, C. M. and D. K. Frederick, Modeling and Analysis of Dynamic Systems, 2nd edn. Wiley, 1994.
Cundy, H. M. and A. P. Rollett, Mathematical Models. Oxford University Press, 1952.
Director, S. W. and R. A. Rohrer, Introduction to Systems Theory. McGraw-Hill, 1988.
Gershenfeld, N., The Nature of Mathematical Modeling. Cambridge University Press, 1999.
Law, A. and D. Kelton, Simulation Modeling and Analysis. McGraw-Hill, 1991.
Ljung, L., System Identification: Theory for the User, 2nd edn. Prentice-Hall, 1999.
Profozich, D. M., Managing Change with Business Process Simulation. Prentice-Hall, 1997.
Roberts, N., D. Andersen, R. Deal, M. Garet, and W. Shaffer, Introduction to Computer Simulation. Addison-Wesley, 1983.
Sage, A. P., Systems Engineering. Wiley, 1995.
Sage, A. P., Decision Support Systems Engineering. McGraw-Hill, 1991.
Sage, A. P. and J. E. Armstrong, An Introduction to Systems Engineering. Wiley, 2000.
Sandquist, G. M., Introduction to System Science. Prentice-Hall, 1985.
Thompson, J. R., Simulation: A Modeler's Approach. Wiley-Interscience, 2000.
Vemuri, V., Modeling of Complex Systems. Academic Press, 1978.
Watson, H. J., Computer Simulation, 2nd edn. Wiley, 1989.
White, H. J., Systems Analysis. W. B. Saunders, 1969.
Zeigler, B. P., Theory of Modeling and Simulation, 2nd edn. Academic Press, 2000.

EXERCISES
1.1 Consider the RL circuit shown, in which the input is a source voltage vs(t) and the output is the
voltage across the inductor vL(t). Assuming that the state variable is the current i(t), find state and
output equations analogous to Equation (1.5).

1.2 Consider a linear, time-independent, first-order SISO system described by the following
input-output relationship:

dz/dt + az = bx(t).

(a) Derive a general explicit solution for z(t) in terms of the initial output z(0).
(b) Apply the results of part (a) to a differential equation of this form with initial condition
z(0) = 2 and input x(t) = u(t), where u(t) is the unit step function: u(t) = 1 for t > 0 and
u(t) = 0 for t < 0.
(c) Apply the results of part (a) to the following differential equation:

dz/dt + 4z = r(t),    z(0) = 0,

where r(t) is a rectangular wave that takes the value r(t) = 1 on alternating intervals indexed by
k = 0, 1, 2, . . . and the value 0 otherwise.
1.3 Consider the RC circuit shown, in which there are two source voltage inputs: v1(t) and v2(t). Use the
voltage across the capacitor vC(t) as the state variable and find the state and output equations
analogous to Equation (1.5).
(a) Assume the output is the current going through the capacitor, iC(t).
(b) Assume there are three outputs: the voltage across each resistor and the capacitor.
1.4 Consider a system of two inputs x1, x2 and two outputs z1, z2 described by the following
input-output relationships:

d²z1/dt² + 3 dz1/dt + 2z1 = x1 + 3x2,
d²z2/dt² + 4 dz2/dt + 3z2 = −x1 + . . . .

Define a column vector y of state variables so that this system is in standard
linear form. That is, for matrices A, B, C, and D, find

dy/dt = Ay + Bx,
z = Cy + Dx.

1.5 Consider a system with input x(k) and output z(k) described by the following difference equation
relating the input to the output:

z(k) + 3z(k−1) + 2z(k−2) = x(k) + 3x(k−1).

Define the column vector y of state variables so that this system is in standard linear form. That
is, for matrices A, B, C, and D,

y(k+1) = Ay(k) + Bx(k),
z(k) = Cy(k) + Dx(k).
1.6 Consider a linear, time-independent, first-order SISO discrete system described by the difference
equation z(k) = az(k−1) + x(k).
(a) Show that the explicit solution in terms of the initial output is given by

z(k) = a^k z(0) + Σ a^(k−i) x(i),

where the sum is taken over i = 1, 2, . . . , k.
(b) Apply the results of part (a) to the system description z(k) = z(k−1) + 2k; z(0) = 1.
(c) Apply the results of part (a) to the system description z(k) + z(k−1) = 2k; z(0) = 1.
1.7 Consider the continuous signal x(t) = 5 sin(2πt) ln t, which is sampled at times tk. Find
an explicit equation for the sampled signal x(k) = x(tk).
1.8 Recall the model and simulation outlined in Example 1.4. Implement this simulation and perform it
for n = 10, n = 100, n = 1000, and n = 10000 iterations. Compare your results against the
theoretical expectations. Present your conclusions in the form of a graph of error versus number
of iterations. Notice that the exponential form of n implies a logarithmic scale for the iteration axis.
1.9 Consider a factory system similar to that of Example 1.4, in which there are boxes of three different
weights: 5, 10, and 15 pounds. The probability that an incoming box has a given weight is as follows:
w Pr[W = w]

5 0.5
10 0.2
15 0.3
1.0

(a) Create a simulation of 200 boxes being placed on the conveyor and the total weight recorded.
(b) Summarize the total weight distribution by recording the relative number of times each weight
(15, 20, 25, 30, 35, 40, or 45 pounds) occurs.
(c) Calculate the theoretical distribution corresponding to the simulation of part (b). Compare the
two distributions by forming a distribution table and a histogram.
(d) Using a chi-square test at the 98% confidence level, determine whether or not the simulation is
valid.
1.10 Implement and perform the simulation outlined in Example 1.5 for various values of n.
(a) Make a study (as in Exercise 1.8) to determine a value of n by which a reasonable result can be
guaranteed.
(b) Using the suitable n found in part (a), compute the mean and standard deviation of this random
process after stationarity is achieved.
1.11 Consider the sequence of inter-event times generated by the formula tk = tk−1 − ln(RND).
(a) Using this formula, create a simulation similar to that in Example 1.6, in which the number of
events N(t) is graphed as a function of time.
(b) Using the simulation, compute the average inter-event time.
(c) Using the simulation, compute the standard deviation of the inter-event times.
(d) Repeat parts (a)-(c) for the inter-event sequence defined by tk = tk−1 − 2 ln(RND) − 3 ln(RND).
1.12 Write a program that will find the merge union of two event-time sequences [tk] = [rk] ∪ [sk], where
[rk] = [r1, r2, . . . , rm] and [sk] = [s1, s2, . . . , sn] are the inputs and [tk] = [t1, t2, . . . , tm+n] is the
output.
(a) Using your merge union program, create a simulation that generates two event sequences
rk = rk−1 − 2 ln(RND) and sk = sk−1 − 3 ln(RND), and merges the results into a single sequence [tk].
(b) Create graphs of your results similar to Figures 1.12-1.14 of Example 1.7.
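
One possible shape for the merge union program of Exercise 1.12 (a Python sketch, for illustration only) is the classic two-pointer merge of time-ordered lists:

    def merge_union(r, s):
        # merge two sorted event-time sequences into one sorted sequence
        t, i, j = [], 0, 0
        while i < len(r) and j < len(s):
            if r[i] <= s[j]:
                t.append(r[i]); i += 1
            else:
                t.append(s[j]); j += 1
        t.extend(r[i:])    # append whatever remains of either sequence
        t.extend(s[j:])
        return t

    print(merge_union([1.0, 2.5, 4.0], [0.5, 3.0]))   # [0.5, 1.0, 2.5, 3.0, 4.0]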
1.13 Consider transformers defined by the following input-output relations. Implement each at the atomic
level using only + (addition), * (multiplication), and M (memory) units. Try to use as few memory
units as possible.
(a) z(k) + 2z(k−1) + 4z(k−2) = x(k);
(b) z(k) + 4z(k−2) = x(k) + 3x(k−1);
(c) z(k) = x(k) + x(k−1) + x(k−2).
1.14 Consider the most general input-output relationship for a linear, discrete-time, time-invariant SISO
system:

z(k) + Σ aj z(k−j) = Σ bj x(k−j),

where the first sum is taken over j = 1, 2, . . . , n and the second over j = 0, 1, . . . , m.
(a) Show that it is possible to create a simulation diagram for this system using only max(m, n)
memory units.
(b) Apply the technique of part (a) to the two-input, one-output system defined in Example 1.8:
z(k) = z(k−1) + 2x1(k) + 2x2(k−2) + sin k, thereby making a simulation diagram with
max(2, 1) = 2 memory units.
1.15 Consider the finite state machine defined in Example 1.9, but this time with three possible inputs:
if x = 00, the machine stays in place;
if x = 01, the machine sequences forward;
if x = 10, the machine sequences backward;
the input x = 11 is disallowed.
Create the transition diagram of the new state machine.
1.16 Consider a finite state machine that works like the one in Example 1.9, except that it has two outputs
instead of one. The first output z1 behaves exactly like z and generates the sequence 3, 1, 5, 2, 3, 1, 5,
2, . . . . The second output z2 also behaves like z, but generates the sequence 3, 1, 5, 2, 7, 8, 3, 1, 5, 2,
7, 8, . . . .
(a) Create the transition diagram of this state machine.
(b) Generalize this result.
1.17 Two stations, P1 and P2, located on the x axis at points P1(d1, 0) and P2(d2, 0), sight a target whose
actual location is at point P(x, y). However, as a result of an angular quantization error that is
uniformly distributed over [−δ, δ], this pair of stations calculates an apparent position P′(x′, y′).
Specifically, the observed angles are given by θ1 + μδ and θ2 + μδ, where θ1 and θ2 are the actual
angles, respectively, and μ is a random variate that is uniformly distributed on [−1, 1]. Simulate this
system mathematically and analyze the results.

Write a program, with inputs δ and n (the number of points to be sampled), that reads the n
actual points (x, y); these are tabulated below. The program should calculate the apparent points
(x′, y′) as seen by each of the two stations P1 and P2. Using the points defining the actual trajectory
in the table below, compute and tabulate the apparent coordinates. Graph the actual and
apparent trajectories for several different quantization error sizes δ.

x y x y x y
20.6 13.2 19.3 10.3 8.3 7.8
17.2 7.6 14.5 5.5 11.4 11.6
10.7 4.9 8.6 6.7 9.5 5.7
8.1 10.6 10.8 12.3 3.5 1.6
11.5 9.3 9.9 6.3 18.4 8.9
7.6 3.9 4.3 1.9 12.5 4.8
20.3 12.4 18.9 9.6 8.1 8.7
16.6 6.9 13.5 5.2 11.6 10.8
10.1 5.2 10.8 7.6 8.9 5.1
8.4 11.4 6.0 2.8 2.7 1.5
11.2 8.4 19.7 11.0 17.8 8.2
6.8 3.4 15.2 5.9 11.6 4.8
20.0 11.7 9.0 6.2 8.0 9.6
15.9 6.4 9.9 12.5 11.6 10.0
9.4 5.7 10.4 6.9 8.2 4.5
9.1 12.2 5.2 2.4
CHAPTER 2

Dynamical Systems

Mathematical models of continuous systems are often defined in terms of differential
equations. Differential equations are particularly elegant, since they are able to describe
continuous dynamic environments with precision. In an ideal world, it would be possible to
solve these equations explicitly, but unfortunately this is rarely the case. Even so,
reasonable approximations using numerical difference methods are usually sufficient in
practice. This chapter presents a series of straightforward numerical techniques by which a
great many models can be approximated using a computer.

2.1 INITIAL-VALUE PROBLEMS


One general class of models is that of dynamical systems. Dynamical systems are
characterized by their system state and are often described by a set of differential equations.
If the differential equations, combined with their initial conditions, uniquely specify the
system, the variables specified by the initial conditions constitute the system state variables.
In general, suppose there are m differential equations, each of order ni. There are
n = n1 + n2 + · · · + nm initial conditions. Equivalently, there are n first-order differential
equations.

Equation    Order
1           n1
2           n2
...         ...
m           nm

The output variables of each of these first-order differential equations, each along with
their single initial condition, comprise a set of system state variables. However, since the
equations are not unique, neither are the state variables themselves. Nonetheless, there are
exactly n of them. Therefore, we begin by considering the first-order initial-value problem

dx/dt = f(t, x),    (2.1)

where x(t) = [x1(t), x2(t), . . . , xn(t)] is the system state vector and x(0) = [x1(0),
x2(0), . . . , xn(0)] are the corresponding initial conditions.

EXAMPLE 2.1
Consider a system defined by the following differential equations:

d²α/dt² + 2β (dα/dt) + β²α = cos t,
dβ/dt + αβ = 4,    (2.2)

subject to the initial conditions

α(0) = 2,
(dα/dt)(0) = −1,    (2.3)
β(0) = 1.

Since there are two dynamic variables (those involving derivatives), one governed by a
second-order and the other by a first-order differential equation, this is a third-order
system. Therefore, it is possible to re-define this as a system of three first-order
differential equations. Letting x1 = α(t), x2 = dα/dt, and x3 = β(t), Equations (2.2)
can be re-written as

dx2/dt + 2x2x3 + x3²x1 = cos t,
dx3/dt + x1x3 = 4.

Noting that x2 is the derivative of x1, Equations (2.2) and (2.3) may be redefined
as the following system of first-order differential equations with initial conditions:

dx1/dt = x2,                        x1(0) = 2,
dx2/dt = −2x2x3 − x1x3² + cos t,    x2(0) = −1,    (2.4)
dx3/dt = −x1x3 + 4,                 x3(0) = 1.

Defining the three-component state vector as x = [x1, x2, x3], this system may be
written in the form of Equation (2.1), where

f(t, x) = [x2, −2x2x3 − x1x3² + cos t, −x1x3 + 4].

It is straightforward to generalize this technique to a great number of systems.
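
In code, this reduction amounts to packaging the right-hand sides as one vector-valued function. A Python sketch of f(t, x) for Equations (2.4) follows (the function layout is illustrative):

    import math

    def f(t, x):
        x1, x2, x3 = x    # x1 = alpha, x2 = d(alpha)/dt, x3 = beta
        return [x2,
                -2*x2*x3 - x1*x3**2 + math.cos(t),
                -x1*x3 + 4]

    x0 = [2.0, -1.0, 1.0]    # initial state from Equations (2.3)
    print(f(0.0, x0))        # state derivative at t = 0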



Euler's Method
Since the technique of Example 2.1 is so general, we now consider only first-order
initial-value problems. Further, without loss of generality, we consider the scalar version of
Equation (2.1): dx/dt = f(t, x). Using the definition of the derivative,

lim (h→0) [x(t + h) − x(t)] / h = f(t, x).

Therefore, for small h, x(t + h) ≈ x(t) + hf(t, x). It is convenient to define a new discrete
variable k in place of the continuous time t as t = tk = hk + t0, where t0 ≤ t ≤ tn and
k = 0, 1, 2, . . . , n.
This linear transformation of time may be thought of as sampling the independent
time variable t at n + 1 sampling points, as illustrated in Figure 2.1. Formally,

x(h(k + 1) + t0) ≈ x(hk + t0) + hf[hk + t0, x(hk + t0)].    (2.5)

We also introduce a new discrete dependent variable x(k) as

x(k + 1) = x(k) + hf[t(k), x(k)],    (2.6)

for k = 0, 1, 2, . . . , n. Whether we are discussing continuous or discrete time, the sense of
the variable x should be clear from the context. If the time variable is t, the signal is taken
as continuous or analog and the state is x(t). Or, if the time is discrete, the state variable is
x(k). The process of replacing continuous time by discrete time is called discretization.
Accordingly, x(k) ≈ x(tk) = x(t) for small enough step size h.
Solving Equation (2.5) iteratively using Equation (2.6) will approximate the solution
of Equation (2.1). Notice that all variables on the right-hand side of Equation (2.6) are at
time k, whereas those on the left-hand side are at time k + 1. Therefore, we refer to this
expression as an update of variable x and often do not even retain the index k.

FIGURE 2.1 Relationship between continuous time t and discrete sampled time k.

For instance,

in writing a computer program, a typical assignment statement can express this recursively
as an update of x, x = x + hf(t, x), followed by an update of t, t = t + h. The variables x
and t on the right-hand sides of the assignments are called "old", while those on the left-hand
sides are called "new". This technique, called Euler's method, owes its popularity to
this simplicity.

EXAMPLE 2.2
Consider the system described by

dx/dt = x²t,
x(1) = 3.    (2.7)

Using elementary techniques, it is easy to show that the exact solution of this
system is

x(t) = 6 / (5 − 3t²).    (2.8)

However, assuming that the explicit solution is unknown, Euler's method
proceeds as follows. Arbitrarily letting the integration step size be h = 0.05,
the equivalent discrete system is characterized by the initial conditions

t0 = 1,    x(0) = 3

and the difference equations

x(k + 1) = x(k) + (1/20)[x(k)]² tk,
tk+1 = tk + 1/20,

for k = 0, 1, 2, . . . , n − 1. Notice that the old value of t is required to update x.
However, x is not required in the update of t. Therefore, by updating x before
t, there is no need to use subscripts to maintain the bookkeeping details. This
example is solved algorithmically as in Listing 2.1.

h=0.05
t=1
x=3
print t, x
for k=1 to n
    x=x+h*x^2*t
    t=t+h
    print t, x
next k

LISTING 2.1 Euler's method applied to the system (2.7).
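
For comparison, here is the same algorithm rendered in Python (an illustrative translation; the text itself uses only the generic pseudocode above). With n = 6 it reproduces the x(k) row of Table 2.1:

    h, t, x, n = 0.05, 1.0, 3.0, 6
    print(t, x)
    for k in range(n):
        x = x + h * x**2 * t    # update x first, using the old t
        t = t + h               # then update t
        print(round(t, 2), round(x, 2))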

The exact solution given by Equation (2.8) and the approximate solution
generated by Listing 2.1 for n = 6 are tabulated in Table 2.1.

TABLE 2.1 Exact Solution x(t) and Euler Approximation x(k) to System (2.7)

k 0 1 2 3 4 5 6
tk 1.00 1.05 1.10 1.15 1.20 1.25 1.30
x(t) 3.00 3.55 4.38 5.81 8.82 19.20 -85.71
x(k) 3.00 3.45 4.07 4.99 6.42 8.89 13.83

These data are also reproduced graphically in Figure 2.2, where it should be
noticed that there is a compounding effect on the error. Although the approximate
solution starts correctly at t0 = 1, each successive step deviates further from the
exact one. Therefore, in applying the Euler method, it is important not to stray too far
from the initial time. It is also necessary to choose the integration step size h
wisely.
The solution given by Equation (2.8) reveals a singularity at
tcrit = √(5/3) ≈ 1.29. Therefore, as t approaches the neighborhood of tcrit, the
numerical results become increasingly precarious. This leads to increasingly
large deviations between x(tk) and x(k) as t approaches tcrit from the left. It is clear
from Figure 2.2 that the Euler approach leads to poor results in this neighborhood.
Continuing past tcrit (see k = 6 in Table 2.1), the error is even more obvious
and the values obtained are meaningless. O

FIGURE 2.2 Relative accuracy of different integration techniques: the exact solution compared with the Euler (first-order) and Taylor (second-order) approximations.

FIGURE 2.3 Effect of varying step size h on the system (2.7): the exact solution compared with Euler approximations for h = 0.02, 0.04, and 0.08.

One way to improve the accuracy is to decrease the step size h. The effect of this is
illustrated in Figure 2.3. Even so, reducing h has two major drawbacks. First, there will
necessarily be many more computations to estimate the solution at a given point. Second,
owing to inherent machine limitations in data representation, h can also be too small. Thus, if
h is large, the underlying difference approximation leads to problems, and if h is too small,
truncation and rounding errors accumulate. The trick is to make h small enough for
accuracy, but no smaller.

EXAMPLE 2.3

Rather than printing the results of a procedure after every computation, it
is often useful to print results only periodically. This is
accomplished by using a device called a control break. A control break works
by means of nested loops. After an initial print, the outside loop prints n times
and, for each print, the inside loop produces m computations. Thus, there are mn
computations and n + 1 prints in total.
For example, suppose it is necessary to print n + 1 results for
Example 2.2, with m iterations between successive prints, until the entire simulation is
complete. The solution is to control the iteration with two loops rather than one. The outer
or print loop is controlled using index i (i = 1, 2, . . . , n) and the inner or compute
loop uses index j (j = 1, 2, . . . , m). Figure 2.4 shows this structure. The
implementation of the control break is given in Listing 2.2. O
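
A minimal Python sketch of this control-break structure, applied to the Euler iteration of Example 2.2 (the shape of Listing 2.2 is assumed here, since that listing is not reproduced on this page):

    h, t, x = 0.05, 1.0, 3.0
    n, m = 3, 2                  # n prints after the initial one; m updates per print
    print(t, x)                  # initial print
    for i in range(n):           # print (outer) loop
        for j in range(m):       # compute (inner) loop
            x = x + h * x**2 * t
            t = t + h
        print(round(t, 2), round(x, 4))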