International Series in Operations Research & Management Science
Volume 278
Series Editor
Camille C. Price
Department of Computer Science, Stephen F. Austin State University,
Nacogdoches, TX, USA
Associate Editor
Joe Zhu
Foisie Business School, Worcester Polytechnic Institute, Worcester, MA, USA
Founding Editor
Frederick S. Hillier
Stanford University, Stanford, CA, USA
More information about this series at http://www.springer.com/series/6161
Allen Holder • Joseph Eichholz

An Introduction to Computational Science

Allen Holder
Department of Mathematics
Rose-Hulman Institute of Technology
Terre Haute, IN, USA

Joseph Eichholz
Department of Mathematics
Rose-Hulman Institute of Technology
Terre Haute, IN, USA
Additional material to this book can be downloaded from http://extras.springer.com.
ISSN 0884-8289 ISSN 2214-7934 (electronic)
International Series in Operations Research & Management Science
ISBN 978-3-030-15677-0 ISBN 978-3-030-15679-4 (eBook)
https://doi.org/10.1007/978-3-030-15679-4
© Springer Nature Switzerland AG 2019
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology
now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, express or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Dedicated to the educators who have so
nurtured our lives.
Joseph Eichholz: Weimin Han
Allen Holder: Harvey J. Greenberg
and
Jeffrey L. Stuart.
Foreword
The interplay among mathematics, computation, and application in science
and engineering, including modern machine learning (formerly referred to as
artificial intelligence), is growing stronger all the time. As such, an educa-
tion in computational science has become extremely important for students
pursuing educations in STEM-related fields. This book, along with undergrad-
uate courses based on it, provides a modern and cogent introduction to this
rapidly evolving subject.
Students pursuing undergraduate educations in STEM-related fields are
exposed to calculus, differential equations, and linear algebra. They are also
exposed to coursework in computer science in which they learn about com-
plexity and some algorithmic theory. These courses commonly require stu-
dents to implement algorithms in popular languages such as C, Java, Python,
MATLAB, R, or Julia. However, many students experience unfortunate dis-
connects between what is learned in math classes and what is learned in
computer science classes. This book, which is designed to support one or two
undergraduate terms of study, very nicely fills this gap.
The book introduces a wide spectrum of important topics in mathemat-
ics, spanning data science, statistics, optimization, differential equations, and
randomness. These topics are presented in a precise, dare I say rigorous,
mathematical fashion. However, what makes this book special is that it in-
tertwines mathematical precision and detail with an education in how to
implement algorithms on a computer. The default computational language is
MATLAB/Octave, but the interface to other programming languages is also
discussed in some detail.
The book is filled with interesting examples that illustrate and motivate
the ideas being developed. Let me mention a few that I found particularly
interesting. One homework exercise asks students to access “local” tempera-
ture data from the National Oceanic and Atmospheric Administration’s website
and to use that data as input to a regression model to see if average daily
temperatures are on the rise at that location. Another exercise in the chapter
on modern parallel computing requires students to generate the famous frac-
tal called the Mandelbrot set. Yet another exercise in the material on linear
regression models asks students to reformulate a nonlinear regression model
that computes an estimate of the center and radius of a set of random data
that lies around a circle. The reformulation is linear, which allows students
to solve the problem with their own code from earlier exercises.
The computational part of the book assumes the reader has access to either
MATLAB or Octave (or both). MATLAB is perhaps the most widely used
scripting language for the type of scientific computing addressed in the book.
Octave is freely downloadable and is very similar to MATLAB. The book
includes an extensive collection of exercises, most of which involve writing
code in either of these two languages. The exercises are very interesting, and
a reader/student who invests the effort to solve a significant fraction of these
exercises will become well educated in computational science.
Robert J. Vanderbei
Preface
The intent of this text is to provide an introduction to the growing interdisci-
plinary field of Computational Science. The impacts of computational stud-
ies throughout science and engineering are immense, and the future points
to an ever increasing reliance on computational ability. However, while the
demand for computational expertise is high, the specific topics and curricula
that comprise the arena of computational science are not well defined. This
lack of clarity is partially due to the fact that different scientific problems
lend themselves to different numerical methods and to diverse mathematical
models, making it difficult to narrow the broad variety of computational ma-
terial. Our aim is to introduce the “go to” models and numerical methods
that would typically be attempted before advancing to more sophisticated
constructs if necessary. Our goal is not to pivot into a deep mathematical
discussion on any particular topic but is instead to motivate a working savvy
that practically applies to numerous problems. This means we want students
to practice with numerical methods and models so that they know what
to expect. Our governing pedagogy is that students should understand the
rudiments of answers to: How expensive is a calculation, how trustworthy is a
calculation, and how might we model a problem to apply a desired numerical
method?
We mathematically justify the results of the text, but we do so without
undue rigor, which is appropriate in an undergraduate introduction. This
often means we assert mathematical facts and then interpret them. The au-
thors share the melancholy of a matter-of-fact presentation with many of our
mathematical colleagues, but we hope that students will continue to pursue
advanced coursework to complete the mosaic of mathematical justification.
Our humble approach here is to frame the future educational discourse in a
way that provides a well-honed skill for those who might consider working in
computational science.
The intended audience is the undergraduate who has completed her or his
introductory coursework in mathematics and computer science. Our general
aim is a student who has completed calculus along with either a traditional
course in ordinary differential equations or linear algebra. An introductory
course in a modern computer language is also assumed. We introduce topics
so that any student with a firm command of calculus and programming should
be able to approach the material. Most of our numerical work is completed
in MATLAB, or its free counterpart Octave, as these computational plat-
forms are standard in engineering and science. Other computational resources
are introduced to broaden awareness. We also introduce parallel computing,
which can then be used to supplement other concepts.
The text is written in two parts. The first introduces essential numerical
methods and canvases computational elements from calculus, linear algebra,
differential equations, statistics, and optimization. Part I is designed to pro-
vide a succinct, one-term inauguration into the primary routines on which
a further study of computational science rests. The material is organized so
that the transition to computational science from coursework in calculus, dif-
ferential equations, and linear algebra is natural. Beyond the mathematical
and computational content of Part I, students will gain proficiency with el-
emental programming constructs and visualization, which are presented in
their MATLAB syntax. Part II addresses modeling, and the goal is to have
students build computational models, compute solutions, and report their
findings. The models purposely intersect numerous areas of science and engi-
neering to demonstrate the pervasive role played by computational science.
This part is also written to fill a one-term course that builds from the compu-
tational background of Part I. While the authors teach most of the material
over two (10 week) terms, we have attempted to modularize the presentation
to facilitate single-term courses that might combine elements from both parts.
Allen Holder
Joseph Eichholz
Terre Haute, IN, USA
Summer 2018
Acknowledgments
Nearly all of this text has been vetted by Rose-Hulman students, and these
careful readings have identified several areas of improvement and pinpointed
numerous typographical errors. The authors appreciate everyone who has
toiled through earlier drafts, and we hope that you have gained as much
from us as we have gained from you.
The authors have received aid from several colleagues, including many who
have suffered long conversations as we have honed content. We thank Eivind
Almaas, Mike DeVasher, Fred Haan, Leanne Holder, Jeff Leader, John Mc-
Sweeney, Omid Nohadani, Adam Nolte, and Eric Reyes. Of these, Jeff Leader,
Eric Reyes, and John McSweeney deserve special note. Dr. Leader initiated
the curriculum that motivated this text, and he proofread several of our ear-
lier versions. Dr. Reyes’ statistical assistance has been invaluable, and he
was surely tired of our statistical discourse. Dr. McSweeney counseled our
stochastic presentation. Allen Holder also thanks Matthias Ehrgott for sab-
batical support. Both authors are thankful for Robert Vanderbei’s authorship
of the Foreword.
The professional effort to author this text has been a proverbial labor of
love, and while the authors have enjoyed the activity, the task has impacted
our broader lives. We especially want to acknowledge the continued support
of our families—Amanda Kozak, Leanne Holder, Ridge Holder, and Rowyn
Holder. Please know that your love and support are at the forefront of our
appreciation.
We lastly thank Camille Price, Neil Levine, and the team at Springer.
Their persistent nudging and encouragement have been much appreciated.
Contents

Part I Computational Methods

1 Solving Single Variable Equations
  1.1 Bisection
  1.2 Linear Interpolation
  1.3 The Method of Secants
  1.4 Newton’s Method
    1.4.1 Improving Efficiency with Polynomials
    1.4.2 Convergence of Newton’s Method
    1.4.3 MATLAB Functions
  1.5 Exercises
2 Solving Systems of Equations
  2.1 Systems of Linear Equations
    2.1.1 Upper- and Lower-Triangular Linear Systems
    2.1.2 General m × n Linear Systems
  2.2 Special Structure: Positive Definite Systems
    2.2.1 Cholesky Factorization
    2.2.2 The Method of Conjugate Directions
  2.3 Newton’s Method for Systems of Equations
  2.4 MATLAB Functions
  2.5 Exercises
3 Approximation
  3.1 Linear Models and the Method of Least Squares
    3.1.1 Lagrange Polynomials: An Exact Fit
  3.2 Linear Regression: A Statistical Perspective
    3.2.1 Random Variables
    3.2.2 Stochastic Analysis and Regression
  3.3 Cubic Splines
  3.4 Principal Component Analysis
    3.4.1 Principal Component Analysis and the Singular Value Decomposition
  3.5 Exercises
4 Optimization
  4.1 Unconstrained Optimization
    4.1.1 The Search Direction
    4.1.2 The Line Search
    4.1.3 Example Algorithms
  4.2 Constrained Optimization
    4.2.1 Linear and Quadratic Programming
  4.3 Global Optimization and Heuristics
    4.3.1 Simulated Annealing
    4.3.2 Genetic Algorithms
  4.4 Exercises
5 Ordinary Differential Equations
  5.1 Euler Methods
  5.2 Runge-Kutta Methods
  5.3 Quantifying Error
  5.4 Stiff Ordinary Differential Equations
  5.5 Adaptive Methods
  5.6 Exercises
6 Stochastic Methods and Simulation
  6.1 Simulation
  6.2 Numerical Integration
    6.2.1 Simpson’s Rule
    6.2.2 Monte Carlo Integration
    6.2.3 Bootstrapping
    6.2.4 Deterministic or Stochastic Approximation
  6.3 Random Models
    6.3.1 Simulation and Stochastic Differential Equations
    6.3.2 Simulation and Stochastic Optimization Models
  6.4 Exercises
7 Computing Considerations
  7.1 Language Choice
  7.2 C/C++ Extensions
  7.3 Parallel Computing
    7.3.1 Taking Advantage of Built-In Commands
    7.3.2 Parallel Computing in MATLAB and Python
    7.3.3 Parallel Computing in Python
    7.3.4 Pipelining
    7.3.5 Amdahl’s Law
    7.3.6 GPU Computing
  7.4 Exercises

Part II Computational Modeling

8 Modeling with Matrices
  8.1 Signal Processing and the Discrete Fourier Transform
    8.1.1 Linear Time Invariant Filters
    8.1.2 The Discrete Fourier Transform
    8.1.3 The Fast Fourier Transform
    8.1.4 Filtering Signals
    8.1.5 Exercises
  8.2 Radiotherapy
    8.2.1 A Radiobiological Model to Calculate Dose
    8.2.2 Treatment Design
    8.2.3 Exercises
  8.3 Aeronautic Lift
    8.3.1 Air Flow
    8.3.2 Flow Around a Wing
    8.3.3 Numerical Examples
    8.3.4 Exercises
9 Modeling with Ordinary Differential Equations
  9.1 Couette Flows
    9.1.1 Exercises
  9.2 Pharmacokinetics: Insulin Injections
    9.2.1 Exercises
  9.3 Chemical Reactions
    9.3.1 Exercises
10 Modeling with Delay Differential Equations
  10.1 Is a Delay Model Necessary or Appropriate?
  10.2 Epidemiology Models
  10.3 The El-Niño–La-Niña Oscillation
  10.4 Exercises
11 Partial Differential Equations
  11.1 The Heat Equation
  11.2 Explicit Solutions by Finite Differences
  11.3 The Wave Equation
  11.4 Exercises
12 Modeling with Optimization and Simulation
  12.1 Stock Pricing and Portfolio Selection
    12.1.1 Stock Pricing
    12.1.2 Portfolio Selection
    12.1.3 Exercises
  12.2 Magnetic Phase Transitions
    12.2.1 The Gibbs Distribution of Statistical Mechanics
    12.2.2 Simulation and the Ising Model
    12.2.3 Exercises
13 Regression Modeling
  13.1 Stepwise Regression
  13.2 Qualitative Inputs and Indicator Variables
  13.3 Exercises
A Matrix Algebra and Calculus
  A.1 Matrix Algebra Motivated with Polynomial Approximation
  A.2 Properties of Matrix–Matrix Multiplication
  A.3 Solving Systems, Eigenvalues, and Differential Equations
    A.3.1 The Nature of Solutions to Linear Systems
    A.3.2 Eigenvalues and Eigenvectors
  A.4 Some Additional Calculus
Index
Part I
Computational Methods
Chapter 1
Solving Single Variable
Equations
No more fiction for us: we calculate; but that we may calculate, we had to make
fiction first. – Friedrich Nietzsche
The problem of solving the equation f (x) = 0 is among the most storied
in all of mathematics, and it is with this problem that we initiate our study
of computational science. We assume functions have their natural domains in
the real numbers. For instance, a function like √x exists over the collection of
nonnegative reals. The right-hand side being zero is not generally restrictive
since solving either f(x) = k or g(x) = h(x) can be re-expressed as f(x) − k = 0
or g(x) − h(x) = 0. Hence looking for roots provides a general method to
solve equations.
The equation f (x) = 0 may have a variety of solutions or none at all.
For example, the equation x^2 + 1 = 0 has no solution over the reals, the
equation x^3 + 1 = 0 has a single solution, and the equation sin(x) = 0 has
an infinite number of solutions. Such differences complicate searching for a
root, or multiple roots, especially without forehand knowledge of f . Most
algorithms attempt to find a single solution, and this is the case we consider.
If more than one solution is desired, then the algorithm can be re-applied in
an attempt to locate others.
Four different algorithms are presented, those being the time-tested meth-
ods of bisection, secants, interpolation, and Newton. Each has advantages
and disadvantages, and collectively they are broadly applicable. At least one
typically suffices for the computational need at hand. Students are encour-
aged to study and implement fail-safe programs for each, as they are often
useful.
1.1 Bisection
The theoretical underpinnings of bisection lie in one of the most important
theorems of Calculus, that being the Intermediate Value Theorem. This the-
orem ensures that a continuous function attains all intermediate values over
an interval.
Theorem 1 (Intermediate Value Theorem). Let f (x) be continuous on
the interval [a, b], and let k be between f (a) and f (b). Then, there is a c
between a and b such that f (c) = k.
The method of bisection is premised on the assumption that f is continuous
and that f (a) and f (b) differ in sign. We make the tacit assumption that
a < b, from which the Intermediate Value Theorem then guarantees a solution
to f(x) = 0 over the interval [a, b]. To illustrate, the function f(x) = x^3 − x
satisfies
f (1.5) = 1.875 > 0 and f (−2) = −6 < 0.
So f is guaranteed to have at least one root over the interval [−2, 1.5]. Indeed,
f has three roots over this interval, although the Intermediate Value Theorem
only guarantees one.
Bisection proceeds by halving the interval. The midpoint of [−2, 1.5] is
−0.25, and since f (−0.25) = 0.2344 > 0, we again have from the Interme-
diate Value Theorem that f has a root over the interval [−2, −0.25]. Two
observations at this point are (1) this new interval removes two of the three
roots, and (2) we are guaranteed to be within (1.5 − (−2))/2 = 3.5/2 of a
solution to f (x) = 0. The algorithm continues by halving the new interval
and removing the half that no longer guarantees a root. Bisection is called a
bracketing method since a solution is always “bracketed” between two other
values. The first five iterations for this example are listed in Table 1.1. The
iterates show that the interval collapses onto the solution x = −1. These
iterations are depicted in Fig. 1.1, and general pseudocode is listed in Algo-
rithm 1.
One of the benefits of bisection is that it has a guarantee of how close
it is to a root. The algorithm’s estimate of the root for the example above
is within (1.5 + 2)/2^5 = 0.1094 of the real solution after the fifth iteration.
Indeed, after k iterations the algorithm is within (1.5 + 2)/2^k of the root, a
fact that allows us to calculate how many iterations are needed to ensure a
desired accuracy. So if we want to be within 10^−8 of a solution, i.e. 8 digits
of accuracy, then finding the first integer value of k such that 3.5/2^k ≤ 10^−8
shows that bisection will need

    k = ⌈ln(3.5 × 10^8)/ln(2)⌉ = 29
iterations for this example.
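This count is easy to verify; the following MATLAB/Octave lines are a minimal sketch of our own, not code from the text.

    % Iterations needed for bisection's bracket of width 3.5 to
    % guarantee an estimate within 1e-8 of a root.
    tol   = 1e-8;
    width = 1.5 - (-2);                  % initial interval width, 3.5
    k     = ceil(log(width/tol)/log(2))  % no semicolon: displays k = 29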
Iteration   Interval                    Best value of |f|
0           [−2.000000,  1.500000]      1.875000
1           [−2.000000, −0.250000]      0.234375
2           [−1.125000, −0.250000]      0.234375
3           [−1.125000, −0.687500]      0.298828
4           [−1.125000, −0.906250]      0.161957
5           [−1.015625, −0.906250]      0.031986

Table 1.1 Five iterations of bisection solving f(x) = x^3 − x = 0 over [−2, 1.5]. The best
value of f in the third column is the smallest value of |f(x)| at the end points of the
iteration’s interval.
Fig. 1.1 An illustration of bisection solving f(x) = x^3 − x = 0 over [−2, 1.5]. The
intervals of each iteration are shown in red at the bottom of the figure, and vertical
green lines are midpoint evaluations.

Fig. 1.2 The function f_q(x) = x^(1/(2q+1)) for q = 0, 1, 5, 10, and 50. The red dots
show the value of f_q(x) at the approximate solution to f_q(x) = 0 after 15 iterations
of bisection initiated with [−0.5, 1.3].
While the distance to a root can be controlled by the number of iterations,
this fact does not ensure that the function itself is near zero. For example,
the function f_q(x) = x^(1/(2q+1)) is increasingly steep at x = 0 as the integer q
increases, see Fig. 1.2. Indeed, for any x between −1 and 1 other than zero
we have

    lim_{q→∞} |x^(1/(2q+1))| = 1.

So any approximate solution to f_q(x) = 0 can have a function value near
1 in magnitude for large q.
f (x) is near 0. If we start with the interval [−0.5, 1.3], then 15 iterations
guarantee a solution within a tolerance of 10^−4 and end with the interval
[−1.2 × 10^−5, 4.3 × 10^−5] independent of q. However, for q = 1, 5, 10, and
50 the values of f_q at the midpoint of this interval are, respectively, 0.0247,
0.3643, 0.5892, and 0.8959, see Fig. 1.2.
Algorithm 1 Pseudocode for the method of bisection
k=1
while unmet termination criteria do
x̂ = (a + b)/2
if sign(f (x̂)) == sign(f (a)) then
a = x̂
else if sign(f (x̂)) == sign(f (b)) then
b = x̂
else if sign(f (x̂)) == 0 then
set termination status to true
else
set status to failure and terminate
end if
k = k + 1
end while
return best estimate of root
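A MATLAB/Octave rendering of Algorithm 1 might look like the following. This is an illustrative sketch of our own, not reference code from the text; the function name, argument list, and the bracket-width termination rule are our choices.

    function [x, status] = bisect(f, a, b, delta, maxit)
    % BISECT  Method of bisection for f(x) = 0 on [a,b].
    % Assumes f is continuous on [a,b] and f(a), f(b) differ in sign.
    status = 'failure';
    if ~(a < b) || sign(f(a)) == sign(f(b))
        return;                        % invalid bracket: do not iterate
    end
    k = 1;
    while k <= maxit
        x = (a + b)/2;                 % midpoint of the current bracket
        if sign(f(x)) == 0 || (b - a)/2 <= delta
            status = 'success';        % exact root, or bracket small enough
            return;
        elseif sign(f(x)) == sign(f(a))
            a = x;                     % a root is bracketed in [x, b]
        else
            b = x;                     % a root is bracketed in [a, x]
        end
        k = k + 1;
    end
    end

For the running example, [x, status] = bisect(@(x) x.^3 - x, -2, 1.5, 1e-8, 100) converges to the root x = −1, consistent with Table 1.1.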
Alternatively, the value of f (x) can be close to zero even though x remains
some distance from a solution. For example, functions of the form
    g_q(x) = x / (1 + (10^q x)^2)
have the property that g_q(x) → 0 as q → ∞ for all nonzero x. So any
real number can appear to be a solution for sufficiently large q. Suppose the
termination criterion is |g_q(x)| ≤ 10^−4. If bisection is initialized with the
interval [−0.9, 0.8], then the terminal interval is [−1.95 × 10^−4, 1.2 × 10^−5]
for q = 2. The value of g_q at the midpoint of this interval is reasonable
at 1.2 × 10^−5. However, for q = 3 the value at the initial left end point
is g_q(−0.9) = −1 × 10^−6, and bisection terminates immediately with an
estimated solution of x = −0.9, which is some distance from the desired
x = 0.
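This behavior is easy to probe numerically; the anonymous function below is a short sketch of our own.

    % The family g_q flattens as q grows.
    gq = @(x, q) x ./ (1 + (10.^q .* x).^2);
    gq(-0.9, 2)    % about -1.1e-4, just outside the tolerance of 1e-4
    gq(-0.9, 3)    % about -1.1e-6, so bisection accepts x = -0.9 immediately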
A dilemma of writing code for general use is deciding the termination
criteria. Two common convergence conditions are
|f (x)| ≤ ε, for the best computed value of x, and
|x − x∗ | ≤ δ, where x∗ is a solution to f (x) = 0,
where ε and δ are user supplied convergence tolerances. Satisfying both is
preferred to satisfying one, but requiring both could be computationally bur-
densome and possibly unrealistic.
Care is warranted when implementing pseudocode like that of Algorithm 1.
The termination criteria should include tests for the original end points, for
the value of the function at the newly computed midpoint, and for the width
of the interval. Moreover, we might add tests to
• ensure that the original interval is well defined, i.e. a < b,
• see that the signs of f (a) and f (b) differ, and
• validate that the function value is real, which is not always obvious in
MATLAB and Octave, e.g. (-1)^(1/3) isn’t real, but nthroot(-1,3) is.
Returning a status variable in addition to the computed solution lets a user
know if she or he should trust the outcome. In many cases a status variable
encodes one of several termination statuses. Moreover, it is generally good
advice to undertake the prudent step of verifying the assumed properties of
the input data and of the computed result, even though doing so lengthens
code development and can seem to pollute a clean, succinct, and logical pro-
gression. However, including measures to vet computational entities will lead
to trustworthy algorithms that can be used for diverse tasks.
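Concretely, such vetting might begin along the following lines in MATLAB/Octave; this fragment is an illustrative sketch of our own, and the messages are our choices.

    % Vet the inputs f, a, and b before bisection begins.
    if ~(a < b)
        error('Interval is ill defined: require a < b.');
    end
    fa = f(a);  fb = f(b);
    if ~isreal(fa) || ~isreal(fb)
        % e.g., (-1)^(1/3) is complex in MATLAB/Octave; nthroot(-1,3) is real
        error('f must return real values on [a,b].');
    end
    if sign(fa) == sign(fb)
        error('f(a) and f(b) must differ in sign.');
    end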
1.2 Linear Interpolation
The method of linear interpolation is similar to that of bisection, but in this
case the algorithm uses the function’s values at the endpoints and not just
their signs. As with bisection, the method of linear interpolation uses the
Intermediate Value Theorem to guarantee the existence of a root within an
interval. Unlike the method of bisection, the method of linear interpolation
approximates the function to estimate the next iterate. If the function is itself
linear, or more generally affine, then linear interpolation will converge in one
step, an outcome not guaranteed with bisection.
Assume that f is continuous on the interval [a, b] and that f (a) and f (b)
differ in sign. The line through the points (a, f (a)) and (b, f (b)) is called the
interpolant and is
    y − f(a) = [(f(a) − f(b))/(a − b)](x − a).
Solving for x with y = 0 gives the solution x̂ so that
    x̂ = a − f(a) (a − b)/(f(a) − f(b)).        (1.1)
If f (x̂) agrees in sign with f (a), then the interval [x̂, b] contains a root and
the process repeats by replacing a with x̂. If f (x̂) agrees in sign with f (b),
then the interval [a, x̂] contains a root and the process repeats by replacing
b with x̂. If |f (x̂)| is sufficiently small, then the process terminates since x̂ is
a computed root. The pseudocode for linear interpolation is in Algorithm 2.
Algorithm 2 Pseudocode for the method of linear interpolation
k=1
while unmet termination criteria do
x̂ = a − f (a)(a − b)/(f (a) − f (b))
if sign(f (x̂)) == 0 then
set termination status to true
else if sign(f (x̂)) == sign(f (a)) then
a = x̂
else if sign(f (x̂)) == sign(f (b)) then
b = x̂
else
set status to failure and terminate
end if
k = k + 1
end while
return best estimate of root
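A MATLAB/Octave rendering of Algorithm 2 might look like the following; this sketch is our own, and it terminates on |f(x̂)| ≤ ε, which, as noted later in this section, is the only reasonable convergence criterion for the method.

    function [x, status] = interpolate(f, a, b, epsilon, maxit)
    % Method of linear interpolation for f(x) = 0 on [a,b].
    % Assumes f is continuous and that f(a) and f(b) differ in sign.
    status = 'failure';
    for k = 1:maxit
        x = a - f(a)*(a - b)/(f(a) - f(b));   % root of the interpolant, Eq. (1.1)
        if abs(f(x)) <= epsilon
            status = 'success';               % x is a computed root
            return;
        elseif sign(f(x)) == sign(f(a))
            a = x;                            % a root remains bracketed in [x, b]
        else
            b = x;                            % a root remains bracketed in [a, x]
        end
    end
    end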
The first five iterations of linear interpolation solving f(x) = x^3 − x = 0
over the interval [−2, 1.5] are listed in Table 1.2, and the first few iterations
are illustrated in Fig. 1.3. Linear interpolation converges to x = 1 as it solves
f(x) = x^3 − x = 0 initiated with [−2, 1.5], whereas the method of bisection
converges to x = −1. So while the methods are algorithmically similar, the
change in the update produces different solutions. The convergence is “one-
sided” in this example because the right end point remains unchanged. Such
convergence is not atypical, and it can lead to poor performance. As a second
example, iterations solving f (x) = cos(x) − x = 0 over [−0.5, 4.0] are listed
in Table 1.2 and are shown in Fig. 1.4. In this case convergence is fairly rapid
due to the suitable linear approximation of f by the interpolant.
Interpolation does not provide a bound on the width of the k-th iterate’s
interval. In fact, there is no guarantee that the width of the intervals will
converge to zero as the algorithm proceeds. Therefore, the only reasonable
convergence criterion is |f (x)| < ε. As with bisection, care and prudence are
encouraged during implementation.
1.3 The Method of Secants
The method of secants uses the same linear approximation as the method
of linear interpolation, but it removes the mathematical certainty of captur-
ing a root within an interval. One advantage is that the function need not
Fig. 1.3 An illustration of the first few iterations of linear interpolation solving
f(x) = x^3 − x = 0 over [−2, 1.5]. The red lines at the bottom show how the intervals
update according to the roots of the linear interpolants (shown in magenta).

Fig. 1.4 An illustration of the first few iterations of linear interpolation solving
f(x) = cos(x) − x = 0 over [−0.5, 4]. The red lines at the bottom show how the
intervals update according to the roots of the linear interpolants (shown in magenta).
            f(x) = x^3 − x                   f(x) = cos(x) − x
Iteration   Interval            Best |f|     Interval            Best |f|
0           [−2.0000, 1.5000]   1.8750       [−0.5000, 4.0000]   1.3776
1           [ 0.6667, 1.5000]   0.3704       [ 0.5278, 4.0000]   0.0336
2           [ 0.8041, 1.5000]   0.2842       [ 0.5278, 0.7617]   0.0336
3           [ 0.8957, 1.5000]   0.1771       [ 0.7379, 0.7617]   0.0019
4           [ 0.9479, 1.5000]   0.0963       [ 0.7391, 0.7617]   0.0000
5           [ 0.9748, 1.5000]   0.0484       [ 0.7391, 0.7617]   0.0000

Table 1.2 The first five iterations of linear interpolation solving x^3 − x = 0 over [−2, 1.5]
and cos(x) − x = 0 over [−0.5, 4.0].
change sign over the original interval. A disadvantage is that the new iterate
might not provide an improvement without the certainty guaranteed by the
Intermediate Value Theorem.
The method of secants is a transition from the method of linear interpo-
lation toward Newton’s method, which is developed in Sect. 1.4. The mathe-
matics of the method of secants moves us theoretically from the Intermediate
Value Theorem to the Mean Value Theorem.
Theorem 2 (Mean Value Theorem). Assume f is continuous on [a, b]
and differentiable on (a, b). Then, for any x in [a, b] there is c in (a, x) such
that
    f(x) = f(a) + f′(c)(x − a).
This statement of the Mean Value Theorem is different than what is typi-
cally offered in calculus, but note that if x = b, then we have the common
observation that for some c in (a, b),
    f′(c) = (f(b) − f(a))/(b − a).
The statement in Theorem 2 highlights that the Mean Value Theorem is a
direct application of Taylor’s Theorem, a result discussed more completely
later.
The method of secants approximates f′(c) with the ratio

    f′(c) ≈ (f(b) − f(a))/(b − a),

which suggests that

    f(x) ≈ f(a) + (f(b) − f(a))/(b − a) · (x − a).
The method of secants iteratively replaces the equation f (x) = 0 with the
approximate equation
    0 = f(a) + (f(b) − f(a))/(b − a) · (x − a),
which gives a solution of
    x̂ = a − f(a) (b − a)/(f(b) − f(a)).
This update is identical to (1.1), and hence the iteration to calculate the new
potential root is the same as the method of linear interpolation. What changes
is that we no longer check the sign of f at the updated value. Pseudocode
for the method of secants is in Algorithm 3.
The iterates from the method of secants can stray, and unlike the methods
of bisection and interpolation, there is no guarantee of a diminishing interval
as the algorithm progresses. The value of f (x) can thus worsen as the algo-
rithm continues, and for this reason it is sensible to track the best calculated
solution. That is, we should track the iterate xbest that has the nearest func-
tion value to zero among those calculated. Sometimes migrating outside the
original interval is innocuous or even beneficial. Consider, for instance, the
first three iterations of solving f (x) = x3 − x = 0 over the interval [−2, 1.5]
in Table 1.3 for the methods of secants and linear interpolation. The algo-
rithms are the same as long as the interval of linear interpolation agrees with
the last two iterates of secants, which is the case for the first two updates.
The third update of secants in iteration 2 is not contained in the interval
[a, b] = [0.6667, 0.8041] because the function no longer changes sign at the
Algorithm 3 Pseudocode for the method of secants
k=1
while unmet termination criteria do
if f (b) == f (a) then
set status to failure and terminate
else
x̂ = a − f (a)(a − b)/(f (a) − f (b))
a=b
b = x̂
k = k + 1
end if
end while
return best estimate of root
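A MATLAB/Octave rendering of Algorithm 3 might look like the following; this sketch is our own, and it tracks the best iterate as the discussion above recommends.

    function [xbest, status] = secants(f, a, b, epsilon, maxit)
    % Method of secants for f(x) = 0, started from the points a and b.
    % Improvement is not guaranteed, so the best iterate is tracked.
    status = 'failure';
    xbest  = b;
    if abs(f(a)) < abs(f(b)), xbest = a; end
    for k = 1:maxit
        if f(a) == f(b)
            return;                           % flat secant line: no update exists
        end
        xhat = a - f(a)*(a - b)/(f(a) - f(b));
        if abs(f(xhat)) < abs(f(xbest))
            xbest = xhat;                     % nearest function value to zero so far
        end
        if abs(f(xbest)) <= epsilon
            status = 'success';
            return;
        end
        a = b;  b = xhat;                     % keep the two most recent iterates
    end
    end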
            Linear interpolation                        Secants
Iteration   Interval            Update (x̂)    a         b        Update (x̂)
0           [−2.0000, 1.5000]   0.6667         −2.0000   1.5000   0.6667
1           [ 0.6667, 1.5000]   0.8041          1.5000   0.6667   0.8041
2           [ 0.8041, 1.5000]   0.8957          0.6667   0.8041   1.2572
3           [ 0.8957, 1.5000]   0.9479          0.8041   1.2572   0.9311

Table 1.3 The first three iterations of linear interpolation and the method of secants
solving f(x) = x^3 − x = 0 over [−2, 1.5].
end points, and in this case, the algorithm strays outside the interval to
estimate the root as 1.2572. Notice that if f(a) and f(b) had been closer in
value, then the line through (a, f (a)) and (b, f (b)) would have been flatter.
The resulting update would have been a long way from the earlier iterates.
However, the algorithm for this example instead converges to the root x = 1,
so no harm is done. The first ten iterations are in Table 1.4, and the first few
iterations are depicted in Fig. 1.5.
Widely varying iterates are fairly common with the method of secants,
especially during the initial iterations. If the secants of the most recent iter-
ations favorably point to a solution, then the algorithm will likely converge.
An example is solving f (x) = cos(x)−x = 0 initiated with a = 10 and b = 20,
for which linear interpolation would have been impossible because f (a) and
f (b) agree in sign. The first two iterates provide improvements, but the third
does not. Even so, the algorithm continues and converges with |f (x)| < 10−4
in seven iterations, see Table 1.5. Indeed, the method converges quickly once
the algorithm sufficiently approximates its terminal solution. This behavior is
Iteration   a         b         x̂        x_best    f(x_best)
0           −2.0000   1.5000    0.6667    0.6667    −0.3704
1            1.5000   0.6667    0.8041    0.8041    −0.2842
2            0.6667   0.8041    1.2572    0.8041    −0.2842
3            0.8041   1.2572    0.9311    0.9311    −0.1239
4            1.2572   0.9311    0.9784    0.9784    −0.0418
5            0.9311   0.9784    1.0025    1.0025     0.0050
6            0.9784   1.0025    0.9999    0.9999    −0.0002
7            1.0025   0.9999    1.0000    1.0000    −0.0000

Table 1.4 Iterations of the method of secants solving f(x) = x^3 − x = 0 initiated with
a = −2.0 and b = 1.5.
Fig. 1.5 The first three iterations of the secant method solving f(x) = x^3 − x = 0 initiated
with a = −2.0 and b = 1.5. The third secant line produces an iterate outside the interval
of the previous two.
typical and concomitant with the favorable convergence of Newton’s method,
which is being mimicked (see Sect. 1.4.2).
The splaying of unfavorable iterates does not always lead to convergence,
and in some cases the algorithm diverges. An example is solving f (x) =
x^2/(1 + x^2) = 0 initiated with a = −2 and b = 1. In this case the magnitude
of the iterates continues to increase, and our best solution remains x = 1
with a function value of f (1) = 0.5. Iterations are listed in Table 1.5.
The method of secants often converges if the initial points are sufficiently
close to the root. However, what constitutes close enough varies from function
to function and is usually difficult to determine, particularly without prior
knowledge of the function and its solution. It is typical to employ a “try
and see” strategy with the method of secants. We start with two reasonable
guesses of the root and simply see if the method converges.
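With the secants sketch given after Algorithm 3, this strategy is immediate; for example:

    % Two reasonable guesses, then see if the method converges.
    [x, status] = secants(@(x) cos(x) - x, 10, 20, 1e-4, 50);
    % The text reports convergence with |f(x)| < 1e-4 in seven iterations.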