An Introduction to Parallel
Programming

SECOND EDITION

Peter S. Pacheco
University of San Francisco

Matthew Malensek
University of San Francisco
Table of Contents

Cover image

Title page

Copyright

Dedication

Preface

Chapter 1: Why parallel computing

1.1. Why we need ever-increasing performance

1.2. Why we're building parallel systems

1.3. Why we need to write parallel programs

1.4. How do we write parallel programs?

1.5. What we'll be doing

1.6. Concurrent, parallel, distributed

1.7. The rest of the book


1.8. A word of warning

1.9. Typographical conventions

1.10. Summary

1.11. Exercises

Bibliography

Chapter 2: Parallel hardware and parallel software

2.1. Some background

2.2. Modifications to the von Neumann model

2.3. Parallel hardware

2.4. Parallel software

2.5. Input and output

2.6. Performance

2.7. Parallel program design

2.8. Writing and running parallel programs

2.9. Assumptions

2.10. Summary

2.11. Exercises

Bibliography

Chapter 3: Distributed memory programming with MPI


3.1. Getting started

3.2. The trapezoidal rule in MPI

3.3. Dealing with I/O

3.4. Collective communication

3.5. MPI-derived datatypes

3.6. Performance evaluation of MPI programs

3.7. A parallel sorting algorithm

3.8. Summary

3.9. Exercises

3.10. Programming assignments

Bibliography

Chapter 4: Shared-memory programming with Pthreads

4.1. Processes, threads, and Pthreads

4.2. Hello, world

4.3. Matrix-vector multiplication

4.4. Critical sections

4.5. Busy-waiting

4.6. Mutexes

4.7. Producer–consumer synchronization and semaphores

4.8. Barriers and condition variables


4.9. Read-write locks

4.10. Caches, cache-coherence, and false sharing

4.11. Thread-safety

4.12. Summary

4.13. Exercises

4.14. Programming assignments

Bibliography

Chapter 5: Shared-memory programming with OpenMP

5.1. Getting started

5.2. The trapezoidal rule

5.3. Scope of variables

5.4. The reduction clause

5.5. The parallel for directive

5.6. More about loops in OpenMP: sorting

5.7. Scheduling loops

5.8. Producers and consumers

5.9. Caches, cache coherence, and false sharing

5.10. Tasking

5.11. Thread-safety

5.12. Summary
5.13. Exercises

5.14. Programming assignments

Bibliography

Chapter 6: GPU programming with CUDA

6.1. GPUs and GPGPU

6.2. GPU architectures

6.3. Heterogeneous computing

6.4. CUDA hello

6.5. A closer look

6.6. Threads, blocks, and grids

6.7. Nvidia compute capabilities and device architectures

6.8. Vector addition

6.9. Returning results from CUDA kernels

6.10. CUDA trapezoidal rule I

6.11. CUDA trapezoidal rule II: improving performance

6.12. Implementation of trapezoidal rule with warpSize thread blocks

6.13. CUDA trapezoidal rule III: blocks with more than one warp

6.14. Bitonic sort

6.15. Summary
6.16. Exercises

6.17. Programming assignments

Bibliography

Chapter 7: Parallel program development

7.1. Two n-body solvers

7.2. Sample sort

7.3. A word of caution

7.4. Which API?

7.5. Summary

7.6. Exercises

7.7. Programming assignments

Bibliography

Chapter 8: Where to go from here

Bibliography

Bibliography

Index
Copyright
Morgan Kaufmann is an imprint of Elsevier
50 Hampshire Street, 5th Floor, Cambridge, MA 02139,
United States

Copyright © 2022 Elsevier Inc. All rights reserved.

No part of this publication may be reproduced or
transmitted in any form or by any means, electronic or
mechanical, including photocopying, recording, or any
information storage and retrieval system, without
permission in writing from the publisher. Details on how to
seek permission, further information about the Publisher's
permissions policies and our arrangements with
organizations such as the Copyright Clearance Center and
the Copyright Licensing Agency, can be found at our
website: www.elsevier.com/permissions.

This book and the individual contributions contained in it
are protected under copyright by the Publisher (other than
as may be noted herein).
Cover art: “seven notations,” nickel/silver etched plates,
acrylic on wood structure, copyright © Holly Cohn

Notices
Knowledge and best practice in this field are constantly
changing. As new research and experience broaden our
understanding, changes in research methods,
professional practices, or medical treatment may become
necessary.
Practitioners and researchers must always rely on their
own experience and knowledge in evaluating and using
any information, methods, compounds, or experiments
described herein. In using such information or methods
they should be mindful of their own safety and the safety
of others, including parties for whom they have a
professional responsibility.

To the fullest extent of the law, neither the Publisher nor
the authors, contributors, or editors, assume any liability
for any injury and/or damage to persons or property as a
matter of products liability, negligence or otherwise, or
from any use or operation of any methods, products,
instructions, or ideas contained in the material herein.

Library of Congress Cataloging-in-Publication Data


A catalog record for this book is available from the Library
of Congress

British Library Cataloguing-in-Publication Data


A catalogue record for this book is available from the
British Library

ISBN: 978-0-12-804605-0

For information on all Morgan Kaufmann publications
visit our website at
https://www.elsevier.com/books-and-journals

Publisher: Katey Birtcher
Acquisitions Editor: Stephen Merken
Content Development Manager: Meghan Andress
Publishing Services Manager: Shereen Jameel
Production Project Manager: Rukmani Krishnan
Designer: Victoria Pearson

Typeset by VTeX
Printed in the United States of America

Last digit is the print number: 9 8 7 6 5 4 3 2 1


Dedication

To the memory of Robert S. Miller


Preface
Parallel hardware has been ubiquitous for some time
now: it's difficult to find a laptop, desktop, or server that
doesn't use a multicore processor. Cluster computing is
nearly as common today as high-powered workstations
were in the 1990s, and cloud computing is making
distributed-memory systems as accessible as desktops. In
spite of this, most computer science majors graduate with
little or no experience in parallel programming. Many
colleges and universities offer upper-division elective
courses in parallel computing, but since most computer
science majors have to take a large number of required
courses, many graduate without ever writing a
multithreaded or multiprocess program.
It seems clear that this state of affairs needs to change.
Whereas many programs can obtain satisfactory
performance on a single core, computer scientists should
be made aware of the potentially vast performance
improvements that can be obtained with parallelism, and
they should be able to exploit this potential when the need
arises.
An Introduction to Parallel Programming was written to
partially address this problem. It provides an introduction
to writing parallel programs using MPI, Pthreads, OpenMP,
and CUDA, four of the most widely used APIs for parallel
programming. The intended audience is students and
professionals who need to write parallel programs. The
prerequisites are minimal: a college-level course in
mathematics and the ability to write serial programs in C.
The prerequisites are minimal, because we believe that
students should be able to start programming parallel
systems as early as possible. At the University of San
Francisco, computer science students can fulfill a
requirement for the major by taking a course on which this
text is based immediately after taking the “Introduction to
Computer Science I” course that most majors take in the
first semester of their freshman year. It has been our
experience that there really is no reason for students to
defer writing parallel programs until their junior or senior
year. To the contrary, the course is popular, and students
have found that using concurrency in other courses is much
easier after having taken this course.
If second-semester freshmen can learn to write parallel
programs by taking a class, then motivated computing
professionals should be able to learn to write parallel
programs through self-study. We hope this book will prove
to be a useful resource for them.
The Second Edition
It has been nearly ten years since the first edition of An
Introduction to Parallel Programming was published.
During that time much has changed in the world of parallel
programming, but, perhaps surprisingly, much also remains
the same. Our intent in writing this second edition has been
to preserve the material from the first edition that
continues to be generally useful, but also to add new
material where we felt it was needed.
The most obvious addition is the inclusion of a new
chapter on CUDA programming. When the first edition was
published, CUDA was still very new. It was already clear
that the use of GPUs in high-performance computing would
become very widespread, but at that time we felt that
GPGPU wasn't readily accessible to programmers with
relatively little experience. In the last ten years, that has
clearly changed. Of course, CUDA is not a standard, and
features are added, modified, and deleted with great
rapidity. As a consequence, authors who use CUDA must
present a subject that changes much faster than a
standard, such as MPI, Pthreads, or OpenMP. In spite of
this, we hope that our presentation of CUDA will continue
to be useful for some time.
Another big change is that Matthew Malensek has come
onboard as a coauthor. Matthew is a relatively new
colleague at the University of San Francisco, but he has
extensive experience with both the teaching and
application of parallel computing. His contributions have
greatly improved the second edition.
About This Book
As we noted earlier, the main purpose of the book is to
teach parallel programming in MPI, Pthreads, OpenMP, and
CUDA to an audience with a limited background in
computer science and no previous experience with
parallelism. We also wanted to make the book as flexible as
possible so that readers who have no interest in learning
one or two of the APIs can still read the remaining material
with little effort. Thus the chapters on the four APIs are
largely independent of each other: they can be read in any
order, and one or two of these chapters can be omitted.
This independence has some cost: it was necessary to
repeat some of the material in these chapters. Of course,
repeated material can be simply scanned or skipped.
On the other hand, readers with no prior experience with
parallel computing should read Chapter 1 first. This
chapter attempts to provide a relatively nontechnical
explanation of why parallel systems have come to dominate
the computer landscape. It also provides a short
introduction to parallel systems and parallel programming.
Chapter 2 provides technical background on computer
hardware and software. Chapters 3 to 6 provide
independent introductions to MPI, Pthreads, OpenMP, and
CUDA, respectively. Chapter 7 illustrates the development
of two different parallel programs using each of the four
APIs. Finally, Chapter 8 provides a few pointers to
additional information on parallel computing.
We use the C programming language for developing our
programs, because all four APIs have C-language
interfaces, and, since C is such a small language, it is a
relatively easy language to learn—especially for C++ and
Java programmers, since they will already be familiar with
C's control structures.
Classroom Use
This text grew out of a lower-division undergraduate
course at the University of San Francisco. The course
fulfills a requirement for the computer science major, and it
also fulfills a prerequisite for the undergraduate operating
systems, architecture, and networking courses. The course
begins with a four-week introduction to C programming.
Since most of the students have already written Java
programs, the bulk of this introduction is devoted to the
use of pointers in C.¹ The remainder of the course provides
introductions first to programming in MPI, then Pthreads
and/or OpenMP, and it finishes with material covering
CUDA.
We cover most of the material in Chapters 1, 3, 4, 5, and
6, and parts of the material in Chapters 2 and 7. The
background in Chapter 2 is introduced as the need arises.
For example, before discussing cache coherence issues in
OpenMP (Chapter 5), we cover the material on caches in
Chapter 2.
The coursework consists of weekly homework
assignments, five programming assignments, a couple of
midterms and a final exam. The homework assignments
usually involve writing a very short program or making a
small modification to an existing program. Their purpose is
to ensure that the students stay current with the
coursework, and to give the students hands-on experience
with ideas introduced in class. It seems likely that their
existence has been one of the principal reasons for the
course's success. Most of the exercises in the text are
suitable for these brief assignments.
The programming assignments are larger than the
programs written for homework, but we typically give the
students a good deal of guidance: we'll frequently include
pseudocode in the assignment and discuss some of the
more difficult aspects in class. This extra guidance is often
crucial: it's easy to give programming assignments that will
take far too long for the students to complete.
The results of the midterms and finals and the
enthusiastic reports of the professor who teaches operating
systems suggest that the course is actually very successful
in teaching students how to write parallel programs.
For more advanced courses in parallel computing, the
text and its online supporting materials can serve as a
supplement so that much of the material on the syntax and
semantics of the four APIs can be assigned as outside
reading.
The text can also be used as a supplement for project-
based courses and courses outside of computer science
that make use of parallel computation.
Support Materials
An online companion site for the book is located at
www.elsevier.com/books-and-journals/book-
companion/9780128046050. This site will include errata
and complete source for the longer programs we discuss in
the text. Additional material for instructors, including
downloadable figures and solutions to the exercises in the
book, can be downloaded from
https://educate.elsevier.com/9780128046050.
We would greatly appreciate readers' letting us know of
any errors they find. Please send email to
mmalensek@usfca.edu if you do find a mistake.
Acknowledgments
In the course of working on this book we've received
considerable help from many individuals. Among them we'd
like to thank the reviewers of the second edition, Steven
Frankel (Technion) and Il-Hyung Cho (Saginaw Valley State
University), who read and commented on draft versions of
the new CUDA chapter. We'd also like to thank the
reviewers who read and commented on the initial proposal
for the book: Fikret Ercal (Missouri University of Science
and Technology), Dan Harvey (Southern Oregon
University), Joel Hollingsworth (Elon University), Jens
Mache (Lewis and Clark College), Don McLaughlin (West
Virginia University), Manish Parashar (Rutgers University),
Charlie Peck (Earlham College), Stephen C. Renk (North
Central College), Rolfe Josef Sassenfeld (The University of
Texas at El Paso), Joseph Sloan (Wofford College), Michela
Taufer (University of Delaware), Pearl Wang (George Mason
University), Bob Weems (University of Texas at Arlington),
and Cheng-Zhong Xu (Wayne State University). We are also
deeply grateful to the following individuals for their
reviews of various chapters of the book: Duncan Buell
(University of South Carolina), Matthias Gobbert
(University of Maryland, Baltimore County), Krishna Kavi
(University of North Texas), Hong Lin (University of
Houston–Downtown), Kathy Liszka (University of Akron),
Leigh Little (The State University of New York), Xinlian Liu
(Hood College), Henry Tufo (University of Colorado at
Boulder), Andrew Sloss (Consultant Engineer, ARM), and
Gengbin Zheng (University of Illinois). Their comments and
suggestions have made the book immeasurably better. Of
course, we are solely responsible for remaining errors and
omissions.
Slides and the solutions manual for the first edition were
prepared by Kathy Liszka and Jinyoung Choi, respectively.
Thanks to both of them.
The staff at Elsevier has been very helpful throughout
this project. Nate McFadden helped with the development
of the text. Todd Green and Steve Merken were the
acquisitions editors. Meghan Andress was the content
development manager. Rukmani Krishnan was the
production editor. Victoria Pearson was the designer. They
did a great job, and we are very grateful to all of them.
Our colleagues in the computer science and mathematics
departments at USF have been extremely helpful during
our work on the book. Peter would like to single out Prof.
Gregory Benson for particular thanks: his understanding of
parallel computing—especially Pthreads and semaphores—
has been an invaluable resource. We're both very grateful
to our system administrators, Alexey Fedosov and Elias
Husary. They've patiently and efficiently dealt with all of
the “emergencies” that cropped up while we were working
on programs for the book. They've also done an amazing
job of providing us with the hardware we used to do all
program development and testing.
Peter would never have been able to finish the book
without the encouragement and moral support of his
friends Holly Cohn, John Dean, and Maria Grant. He will
always be very grateful for their help and their friendship.
He is especially grateful to Holly for allowing us to use her
work, seven notations, for the cover.
Matthew would like to thank his colleagues in the USF
Department of Computer Science, as well as Maya
Malensek and Doyel Sadhu, for their love and support.
Most of all, he would like to thank Peter Pacheco for being
a mentor and infallible source of advice and wisdom during
the formative years of his career in academia.
Our biggest debt is to our students. As always, they
showed us what was too easy and what was far too difficult.
They taught us how to teach parallel computing. Our
deepest thanks to all of them.
¹ Interestingly, a number of students have said that they
found the use of C pointers more difficult than MPI
programming.
Chapter 1: Why parallel
computing
From 1986 to 2003, the performance of microprocessors
increased, on average, more than 50% per year [28]. This
unprecedented increase meant that users and software
developers could often simply wait for the next generation
of microprocessors to obtain increased performance from
their applications. Since 2003, however, single-processor
performance improvement has slowed to the point that in
the period from 2015 to 2017, it increased at less than 4%
per year [28]. This difference is dramatic: at 50% per year,
performance will increase by almost a factor of 60 in 10
years, while at 4%, it will increase by about a factor of 1.5.
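To see where these factors come from, compound the annual
rates over ten years:

\[ 1.50^{10} \approx 57.7, \qquad 1.04^{10} \approx 1.48. \]
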
Furthermore, this difference in performance increase has
been associated with a dramatic change in processor
design. By 2005, most of the major manufacturers of
microprocessors had decided that the road to rapidly
increasing performance lay in the direction of parallelism.
Rather than trying to continue to develop ever-faster
monolithic processors, manufacturers started putting
multiple complete processors on a single integrated circuit.
This change has a very important consequence for
software developers: simply adding more processors will
not magically improve the performance of the vast majority
of serial programs, that is, programs that were written to
run on a single processor. Such programs are unaware of
the existence of multiple processors, and the performance
of such a program on a system with multiple processors
will be effectively the same as its performance on a single
processor of the multiprocessor system.
All of this raises a number of questions:

• Why do we care? Aren't single-processor systems
fast enough?
• Why can't microprocessor manufacturers continue
to develop much faster single-processor systems?
Why build parallel systems? Why build systems
with multiple processors?
• Why can't we write programs that will automatically
convert serial programs into parallel programs,
that is, programs that take advantage of the
presence of multiple processors?

Let's take a brief look at each of these questions. Keep in
mind, though, that some of the answers aren't carved in
stone. For example, the performance of many applications
may already be more than adequate.

1.1 Why we need ever-increasing performance


The vast increases in computational power that we've been
enjoying for decades now have been at the heart of many of
the most dramatic advances in fields as diverse as science,
the Internet, and entertainment. For example, decoding the
human genome, ever more accurate medical imaging,
astonishingly fast and accurate Web searches, and ever
more realistic and responsive computer games would all
have been impossible without these increases. Indeed,
more recent increases in computational power would have
been difficult, if not impossible, without earlier increases.
But we can never rest on our laurels. As our computational
power increases, the number of problems that we can
seriously consider solving also increases. Here are a few
examples:

• Climate modeling. To better understand climate
change, we need far more accurate computer
models, models that include interactions between
the atmosphere, the oceans, solid land, and the ice
caps at the poles. We also need to be able to make
detailed studies of how various interventions might
affect the global climate.
• Protein folding. It's believed that misfolded proteins
may be involved in diseases such as Huntington's,
Parkinson's, and Alzheimer's, but our ability to study
configurations of complex molecules such as
proteins is severely limited by our current
computational power.
• Drug discovery. There are many ways in which
increased computational power can be used in
research into new medical treatments. For example,
there are many drugs that are effective in treating a
relatively small fraction of those suffering from some
disease. It's possible that we can devise alternative
treatments by careful analysis of the genomes of the
individuals for whom the known treatment is
ineffective. This, however, will involve extensive
computational analysis of genomes.
• Energy research. Increased computational power
will make it possible to program much more detailed
models of technologies, such as wind turbines, solar
cells, and batteries. These programs may provide
the information needed to construct far more
efficient clean energy sources.
• Data analysis. We generate tremendous amounts of
data. By some estimates, the quantity of data stored
worldwide doubles every two years [31], but the vast
majority of it is largely useless unless it's analyzed.
As an example, knowing the sequence of nucleotides
in human DNA is, by itself, of little use.
Understanding how this sequence affects
development and how it can cause disease requires
extensive analysis. In addition to genomics, huge
quantities of data are generated by particle
colliders, such as the Large Hadron Collider at
CERN, medical imaging, astronomical research, and
Web search engines—to name a few.

These and a host of other problems won't be solved without
tremendous increases in computational power.

1.2 Why we're building parallel systems


Much of the tremendous increase in single-processor
performance was driven by the ever-increasing density of
transistors—the electronic switches—on integrated circuits.
As the size of transistors decreases, their speed can be
increased, and the overall speed of the integrated circuit
can be increased. However, as the speed of transistors
increases, their power consumption also increases. Most of
this power is dissipated as heat, and when an integrated
circuit gets too hot, it becomes unreliable. In the first
decade of the twenty-first century, air-cooled integrated
circuits reached the limits of their ability to dissipate heat
[28].
Therefore it is becoming impossible to continue to
increase the speed of integrated circuits. Indeed, in the last
few years, the increase in transistor density has slowed
dramatically [36].
But given the potential of computing to improve our
existence, there is a moral imperative to continue to
increase computational power.
How then, can we continue to build ever more powerful
computers? The answer is parallelism. Rather than building
ever-faster, more complex, monolithic processors, the
industry has decided to put multiple, relatively simple,
complete processors on a single chip. Such integrated
circuits are called multicore processors, and core has
become synonymous with central processing unit, or CPU.
In this setting a conventional processor with one CPU is
often called a single-core system.
1.3 Why we need to write parallel programs
Most programs that have been written for conventional,
single-core systems cannot exploit the presence of multiple
cores. We can run multiple instances of a program on a
multicore system, but this is often of little help. For
example, being able to run multiple instances of our
favorite game isn't really what we want—we want the
program to run faster with more realistic graphics. To do
this, we need to either rewrite our serial programs so that
they're parallel, so that they can make use of multiple
cores, or write translation programs, that is, programs that
will automatically convert serial programs into parallel
programs. The bad news is that researchers have had very
limited success writing programs that convert serial
programs in languages such as C, C++, and Java into
parallel programs.
This isn't terribly surprising. While we can write
programs that recognize common constructs in serial
programs, and automatically translate these constructs into
efficient parallel constructs, the sequence of parallel
constructs may be terribly inefficient. For example, we can
view the multiplication of two matrices as a sequence
of dot products, but parallelizing a matrix multiplication as
a sequence of parallel dot products is likely to be fairly slow
on many systems.
An efficient parallel implementation of a serial program
may not be obtained by finding efficient parallelizations of
each of its steps. Rather, the best parallelization may be
obtained by devising an entirely new algorithm.
As an example, suppose that we need to compute n
values and add them together. We know that this can be
done with the following serial code:
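Something along these lines, where Compute_next_value
stands in for whatever computation produces each value:

    sum = 0;
    for (i = 0; i < n; i++) {
        x = Compute_next_value(. . .);
        sum += x;
    }
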
Now suppose we also have p cores, where p is much smaller
than n. Then each core can form a partial sum of
approximately n/p values:
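Something like this, where my_first_i and my_last_i mark
the range of indices assigned to this core:

    my_sum = 0;
    my_first_i = . . . ;   /* first index assigned to this core */
    my_last_i = . . . ;    /* one past the last assigned index */
    for (my_i = my_first_i; my_i < my_last_i; my_i++) {
        my_x = Compute_next_value(. . .);
        my_sum += my_x;
    }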

Here the prefix my_ indicates that each core is using its
own, private variables, and each core can execute this block
of code independently of the other cores.
After each core completes execution of this code, its
variable my_sum will store the sum of the values computed
by its calls to Compute_next_value. For example, if there
are eight cores, n = 24, and the 24 calls to
Compute_next_value return the values

1, 4, 3, 9, 2, 8, 5, 1, 1, 6, 2, 7, 2, 5, 0, 4, 1, 8, 6, 5,
1, 2, 3, 9,

then the values stored in my_sum might be

    Core      0    1    2    3    4    5    6    7
    my_sum    8   19    7   15    7   13   12   14

Here we're assuming the cores are identified by
nonnegative integers in the range 0, 1, …, p − 1, where p
is the number of cores.
When the cores are done computing their values of my_sum,
they can form a global sum by sending their results to a
designated “master” core, which can add their results:
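In pseudocode, with the send and receive operations left
abstract:

    if (I'm the master core) {
        sum = my_sum;
        for each core other than myself {
            receive value from core;
            sum += value;
        }
    } else {
        send my_sum to the master;
    }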

In our example, if the master core is core 0, it would add
the values 8 + 19 + 7 + 15 + 7 + 13 + 12 + 14 = 95.
But you can probably see a better way to do this—
especially if the number of cores is large. Instead of making
the master core do all the work of computing the final sum,
we can pair the cores so that while core 0 adds in the result
of core 1, core 2 can add in the result of core 3, core 4 can
add in the result of core 5, and so on. Then we can repeat
the process with only the even-ranked cores: 0 adds in the
result of 2, 4 adds in the result of 6, and so on. Now cores
divisible by 4 repeat the process, and so on. See Fig. 1.1.
The circles contain the current value of each core's sum,
and the lines with arrows indicate that one core is sending
its sum to another core. The plus signs indicate that a core
is receiving a sum from another core and adding the
received sum into its own sum.
FIGURE 1.1 Multiple cores forming a global sum.

For both “global” sums, the master core (core 0) does
more work than any other core, and the length of time it
takes the program to complete the final sum should be the
length of time it takes for the master to complete. However,
with eight cores, the master will carry out seven receives
and adds using the first method, while with the second
method, it will only carry out three. So the second method
results in an improvement of more than a factor of two. The
difference becomes much more dramatic with large
numbers of cores. With 1000 cores, the first method will
require 999 receives and adds, while the second will only
require 10—an improvement of almost a factor of 100!
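In general, with p cores, core 0 performs

\[ p - 1 \quad\text{versus}\quad \lceil \log_2 p \rceil \]

receives and adds under the first and second methods,
respectively; for p = 1000 that is 999 versus 10.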
The first global sum is a fairly obvious generalization of
the serial global sum: divide the work of adding among the
cores, and after each core has computed its part of the
sum, the master core simply repeats the basic serial
addition—if there are p cores, then it needs to add p values.
The second global sum, on the other hand, bears little
relation to the original serial addition.
The point here is that it's unlikely that a translation
program would “discover” the second global sum. Rather,
there would more likely be a predefined efficient global
sum that the translation program would have access to. It
could “recognize” the original serial loop and replace it
with a precoded, efficient, parallel global sum.
We might expect that software could be written so that a
large number of common serial constructs could be
recognized and efficiently parallelized, that is, modified so
that they can use multiple cores. However, as we apply this
principle to ever more complex serial programs, it becomes
more and more difficult to recognize the construct, and it
becomes less and less likely that we'll have a precoded,
efficient parallelization.
Thus we cannot simply continue to write serial programs;
we must write parallel programs, programs that exploit the
power of multiple processors.

1.4 How do we write parallel programs?


There are a number of possible answers to this question,
but most of them depend on the basic idea of partitioning
the work to be done among the cores. There are two widely
used approaches: task-parallelism and data-parallelism.
In task-parallelism, we partition the various tasks carried
out in solving the problem among the cores. In data-
parallelism, we partition the data used in solving the
problem among the cores, and each core carries out more
or less similar operations on its part of the data.
As an example, suppose that Prof P has to teach a section
of “Survey of English Literature.” Also suppose that Prof P
has one hundred students in her section, so she's been
assigned four teaching assistants (TAs): Mr. A, Ms. B, Mr. C,
and Ms. D. At last the semester is over, and Prof P makes
up a final exam that consists of five questions. To grade the
exam, she and her TAs might consider the following two
options: each of them can grade all one hundred responses
to one of the questions; say, P grades question 1, A grades
question 2, and so on. Alternatively, they can divide the one
hundred exams into five piles of twenty exams each, and
each of them can grade all the papers in one of the piles; P
grades the papers in the first pile, A grades the papers in
the second pile, and so on.
In both approaches the “cores” are the professor and her
TAs. The first approach might be considered an example of
task-parallelism. There are five tasks to be carried out:
grading the first question, grading the second question,
and so on. Presumably, the graders will be looking for
different information in question 1, which is about
Shakespeare, from the information in question 2, which is
about Milton, and so on. So the professor and her TAs will
be “executing different instructions.”
On the other hand, the second approach might be
considered an example of data-parallelism. The “data” are
the students' papers, which are divided among the cores,
and each core applies more or less the same grading
instructions to each paper.
The first part of the global sum example in Section 1.3
would probably be considered an example of data-
parallelism. The data are the values computed by
Compute_next_value, and each core carries out roughly the
same operations on its assigned elements: it computes the
required values by calling Compute_next_value and adds them
together. The second part of the first global sum example
might be considered an example of task-parallelism. There
are two tasks: receiving and adding the cores' partial sums,
which is carried out by the master core; and giving the
partial sum to the master core, which is carried out by the
other cores.
When the cores can work independently, writing a
parallel program is much the same as writing a serial
program. Things get a great deal more complex when the
cores need to coordinate their work. In the second global
sum example, although the tree structure in the diagram is
very easy to understand, writing the actual code is
relatively complex. See Exercises 1.3 and 1.4.
Unfortunately, it's much more common for the cores to
need coordination.
In both global sum examples, the coordination involves
communication: one or more cores send their current
partial sums to another core. The global sum examples
should also involve coordination through load balancing.
In the first part of the global sum, it's clear that we want
the amount of time taken by each core to be roughly the
same as the time taken by the other cores. If the cores are
identical, and each call to Compute_next_value requires the same
amount of work, then we want each core to be assigned
roughly the same number of values as the other cores. If,
for example, one core has to compute most of the values,
then the other cores will finish much sooner than the
heavily loaded core, and their computational power will be
wasted.
A third type of coordination is synchronization. As an
example, suppose that instead of computing the values to
be added, the values are read from stdin. Say, x is an
array that is read in by the master core:
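A sketch, assuming the values are doubles read with scanf:

    if (I'm the master core)
        for (my_i = 0; my_i < n; my_i++)
            scanf("%lf", &x[my_i]);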

In most systems the cores are not automatically
synchronized. Rather, each core works at its own pace. In
this case, the problem is that we don't want the other cores
to race ahead and start computing their partial sums before
the master is done initializing x and making it available to
the other cores. That is, the cores need to wait before
starting execution of the code:
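That code is the per-core partial-sum loop, now reading
from the shared array x:

    for (my_i = my_first_i; my_i < my_last_i; my_i++)
        my_sum += x[my_i];
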
We need to add in a point of synchronization between the
initialization of x and the computation of the partial sums:
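A sketch, with a call to the barrier function placed
before the loop:

    Synchronize_cores();   /* no core proceeds until all arrive */
    for (my_i = my_first_i; my_i < my_last_i; my_i++)
        my_sum += x[my_i];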

The idea here is that each core will wait in the function
Synchronize_cores until all the cores have entered the
function—in particular, until the master core has entered
this function.
Currently, the most powerful parallel programs are
written using explicit parallel constructs, that is, they are
written using extensions to languages such as C, C++, and
Java. These programs include explicit instructions for
parallelism: core 0 executes task 0, core 1 executes task 1,
…, all cores synchronize, …, and so on, so such programs
are often extremely complex. Furthermore, the complexity
of modern cores often makes it necessary to use
considerable care in writing the code that will be executed
by a single core.
There are other options for writing parallel programs—
for example, higher level languages—but they tend to
sacrifice performance to make program development
somewhat easier.

1.5 What we'll be doing


We'll be focusing on learning to write programs that are
explicitly parallel. Our purpose is to learn the basics of
programming parallel computers using the C language and
four different APIs or application program interfaces:
the Message-Passing Interface or MPI, POSIX threads
or Pthreads, OpenMP, and CUDA. MPI and Pthreads are
libraries of type definitions, functions, and macros that can
be used in C programs. OpenMP consists of a library and
some modifications to the C compiler. CUDA consists of a
library and modifications to the C++ compiler.
You may well wonder why we're learning about four
different APIs instead of just one. The answer has to do
with both the extensions and parallel systems. Currently,
there are two main ways of classifying parallel systems: one
is to consider the memory that the different cores have
access to, and the other is to consider whether the cores
can operate independently of each other.
In the memory classification, we'll be focusing on
shared-memory systems and distributed-memory
systems. In a shared-memory system, the cores can share
access to the computer's memory; in principle, each core
can read and write each memory location. In a shared-
memory system, we can coordinate the cores by having
them examine and update shared-memory locations. In a
distributed-memory system, on the other hand, each core
has its own, private memory, and the cores can
communicate explicitly by doing something like sending
messages across a network. Fig. 1.2 shows schematics of
the two types of systems.

FIGURE 1.2 (a) A shared memory system and (b) a
distributed memory system.
The second classification divides parallel systems
according to the number of independent instruction
streams and the number of independent data streams. In
one type of system, the cores can be thought of as
conventional processors, so they have their own control
units, and they are capable of operating independently of
each other. Each core can manage its own instruction
stream and its own data stream, so this type of system is
called a Multiple-Instruction Multiple-Data or MIMD
system.
An alternative is to have a parallel system with cores that
are not capable of managing their own instruction streams:
they can be thought of as cores with no control unit.
Rather, the cores share a single control unit. However, each
core can access either its own private memory or memory
that's shared among the cores. In this type of system, all
the cores carry out the same instruction on their own data,
so this type of system is called a Single-Instruction
Multiple-Data or SIMD system.
In a MIMD system, it's perfectly feasible for one core to
execute an addition while another core executes a multiply.
In a SIMD system, two cores either execute the same
instruction (on their own data) or, if they need to execute
different instructions, one executes its instruction while the
other is idle, and then the second executes its instruction
while the first is idle. In a SIMD system, we couldn't have
one core executing an addition while another core executes
a multiplication. The system would have to do something
like this:

    Time    First core    Second core
    1       Addition      Idle
    2       Idle          Multiply

Since you're used to programming a processor with its
own control unit, MIMD systems may seem more natural to
you. However, as we'll see, there are many problems that
are very easy to solve using a SIMD system. As a very
simple example, suppose we have three arrays, each with n
elements, and we want to add corresponding entries of the
first two arrays to get the values in the third array. The
serial pseudocode might look like this:
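Something like this:

    for (i = 0; i < n; i++)
        z[i] = x[i] + y[i];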

Now suppose we have n SIMD cores, and each core is
assigned one element from each of the three arrays: core i
is assigned elements x[i], y[i], and z[i]. Then our
program can simply tell each core to add its x- and
y-values to get the z value:
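In pseudocode, core i executes the single statement

    z[i] = x[i] + y[i];

and all n cores execute it in lockstep.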

This type of system is fundamental to modern Graphics
Processing Units or GPUs, and since GPUs are extremely
powerful parallel processors, it's important that we learn
how to program them.
powerful parallel processors, it's important that we learn
how to program them.
Our different APIs are used for programming different
types of systems:

• MPI is an API for programming distributed memory
MIMD systems.
• Pthreads is an API for programming shared memory
MIMD systems.
• OpenMP is an API for programming both shared
memory MIMD and shared memory SIMD systems,
although we'll be focusing on programming MIMD
systems.
• CUDA is an API for programming Nvidia GPUs,
which have aspects of all four of our classifications:
shared memory and distributed memory, SIMD, and
MIMD. We will, however, be focusing on the shared
memory SIMD and MIMD aspects of the API.

1.6 Concurrent, parallel, distributed


If you look at some other books on parallel computing or
you search the Web for information on parallel computing,
you're likely to also run across the terms concurrent
computing and distributed computing. Although there
isn't complete agreement on the distinction between the
terms parallel, distributed, and concurrent, many authors
make the following distinctions:

• In concurrent computing, a program is one in which
multiple tasks can be in progress at any instant [5].
• In parallel computing, a program is one in which
multiple tasks cooperate closely to solve a problem.
• In distributed computing, a program may need to
cooperate with other programs to solve a problem.

So parallel and distributed programs are concurrent, but
a program such as a multitasking operating system is also
concurrent, even when it is run on a machine with only one
core, since multiple tasks can be in progress at any instant.
There isn't a clear-cut distinction between parallel and
distributed programs, but a parallel program usually runs
multiple tasks simultaneously on cores that are physically
close to each other and that either share the same memory
or are connected by a very high-speed network. On the
other hand, distributed programs tend to be more “loosely
coupled.” The tasks may be executed by multiple
computers that are separated by relatively large distances,
and the tasks themselves are often executed by programs
that were created independently. As examples, our two
concurrent addition programs would be considered parallel
by most authors, while a Web search program would be
considered distributed.
But beware, there isn't general agreement on these
terms. For example, many authors consider shared-memory
programs to be “parallel” and distributed-memory
programs to be “distributed.” As our title suggests, we'll be
interested in parallel programs—programs in which closely
coupled tasks cooperate to solve a problem.

1.7 The rest of the book


How can we use this book to help us write parallel
programs?
First, when you're interested in high performance,
whether you're writing serial or parallel programs, you
need to know a little bit about the systems you're working
with—both hardware and software. So in Chapter 2, we'll
give an overview of parallel hardware and software. In
order to understand this discussion, it will be necessary to
review some information on serial hardware and software.
Much of the material in Chapter 2 won't be needed when
we're getting started, so you might want to skim some of
this material and refer back to it occasionally when you're
reading later chapters.
The heart of the book is contained in Chapters 3–7.
Chapters 3, 4, 5, and 6 provide a very elementary
introduction to programming parallel systems using C and
MPI, Pthreads, OpenMP, and CUDA, respectively. The only
prerequisite for reading these chapters is a knowledge of C
programming. We've tried to make these chapters
independent of each other, and you should be able to read
them in any order. However, to make them independent, we
did find it necessary to repeat some material. So if you've
read one of the three chapters, and you go on to read
another, be prepared to skim over some of the material in
the new chapter.
Chapter 7 puts together all we've learned in the
preceding chapters. It develops two fairly large programs
using each of the four APIs. However, it should be possible
to read much of this even if you've only read one of
Chapters 3, 4, 5, or 6. The last chapter, Chapter 8, provides
a few suggestions for further study on parallel
programming.

1.8 A word of warning


Before proceeding, a word of warning. It may be tempting
to write parallel programs “by the seat of your pants,”
without taking the trouble to carefully design and
incrementally develop your program. This will almost
certainly be a mistake. Every parallel program contains at
least one serial program. Since we almost always need to
coordinate the actions of multiple cores, writing parallel
programs is almost always more complex than writing a
serial program that solves the same problem. In fact, it is
often far more complex. All the rules about careful design
and development are usually far more important for the
writing of parallel programs than they are for serial
programs.

1.9 Typographical conventions


We'll make use of the following typefaces in the text:

• Program text, displayed or within running text, will
use the following typefaces:
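For example, a short C program might be displayed like
this:

    /* This is a short program */
    #include <stdio.h>
    int main(void) {
        printf("hello, world\n");
        return 0;
    }
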
• Definitions are given in the body of the text, and the
term being defined is printed in boldface type: A
parallel program can make use of multiple cores.
• When we need to refer to the environment in which
a program is being developed, we'll assume that
we're using a UNIX shell, such as bash, and we'll use a
$ to indicate the shell prompt:
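For example, a compile command might be shown as follows
(the program name here is only illustrative):

    $ gcc -g -Wall -o hello hello.c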

• We'll specify the syntax of function calls with fixed
argument lists by including a sample argument list.
For example, the integer absolute value function, abs,
in stdlib.h, might have its syntax specified with
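    int abs(int x);   /* Returns absolute value of x */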

For more complicated syntax, we'll enclose required
content in angle brackets <> and optional content in
square brackets []. For example, the C if statement
might have its syntax specified as follows:
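    if ( <expression> )
        <statement1>
    [ else
        <statement2> ]
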
This says that an if statement must include an
expression enclosed in parentheses, and the right
parenthesis must be followed by a statement. This
statement can be followed by an optional else clause.
If the else clause is present, it must include a second
statement.

1.10 Summary
For many years we've reaped the benefits of having ever-
faster processors. However, because of physical limitations,
the rate of performance improvement in conventional
processors has decreased dramatically. To increase the
power of processors, chipmakers have turned to multicore
integrated circuits, that is, integrated circuits with multiple
conventional processors on a single chip.
Ordinary serial programs, which are programs written
for a conventional single-core processor, usually cannot
exploit the presence of multiple cores, and it's unlikely that
translation programs will be able to shoulder all the work
of converting serial programs into parallel programs—
programs that can make use of multiple cores. As software
developers, we need to learn to write parallel programs.
When we write parallel programs, we usually need to
coordinate the work of the cores. This can involve
communication among the cores, load balancing, and
synchronization of the cores.
In this book we'll be learning to program parallel
systems, so that we can maximize their performance. We'll
be using the C language with four different application
program interfaces or APIs: MPI, Pthreads, OpenMP, and
CUDA. These APIs are used to program parallel systems
that are classified according to how the cores access
memory and whether the individual cores can operate
independently of each other.
In the first classification, we distinguish between shared-
memory and distributed-memory systems. In a shared-
memory system, the cores share access to one large pool of
memory, and they can coordinate their actions by accessing
shared memory locations. In a distributed-memory system,
each core has its own private memory, and the cores can
coordinate their actions by sending messages across a
network.
In the second classification, we distinguish between
systems with cores that can operate independently of each
other and systems in which the cores all execute the same
instruction. In both types of system, the cores can operate
on their own data stream. So the first type of system is
called a multiple-instruction multiple-data or MIMD
system, and the second type of system is called a single-
instruction multiple-data or SIMD system.
MPI is used for programming distributed-memory MIMD
systems. Pthreads is used for programming shared-memory
MIMD systems. OpenMP can be used to program both
shared-memory MIMD and shared-memory SIMD systems,
although we'll be looking at using it to program MIMD
systems. CUDA is used for programming Nvidia graphics
processing units or GPUs. GPUs have aspects of all four
types of system, but we'll be mainly interested in the
shared-memory SIMD and shared-memory MIMD aspects.
Concurrent programs can have multiple tasks in
progress at any instant. Parallel and distributed
programs usually have tasks that execute simultaneously.
There isn't a hard and fast distinction between parallel and
distributed, although in parallel programs, the tasks are
usually more tightly coupled.
Parallel programs are usually very complex. So it's even
more important to use good program development
techniques with parallel programs.
1.11 Exercises
1.1 Devise formulas for the functions that calculate
my_first_i and my_last_i in the global sum example.
Remember that each core should be assigned
roughly the same number of elements of
computations in the loop. Hint: First consider the
case when n is evenly divisible by p.
1.2 We've implicitly assumed that each call to
Compute_next_value requires roughly the same amount
of work as the other calls. How would you change your
answer to the preceding question if call i = k
requires k + 1 times as much work as the call with
i = 0? How would you change your answer if the first
call (i = 0) requires 2 milliseconds, the second call
(i = 1) requires 4, the third (i = 2) requires 6, and
so on?
1.3 Try to write pseudocode for the tree-structured
global sum illustrated in Fig. 1.1. Assume the
number of cores is a power of two (1, 2, 4, 8, …).
Hint: Use a variable divisor to determine whether a
core should send its sum or receive and add. The
divisor should start with the value 2 and be doubled
after each iteration. Also use a variable
core_difference to determine which core should be
partnered with the current core. It should start with
the value 1 and also be doubled after each iteration.
For example, in the first iteration 0 % divisor = 0
and 1 % divisor = 1, so 0 receives and adds, while 1
sends. Also in the first iteration
0 + core_difference = 1 and 1 − core_difference = 0,
so 0 and 1 are paired in the first iteration.
1.4 As an alternative to the approach outlined in the
preceding problem, we can use C's bitwise operators
to implement the tree-structured global sum. To see
how this works, it helps to write down the binary
(base 2) representation of each of the core ranks and
note the pairings during each stage:

    Stage 1: 000 ↔ 001, 010 ↔ 011, 100 ↔ 101, 110 ↔ 111
    Stage 2: 000 ↔ 010, 100 ↔ 110
    Stage 3: 000 ↔ 100

From the table, we see that during the first stage each
core is paired with the core whose rank differs in the
rightmost or first bit. During the second stage, cores
that continue are paired with the core whose rank
differs in the second bit; and during the third stage,
cores are paired with the core whose rank differs in
the third bit. Thus if we have a binary value bitmask
that is 001₂ for the first stage, 010₂ for the second,
and 100₂ for the third, we can get the rank of the core
we're paired with by “inverting” the bit in our rank
that is nonzero in bitmask. This can be done using the
bitwise exclusive or ∧ operator.
Implement this algorithm in pseudocode using the
bitwise exclusive or and the left-shift operator.
1.5 What happens if your pseudocode in Exercise 1.3 or
Exercise 1.4 is run when the number of cores is not a
power of two (e.g., 3, 5, 6, 7)? Can you modify the
pseudocode so that it will work correctly regardless
of the number of cores?
1.6 Derive formulas for the number of receives and
additions that core 0 carries out using
a. the original pseudocode for a global sum, and
b. the tree-structured global sum.
Make a table showing the numbers of receives and
additions carried out by core 0 when the two sums
are used with 2, 4, 8, …, 1024 cores.
1.7 The first part of the global sum example—when
each core adds its assigned computed values—is
usually considered to be an example of data-
parallelism, while the second part of the first global
sum—when the cores send their partial sums to the
master core, which adds them—could be considered
to be an example of task-parallelism. What about the
second part of the second global sum—when the
cores use a tree structure to add their partial sums?
Is this an example of data- or task-parallelism? Why?
1.8 Suppose the faculty members are throwing a party
for the students in the department.
a. Identify tasks that can be assigned to the
faculty members that will allow them to use task-
parallelism when they prepare for the party.
Work out a schedule that shows when the various
tasks can be performed.
b. We might hope that one of the tasks in the
preceding part is cleaning the house where the
party will be held. How can we use data-
parallelism to partition the work of cleaning the
house among the faculty?
c. Use a combination of task- and data-parallelism
to prepare for the party. (If there's too much
work for the faculty, you can use TAs to pick up
the slack.)
1.9 Write an essay describing a research problem in
your major that would benefit from the use of
parallel computing. Provide a rough outline of how
parallelism would be used. Would you use task- or
data-parallelism?

Chapter 2: Parallel hardware and parallel software
It's perfectly feasible for specialists in disciplines other
than computer science and computer engineering to write
parallel programs. However, to write efficient parallel
programs, we often need some knowledge of the underlying
hardware and system software. It's also very useful to have
some knowledge of different types of parallel software, so
in this chapter we'll take a brief look at a few topics in
hardware and software. We'll also take a brief look at
evaluating program performance and a method for
developing parallel programs. We'll close with a discussion
of what kind of environment we might expect to be working
in, and a few rules and assumptions we'll make in the rest
of the book.
This is a long, broad chapter, so it may be a good idea to
skim through some of the sections on a first reading so that
you have a good idea of what's in the chapter. Then, when a
concept or term in a later chapter isn't quite clear, it may
be helpful to refer back to this chapter. In particular, you
may want to skim over most of the material in
“Modifications to the von Neumann Model,” except “The
Basics of Caching.” Also, in the “Parallel Hardware”
section, you can safely skim the material on
“Interconnection Networks.” You can also skim the material
on “SIMD Systems” unless you're planning to read the
chapter on CUDA programming.