Computer Organisation and Architecture
An Introduction
Second Edition

B.S. Chalk, A.T. Carter and R.W. Hind

Palgrave Macmillan
© B.S. Chalk, A.T. Carter and R.W. Hind 2004

First published 2004 by
PALGRAVE MACMILLAN
Houndmills, Basingstoke, Hampshire RG21 6XS and
175 Fifth Avenue, New York, N.Y. 10010
Companies and representatives throughout the world

ISBN 978-1-4039-0164-4 ISBN 978-0-230-00060-5 (eBook)


DOI 10.1007/978-0-230-00060-5

A catalogue record for this book is available from the British Library
Contents

Preface to the second edition
Acknowledgements
List of trademarks

Chapter 1 Introduction
1.1 Computer functionality
1.2 The von Neumann model
1.3 A personal computer system
1.4 Representing memory
1.5 High- and low-level languages
1.6 The operating system
1.7 Networked systems
Answers to in text questions
Exercises

Chapter 2 Data representation and computer arithmetic
2.1 Bits, bytes and words
2.2 Binary codes
2.3 Number systems
2.4 Negative numbers
2.5 Binary arithmetic
2.6 Binary coded decimal (BCD)
2.7 Floating point representation
2.8 Summary
Answers to in text questions
Exercises

Chapter 3 Boolean logic
3.1 Logic gates
3.2 Combinational logic circuits
3.3 Sequential logic circuits
3.4 Flip-flop circuits
3.5 Summary
Answers to in text questions
Exercises

Chapter 4 Central processor unit operation
4.1 CPU details
4.2 Processor–Memory interconnection
4.3 Improving performance
4.4 The use of microcode
4.5 Summary
Answers to in text questions
Exercises

Chapter 5 The Intel 80x86 family of processors
5.1 The programmer's model
5.2 Instruction types
5.3 Addressing modes
5.4 Instruction formats
5.5 Assembly code examples
5.6 Operating modes
5.7 Floating point arithmetic
5.8 Summary
Answers to in text questions
Exercises

Chapter 6 Primary memory
6.1 Memory hierarchy
6.2 RAM and cache basics
6.3 Semiconductor memory chips
6.4 Data and address buses
6.5 Cache memory
6.6 Summary
Answers to in text questions
Exercises

Chapter 7 Secondary memory
7.1 Magnetic surface technology
7.2 Magnetic disk storage
7.3 Optical disk storage systems
7.4 Summary
Answers to in text questions
Exercises

Chapter 8 Input–Output
8.1 PC buses
8.2 Types of interface
8.3 I/O addressing
8.4 Modes of I/O transfer
8.5 I/O buses
8.6 I/O devices
8.7 Summary
Answers to in text questions
Exercises

Chapter 9 Operating systems
9.1 Overview
9.2 Power-on self-test (POST) and system boot-up
9.3 Multiprogramming/multitasking
9.4 The process concept
9.5 Process management
9.6 Process scheduling
9.7 Inter-Process Communication (IPC)
9.8 Threads
9.9 Memory management
9.10 Operating system traps
9.11 File systems
9.12 Summary
Answers to in text questions
Exercises

Chapter 10 Reduced instruction set computers
10.1 CISC characteristics
10.2 Instruction usage
10.3 RISC architectures
10.4 The control unit
10.5 Pipelining
10.6 Hybrids
10.7 Performance and benchmarking
10.8 Superscalar and superpipelined architectures
10.9 Summary
Answers to in text questions
Exercises

Chapter 11 Networked systems
11.1 Introduction to networked systems
11.2 Local area networks
11.3 Wide area networks
11.4 Distributed systems
11.5 Security of networked systems
11.6 Summary
Answers to in text questions
Exercises

Chapter 12 A look ahead
12.1 Processors
12.2 Primary memory
12.3 Secondary memory
12.4 Peripheral devices
12.5 Networks
12.6 Complete systems
12.7 Summary
Exercises

Appendix 1 Introduction to logic circuit minimisation using Karnaugh map methods
Appendix 2 Introduction to debug
Appendix 3 ASCII and Extended ASCII tables
Appendix 4 The 80x86 family of processors
Appendix 5 IEEE 754 floating point format
Acronyms
References and further reading
Index
Preface to the second edition

A great deal has happened in the world of computing since the publication of
the first edition of this book. Processors have become faster and the number
of transistors contained in the processor chip has greatly increased. The
amount of memory, both primary and secondary, in the standard personal
computer has increased and become faster. New peripheral devices have
come onto the scene and some of the old ones have almost disappeared.
Networked computers are the norm, as is connection to the Internet for
almost all home computers. Having said all the above, the basic von
Neumann architecture has not been superseded yet.
This second edition of Computer Organisation and Architecture, An
Introduction, builds on the first edition, bringing the material up to date and
adding new chapters on networking and a look ahead. After considerable
thought, we have decided to use the Intel family of processors rather than the
Motorola 68000 for our examples. This is because the availability of Intel
based personal computers (PCs) tends to be greater than machines based on
the Motorola 68000, taking into account that many people, especially
students, have a PC at home. Our change must not be seen as a criticism of
the Motorola processors, but simply a matter of expedience for experiential
learning.
Many of our examples make reference to PCs, but all the basic principles
apply to all sizes and shapes of computers. There are still a large number of
powerful high-end computers being used in big organisations and it must be
remembered that the world of computing is not just PCs.
The target audience for this edition has not changed and with the addition
of the networking chapter, we hope that the area of appeal will have widened.
We have included Chapter 12 in order to look briefly at some
developments. Some are a few weeks away while others are experimental or
just proposals. With the rate of development we are seeing, it is difficult to
imagine where computing will be in, say, five years' time. We live in exciting
times.
Suggested answers to a number of the end of chapter exercises are
available on the web site associated with this book.

A.T. Carter, R.W. Hind


Chapter 1 Introduction
Not all that many years ago, the only places where one would be able to see
a computer would have been the central offices of large organisations. The
computer, costing at least £500000, would have been housed in a large,
temperature controlled room. The computer would have been run by a team
of people, called operators, working on a shift system which provided
24-hour operation. Users of the computer would have a terminal, consisting
of a TV screen and a keyboard, on their desk and they would use the facilities
of the computer by means of on-screen forms and menus. These computers
were called mainframe computers and in fact there are still many of these in
operation today. Today, almost every home has a computer either in the
form of a Personal Computer (PC) or games console and the cost is well
under £1000.
There is a vast array of different types of computers between the two
types mentioned above, varying in size, cost and performance. However,
the majority of these computers are based on a model proposed by John
von Neumann and others in 1946. In Chapter 1, we describe the von
Neumann model and relate its logical units to the physical components
found in a typical PC. This will provide a foundation for a more detailed
discussion of computer organisation in subsequent chapters. There are two
approaches to investigating a complex system. One, known as the top-
down approach, looks at the system as a whole with particular attention
being applied to what it does, in other words, the functions the system
performs. Then each function is investigated in more detail with the
intention of gaining an understanding of how the system performs the
function. The level of detail considered increases until the individual
component level is reached, at which point the operation of the whole
system should be understood in minute detail. The alternative approach,
known as the bottom-up approach, considers individual components and
then looks at ways in which these can be connected together to provide the
functions required of a system.
In this book, we will start by using the top-down approach to get an
understanding of what basic functions a computer can perform, then we will
use the bottom-up approach to show how basic components can be
interconnected to provide the required functionality.

1.1 Computer functionality


The mighty computer can do little more than add two numbers together.
Everything else we see the computer being used for, be it playing a graphics
game, word processing a document or running a payroll, is a sequence of
operations that mainly involves adding numbers together. ‘Wait a minute’
you say, ‘computers can subtract, multiply, divide and do many other things
too’. We will deal with these simple functions here and the rest of the book
will cover many other aspects. Take subtraction: if we wish to subtract
20 from 30, all we need to do is change the sign of 20 and add the two
numbers to give 10. So we have done subtraction by using addition.

30 + (−20) = 10

Multiplication is successive addition so if we wish to multiply 25 by 3 we can
carry out the following calculation:

25 + 25 + 25 = 75

Division is successive subtraction, which is successive addition with a sign
change.
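
As a minimal C sketch (the function names are illustrative), multiplication and division of non-negative integers can be reduced to repeated addition:

    #include <stdio.h>

    /* Multiply a by b using repeated addition. */
    int multiply(int a, int b) {
        int product = 0;
        for (int i = 0; i < b; i++) {
            product = product + a;     /* the only arithmetic used is addition */
        }
        return product;
    }

    /* Divide a by b by repeatedly adding the negated divisor. */
    int divide(int a, int b) {
        int quotient = 0;
        while (a >= b) {
            a = a + (-b);              /* subtraction as addition of a negative */
            quotient = quotient + 1;
        }
        return quotient;
    }

    int main(void) {
        printf("25 x 3 = %d\n", multiply(25, 3));   /* prints 75 */
        printf("75 / 25 = %d\n", divide(75, 25));   /* prints 3 */
        return 0;
    }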

TQ 1.1 How would you use the above system to check if two numbers were equal?

Let us see how the addition function can be achieved.

1.2 The von Neumann model


A key feature of this model is the concept of a stored program. A program is a
set of instructions that describe the steps involved when carrying out a
computational task, such as carrying out a calculation or accessing a
database. The program is stored in memory together with any data upon
which the instructions operate, as illustrated in Figure 1.1.
To run a program, the CPU or Central Processing Unit repeatedly fetches,
decodes and executes the instructions one after the other in a sequential
manner. This is carried out by a part of the CPU called the control unit. The
execution phase frequently involves fetching data, altering it in some way and
then writing it back to memory. For this to be possible, an instruction must
specify both the operation to be performed and the location or memory
address of any data involved. Operations such as addition and subtraction
are performed by a part of the CPU called the Arithmetic and Logic
Unit (ALU). Input and Output devices are needed to transfer information to
and from memory. To sequence these transfers and to enforce the orderly
movement of instructions and data in the system, the control unit uses
various control lines.


Figure 1.1 The von Neumann model

[Diagram: the CPU, containing the ALU and the control unit, is connected to memory (where program and data are stored) and to input and output devices; control lines and instruction/data flow paths link the units.]

Figure 1.2 Basic PC system

[Diagram: a processor unit, which houses the CPU and memory, with monitor, keyboard, mouse and printer attached.]

1.3 A personal computer system


Figure 1.2 shows some of the basic hardware of a ‘stand alone’ personal
computer (PC) system. The processor unit houses the bulk of the
electronics, including the CPU and memory. Attached to this are various
peripheral devices, such as a keyboard, a mouse, a monitor which can be a
TV type screen or a flat Liquid Crystal Display (LCD) and a printer. These
devices provide the Input/Output (I/O) facility. If we open the processor
unit and take a look inside, we find a number of electronic components
mounted on a large printed circuit board known as a motherboard, as
shown in Figure 1.3. The components are connected together by
conducting tracks for carrying electrical signals between them. These
signals carry information in digitized or digital form and are therefore
referred to as digital signals.
Most of the electronic components are in the form of integrated circuits
(IC), which are circuits built from small slices or ‘chips’ of the semiconductor
material, silicon. The chips are mounted in plastic packages to provide for
connecting them to the motherboard. One of the largest and most complex
ICs on the board is the microprocessor, normally referred to as the processor,
which is the CPU of the system. This chip contains millions of electronic


Figure 1.3 A typical motherboard (reproduced with permission from EPOX Electronics)

[Annotated board layout showing the CPU Socket A, DIMM memory sockets, 4X AGP slot, PCI and ISA expansion slots, IDE and floppy-disk connectors, BIOS chip, battery, ATX power connector, fan headers and the rear I/O ports (keyboard, mouse, USB, COM1/COM2, parallel and game ports).]

switches called transistors organised in the form of logic gates, the basic
building blocks of digital circuits. These logic gates are used to implement
the control unit, the ALU and other components of the CPU such as its
register set. Logic gates are discussed in Chapter 3.
There are two basic types of semiconductor memory on the motherboard,
Random Access Memory (RAM) which is a read–write memory and Read
Only Memory (ROM). These form the fast primary or main memory of the
system and both store information in binary form (1s and 0s). RAM is often
provided in the form of memory modules, each module containing a number
of memory chips. The modules are plugged into sockets on the motherboard.
Because RAM can be read from and written to, it is suitable for storing
programs and data. Unfortunately RAM chips are normally volatile and
therefore lose their content when the computer’s power is switched off.
ROMs on the other hand, are non-volatile and are used for storing various
system programs and data that needs to be available when the computer is
switched on. Non-volatile means that the ROM does not lose its content even
when the power is removed.


TQ 1.2 Why is ROM unsuitable for storing user programs?

In addition to a fast main memory, the PC also has a large but slower
secondary memory, usually in the form of a hard disk and one or two
floppy disk units and a CD or DVD read/write unit. Programs are stored
on disk as files and must be loaded into main memory before they can
be executed by the processor. Computer memory is discussed in
Chapters 6 and 7.
The processor is connected to memory and the other parts of the system
by a group of conducting tracks called a system bus, which provides a
pathway for the exchange of data and control information. Logically, a
system bus is divided into an address bus, a data bus and a control bus.
To co-ordinate activities taking place inside the processor with those taking
place on the system bus, some form of timing is required. This is provided
by a crystal controlled clock.
Input/Output (I/O) cards are plugged into the sockets shown in
Figure 1.3. The sockets are connected to the system bus. The cards are used
for connecting peripheral devices to the system. In general, peripheral
devices operate at much slower speeds than the CPU and so the I/O cards
will have special interface chips mounted on them for connecting the
peripheral devices to the system bus. Interfacing is discussed in Chapter 8.
It is worth mentioning that although PCs are very common and
there are many millions in use today, two other types of small computer
are becoming very popular, namely the small laptop or portable computer
and the even smaller palmtop or personal digital assistant (PDA) computer.
Both laptop and PDA computers are single unit devices with the monitor,
keyboard and mouse built into the single unit. Other than size and a
slightly higher price, there is little difference between a laptop and a PC.
PDAs have a restricted keyboard and sometimes a stylus is used to actuate
the keys rather than fingers. They also tend to have somewhat limited
capability.

1.4 Representing memory


We can visualise main memory as a series of storage boxes or locations, as
shown in Figure 1.4. Each location is identified by an address and can be
used to store an instruction or some data. For example, the instruction
move 4, is stored at address 0 and the datum, 2, is stored at address 5.
The first instruction, move 4, copies the ‘contents of address 4’ or
number 1, into one of the processor’s registers. The second instruction,
add 5, adds the ‘contents of address 5’ or number 2, to the first number
stored in the register. The third instruction, store 6, stores the ‘contents of
this register’ or the sum of the two numbers, into address 6. Finally the
last instruction, stop, halts or prevents any further execution of the
program.


Figure 1.4 A representation of memory

  Main Memory
  address   content
  0         move 4     <- memory location 0; its content is "move 4"
  1         add 5
  2         store 6
  3         stop
  4         1
  5         2
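
A minimal C sketch can mimic the fetch, decode and execute cycle for this little program; the opcode names and the single register below are invented for the illustration, and the program is kept in its own array for brevity (in Figure 1.4 it would occupy addresses 0 to 3):

    #include <stdio.h>

    /* Hypothetical opcodes for the example instruction set. */
    enum { MOVE, ADD, STORE, STOP };

    typedef struct { int opcode; int address; } Instruction;

    int main(void) {
        int memory[7] = {0, 0, 0, 0, 1, 2, 0};   /* addresses 4 and 5 hold the data 1 and 2 */
        Instruction program[] = {
            {MOVE, 4}, {ADD, 5}, {STORE, 6}, {STOP, 0}
        };
        int reg = 0;                              /* a single processor register */

        for (int pc = 0; ; pc++) {
            Instruction ir = program[pc];         /* fetch */
            switch (ir.opcode) {                  /* decode and execute */
                case MOVE:  reg = memory[ir.address];        break;
                case ADD:   reg = reg + memory[ir.address];  break;
                case STORE: memory[ir.address] = reg;        break;
                case STOP:  printf("sum = %d\n", memory[6]); /* prints sum = 3 */
                            return 0;
            }
        }
    }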

1.5 High- and low-level languages


Instructions such as move and add are called machine instructions and are the
only instructions the processor can ‘understand’ and execute. Writing
programs at this level requires a knowledge of the computer’s architecture,
which includes amongst other things, details of the processor’s registers, the
different instructions it can execute (instruction set) and the various ways
these instructions can address memory (addressing modes). Programming at
machine level is called low-level language programming and some examples
of this can be seen in Chapters 4 and 5.
When we wish to write programs to solve particular problems, it is often
easier to write them in English-like statements using a high-level language
(HLL), such as Java or C.
For example, the HLL statement:
Sum := A + B;
gives the same result as our previous program while being easier to follow.
The fact that the variables A, B and Sum refer to memory addresses 4, 5 and 6
or some other locations, is hidden from the programmer and allows him or
her to concentrate on the logic of the problem rather than the organisation
of the computer.
Because the machine cannot directly understand or execute HLL program
statements, these statements must be translated into machine instructions
before the program can be executed. Translating a HLL program into a
machine language program, often called machine code, is the responsibility of a
piece of system software. Two approaches to the process of translating HLL
into machine code are common. One is called Interpretation, where each HLL
statement is in turn converted into machine code statements which are then
executed. The other is called Compilation, where the whole HLL program is
converted into machine code statements and placed into a file called an
executable file. After the compilation process is completed the executable
file is then executed. Interpretation is ideal for the software development
stage. Compilation is best for a fully developed program as it runs faster.


Figure 1.5 Different user interfaces (a) graphical (b) command driven

[Panel (a): a graphical desktop of icons and windows. Panel (b): a command driven interface, shown as a DOS-style prompt:
Microsoft(R) Windows 95
(C)Copyright Microsoft Corp 1981–1996.
C:\WINDOWS> ]

1.6 The operating system


As well as software for solving user problems (applications software),
software is needed for carrying out various system tasks, such as controlling
the monitor, reading the keyboard, loading files into memory from the hard
disk and so on. These programs are part of a powerful piece of systems
software called the operating system.
When we switch on a PC, we are presented with some form of user
interface. The interface might be graphical, as shown in Figure 1.5(a), or
command driven, as shown in Figure 1.5(b). In either case, the operating
system creates an environment in which the user can conveniently examine files
and run programs. For a Graphical User Interface (GUI), this is done by 'clicking'
on icons using a pointing device such as a mouse, while for a Command
Driven Interface (CDI), it is done by entering special commands and file
names from the keyboard. The fact that we do not have to know where a file
is stored on disk or the main memory locations in which a program is
loaded, is simply due to the operating system.
Many operating system functions are either invisible to the user, or
become apparent only when things go wrong, such as when an error occurs.
The operating system is often referred to as a resource manager as part of its
job is to control the use of the processor, memory and file system. It is also
responsible for controlling access to the computer itself by providing a
security mechanism, which might involve user passwords. We will return to
the topic of operating systems in Chapter 9.

1.7 Networked systems


Very few office or college PCs are stand-alone systems. They are connected to
a network, which means that users of PCs can communicate using e-mail or
share resources such as printers, scanners and other PCs' disk systems.
There are two basic network configurations, peer-to-peer and server-based
networks. Peer-to-peer networks consist of a number of PCs connected


together in such a way that each PC is of equal standing. Each PC can,
providing permission has been granted, access disks and peripheral devices
of any other PC directly. This is ideal if the number of PCs on the network is
small, say up to 10, but it is a difficult configuration to manage and keep
secure. Server-based networks consist of a number of PCs connected together
and also connected to a special PC called a server. The server provides a
central file store and a machine to control printing and network access. To
use the network, a PC user must ‘log on’ to the server, which involves security
and access checking. The PC user can then access the server file system and
the peripherals connected to it. Each user is normally allocated his or her
own area of storage on the file system, which is only available for that user. A
common file area is often provided, available for all users, into which work to
be shared can be loaded. Server-based networks are ideal for larger networks.
Server-based networks are sometimes incorrectly referred to as client/server
networks. Client/server systems are more to do with distributed computer
systems than the Local Area Networks (LANs) commonly found. We will
cover networks in more detail in Chapter 11.

Answers to in text questions


TQ 1.1 Subtract one number from the other and see if the answer is zero. If the
answer is zero, the numbers are equal. The ALU can easily tell if a register
contains all zeros.

TQ 1.2 Because they can only be read from and not written to, they cannot be
loaded with user programs.

EXERCISES
1 Explain what the letters CPU, RAM, ROM and LAN stand for.
2 Write down the main features of a von Neumann style computer.
3 Explain why ROM is needed in a PC system.
4 Explain what is meant by the terms machine instruction and
instruction set.
5 State the parts of the CPU that are used for (a) fetching and
interpreting instructions (b) performing arithmetic operations
such as ‘add’.
6 Briefly explain the benefits of programming in a HLL.
7 Software can be classified as either application software or systems
software. Give an example of each type.
8 When it is required to run a piece of software designed to run on
one type of machine on another type of machine, the software needs
to be recompiled. Explain why this is so.


9 From the time you ‘double click’ on an icon for a text document in a
GUI, to the time it appears on the screen and you are able to edit it,
the operating system must perform a number of tasks. Outline what
you think these might be.
10 Networks allow users to share peripherals and file stores. Explain the
security risks that this might involve.
11 Explain why a laptop computer may cost more than a PC with a
similar specification.
12 There is a growing trend for desktop PC users to want LCD displays
rather than TV type monitors. Explain why you think this is.
13 In a peer-to-peer network it is possible to send a message from one
PC to another PC directly but this is not possible in a server-based
network. Does this mean that server-based networks cannot be
used for e-mail? Explain.
14 What is the effect if one PC in a peer-to-peer network fails or is
switched off?
15 What is the effect if the server machine in a server-based network
fails?

Chapter 2 Data representation and computer arithmetic
Data is represented and stored in a computer using groups of binary digits
called words. This chapter begins by describing binary codes and how words
are used to represent characters. It then concentrates on the representation of
positive and negative integers and how binary arithmetic is performed
within the ALU. The chapter concludes with a discussion on the
representation of real numbers and floating point arithmetic.

2.1 Bits, bytes and words


Because of the two-state nature of logic gates, see Chapter 3 for more details
on logic gates, the natural way of representing information inside an
electronic computer is by using the digits 0 and 1 called binary digits. A
binary digit or bit is the basic unit from which all information is structured.
Computers store and process information using groups of bits called words,
as illustrated in Figure 2.1.
In principle, the number of bits in the word or word length can be any
size, but for practical reasons, modern computers currently standardise on

Figure 2.1 Words stored in memory

[Diagram: memory addresses 0 to 3 each hold an n-bit word, e.g. 10110011; a word represents some item of information, e.g. the character 'A', the integer 5 or the real number 3.4.]

multiples of 8-bits, typical word lengths being 16, 32 or 64 bits. A group of
8 bits is called a byte so we can use this unit to express these word lengths as
2 bytes, 4 bytes and 8 bytes, respectively. Bytes are also used as the base unit
for describing memory storage capacity, the symbols K, M, G and T being
used to represent multiples of this unit as shown in the following table:

  Multiple                      Pronounced   Symbol
  1024                          kilo         K
  1024 × 1024                   mega         M
  1024 × 1024 × 1024            giga         G
  1024 × 1024 × 1024 × 1024     tera         T

Thus K or KB represents 1024 bytes, M or MB represents 1048576 bytes,
G or GB represents 1073741824 bytes and T or TB represents 1099511627776 bytes.
In this book, we will use the lower case b to represent bits. Thus Kb means
Kbits and so on.

2.2 Binary codes


With an n-bit word there are 2ⁿ different unique bit patterns that can be
used to represent information. For example, if n = 2, there are 2² or four
bit patterns 00, 01, 10 and 11. To each pattern we can assign some meaning,
such as:
00 = North, 01 = South, 10 = East, 11 = West
The process of assigning a meaning to a set of bit patterns defines a
particular binary code.

TQ 2.1 How many different 'things' can we represent with 7 bits?

(1) ASCII code


The ASCII code (American Standard Code for Information Interchange), is a
7-bit character code originally adopted for representing a set of 128 different
symbols that were needed for exchanging information between computers.
These symbols include alphanumeric characters such as (A–Z, a–z, 0–9),
special symbols such as (+, −, &, %, etc.), and control characters including
'Line Feed' and 'Carriage Return'. Table 2.1 illustrates some of the printable
ASCII codes, such as 'A' = 1000001 and '%' = 0100101. b6, b5, …, b0 are the
seven bit positions, numbered from left to right.


Table 2.1 ASCII codes for 'A', 'z', '2' and '%'

  Character   b6 b5 b4 b3 b2 b1 b0
  A           1  0  0  0  0  0  1
  z           1  1  1  1  0  1  0
  2           0  1  1  0  0  1  0
  %           0  1  0  0  1  0  1

Control codes, such as 'Carriage Return' = 0001101 and 'Line Feed' =
0001010, are called non-printing characters. The full ASCII table is given in
Appendix 3.
In addition to providing a code for information exchange, the ASCII code
has also been adapted for representing characters inside a computer.
Normally characters occupy a single byte of memory: the lower 7 bits being
used to represent the ASCII code and the upper bit being set to 0 or 1,
depending upon the machine. The extra bit can also be used to provide
additional codes for storing graphic characters, or as a parity bit for checking
single bit errors.
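
As a minimal C sketch (the helper name is illustrative), a parity bit can be computed over the lower 7 bits of a character and placed in the spare eighth bit:

    #include <stdio.h>

    /* Even parity for the low 7 bits of c: returns 1 if the number of
       1 bits is odd, so that setting bit 7 makes the total count even. */
    int parity_bit(unsigned char c) {
        int ones = 0;
        for (int i = 0; i < 7; i++) {
            if ((c >> i) & 1) ones++;
        }
        return ones % 2;
    }

    int main(void) {
        unsigned char ch = 'A';                 /* ASCII 1000001 = 65 */
        printf("'%c' has ASCII code %d\n", ch, ch);
        printf("with even parity in bit 7: %d\n", ch | (parity_bit(ch) << 7));
        return 0;
    }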

TQ 2.2 By referring to the ASCII table in Appendix 3, write down the ASCII codes
for the characters – ‘a’, ‘Z’ and ‘*’.

Binary codes can also be used to represent other entities, such as
instructions and numbers. To represent numeric data we require a set of
rules or numbering system for assigning values to the codes.

2.3 Number systems


(1) Decimal number system
We represent decimal numbers using strings of digits taken from the set
{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}. Moving from left to right, each symbol represents
a linearly increasing value. To represent numbers greater than 9 we use
combinations of digits and apply a weighting to each digit according to its
position in the number. For example, the decimal integer 126 is assigned a
value of:
1 × 100 + 2 × 10 + 6 × 1 = 100 + 20 + 6


Figure 2.2 Weightings used in the decimal number system

  1 × 10² + 2 × 10¹ + 6 × 10⁰
  (digit positions 2, 1 and 0)

The weighting applied to these digits is 10 raised to the power of the position
of the digit, as shown in Figure 2.2.
The position of a digit is found by counting from right to left starting at
position 0.
Fractional or real numbers use a decimal point to separate negative powers
of 10 from positive powers of ten. For example 52.6 represents:

5 × 10¹ + 2 × 10⁰ + 6 × 10⁻¹

The reason for using 10 is that there are ten different digits in this
representation, which we call the base or radix of the system. Other
positional number systems use different sets of digits and therefore have
different bases. To distinguish one number system from another, we often
subscript the number by its base, such as 126₁₀.

(2) Binary number system


The binary number system uses just two digits {0, 1} and therefore has a base
of 2. The positional weighting of the digits is based on powers of 2, giving the
number 1011₂, for example, a decimal value of:

1 × 2³ + 0 × 2² + 1 × 2¹ + 1 × 2⁰ = 8 + 0 + 2 + 1 = 11₁₀

This system of weighting is called pure binary, the binary digit furthest to the
right being the least significant bit (lsb) and the one furthest to the left being
the most significant bit (msb).

TQ 2.3 What is the decimal value of the number 11.1₂?

(3) Hexadecimal number system


The hexadecimal (Hex) number system is a base-16 system and therefore has
16 different symbols to represent its digits. By convention the symbols
adopted are {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F}, where:

A = 10₁₀, B = 11₁₀, C = 12₁₀, D = 13₁₀, E = 14₁₀ and F = 15₁₀

In this system the weighting is 16 raised to the power of the position of the
digit. For example A1F16 has a decimal value of:
A × 16² + 1 × 16¹ + F × 16⁰ = 10 × 256 + 1 × 16 + 15 × 1 = 2591₁₀


Table 2.2 Comparison of binary and hexadecimal number systems

binary hexadecimal

0000 0
0001 1
0010 2
0011 3
0100 4
0101 5
0110 6
0111 7
1000 8
1001 9
1010 A
1011 B
1100 C
1101 D
1110 E
1111 F

(4) Binary to hexadecimal conversion


Table 2.2 compares the first sixteen digits of the binary number system with
the hexadecimal number system.
From the table we can see that a single hexadecimal digit is capable of
representing a 4-bit binary number. Because of this fact, we can convert a
binary number into hexadecimal by grouping the digits into 4’s, replacing
each group by one hexadecimal digit, as shown below:

  1011 0011 1010
   B    3    A

The binary number 101100111010₂ expressed in hexadecimal is therefore
B3A₁₆. To convert a hexadecimal number into binary, we reverse this
operation and replace each hexadecimal digit by a 4-bit binary number. One
reason for using hexadecimal, is that it makes it easier for humans to talk
about bit patterns if we express them in their Hex equivalent. Try this
experiment with a friend. Read out a 16-bit binary number quite quickly and
get the friend to write the number on paper. Very few will be able to do this
correctly. Now convert the binary number to Hex and read out the Hex
value, again asking the friend to write the number down. Almost certainly,
they will get the number down correctly this time.
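
The grouping rule translates directly into code. Here is a minimal C sketch (the function name is illustrative) that converts a binary string whose length is a multiple of 4 into hexadecimal:

    #include <stdio.h>
    #include <string.h>

    /* Convert a binary string (length a multiple of 4) to hexadecimal
       by replacing each group of four bits with one hex digit. */
    void binary_to_hex(const char *bits, char *hex) {
        const char *digits = "0123456789ABCDEF";
        size_t n = strlen(bits);
        for (size_t i = 0; i < n; i += 4) {
            int value = 0;
            for (int j = 0; j < 4; j++) {
                value = value * 2 + (bits[i + j] - '0');   /* accumulate the 4-bit group */
            }
            *hex++ = digits[value];
        }
        *hex = '\0';
    }

    int main(void) {
        char hex[16];
        binary_to_hex("101100111010", hex);
        printf("%s\n", hex);    /* prints B3A */
        return 0;
    }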


TQ 2.4 Convert the hexadecimal number ABCD₁₆ into binary

2.4 Negative numbers


The binary system described so far is unable to represent negative integers
and for this reason we call it unsigned binary. To support the use of negative
numbers it is necessary to modify our representation to include information
about the sign as well as the magnitude of a number. In this section we will
consider two ways of doing this.

(1) Sign and magnitude representation


In this representation, the leftmost bit of the number is used as a sign bit and
the remaining bits are used to give its magnitude. By convention, 0 is used for
positive numbers and 1 for negative numbers. For example, using an 8-bit
representation the numbers −5₁₀ and +20₁₀ are 10000101 and 00010100
respectively.

TQ 2.5 What do 10001111 and 01010101 represent?

Unfortunately, representing numbers in this way makes binary addition and
subtraction, which are performed by the Arithmetic and Logic Unit (ALU),
more awkward to deal with. When performing addition, for example, the
more awkward to deal with. When performing addition, for example, the
sign bits must be checked before the magnitudes are separated out and
added. If the sign bits are different, then a binary subtraction must be
substituted for an addition, and before completing the operation, an
appropriate sign bit must be reinserted. These extra processing steps add
to the complexity of the ALU and increase the execution time of the
operation.

TQ 2.6 How is zero represented in this system?

(2) Two’s complement representation


In this representation, there is only one representation of zero and it is more
flexible than sign and magnitude in that it allows binary addition and
subtraction to be treated in the same way. Rather than separating the sign
from the magnitude, the ‘negativeness’ of the number is built into it. This is
accomplished by giving the most significant bit position of an n-bit number
a weighting of −2ⁿ⁻¹ instead of +2ⁿ⁻¹ that we use with unsigned binary.


Figure 2.3 Showing the weighting of two's complement numbers

[Diagram: an 8-bit number line in which the msb contributes −128 ("negativeness") and the remaining bits contribute up to +127 ("positiveness"); for example 10000001 = −128 + 1 = −127 and 01111111 = −0 + 127 = +127.]

Therefore with an 8-bit representation, the numbers +127₁₀ and −127₁₀ are
given by:

+127 = 01111111 = −0 × 2⁷ + 1 × 2⁶ + 1 × 2⁵ + 1 × 2⁴ + 1 × 2³ + 1 × 2² + 1 × 2¹ + 1 × 2⁰
     = −0 + 64 + 32 + 16 + 8 + 4 + 2 + 1

−127 = 10000001 = −1 × 2⁷ + 0 × 2⁶ + 0 × 2⁵ + 0 × 2⁴ + 0 × 2³ + 0 × 2² + 0 × 2¹ + 1 × 2⁰
     = −128 + 0 + 0 + 0 + 0 + 0 + 0 + 1

We can visualise these two numbers as shown in Figure 2.3, where the most
significant bit provides a large negative contribution and the remaining
seven bits provide a positive contribution.
Any two's complement number where the most significant bit (msb) is
equal to 1, must have an overall negative value. The msb therefore acts as
both a sign bit and a magnitude bit.
With an 8-bit two's complement representation, we can represent
numbers between −128₁₀ and +127₁₀ as shown in Table 2.3.

TQ 2.7 What would the number 10000111 represent?

From the table we can identify a connection between the bit pattern of a
positive number, such as +2 = 00000010, and the bit pattern of its opposite
number, −2 = 11111110. If we reverse all the bits of the number +2,
exchanging 1 bits for 0 bits and vice versa, we get the bit pattern 11111101.
This is called finding the one's complement. If we now add '1' to the lsb of this
number we get:

  11111101 +
         1
  11111110 = two's complement


Table 2.3 8-bit two's complement representation

  −128   10000000
  −127   10000001
  −126   10000010
    −2   11111110
    −1   11111111
     0   00000000
    +1   00000001
    +2   00000010
  +126   01111110
  +127   01111111

TQ 2.8 What is the two's complement representation of the number −3₁₀?

Worked example: What is the decimal value of the two's complement number 11110000?

Solution: Because the sign bit is 1, we know that it must be a negative number.
If we represent this number as −X, then its two's complement must be
−(−X) = +X. The two's complement of 11110000 is 00010000, as shown below:

  00001111 +
         1
  00010000

Because this is +16₁₀, the decimal value of 11110000 is −16₁₀.

Another worked example: How do we represent −25?

Solution: +25 = 00011001

  The 1's complement of this is          11100110 +
  add 1 to get the two's complement             1
                                         11100111

So 11100111 is how −25 is stored in two's complement form.
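
In C, this recipe of inverting the bits and adding 1 can be checked directly; the minimal sketch below assumes a two's complement machine, which virtually all modern processors are:

    #include <stdio.h>

    int main(void) {
        unsigned char x = 25;                   /* +25 = 00011001 */
        unsigned char negated = ~x + 1;         /* one's complement, then add 1 */

        /* Print the 8 bits of the result, msb first. */
        for (int i = 7; i >= 0; i--) {
            printf("%d", (negated >> i) & 1);   /* prints 11100111 */
        }
        printf("\n%d\n", (signed char)negated); /* reinterpreted as signed: -25 */
        return 0;
    }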


Table 2.4 Rules for binary addition

  Digits added   Sum   Carry-out
  0 + 0          0     0
  0 + 1          1     0
  1 + 0          1     0
  1 + 1          0     1

2.5 Binary arithmetic


(1) Binary addition
The rules for adding pairs of binary digits are given in Table 2.4. Using these
rules, we add the binary numbers 1010 and 0011 by adding the digits together
in pairs, starting with the least significant pair of digits on the far right, as
shown in the following example. Note that the second pair of digits when added
produce a carry forward which must be added in with the third pair of digits.
  1010 +
  0011
  1101
  1↵
The carry forward or carry-out generated when adding the second pair of
digits is shown in table row four. This gets included as a carry-in to the sum of
the next most significant pair of digits, just as we do with decimal addition.
Table 2.5 makes this more explicit and shows how the sum (S) and carry-out
(Co) depend upon the digits (A and B) being added and the carry-in (Ci).

(2) Two’s complement arithmetic


We add two’s complement numbers in the same way as we add unsigned
binary. For example, 12 + 20 = 32 as shown below:

  12 = 00001100 +
  20 = 00010100
       00100000 = 32

We can also add negative numbers, such as −1 + (−2) = −3, provided that
we ignore the bit carried-out of the sum:

  −1 = 11111111 +
  −2 = 11111110
       11111101 = −3
  (ignore) 1↵


Table 2.5 Rules for binary addition with carry-in included

  A  B  Ci   S  Co
  0  0  0    0  0
  0  1  0    1  0
  1  0  0    1  0
  1  1  0    0  1
  0  0  1    1  0
  0  1  1    0  1
  1  0  1    0  1
  1  1  1    1  1

If we add large positive or large negative numbers together, we sometimes get
the wrong answers, as the following examples illustrate:

   64 = 01000000 +
   65 = 01000001
        10000001 = −127 (should be +129)

  −64 = 11000000 +
  −65 = 10111111
        01111111 = +127 (should be −129)
  (ignore) 1↵
These are examples of arithmetic overflow, which occurs whenever a sum
exceeds the range of the representation. In the first example, the sum should
be +129 and in the second it should be −129. From Table 2.3, we can see
that these results are both out of range, because the largest and smallest
numbers that can be represented are +127 and −128. Overflow is a
consequence of two’s complement arithmetic and can only occur when we
add two numbers of the same sign. If the sign bit of the sum is different from
that of the numbers being added, then overflow has taken place. The ALU
signals this event by setting the overflow flag in the Flags Register.
One of the main advantages in using a two's complement representation
is that the ALU can perform binary subtraction using addition. For example,
7 − 5 is the same as 7 + (−5), so to perform this operation we add 7 to the
two's complement of 5. This is shown below (the two's complement of 5 is
formed from its one's complement, 11111010, plus a carry-in of 1):

  00000111 +
  11111010
         1
  00000010 = +2₁₀
  (ignore) 1↵
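
This sign rule can be expressed directly in code. Here is a minimal C sketch (the helper name is illustrative) that adds two 8-bit two's complement values and flags overflow when the operands share a sign but the sum's sign differs:

    #include <stdio.h>
    #include <stdint.h>

    /* Add two 8-bit two's complement numbers; *overflow is set using the
       rule from the text: both operands have the same sign but the sum's
       sign differs. The cast back to int8_t assumes two's complement. */
    int8_t add8(int8_t a, int8_t b, int *overflow) {
        int8_t sum = (int8_t)((uint8_t)a + (uint8_t)b);  /* wraparound addition */
        *overflow = ((a < 0) == (b < 0)) && ((sum < 0) != (a < 0));
        return sum;
    }

    int main(void) {
        int ovf;
        int8_t s = add8(64, 65, &ovf);
        printf("64 + 65 = %d, overflow = %d\n", s, ovf);   /* -127, overflow = 1 */

        s = add8(7, -5, &ovf);                             /* subtraction as addition */
        printf("7 - 5 = %d, overflow = %d\n", s, ovf);     /* 2, overflow = 0 */
        return 0;
    }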


Figure 2.4 BCD representation of decimal number 9164

     9    1    6    4
  1001 0001 0110 0100

  stored in memory as two consecutive bytes: 10010001 (digits 9, 1) and 01100100 (digits 6, 4)

Do you remember what we said in Chapter 1 about computers only being
able to add?

2.6 Binary Coded Decimal (BCD)


When entering decimal data into a computer, the data must be converted
into some binary form before processing can begin. To reduce the time
needed to perform this conversion, we sometimes use a less compact but
easily converted form of binary representation called Binary Coded
Decimal (BCD).
To convert a decimal number into BCD, we use a 4-bit positional code for
each decimal digit. It is usual to weight these digits in the normal 8-4-2-1
way, so that the decimal digits 1, 2, 3, … are replaced by the BCD codes
0001, 0010, 0011, … Figure 2.4 illustrates how the decimal number 9164 is
encoded and stored in two consecutive bytes of memory.

TQ 2.9 Which 4-bit binary codes are left unused by the BCD representation?

Because of these unused or invalid codes, we cannot perform arithmetic on
BCD numbers in the same way as we do with pure binary. For example, 9 + 1
would give 1010, which is an invalid code. To overcome this problem most
computers include special logic in the ALU for performing BCD or decimal
arithmetic.
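
As a minimal C sketch (the function name is illustrative), a decimal number can be packed into BCD one digit at a time:

    #include <stdio.h>
    #include <stdint.h>

    /* Pack a decimal number (0..9999) into packed BCD: one 4-bit code
       per decimal digit, as in Figure 2.4. */
    uint16_t to_bcd(int n) {
        uint16_t bcd = 0;
        for (int shift = 0; n > 0; shift += 4) {
            bcd |= (uint16_t)(n % 10) << shift;   /* encode one decimal digit */
            n /= 10;
        }
        return bcd;
    }

    int main(void) {
        /* Prints 9164; in binary the two bytes are 10010001 01100100. */
        printf("%04X\n", (unsigned)to_bcd(9164));
        return 0;
    }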

2.7 Floating point representation


In the decimal number system we frequently represent very large or very
small numbers in scientific notation rather than as a fixed point number. For
example, the fixed point decimal numbers 299800000 and
0.0000000000000000001602 can be represented as 2.998 × 10⁺⁸ and
1.602 × 10⁻¹⁹, respectively. The power to which 10 is raised, such as +8 or
−19, is called the exponent or characteristic, while the number in front is
called the mantissa.


Figure 2.5 A simple floating point format

  sign bit   mantissa   exponent
  0          1011010    00000011
  (the binary point is implied, immediately before the mantissa's msb)

By substituting the base 2 for the base 10, we can use a similar notation for
representing real numbers in a computer. For example, the decimal number
5.625 could be represented as 1.01101 × 2² or 1011.01 × 2⁻¹, where each
exponent specifies the true position of the binary point relative to its current
position in the mantissa. Because the binary point can be dynamically altered
by adjusting the size of the exponent, we call this representation floating
point.

(1) Storing floating point numbers


To store a floating point number we need to record information about the
sign and magnitude of both the mantissa and the exponent. The number of
words used to do this and the way this information is encoded is called a
floating point format. Figure 2.5 shows how 1.011010 × 2² might be
represented and stored using two bytes or 16 bits of storage space.
With this particular format, a sign and magnitude representation is used
for storing the mantissa and a two's complement representation is used for
the exponent. Before storing this number it must be normalised by adjusting
the exponent so that the binary point is immediately before the most
significant digit. The normalised form of the number 1.011010 × 2² is
therefore given by 0.1011010 × 2³, so the digits 1011010 and the two's
complement representation of the exponent +3, which is 00000011, are
stored in their respective bytes.
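
A minimal C sketch of the packing, assuming the Figure 2.5 layout of one sign bit, seven mantissa bits and an 8-bit two's complement exponent (the function name is illustrative):

    #include <stdio.h>
    #include <stdint.h>

    /* Pack a normalised value into the Figure 2.5 format: 1 sign bit,
       7 mantissa bits (the bits after the binary point), then an 8-bit
       two's complement exponent. */
    uint16_t pack(int sign, uint8_t mantissa_bits, int8_t exponent) {
        uint8_t first_byte = (uint8_t)(sign << 7) | (mantissa_bits & 0x7F);
        return (uint16_t)((first_byte << 8) | (uint8_t)exponent);
    }

    int main(void) {
        /* 5.625 = 0.1011010 x 2^3 when normalised; 0x5A = 1011010 */
        uint16_t word = pack(0, 0x5A, 3);
        printf("%04X\n", (unsigned)word);   /* prints 5A03: 01011010 00000011 */
        return 0;
    }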

TQ 2.10 What is the largest number we can represent using this format?

The range of numbers we can represent with an 8-bit exponent is
approximately 10⁻³⁹ to 10⁺³⁹ and the precision we get with a 7-bit mantissa is
about 1 part in 10³. We can increase the precision by using more bits to store
the mantissa, but with a 16-bit mode of representation, this can only be done
by reducing the range. To overcome this type of problem, most machines
support two modes of precision, single precision and double precision, as
illustrated in Figure 2.6.

TQ 2.11 How would the number 5.125₁₀ be stored in single precision format?


Figure 2.6 Single and double precision formats

  16-bit single precision format:  mantissa sign (1 bit) | mantissa (7 bits)  | exponent (8 bits)
  24-bit double precision format:  mantissa sign (1 bit) | mantissa (15 bits) | exponent (8 bits)

Figure 2.7 Floating point addition

  (a) the stored operands:
      0 1010010 00000011      (5.125  = 0.1010010 × 2³)
      0 1101101 00000100      (13.625 = 0.1101101 × 2⁴)
  (b) arithmetic shift of the smaller mantissa one place; add one to its exponent:
      0 0101001 00000100
      0 1101101 00000100
  (c) separate the sign bits and add the mantissae:
      0 10010110 00000100
  (d) arithmetic shift one place, including the sign bit in the shift; add one to the exponent:
      0 1001011 00000101

(2) Floating point arithmetic


Floating point arithmetic is more complicated than integer arithmetic. To
illustrate this, we will consider the steps involved in performing the
operation 5.125 + 13.625, using single precision arithmetic. We will assume
that these numbers have been normalised and stored in memory, as shown in
Figure 2.7(a).


The first step in this operation involves aligning the binary points of the
two numbers, which is carried out by comparing their exponents and
arithmetically shifting the smaller number until its exponent matches that of
the other. In Figure 2.7(b), the mantissa of the smaller number 5.125, is
shifted one place to the right so that its exponent becomes the same as that of
the number 13.625. Notice that a zero has been inserted in its most
significant bit position.
Now that the exponents are the same, the sign bits are separated and the
mantissae are added, as shown in Figure 2.7(c). Because the result now
occupies 8 bits, it must be re-normalised, by shifting the bits one place to
the right and incrementing the exponent. Finally, the sign bit is reinserted
as shown in Figure 2.7(d), to produce a sum of +0.1001011 × 2⁵ or 18.75₁₀.
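
The align, add and renormalise steps of Figure 2.7 can be sketched in C for positive operands; the structure and names below are invented for the illustration:

    #include <stdio.h>

    /* A positive value in the spirit of Figure 2.5: 'mantissa' holds the
       7 bits after the binary point, so value = mantissa/128 x 2^exponent. */
    typedef struct { int mantissa; int exponent; } Fp;

    Fp fp_add(Fp a, Fp b) {
        /* Step 1: align binary points by shifting the smaller number right. */
        while (a.exponent < b.exponent) { a.mantissa >>= 1; a.exponent++; }
        while (b.exponent < a.exponent) { b.mantissa >>= 1; b.exponent++; }

        /* Step 2: add the mantissae. */
        Fp sum = { a.mantissa + b.mantissa, a.exponent };

        /* Step 3: renormalise if the sum spilled into an eighth bit. */
        if (sum.mantissa > 0x7F) { sum.mantissa >>= 1; sum.exponent++; }
        return sum;
    }

    int main(void) {
        Fp a = { 0x52, 3 };     /* 0.1010010 x 2^3 = 5.125  */
        Fp b = { 0x6D, 4 };     /* 0.1101101 x 2^4 = 13.625 */
        Fp s = fp_add(a, b);
        printf("%g\n", s.mantissa / 128.0 * (1 << s.exponent));   /* prints 18.75 */
        return 0;
    }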
Floating point multiplication and division operations also involve a
number of steps including adding/subtracting the exponents,
multiplying/dividing the mantissae and re-normalising the result.
Remember that multiplication and division can be carried out by successive
addition and successive subtraction, respectively. These operations can be
carried out either by using software routines or by employing special
hardware in the form of a floating point coprocessor. Floating point or
numeric coprocessors improve the performance of compute intensive
applications, by allowing any floating point arithmetic to take place in
parallel with the CPU. When the CPU detects a floating point instruction,
the operands are passed to the coprocessor, which performs the arithmetic
operation while the CPU proceeds with another activity.

2.8 Summary
Computers store and manipulate information as n-bit words. An n-bit word
can represent 2ⁿ different entities, such as characters and numbers. A group
of 8 bits is called a byte and can be used to store a single ASCII character.
The binary number system uses a positional weighting scheme based on
powers of 2. The hexadecimal number system uses a positional weighting
based on powers of 16. The hexadecimal number system provides a useful
shorthand for representing large binary numbers. Negative numbers are
often represented in binary form using the two’s complement
representation. This representation allows subtraction to be carried out
using the same basic circuitry used for addition. When adding two’s
complement numbers with the same sign, a condition called overflow can
occur. An overflow condition is automatically flagged in the Flags Register.
Real numbers can be represented as floating point numbers. Floating point
numbers use a particular format to represent the mantissa and the
exponent. Floating point arithmetic involves more steps than with integer
arithmetic and can be performed using either software routines or by
employing additional hardware in the form of a coprocessor. Floating
point coprocessors can execute floating point operations in parallel with
the CPU.


Answers to in text questions


TQ 2.1 With n = 7 there are 2⁷ = 128 unique bit patterns that can be used to
represent different 'things'.

TQ 2.2 'a' = 1100001, 'Z' = 1011010 and '*' = 0101010

TQ 2.3 1 × 2¹ + 1 × 2⁰ + 1 × 2⁻¹ = 2 + 1 + 0.5 = 3.5₁₀

TQ 2.4 1010 1011 1100 1101

TQ 2.5 10001111 represents −15₁₀ and 01010101 represents +85₁₀

TQ 2.6 Zero can be written as either 10000000 or 00000000

TQ 2.7 This number would represent −128 + 7 = −121

TQ 2.8 (a) Write down the 8-bit representation of the number, +3₁₀ = 00000011,
and find its one's complement, 11111100
(b) Add 1 to the lsb:
  11111100 +
         1
  11111101 = two's complement

TQ 2.9 Because only 10 of the 16 possible 4-bit binary codes are used, we are left
with the six invalid codes:
1010, 1011, 1100, 1101, 1110, 1111

TQ 2.10 The largest number is +0.1111111 × 2⁺¹²⁷

TQ 2.11 +5.125₁₀ = 101.001 × 2⁰ = 0.101001 × 2⁺³ when normalised. It would
therefore be stored as:
  sign bit ↓
  01010010 00000011

EXERCISES
1 How many binary codes can we generate with 16 bits?
2 Convert the following decimal numbers into binary:
(a) 16 (b) 127 and (c) 255
3 Convert the following binary numbers into decimal:
(a) 0111 (b) 101101000011 and (c) 1011.0111
4 Convert the following binary numbers into hexadecimal:
(a) 101011101011 (b) 11100110 and (c) 010100011


5 Perform the following binary additions:
(a) 00101 + 10110 and (b) 100111 + 100101 + 000001
6 If a byte addressable RAM occupies the hexadecimal addresses A000
to BFFF, then how many KB of storage space is available?
7 Perform the following operations using 8-bit two's complement
arithmetic. In which cases will arithmetic overflow occur?
(a) 100 + 27 (b) 84 + 52 (c) 115 − 64 (d) −85 − 44
8 Represent:
(a) +101.1111 × 2⁺⁵ and (b) −0.0001 × 2⁺⁶
using the simple floating point format given in Figure 2.5.

Chapter 3 Boolean logic

In Chapter 1 we mentioned that logic gates are the basic building blocks of
a digital computer. This chapter describes these gates and how they can be
used to build useful circuits.

3.1 Logic gates


Integrated circuits such as microprocessors, memory, interface chips and
so on, are manufactured by putting hundreds, thousands or millions of
simple logic gates on to a silicon chip. The chip is then packaged to provide
pins for connecting the circuit to the rest of the system, as illustrated in
Figure 3.1.
Each logic gate generates an output that depends on the electronic logic
level applied to its input(s). For two-state logic devices, the logic levels are
described as one of: true/false, high/low, on/off or 1/0. Only a few basic types
of gate are needed to build digital circuits, each gate performing a particular
logic function such as AND, OR, or NOT. We represent these gates using
special symbols, as shown in Figure 3.2.
The input and output logic levels applied to these gates are represented by
boolean variables, such as A, B and X. These variables can take only the
values 1 or 0. For simplicity we have only considered dual-input gates, but it
should be remembered that apart from the NOT gate, all other gates can
have two, three or more inputs, the upper limit depending upon the
technology used to implement the gate. The function of each logic gate is
described by a truth table, which relates its input logic state to its output logic
state. For example, the truth table of the AND gate shows that the two inputs
can take the values 00, 01, 10 or 11 and that the output value is 1 only when
the input is 11.
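
Because C provides bitwise operators, these gate functions can be tabulated with a short illustrative program:

    #include <stdio.h>

    int main(void) {
        /* Print the truth table for the basic gates, one input pair per row. */
        printf(" A B  AND NAND OR NOR XOR NOT A\n");
        for (int a = 0; a <= 1; a++) {
            for (int b = 0; b <= 1; b++) {
                printf(" %d %d   %d    %d   %d   %d   %d     %d\n",
                       a, b,
                       a & b,          /* AND  */
                       !(a & b),       /* NAND */
                       a | b,          /* OR   */
                       !(a | b),       /* NOR  */
                       a ^ b,          /* XOR  */
                       !a);            /* NOT  */
            }
        }
        return 0;
    }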

TQ 3.1 What is the output value of an Exclusive-OR gate if just one of its inputs is at
logic 1?
TQ 3.2 If the output value of a NAND gate is 0, then what can we deduce about its
inputs?
TQ 3.3 Sometimes we describe a NOT gate as an inverter. Why?

Figure 3.1 Relationships between chip, logic gate and package

[Diagram: a microprocessor chip built up from logic gate components, mounted inside a microprocessor package that provides the external pins.]

Figure 3.2 Digital logic gates

NOT    f(X) = Ā (also written !A or NOT A)
   A | X
   0 | 1
   1 | 0

AND    f(X) = A.B (also written A & B or A AND B)
   A B | X
   0 0 | 0
   0 1 | 0
   1 0 | 0
   1 1 | 1

NAND   f(X) = NOT(A.B) (also written A NAND B)
   A B | X
   0 0 | 1
   0 1 | 1
   1 0 | 1
   1 1 | 0

OR     f(X) = A + B (also written A | B or A OR B)
   A B | X
   0 0 | 0
   0 1 | 1
   1 0 | 1
   1 1 | 1

NOR    f(X) = NOT(A + B) (also written A NOR B)
   A B | X
   0 0 | 1
   0 1 | 0
   1 0 | 0
   1 1 | 0

eXclusive-OR   f(X) = A ⊕ B (also written A XOR B)
   A B | X
   0 0 | 0
   0 1 | 1
   1 0 | 1
   1 1 | 0


3.2 Combinational logic circuits


By connecting simple logic gates together in various ways, we can build a
whole range of useful circuits. In this section we will illustrate this with a few
simple examples. Binary addition is covered in Section 2.5 but in the
following two sections we will look at how circuits can be built to perform
the addition function.

3.2.1 Half-adder
Figure 3.3(a) illustrates a circuit called a half-adder, which can be built using
an AND gate in combination with an Exclusive-OR gate. The circuit has two
inputs, labelled A, B and two outputs, labelled S, C. From the AND and
Exclusive-OR truth tables, we can see that when A and B are both at logic 0,
both S and C are also at logic 0. If we now take B to logic 1, then S also goes
to logic 1, while C remains at logic 0.

TQ 3.4 Complete the truth table in Figure 3.3(b).

The half-adder, represented symbolically in Figure 3.3(c), is used as a
building block for a more useful circuit called a full-adder.
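As an illustration (ours, not the book's), the half-adder circuit of Figure 3.3(a) can be simulated in a few lines of Python, which also lets you check your answer to TQ 3.4:

def half_adder(a, b):
    """Half-adder of Figure 3.3(a): S = A XOR B, C = A AND B."""
    s = a ^ b          # Exclusive-OR gate produces the sum bit S
    c = a & b          # AND gate produces the carry bit C
    return s, c

# Printing all four input combinations completes the table of Figure 3.3(b)
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(a, b, s, c)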

3.2.2 Full-adder
A full-adder is shown in Figure 3.4(a). It is a combinational logic circuit with
three inputs, labelled A, B, Ci and two outputs, labelled S and Co. The circuit
is used to find the sum S of a pair of binary digits, A and B. Co is 1 if a carry-
out is generated and is 0 otherwise. Ci, or carry-in, is used to allow any carry
generated by adding a previous pair of binary digits to be included in the
sum. The truth table for the full-adder circuit is given in Table 3.1.

Figure 3.3 Half-adder

(a) Logic circuit: inputs A and B feed an Exclusive-OR gate, whose
    output is S, and an AND gate, whose output is C.
(b) Truth table (incomplete):
   A B | S C
   0 0 | 0 0
   0 1 | 1 0
   1 0 | ? ?
   1 1 | ? ?
(c) Symbol: a box labelled HA, with inputs A, B and outputs S, C.

Figure 3.4 Full-adder

(a) Logic circuit: a first half-adder sums A and B; a second half-adder
    sums that result with Ci to give S; an OR gate combines the two
    half-adder carries to give Co.
(b) Symbol: a box labelled FA, with inputs A, B, Ci and outputs S, Co.
(c) 4-bit addition circuit: four full-adders in a chain; each stage adds
    Ai and Bi, the carries C1, C2, C3 ripple from stage to stage, the
    carry-in to the first stage is 0, and the outputs are S3 S2 S1 S0
    with a final carry-out Co.
(d) 4-bit subtraction circuit: as (c), but each Bi input is negated and
    the carry-in to the first full-adder is set to 1.

Table 3.1 Truth table for full-adder

A B Ci S Co
0 0 0 0 0
0 1 0 1 0
1 0 0 1 0
1 1 0 0 1
0 0 1 1 0
0 1 1 0 1
1 0 1 0 1
1 1 1 1 1


A chain of these full-adders can be used to add binary numbers together, as in
Figure 3.4(c), where a 4-bit addition unit is built from a series of four full-
adders. In Section 2.5 two's complement arithmetic was introduced. With
two's complement arithmetic it is possible to perform subtraction using adder
circuits. Figure 3.4(d) demonstrates how a 4-bit subtraction unit can be built
from a series of full-adders with the B input negated and the carry-in to the
first full-adder set to 1.
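A behavioural sketch of these circuits in Python (our illustration, not from the book; bit lists are written least significant bit first) shows how the full-adder, the 4-bit addition unit and the subtraction unit fit together:

def full_adder(a, b, c_in):
    """One full-adder stage: two half-adders plus an OR gate for the carry."""
    s1, c1 = a ^ b, a & b              # first half-adder sums A and B
    s, c2 = s1 ^ c_in, s1 & c_in       # second half-adder adds the carry-in
    return s, c1 | c2                  # OR gate merges the two carries

def ripple_add(a_bits, b_bits, c_in=0):
    """Add two equal-length bit lists (least significant bit first)."""
    sum_bits, carry = [], c_in
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        sum_bits.append(s)
    return sum_bits, carry             # (S bits lsb first, final carry-out)

def ripple_subtract(a_bits, b_bits):
    """A - B: negate every B bit and set the first carry-in to 1."""
    return ripple_add(a_bits, [1 - b for b in b_bits], c_in=1)

# 6 + 3 = 9: 0110 + 0011 = 1001 (bits listed lsb first)
print(ripple_add([0, 1, 1, 0], [1, 1, 0, 0]))      # ([1, 0, 0, 1], 0)
# 6 - 3 = 3: the final carry-out is discarded in two's complement working
print(ripple_subtract([0, 1, 1, 0], [1, 1, 0, 0])) # ([1, 1, 0, 0], 1)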

TQ 3.5 Why is the carry-in set to 1 in Figure 3.4(d)?

3.2.3 A 2-to-4 decoder


Another useful circuit is the 2-to-4 line decoder shown in Figure 3.5, which
can be built from a combination of NAND and NOT gates. The NOT gates
are arranged in such a way that each of the four input combinations 00, 01,
10, 11 activates a different NAND gate, by taking both of its inputs
‘high’. This forces the output of the NAND gate to go ‘low’. The inputs
A, B are therefore used to select one and only one of the outputs S0,…, S3
by forcing it to go low.
This circuit can be used to select one device from a choice of four. The
select lines in such devices are often active low, that is, the device is selected
when the control input is 0. This is due to the electrical characteristics of the
circuits whereby a more efficient circuit can be designed with an active low
control input. This circuit can be used for address decoding, which we
discuss in Section 6.4.
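A simple behavioural model (ours, not the book's) makes the active-low selection concrete:

def decoder_2_to_4(a, b):
    """Active-low 2-to-4 decoder: the output selected by AB goes low."""
    selected = a * 2 + b                    # interpret inputs AB as a number 0..3
    return [0 if i == selected else 1 for i in range(4)]   # [S0, S1, S2, S3]

print(decoder_2_to_4(1, 0))   # [1, 1, 0, 1]: only S2 is driven low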

Figure 3.5 A 2-to-4 decoder

(a) Logic circuit: inputs A and B, together with their complements from
    two NOT gates, drive four NAND gates whose outputs are S0 to S3.
(b) Truth table:
   A B | S0 S1 S2 S3
   0 0 |  0  1  1  1
   0 1 |  1  0  1  1
   1 0 |  1  1  0  1
   1 1 |  1  1  1  0


Figure 3.6 2-input multiplexor

(a) Logic circuit: inputs X and Y are gated by the select line S (X with
    S negated, Y with S), and the two AND gate outputs feed an OR gate
    whose output is F.
(b) Truth table:
   X Y S | F
   0 0 0 | 0
   0 1 0 | 0
   1 0 0 | 1
   1 1 0 | 1
   0 0 1 | 0
   0 1 1 | 1
   1 0 1 | 0
   1 1 1 | 1

3.2.4 A 2-input multiplexor


The circuit in Figure 3.6 has three inputs X, Y, S and one output F. From the
truth table you will notice that when S = 0, the output F is the same as the
input X, and when S = 1, the output F is the same as the input Y. In other
words, the circuit acts as a logic switch, the output F being connected to
X or Y depending upon whether S = 0 or S = 1.
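In Python the switching behaviour can be sketched directly from the AND-OR network (an illustration of ours, not from the text):

def mux2(x, y, s):
    """2-input multiplexor of Figure 3.6: F = X when S = 0, F = Y when S = 1."""
    return (x & (1 - s)) | (y & s)     # AND gates gated by NOT S and S, then OR

print(mux2(1, 0, 0))   # 1: with S = 0 the output follows X
print(mux2(1, 0, 1))   # 0: with S = 1 the output follows Y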
When designing logic circuits the result is often a very long and complex
expression. In order to simplify these logic expressions, algebraic techniques
of minimisation or Karnaugh maps may be used. These techniques are
explained and demonstrated in Appendix 1.

3.3 Sequential logic circuits


Combinational logic circuits, where the output depends solely on the current
state of the input, are useful for implementing functional units such as
adders or switches. However, for memory elements and other functional
units that have outputs that depend upon their current input and the current
state of the circuit, we need to use sequential logic elements. The simplest
form of sequential logic circuit is the flip-flop.

3.3.1 R-S flip-flop


Figure 3.7 illustrates a NOR gate version of an R-S flip-flop, the NOR gates
being labelled G1 and G2. The circuit has two inputs, labelled R, S and two
outputs, labelled Q and Q̄. The bar over the latter (pronounced 'not Q')
indicates that this output is the complement or inverse of Q.
The circuit can exist in one of two stable states by virtue of the fact that its
outputs are cross-coupled to its inputs. For this reason we call this type of
circuit a bistable device.
With the inputs and outputs shown in Figure 3.8(a), the circuit is in the
first of its two stable states. We can check this by referring to the truth table
of the NOR gate given in Figure 3.2, and noting that the output of a NOR


Figure 3.7 R-S flip-flop: two cross-coupled NOR gates. R is an input to
G1, whose output is Q; S is an input to G2, whose output is Q̄; each
gate's output is fed back as the second input of the other gate.

Figure 3.8 Operation of an R-S flip-flop circuit

(a) R = 0, S = 0: first stable state, with Q = 0 and Q̄ = 1.
(b) R = 0, S = 1: Q̄ is forced low and Q goes high.
(c) R = 0, S = 0 again: the outputs remain in the second stable state,
    with Q = 1 and Q̄ = 0.

gate is always 0 if either or both of its inputs are at logic 1. Because the
output Q, from gate G1, is also an input to G2, then when Q = 0 and S = 0
the output Q̄ is 1. This output is fed back to G1 and holds Q = 0,
irrespective of whether R = 0 or R = 1.
When S is taken to logic 1, as shown in Figure 3.8(b), Q̄ goes low, forcing
the Q output of G1 to 1, since both its inputs are now low. The output of G1
is fed back to G2 and holds Q̄ low, so that when S is restored to 0, as shown
in Figure 3.8(c), the outputs remain in this second stable state.

TQ 3.6 Describe what happens if R is now taken high then low.


Figure 3.9 Clocked R-S flip-flop: R and S are each ANDed with a clock
input C; the gated signals R′ and S′ drive the R and S inputs of an
R-S flip-flop, which produces the outputs Q and Q̄.

The R or Reset input is used to restore the circuit to its original state (Q = 0),
while the input S is called Set, because it sets the circuit into a second stable
state (Q = 1).

3.3.2 Clocked R-S flip-flop


A clocked R-S flip-flop circuit is shown in Figure 3.9.
In this circuit, inputs R and S are ANDed with a third clock input C. The
outputs of the AND gates (R′ and S′) then act as inputs to the R-S flip-flop.

TQ 3.7 When C = 0, what will the values of R′ and S′ be?

Only when C = 1 do the R′ and S′ inputs take on the input values R and S
and affect the output of the circuit. The circuit therefore acts as a clock
controlled storage element. When the clock is high, a 1 can be stored at the
Q output by taking S = 1 and R = 0, and a 0 can be stored by taking R = 1
and S = 0. When the clock goes low, the information (either 1 or 0) stored in
the memory element is protected from alteration.
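The storage behaviour can be sketched in Python (our illustrative model, not the book's; it ignores the disallowed R = S = 1 case):

class ClockedRSFlipFlop:
    """Behavioural model: R and S only take effect while the clock C is high."""
    def __init__(self):
        self.q = 0                          # the stored bit

    def step(self, r, s, c):
        r_gated, s_gated = r & c, s & c     # the two AND gates of Figure 3.9
        if s_gated and not r_gated:
            self.q = 1                      # Set
        elif r_gated and not s_gated:
            self.q = 0                      # Reset
        return self.q                       # held unchanged while C = 0

ff = ClockedRSFlipFlop()
print(ff.step(r=0, s=1, c=1))   # 1: a 1 is stored while the clock is high
print(ff.step(r=1, s=0, c=0))   # 1: clock low, so the stored bit is protected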

3.3.3 D-type flip-flop


A simple D-type flip-flop circuit is shown in Figure 3.10. It is basically a
clocked R-S flip-flop with the R-input connected by a NOT gate to the S-input.

TQ 3.8 When D = 1, what will the values of R and S be?

We can illustrate the way in which data is clocked or latched into this type
of storage element, by using a timing diagram as shown in Figure 3.11.
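A behavioural sketch in the same style (ours, not from the text) models the level-triggered latching just described:

class DFlipFlop:
    """Behavioural model of Figure 3.10: a clocked R-S flip-flop with
    S = D and R = NOT D, so Q follows D while the clock is high."""
    def __init__(self):
        self.q = 0

    def step(self, d, c):
        if c:                    # clock high: latch the data input
            self.q = d
        return self.q            # clock low: Q holds its previous value

ff = DFlipFlop()
print(ff.step(d=1, c=1))   # 1: D is latched while the clock is high
print(ff.step(d=0, c=0))   # 1: clock low, the stored value is retained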
