Signal Processing for Intelligent Sensor Systems with MATLAB®
Second Edition
David C. Swanson
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
© 2012 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business
This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been
made to publish reliable data and information, but the author and publisher cannot assume responsibility for the valid-
ity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright
holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this
form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may
rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or uti-
lized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopy-
ing, microfilming, and recording, or in any information storage or retrieval system, without written permission from the
publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://
www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923,
978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For
organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for
identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
and the CRC Press Web site at
http://www.crcpress.com
This book is dedicated to all who aspire to deeply understand signal processing
for sensors, not just enough to pass an exam or assignment, or to complete a
project, but deep enough to experience the joy of natural revelation. This takes
more than just effort. You have to love the journey. This was best said by one
of America’s greatest inventors, George Washington Carver, in the quote
“Anything will give up its secrets if you love it enough…”
Contents
Preface
Acknowledgments
Author
Chapter 2  z-Transform
  2.1 Comparison of Laplace and z-Transforms
  2.2 System Theory
  2.3 Mapping of s-Plane Systems to the Digital Domain
  2.4 MATLAB® Examples
  2.5 Summary
  Problems
  References
Preface
The second edition of Signal Processing for Intelligent Sensor Systems enhances many of the unique
features of the first edition with more answered problems, web access to a large collection of
MATLAB® scripts used throughout the book, and the addition of more audio engineering, transduc-
ers, and sensor networking technology. All of the key algorithms and development methodologies
have been kept from the first edition, and hopefully all of the typographical errors have been fixed.
The addition of a chapter on digital audio processing reflects a growing interest in digital surround
sound (5.1 audio) techniques for entertainment, home theaters, and virtual reality systems. Also,
new sections are added in the areas of sensor networking, the use of metadata architectures using
XML, and agent-based automated data mining and control. This latter information really ties large-
scale networks of intelligent sensors together as a network of thin file servers. Intelligent algorithms,
either resident in the sensor/file-server nodes, or run remotely across the network as intelligent
agents, can then provide an automated situational awareness. The many algorithms presented in
Signal Processing for Intelligent Sensor Systems can then be applied locally or network-based to
realize elegant solutions to very complex detection problems.
It was nearly 20 years ago that I was asked to consider writing a textbook on signal processing
for sensors. At the time I typically had over a dozen textbooks on my desk, each with just a few
small sections bookmarked for frequent reference. The genesis of this book was to bring together
all these key subjects into one text, summarize the salient information needed for design and
application, and organize the broad array of sensor signal processing subjects in a way that makes
them accessible to engineers in school as well as those practicing in the field. The discussion herein
is informal and applied, in a tone of engineer-to-engineer rather than professor-to-student. There
are many subtle nuggets of critical information revealed that should help most readers quickly
master the algorithms and adapt them to meet their requirements. This text is both a learning
resource and a field reference. In support of this, every data graph in the text has a supporting
MATLAB m-script; these m-scripts are kept simple, commented, and made available to
readers for download from the CRC Press website for the book (http://www.crcpress.com/product/
isbn/9781420043044). Taylor & Francis Group (CRC Press) acquired the rights to the first edition
and have been relentless in encouraging me to update it in this second edition. There were also a
surprising number of readers who found me online and encouraged me to make an updated second
edition. Given the high cost of textbooks and engineering education, we are excited to cut the price
significantly and make the book available electronically online, as well as for electronic “rent,”
which should be extremely helpful to students on a tight budget. Each chapter has a modest list of solved
problems (answer book available from the publisher) and references for more information.
The second edition is organized into five parts, each of which could be used for a semester course
in signal processing, or to supplement a more focused course textbook. The first two parts,
“Fundamentals of Digital Signal Processing” and “Frequency Domain Processing,” are appropriate
for undergraduate courses in Electrical and/or Computer Engineering. Part III “Adaptive System
Identification and Filtering” can work for a senior-level undergraduate or a graduate-level course,
as can Part IV, “Wave Number Sensor Systems,” which applies the earlier techniques to beamforming,
image processing, and signal detection systems. If you look carefully at the chapter titles, you will
see these algorithm applications grouped differently from most texts. Rather than organizing these
subjects strictly by application, we organize them by the algorithm, which naturally spans several
applications. An example of this is the recursive least-squares algorithm, projection operator sub-
space decomposition, and Kalman filtering of state vectors, which all share the same basic recursive
update algorithm. Another example is in Chapter 13 where we borrow the two-dimensional FFT
usually reserved for image processing and compression and use it to explain available beam pattern
responses for various array shapes.
Part V of the book covers advanced signal processing applications such as noise cancellation,
transducers, features, pattern recognition, and modern sensor networking techniques using XML
messaging and automation. It covers the critical subjects of noise, sensors, signal features, pattern
matching, and automated logic association, and then creates generic data objects in XML so that all
this information can be found. The situation recognition logic emerges as a cloud application in the
network that automatically mines the sensor information organized in XML across the sensor nodes.
This keeps the sensors as generic websites and information servers and allows very agile develop-
ment of search engines to recognize situations, rather than just find documents. This is the current
trend for sensor system networks in homeland security, business, and environmental and demo-
graphic information systems. It is a nervous system for the planet, and to that end I hope this contri-
bution is useful.
MATLAB® is a registered trademark of The MathWorks, Inc. For product information, please
contact:
Acknowledgments
I am professionally indebted to all the research sponsors who supported my colleagues, students,
and me over the years on a broad range of sensor applications and network automation. It was
through these experiences and by teaching that I obtained the knowledge behind this textbook. The
Applied Research Laboratory at The Pennsylvania State University is one of the premier engineer-
ing laboratories in the world, and my colleagues there will likely never know how much I have
learnt from them and respect them. A special thanks goes to Mr. Arnim Littek, a great engineer in
the beautiful country of New Zealand, who thought enough of the first edition to send me a very
detailed list of typographical errors and suggestions for this edition. There were others, too, who
found me through the Internet, and I really loved the feedback which served as an inspiration to
write the second edition. Finally to my wife Nadine, and children Drew, Anya, Erik, and Ava, your
support means everything to me.
Author
David C. Swanson has over 30 years of experience with sensor electronics and signal processing
algorithms and 15 years of experience with networking sensors. He has been a professor in the
Graduate Program in Acoustics at The Pennsylvania State University since 1989 and has done
extensive research in the areas of advanced signal processing for acoustic and vibration sensors
including active noise and vibration control. In the late 1990s, his research shifted to rotating
equipment monitoring and failure prognostics, and since 1999 it has shifted again into the areas of
chemical, biological, and nuclear detection. This broad range of sensor signal processing applications
culminates in his book Signal Processing for Intelligent Sensor Systems, now in its second edition.
Dr. Swanson has written over 100 articles for conferences and symposia, dozens of journal articles
and patents, and three chapters in books other than his own. He has also worked in industry for
Hewlett-Packard and Textron Defense Systems, and has had many sponsored industrial research
projects. He is a fellow of the Acoustical Society of America, a board-certified member of the
Institute of Noise Control Engineering, and a member of the IEEE. His current research is in the areas
of advanced biomimetic sensing for chemicals and explosives, ion chemistry signal processing, and
advanced materials for neutron detection. Dr. Swanson received a BEE (1981) from the University
of Delaware, Newark, and an MS (1984) and PhD (1986) from The Pennsylvania State University,
University Park, where he currently lives with his wife and four children. Dr. Swanson enjoys music,
football, and home brewing.
Part I
Fundamentals of Digital
Signal Processing
It was in the late 1970s that the author first learned about digital signal processing as a freshman
electrical engineering student. Digital signals were a new technology and generally only existed
inside computer programs and as hard disk files on cutting edge engineering projects. At the time,
and reflected in the texts of that time, much of the emphasis was on the mathematics of a sampled
signal, and how sampling made the signal different from the analog signal equivalent. Analog signal
processing is very much a domain of applied mathematics, and looking back more than 40 years later,
it is quite remarkable how the equations we process easily today in a computer program were
implemented so elegantly in analog electronic circuits. Today there is little controversy about the
equivalence of digital and analog signals except perhaps among audio extremists/purists. Our emphasis in
this part is on explaining how signals are sampled, compressed, and reconstructed, how to filter
signals, how to process signals creatively for images and audio, and how to process signal informa-
tion “states” for engineering applications. We present how to manage the nonlinearity of converting
a system defined mathematically in the analog s-plane to an equivalent system in the digital z-plane.
These nonlinearities become small in a given low-frequency range as one increases the digital
sample rate of the digital system, but numerical errors can become a problem if too much oversam-
pling is done. There are also options for warping the frequency scale between digital and analog
systems.
We present some interesting and useful applications of signal processing in the areas of audio
signal processing, image processing, and tracking filters. This provides for a first semester course to
cover the basics of digital signals and provide useful applications in audio and images in addition to
the concept of signal kinematic states that are used to estimate and control the dynamics of a signal
or system. Together these applications cover most of the signal processing people encounter in
everyday life. This should help make the material interesting and accessible to students new to the
field while avoiding too much theory and detailed mathematics. For example, we show frequency
response functions for digital filters in this part, but we do not go into spectral processing of signals
until Part II. This also allows some time for MATLAB® use to develop where students can get used
to making m-scripts and plots of simple functions. The application of fixed-gain tracking filters on
a rocket launch example will make detailed use of signal state estimation and prediction as well as
computer graphics in plotting multiple functions correctly. Also, using a digital photograph and
two-dimensional low- and high-pass filters provides an interesting introduction to image processing
using simple digital filters. Over 40 years ago, one could not imagine teaching signal processing
fundamentals while covering such a broad range of applications. However, any cell phone today has
all of these applications built in, such as sampling, filtering, and compression of the audio signal,
image capture and filtering, and even a global positioning system (GPS) for estimating location,
speed, and direction.
1 Sampled Data Systems
Figure 1.1 shows a basic general architecture that can be seen to depict most adaptive signal process-
ing systems. The number of inputs to the system can be very large, especially for image processing
sensor systems. Since an adaptive signal processing system is constructed using a computer, the
inputs generally fall into the categories of analog “sensor” inputs from the physical world and digital
inputs from other computers or human communication. The outputs also can be categorized into
digital information, such as identified patterns, and analog outputs that may drive actuators (active
electrical, mechanical, and/or acoustical sources) to instigate physical control over some part of the
outside world. In this chapter, we examine the basic constructs of signal input, processing using
digital filters, and output. While these very basic operations may seem rather simple compared to
the algorithms presented later in the text, careful consideration is needed to insure a high-fidelity
adaptive processing system. Figure 1.1 also shows how the adaptive processing can extract the
salient information from the signal and automatically arrange it into XML (eXtensible Markup
Language) databases, which allows broad use by network processes. Later in the book we will dis-
cuss this from the perspective of pattern recognition and web services for sensor networks. The next
chapter will focus on fundamental techniques for extracting information from the signals.
Consider a transducer system that produces a voltage in response to some electromagnetic or
mechanical wave. In the case of a microphone, the transducer sensitivity would have units of
volts/Pascal. For the case of a video camera pixel sensor, it would be volts per lumen/m², while
for an infrared imaging system the sensitivity might be given as volts per kelvin. In any
case, the transducer voltage is conditioned by filtering and amplification in order to make the best
use of the analog-to-digital converter (ADC) system. While most adaptive signal processing sys-
tems use floating-point numbers for computation, the ADC converters generally produce fixed-
point (integer) digital samples. The integer samples from the ADC are further converted to
floating-point format by the signal processor chip before subsequent processing. This relieves the
algorithm developer from the problem of controlling numerical dynamic range to avoid underflow
or overflow errors in fixed-point processing, unless less expensive fixed-point processors are
used. If the processed signals are to be output, then floating-point samples are simply reconverted
to integer and an analog voltage is produced using a digital-to-analog converter (DAC) system
and filtered and amplified.
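To make this signal chain concrete, the sketch below scales integer ADC samples back to physical units. The bit width, reference voltage, and microphone sensitivity are hypothetical values chosen only for illustration; the text does not specify them:

```python
def counts_to_pascals(counts, bits=16, v_ref=5.0, sensitivity=0.05):
    """Convert a signed ADC count to pascals for a hypothetical microphone channel.

    bits        : ADC resolution (two's complement, so full scale is 2**(bits-1) counts)
    v_ref       : full-scale input voltage of the ADC, in volts
    sensitivity : transducer sensitivity in V/Pa (50 mV/Pa assumed here)
    """
    volts = counts / float(2 ** (bits - 1)) * v_ref  # integer sample -> volts
    return volts / sensitivity                       # volts -> pascals

# A half-scale sample on a 16-bit converter: 16384/32768 * 5.0 V = 2.5 V -> 50 Pa
pressure = counts_to_pascals(16384)
```

In a floating-point processor this conversion is typically the first step after the ADC read, so all later algorithm stages work in physical units.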
FIGURE 1.1 A generic architecture for an adaptive signal processing system, including sensor inputs, control outputs, and information formatting in XML databases for access through the Internet.
output sample rate may be. The toggling of the LSB as it approximates the analog input signal leads
to a low level of uniformly distributed (between 0 and 1 LSB) random noise in the digitized signal. This
is normal, expected, and not a problem as long as the sensor signal strengths are sufficient enough
such that the quantization noise is small compared to signal levels. It is important to understand how
transducer and data acquisition systems work so that the adaptive signal processing algorithms can
exploit and control their operation.
While there are many digital coding schemes, the binary number produced by the ADC is usu-
ally coded in either offset binary or in two’s complement formats [1]. Offset binary is used for either
all-positive or all-negative data such as absolute temperature. The internal DAC in Figure 1.2 is set
to produce a voltage Vmin that corresponds to the number 0, and Vmax for the biggest number or 255
(11111111), for the 8-bit ADC. The largest number produced by an M-bit ADC is therefore 2^M − 1.
The smallest bit, or LSB, will actually be wrong about 50% of the time due to the approximation
process. Most data acquisition systems are built around either 8-, 12-, 16-, or 24-bit ADCs giving
maximum offset binary numbers of 255, 4095, 65535, and 16777215, respectively. If a “noise-less”
signal corresponds to a number of, say 1000, on a 12-bit A/D, the signal-to-noise ratio (SNR) of the
quantization is 1000:1, or approximately 60 dB.
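The 60-dB figure quoted above is simply the amplitude ratio of the signal level to one LSB of quantization noise, expressed in decibels. A minimal sketch of that rule of thumb:

```python
import math

def quantization_snr_db(signal_counts):
    """Amplitude ratio of the signal level (in ADC counts) to one LSB, in dB."""
    return 20.0 * math.log10(signal_counts)

snr_example = quantization_snr_db(1000)   # the 12-bit example from the text: 60 dB
snr_full = quantization_snr_db(4095)      # a full-scale 12-bit signal: about 72 dB
```

The full-scale case shows the familiar "roughly 6 dB per bit" behavior of an ideal converter.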
Signed numbers are generally encoded in two’s complement format where the most significant
bit (MSB) is 1 for negative numbers and 0 for positive numbers. This is the normal “signed integer”
format in programming languages such as “C.” If the MSB is 1 indicating a negative number, the
FIGURE 1.2 A generic successive approximation type 8-bit ADC showing the internal DAC converter to compare the counter result to the input voltage.
magnitude of the negative binary number is found by complementing (changing 0–1 or 1–0) all of
the bits and adding 1. The reason for this apparently confusing coding scheme has to do with the
binary requirements of logic-based addition and subtraction circuitry in all of today’s computers
[2,3]. The logical simplicity of two’s complement arithmetic can be seen when considering that
the sum of 2 two’s complement numbers, N1 and N2, is done exactly the same as for offset binary
numbers, except any carryover from the MSB is simply ignored. Subtraction of N1 from N2 is done
simply by forming the two’s complement of N1 (complementing the bits and adding 1), and then
adding the two numbers together ignoring any MSB carryover. An 8-, 12-, 16-, or 24-bit two’s
complement ADC produces numbers over the ranges (+127, −128), (+2047, −2048), (+32767, −32768), and
(+8388607, −8388608), respectively.
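These ranges follow directly from one of the M bits serving as the sign; a quick sketch that checks them:

```python
def twos_complement_range(bits):
    """(max, min) values representable by a two's complement integer of the given width."""
    return 2 ** (bits - 1) - 1, -(2 ** (bits - 1))

# Verify the ranges quoted for 8-, 12-, 16-, and 24-bit converters
for bits, expected in [(8, (127, -128)), (12, (2047, -2048)),
                       (16, (32767, -32768)), (24, (8388607, -8388608))]:
    assert twos_complement_range(bits) == expected
```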
Table 1.1 shows two’s complement binary for a 3-bit ±3.5 V A/D and shows the effect of
subtracting the number +2 (010 or +2.5 V) from each of the possible 3-bit numbers. Note that the
complement of +2 is (101) and adding 1 gives the “two’s complement” of (110), which is equal to
numerical −2 or −1.5 V in Table 1.1.
As can be seen in Table 1.1, the numbers and voltages with an asterisk are rather grossly in error.
This type of numerical error is the single biggest reason to use floating-point rather than fixed-point
signal processors. It is true that fixed-point signal processor chips are very inexpensive, lower power,
and faster at fixed-point arithmetic. However, a great deal of attention must be paid to insuring that
no numerical errors of the type in Table 1.1 occur in a fixed-point processor. Fixed-point processing
severely limits the numerical dynamic range of the adaptive algorithms used. In particular, algo-
rithms involving many divisions, matrix operations, or transcendental functions such as logarithms
or trigonometric functions are generally not good candidates for fixed-point processing. All the
subtractions are off by at least 0.5 V, or half the LSB. A final point worth noting from Table 1.1 is
that while the analog voltages of the ADC are symmetric about 0 V, the coded binary numbers are
not, giving a small numerical offset from the two’s complement coding. In general, the design of
analog circuits with nearly zero offset voltage is a difficult enough task that one should always
assume some nonzero offset in all digitized sensor data.
The maximum M-bit two’s complement positive number is 2^(M−1) − 1 and the minimum negative
number is −2^(M−1). This is because one of the bits is used to represent the sign of the number and one
number is reserved to correspond to zero. We want zero to be “digital zero” and we could just leave
it at that but it would make addition and subtraction logically more complicated. That is why two’s
complement format is used for signed integers. Even though the ADC and analog circuitry offset is
small, it is good practice in any signal processing system to numerically remove it. This is simply
done by recursively computing the mean of the A/D samples and subtracting this time-averaged
mean from each ADC sample.
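One simple realization of this recursive offset removal is a first-order (exponential) averager; the smoothing constant below is an arbitrary illustrative choice, not a value from the text:

```python
class OffsetRemover:
    """Recursively track the time-averaged mean of ADC samples and subtract it."""

    def __init__(self, alpha=0.01):
        self.alpha = alpha  # smoothing constant: smaller tracks more slowly but more smoothly
        self.mean = 0.0

    def step(self, sample):
        self.mean += self.alpha * (sample - self.mean)  # recursive mean update
        return sample - self.mean

# A constant 3.7 V offset on an otherwise silent channel is driven toward zero
dc = OffsetRemover()
out = 0.0
for _ in range(2000):
    out = dc.step(3.7)
```

After convergence the estimated mean approaches the true offset and the output approaches zero; the same one-line update runs comfortably in real time on each new sample.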
TABLE 1.1
Effect of Subtracting 2 from the Range of Numbers from a 3-bit Two’s Complement A/D

Voltage N    Binary N    Binary N2    Voltage N2
  +3.5         011         001          +1.5
  +2.5         010         000          +0.5
  +1.5         001         111          −0.5
  +0.5         000         110          −1.5
  −0.5         111         101          −2.5
  −1.5         110         100          −3.5
  −2.5         101         011*         +3.5*
  −3.5         100         010*         +2.5*
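The wraparound behind the asterisked entries in Table 1.1 can be reproduced with masked integer arithmetic. The sketch below subtracts +2 from every 3-bit code exactly as the text describes: complement the bits of +2, add 1, add the numbers, and ignore any carry out of the MSB:

```python
BITS = 3
MASK = (1 << BITS) - 1  # 0b111 keeps results to 3 bits

def to_tc(n):
    """Encode a signed integer as a 3-bit two's complement code."""
    return n & MASK

def from_tc(b):
    """Decode a 3-bit two's complement code back to a signed integer."""
    return b - (1 << BITS) if b & (1 << (BITS - 1)) else b

def subtract(n1_bits, n2_bits):
    """n1 - n2: add the two's complement of n2, ignoring any MSB carryover."""
    neg = ((n2_bits ^ MASK) + 1) & MASK   # complement the bits and add 1
    return (n1_bits + neg) & MASK

# Reproduce Table 1.1: the result wraps (is "grossly in error") whenever
# the true difference n - 2 underflows below the minimum representable -4
for n in range(-4, 4):
    result = from_tc(subtract(to_tc(n), to_tc(2)))
    expected = n - 2 if n - 2 >= -4 else n - 2 + 8
    assert result == expected
```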
Consider a sinusoidal analog waveform of amplitude A and radian frequency ω:

x(t) = A cos(ωt) = (A/2) e^{jωt} + (A/2) e^{−jωt}.  (1.1)
We now sample x(t) every T seconds giving a sampling frequency of fs Hz (samples per second).
The digital waveform is denoted as x[n], where n refers to the nth sample in the digitized sequence:

x[n] = x(nT) = A cos(ωnT) = A cos(2πfn/fs).  (1.2)
Equation 1.2 shows a “digital frequency” of Ω = 2πf/fs, which has the same period as an analog
waveform of frequency f so long as f is less than fs/2. Clearly, for the real sampled cosine waveform,
a digital frequency of 1.1π is basically indistinguishable from 0.9π, except that the period of the
sampled 1.1π waveform will actually be longer than that of the analog waveform at frequency f!
Figures 1.3 and 1.4 graphically illustrate this phenomenon, well known as aliasing. Figure 1.3
shows a 100-Hz analog waveform sampled
1000 times/s. Figure 1.4 shows a 950-Hz analog signal with the same 1000 Hz sample rate. Since the
periods of the sampled and analog signals match only when f ≤ fs/2, the frequency components of the
analog waveform are said to be unaliased, and adequately represented in the digital domain [4].
Restricting real analog frequencies to be less than fs/2 has become widely known as the Nyquist
sampling criterion. This restriction is generally implemented by a low-pass filter (LPF) with −3 dB
cutoff frequency in the range of 0.4 fs to insure a wide margin of attenuation for frequencies above
fs/2. However, as will be discussed in the rest of this chapter, the “antialiasing” filters can have
environment-dependent frequency responses, which adaptive signal processing systems can
intelligently compensate for.
It will be very useful for us to explore the mathematics of aliasing to fully understand the phe-
nomenon, and to take advantage of its properties in high-frequency bandlimited ADC systems.
Consider a complex exponential representation of the digital waveform in Equation 1.3 showing
both positive and negative frequencies
x[n] = A cos(Ωn) = (A/2) e^{+jΩn} + (A/2) e^{−jΩn}.  (1.3)
While Equation 1.3 compares well with Equation 1.1, there is a big difference due to the digital
sampling. Assuming that no antialiasing filters are used, the digital frequency of Ω = 2πf/fs (from
the analog waveform sampled every T seconds) could represent a multiplicity of analog frequencies:
since e^{±j(Ω+2πm)n} = e^{±jΩn} for any integers m and n, each digital frequency Ω is
indistinguishable from its images at Ω ± 2πm.
FIGURE 1.3 A 100-Hz sinusoid (solid line) is sampled at 1 kHz (1 ms per sample) as seen by each asterisk (*) showing that the digital signal accurately represents the frequency and amplitude of the analog signal.
For the real signal in Equation 1.3, both the positive and negative frequencies have images at
±2πm; m = 0, 1, 2, …. Therefore, if the analog frequency f is outside the Nyquist bandwidth of
0 to fs/2 Hz, one of the images of ±f will appear within the Nyquist bandwidth, but at the wrong
(aliased) frequency. Since we want the digital waveform to be a linear approximation to the original
analog waveform, the frequencies of the two must be equal. One must always suppress frequencies
FIGURE 1.4 A 950-Hz sinusoid sampled at 1 kHz clearly shows the aliasing effect as the digital samples (*) appear as a 50-Hz signal.
outside the Nyquist bandwidth to be sure that no aliasing occurs. In practice, it is not possible to
make an analog signal filter that perfectly passes signals in the Nyquist band while completely
suppressing all frequencies outside this range. One should expect a transition zone near the
Nyquist band upper frequency where unaliased frequencies are attenuated and some aliased fre-
quency “images” are detectable. Most spectral analysis equipment will implement an antialias
filter with a −3 dB cutoff frequency of about 1/3 the sampling frequency. The frequency range
from 1/3 fs to 1/2 fs is usually not displayed as part of the observed spectrum so the user does not
notice the antialias filter’s transition region and the filter very effectively suppresses frequencies
above fs/2.
Figure 1.5 shows a graphical representation of the digital frequencies and images for a sample
rate of 1000 Hz and a range of analog frequencies including those of 100 and 950 Hz in Figures 1.3
and 1.4, respectively. When the analog frequency exceeds the Nyquist rate of fs/2 (π on the Ω axis),
one of the negative frequency images (dotted lines) appears in the Nyquist band with the wrong
(aliased) frequency, violating assumptions of system linearity.
FIGURE 1.5 A graphical view of 100, 300, 495, 600, and 950 Hz analog signals sampled at 1 kHz in the
frequency domain showing the aliased “images” of the positive and negative frequency components where the
shaded box represents the digital signal bandwidth.
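The aliased values in Figure 1.5 can be computed by folding each analog frequency back to its nearest image inside the Nyquist band. The compact formula below is a sketch, not taken from the text:

```python
def apparent_frequency(f, fs):
    """Frequency (0..fs/2 Hz) at which an unfiltered analog tone at f Hz appears
    after sampling at fs Hz: the nearest image of +/-f inside the Nyquist band."""
    return abs(f - fs * round(f / fs))

# Check the cases drawn in Figure 1.5 for a 1-kHz sample rate
fs = 1000.0
for f, alias in [(100, 100), (300, 300), (495, 495), (600, 400), (950, 50)]:
    assert apparent_frequency(f, fs) == alias
```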
FIGURE 1.6 Analog frequencies bandpass filtered in the mth band will naturally appear in the baseband
from 0 to fs/2 Hz, just shifted in frequency.
top and bottom of the digital baseband allow for a transition zone of the antialiasing filters. Practical
use of this technique requires precise bandpass filtering and selection of the sample rate. However,
Figure 1.6 should also raise concerns about the effects of high-frequency analog noise “leaking” into
digital signal processing systems at the point of ADC. The problem of aliased electronic noise is par-
ticularly acute in systems where many high-speed digital signal processors operate in close proximity
to high-impedance analog circuits and the ADC subsystem has a large number of resolution bits.
For the case of a very narrow bandwidth at a high frequency, the numerical savings are obvious, and it is relatively easy to pick a sample rate where only a little bandwidth is left unused. However, for wider analog signal bandwidths a more general approach is needed where the bandwidth of interest is not required to lie within a multiple of the digital baseband. To accomplish this, we must ensure that the negative images of the sampled data do not mix with the positive images for some arbitrary bandwidth of interest. The best way to do this is simply to eliminate the negative frequency and its images entirely by using complex (real plus imaginary) samples.
How can one obtain complex samples from the real output of the ADC? Mathematically, one can
describe a “cosine” waveform as the real part of a complex exponential. However, in the real world
where we live (at least most of us some of the time), the sinusoidal waveform is generally observed
and measured as a real quantity. Some exceptions to this are simultaneous measurement of spatially
orthogonal (e.g., horizontal and vertical polarized) wave components such as polarization of elec-
tromagnetic waves, surface Rayleigh waves, or orbital vibrations of rotating equipment, all of which
can directly generate complex digital samples. To generate a complex sample from a single real
ADC convertor, we must tolerate a signal-phase delay which varies with frequency. However, since
this phase response of the complex sampling process is known, one can easily remove the phase
effect in the frequency domain.
The usual approach is to gather the real part as before and to subtract in the imaginary part using
a T/4 delayed sample
x_R[n] = A cos(2πf nT + φ),
jx_I[n] = −A cos(2πf (nT + T/4) + φ).        (1.5)
The parameter φ in Equation 1.5 is just an arbitrary phase angle for generality. For the frequency
f = fs, Equation 1.5 reduces to
x_R[n] = A cos(2πn + φ),
jx_I[n] = −A cos(2πn + φ + π/2)
        = A sin(2πn + φ),        (1.6)
so that for this particular frequency, the phase of the imaginary part is actually correct. We now
have a usable bandwidth fs, rather than fs/2 as with real samples. However, each complex sample is
actually two real samples, keeping the total information rate (number of samples per second) con-
stant! As the frequency decreases toward 0, a phase error bias will increase toward a phase lag of
π/2. However, since we wish to apply complex sampling to high-frequency bandpass systems, the
phase bias can be changing very rapidly with frequency, but it will be fixed for the given sample
rate. The complex samples in terms of the digital frequency Ω and analog frequency f are
Δθ = −(π/2)(1 − f/fs).        (1.8)
For adaptive signal processing systems that require phase information, usually two or more chan-
nels have their relative phases measured. Since the phase bias caused by the complex sampling is
identical for all channels, the phase bias can usually be ignored if relative channel phase is needed.
The scheme for complex sampling presented here is sometimes referred to as “quadrature sam-
pling” or even “Hilbert transform sampling” due to the mathematical relationship between the real
and imaginary parts of the sampled signal in the frequency domain.
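Equations 1.5 and 1.6 can be verified numerically. The sketch below (Python/NumPy rather than an m-script, with arbitrary amplitude and phase values) forms the real part and the negated T/4-delayed part at f = fs and checks that together they reproduce the analytic signal:

```python
import numpy as np

A, phi, fs = 2.0, 0.7, 1000.0  # arbitrary amplitude, phase, sample rate
T = 1.0 / fs
n = np.arange(8)
f = fs  # the frequency where the quadrature phase is exact (Equation 1.6)

x_r = A * np.cos(2 * np.pi * f * n * T + phi)             # real part
x_i = -A * np.cos(2 * np.pi * f * (n * T + T / 4) + phi)  # T/4-delayed, negated
x = x_r + 1j * x_i

# At f = fs the pair equals the analytic signal A*exp(j*(2*pi*f*n*T + phi))
print(np.allclose(x, A * np.exp(1j * (2 * np.pi * f * n * T + phi))))
```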
Figure 1.7 shows how any arbitrary bandwidth can be complex sampled at a rate equal to the
bandwidth in Hertz, and then digitally “demodulated” into the Nyquist baseband. If the signal band-
width of interest extends from f1 to f 2 Hz, an analog bandpass filter is used to band limit the signal
and complex samples are formed as seen in Figure 1.7 at a sample rate of fs = f 2 − f1 samples per
second. To move the complex data with frequency f1 down to 0 Hz and the data at f 2 down to fs Hz,
all one needs to do is multiply the complex samples by e−jΩ1n, where Ω1 is simply 2πf1/fs. Therefore,
the complex samples in Equation 1.5 are demodulated as seen in equation
x_R[n] = A cos(Ωn + φ) e^(−jΩ1n),
jx_I[n] = −A cos(Ω(n + 1/4) + φ) e^(−jΩ1n).        (1.9)
Analog signal reconstruction can be done by remodulating the real and imaginary samples by f1
in the analog domain. Two oscillators are needed, one for the cos(2πf1t) and the other for the
sin(2πf1t). A real analog waveform can be reconstructed from the analog multiplication of the DAC
real sample times the cosine minus the DAC imaginary sample times the sinusoid. As with the
complex sample construction, some phase bias will occur. However, the technique of modulation
and demodulation is well established in amplitude-modulated (AM) radio. In fact, one could have
just as easily demodulated (i.e., via an analog heterodyne circuit) a high-frequency signal, band-
limited it to a low-pass frequency range of half the sample rate, and ADC it as real samples.
Reconstruction would simply involve DAC, low-pass filtering, and remodulation by a cosine
FIGURE 1.7 An arbitrary high-frequency signal may be bandpass filtered and complex sampled and demod-
ulated to a meaningful baseband for digital processing.
waveform. In either case, the net signal information rate (number of total samples per second) is
constant for the same signal bandwidth. It is merely a matter of algorithm convenience and desired
analog circuitry complexity from which the system developer must decide how to handle high-
frequency band-limited signals.
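The demodulation step of Figure 1.7 is easy to emulate. In the sketch below (Python/NumPy; the band edge f1 and the tone frequency are made-up illustration values), a complex-sampled tone inside the band from f1 to f2 is multiplied by e^(−jΩ1n), and its spectral peak lands at f − f1 in the baseband:

```python
import numpy as np

fs = 1000.0                   # complex sample rate = bandwidth f2 - f1
f1, f_tone = 8000.0, 8250.0   # band starts at f1; a tone inside the band
n = np.arange(1024)

omega1 = 2 * np.pi * f1 / fs           # Omega1 = 2*pi*f1/fs
omega = 2 * np.pi * f_tone / fs
x = np.exp(1j * omega * n)             # complex-sampled tone
x_base = x * np.exp(-1j * omega1 * n)  # demodulate to baseband

peak_bin = np.argmax(np.abs(np.fft.fft(x_base)))
print(peak_bin * fs / len(n))          # spectral peak at f_tone - f1 = 250 Hz
```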
The “6.02” is 20 times the base-10 logarithm of 2, and 1.76 is 10 times the base-10 logarithm of 1.5, which is added in to account for quantization noise in the LSB giving the correct bit setting 50% of the time. Hence, for a 16-bit sample, one might use Equation 1.10 to say that the SNR is over 97 dB, which is not correct. N should refer to the number of precision bits, which is 15 for a 16-bit sample because the LSB is wrong 50% of the time. Therefore, for a single-ended 16-bit sample the maximum SNR is approximately 92.06 dB. For a signed integer (two's complement) sample, where the SNR is measured for sinusoids in white noise, the maximum SNR is only 86.04 dB, because one bit is used to represent the sign. The ENOB is simply the SNR divided by 6.02.
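This bookkeeping is easy to encode. A small Python sketch (the book uses m-scripts; this is just an illustration of the arithmetic above):

```python
def max_snr_db(precision_bits):
    """Ideal-converter SNR estimate: 6.02*N + 1.76 dB, with N taken as
    the number of precision bits rather than the raw word length."""
    return 6.02 * precision_bits + 1.76

print(round(max_snr_db(15), 2))  # single-ended 16-bit sample -> 92.06
print(round(max_snr_db(14), 2))  # signed two's complement    -> 86.04
print(round(max_snr_db(15) / 6.02, 2))  # ENOB = SNR / 6.02
```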
The DSC actually gets a theoretical 9 dB SNR improvement with each halving of the sample rate due to
something called quantization noise shaping inherent in the delta modulator circuit, and by increas-
ing the number of bits in the binary sample by 1 with each addition in the digital filtering. The
integrator in the delta modulator and the feedback differencing operation have the effect of shifting
the quantization noise to higher frequencies while enhancing the signal more at lower frequencies.
Because of this, it makes sense to add a bit with each addition in the low-pass decimation filter,
giving three additional bits with each halving of the sample rate. Hence for a 6.4 MHz 1-bit sample
bitstream (12.8 MHz modulation clock), one gets 12-bit samples at a rate of 400 kHz. However, the
low-frequency signal enhancement means that the signal bandwidth is not flat, but rather rolls off
significantly near the Nyquist rate. Hence, most DSC designs also employ a cascade of digital filters
to correct this rolloff in the passband and enhance the filtering in the stopband. The additions in
these filters add 2 bits per halving of the sample rate and provide an undistorted waveform (linear
phase response) with a little added delay. The 12-bit samples at 400 kHz emerge delayed but with
16-bits at a 100 kHz sample rate and neatly filtered at a Nyquist cutoff frequency of 50 kHz. The
DSC has a built in low-pass antialiasing filter, usually a simple R-C filter with a cutoff around
100 kHz, which attenuates by about 36 dB at 6.4 MHz, six octaves higher at the 1-bit delta modula-
tor input. Any aliased signal images are therefore 72 dB attenuated back down at 100 kHz, and
more as you go lower in frequency. At 25 kHz, aliased signals are 84 dB attenuated, so for audio-
band recording with 16-bit samples there is effectively no aliasing problem.
At the heart of a DSC is a device called a “delta modulator,” depicted in Figure 1.8.
The delta modulator produces a 1-bit digital signal called a bitstream at a very high sample rate
where one can convert a frame of N-bits to a log2N-bit word. The analog voltage level at the end of
the frame will be a filtered sum of the bits within the frame. Hence, if the analog input in Figure 1.8
was very close to Vmax, the bitstream would be nearly all ones; if it were close to 0, the bitstream
would be nearly all zeros; and if it were near Vmax/2, about 50% of the bits within the frame would
be 1’s. The delta-modulated bitstream can be found today on “super audio DVD discs,” which typi-
cally have 24-bit samples at sample rates of 96 kHz, and sometimes even 192 kHz, much higher
resolution than the 16-bit 44.1 kHz samples of the standard compact disc.*
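The bitstream behavior described above can be imitated in a few lines. The following is a simplified first-order model of the Figure 1.8 loop (a sketch, not the exact analog circuit: the integrator, comparator threshold, and feedback levels are idealized assumptions):

```python
import numpy as np

def delta_modulate(x, vmax=1.0):
    """Idealized first-order delta modulator: integrate the difference
    between the input and the 1-bit DAC feedback; the comparator output
    (0 or 1) is the bitstream."""
    integ, bits = 0.0, []
    for v in x:
        fb = vmax if (bits and bits[-1]) else 0.0  # 1-bit DAC feedback
        integ += v - fb
        bits.append(1 if integ > vmax / 2 else 0)
    return np.array(bits)

# Constant inputs near Vmax/2 and 0.9*Vmax give roughly 50% and 90% ones
print(delta_modulate(np.full(4000, 0.5)).mean())
print(delta_modulate(np.full(4000, 0.9)).mean())
```

The density of 1s in the bitstream tracks the input level, which is the behavior the text describes for inputs near 0, Vmax/2, and Vmax.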
The DSC has some very interesting frequency response properties. The action of the integrator
and latch gives a transfer function which essentially filters out low-frequency quantization noise,
improving the theoretical SNR about 3 dB each time the bandwidth is halved. The quantization
noise attenuation allows one to keep additional bits from the filtering and summing, which yields a
theoretical 9 dB improvement overall each time the frame rate is halved. This makes generating
large sample words at lower frequencies very accurate. However, the noise-shaping effect also
makes the upper end of the signal frequency response roll off well below the Nyquist rate. DSC
manufacturers correct for this using a digital filter to restore the high-frequency response, but this
also brings a time delay to the DSC output sample. This will be discussed in more detail in Chapter 3
in the sections on digital filtering with finite impulse response (FIR) filters. For some devices this
delay can be on the order of 32 samples, and hence the designer must be careful with this detail for
[Figure 1.8 block diagram: the input signal (0 to Vmax) enters a difference amplifier; its output is integrated and fed to a comparator (1-bit ADC) whose sample-and-hold latch, driven by the modulation clock, produces the output bitstream; a 1-bit DAC switching between Vmax and ground closes the feedback loop into the difference amplifier.]
FIGURE 1.8 A delta modulator is used to convert an analog voltage to a 1-bit “bitstream” signal where the
amplitude of the signal is proportional to the number of 0s and 1s in a given section of the bitstream.
* The author was very skeptical of this technology until he actually heard it. The oversampling and bigger bit-depth really
does make a difference since most movie and music recordings are compilations of sounds with a wide range of loudness
dynamics.
applications that require real-time signal inputs and outputs, such as control loop applications. The
maximum theoretical SNR of a DSC can be estimated by considering the noise shaping of the delta
modulator and the oversampling ratio (OSR):

SNRmax = 6.02N + 1.76 + 10 log10(OSR) dB,        (1.11)

where OSR is the ratio of the 1-bit sample rate fs divided by the N-bit decimated sample rate fsN. This
SNR improvement is more of a marketing nuance than a useful engineering parameter because one
only has a finite dynamic range available based on the number of bits in the output samples. For our
6.4 MHz sampled bitstream processed down to 16-bit samples at 100 kHz, the theoretical SNR
from Equation 1.11 is 116.1 dB using N = 16 and 104.1 dB using N = 14 (1 bit for sign and ignoring
the LSB). What does all this marketing rhetoric mean? It means that the DSC's internal 1-bit quantization introduces essentially no noise of its own into the output band, so the effective SNR is about 90 dB for 16-bit signed samples. However, by
using more elaborate filters some DSC will produce more useful bits because of this higher theoreti-
cal limit. It is common to see 24-bit samples from a DSC which have SNRs in the range of 120 dB
for audio bandwidths. The 24-bit sample word format conveniently maps to 3 bytes per sample, even
though the actual SNR does not use all 24 bits. An SNR of 120 dB is a ratio of about 1 million to 1.
Since most signals are recorded with a maximum of ±10 V or less, and the analog electronic noise
floor at room temperature is of the order of microvolts for audio bandwidths (unless one employs
cooling to reduce thermal noise in electronic devices), an ENOB of around 20 can be seen as
adequate to exceed the dynamic range of most sensors and electronics. As such, using a 24-bit
DSC with effectively 20 bits of real SNR, one no longer needs to be concerned with setting the
voltage gain to match the sensor signal to the ADC! For most applications where the signal is
simply recorded and used, the DSC filter delay is not important either. As a result of the accuracy
and convenience of the DSC, it is now the most common ADC in use.
* Provided the user pledges to only use the author’s m-scripts for good, not evil.
TABLE 1.2
m-Script Example for Generating Simple Graphs of Sampled Sinusoids
% MATLAB m-file for Figures 1.3 and 1.4 A2D-Demo
fs = 1000;               % sample rate
Ts = 1/fs;               % sample time interval
fs_analog = 10000;       % "our" display sample rate (analog signal points)
npts_analog = 200;       % number of analog display points
T_analog = 1/fs_analog;  % "our" display sample interval
f0 = 950;                % use 100 Hz for Fig 1.3 and 950 Hz for Fig 1.4
Tstop = 0.015;           % show 15 ms of data
Ta = 0:T_analog:Tstop;   % analog "samples"
Td = 0:Ts:Tstop;         % digital samples
ya = zeros(size(Ta));    % zero out data vectors same length as time
yd = zeros(size(Td));
w0 = 2*pi*f0;
ya = cos(w0.*Ta);        % note scalar by vector multiply (.*) gives a vector in
                         % the cosine argument and a vector in the output ya
yd = cos(w0.*Td);
figure(1);               % initialize a new figure window for plotting
plot(Ta,ya,'k');         % plot in black
hold on;                 % keep the current plot and add another layer
plot(Td,yd,'k*');        % plot in black "*"
hold off;                % return figure to normal state
xlabel('Seconds');
for-loops. It also executes substantially faster than a for-loop and leaves a script that is very easily read. The “.*” element-by-element product extends to vectors and matrices. Conversely, one has to consider matrix algebra rules when multiplying and dividing matrices and vectors. If “x” and “y” are both row vectors, the statement “x*y” will generate an error. Using the transpose operator on “y” will do a Hermitian transpose (flip a row vector into a column and replace the elements with complex conjugates) so that “x*y′” will yield a scalar result. If you do not want a complex conjugate (it does not matter for real signals), the correct syntax is “x*y.′”. The “dot-transpose” means just transpose the vector without the conjugate operation. Once one masters this “vector concept”, generating plots of all the signal processing presented in this book's m-scripts will become very straightforward. The “plot” statement has to have the x and y components defined as identical-sized vectors to execute properly. The most common difficulty the author has seen is these vectors not matching (rows and columns need to be flipped, or the vectors have different lengths) in functions like “plot”.
The statement “hold on” allows one to overlay plots, which can also be done by adding multiple
x − y vector pairs to the plot argument. On the MATLAB command line one can enter “help plot” to
get more details as well as through the help window. The reason MATLAB is part of this book is
that it has emerged as one of the most effective ways to quickly visualize and test signal processing
algorithms. The m-scripts are deliberately kept very simple for brevity and to expose the algorithm
coding details, but many users will embed the algorithms into very sophisticated MATLAB-based
graphical user interfaces (GUIs) or port the algorithms to other languages such as C, C++, C#,
Visual Basic, and Web-based script languages such as Java script or Flash script.
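The transpose pitfall above is worth a concrete example. In MATLAB the two expressions are x*y′ and x*y.′; the equivalent distinction is shown here in Python/NumPy with explicit conjugation, only so the result can be checked inline:

```python
import numpy as np

# MATLAB's y' is a Hermitian (conjugate) transpose; y.' is a plain
# transpose.  For complex vectors the two inner products differ:
x = np.array([1 + 2j, 3 - 1j])
y = np.array([2 - 1j, 1 + 1j])

like_ctranspose = np.sum(x * np.conj(y))  # MATLAB x*y'
like_transpose = np.sum(x * y)            # MATLAB x*y.'
print(like_ctranspose, like_transpose)    # (2+1j) versus (8+5j)
```

For real-valued signals the conjugation has no effect, which is why the error usually surfaces only once complex data appear.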
use with unsigned or signed integer arithmetic, respectively. Floating-point digital signal processors
subsequently convert the integers from the ADC to their internal floating-point format for process-
ing, and then back to the appropriate integer format for DAC conversion. Even though floating-point
arithmetic has a huge numerical dynamic range, the limited dynamic range of the ADC and DAC
convertors must always be considered. Adaptive signal processing systems can, and should, adap-
tively adjust input and output gains while maintaining floating-point data calibration. This is much
less of an issue when using ADC and DAC with over 20 bits of precision. Adaptive signal calibration
is straightforwardly based on known transducer sensitivities, signal conditioning gains, and the
voltage sensitivity and number of bits in the ADC and DAC convertors. The LSB is considered to
be a random noise source both numerically for the ADC convertor and electronically for the DAC
convertor. Given a periodic rate for sampling analog data and reconstruction of analog data from
digital samples, analog filters must be applied before ADC and after DAC conversion to avoid
unwanted signal aliasing. The DSC has a built-in antialiasing filter, and one can alter the clock of
the device over a fairly wide range and still have high-fidelity samples down to a frequency of
approximately 8 kHz. Below that, an external antialias filter is needed. For real digital data, the
sample rate must be at least twice the highest frequency which passes through the analog “antialias-
ing” filters. For complex samples, the complex-pair sample rate equals the bandwidth of interest,
which may be demodulated to baseband if the bandwidth of interest was in a high-frequency range.
The frequency response of DAC conversion as well as sophisticated techniques for analog signal
reconstruction will be discussed in Section 4.6 later in the text.
PROBLEMS
1. An accelerometer with sensitivity 10 mV/G (1.0 G is 9.801 m/s2) is subjected to a ±25 G
acceleration. The electrical output of the accelerometer is amplified by 11.5 dB before
A/D conversion with a 14-bit two’s complement encoder with an input sensitivity of
0.305 mV/bit.
a. What is the numerical range of the digitized data?
b. If the amplifier can be programmed in 1.5 dB steps, what would be the amplification
for maximum SNR? What is the SNR?
2. An 8-bit two’s complement A/D system is to have no detectable signal aliasing at a sample
rate of 100,000 samples per second. An eighth-order (−48 dB/octave) programmable cut-
off frequency LPF is available.
a. What is a possible cutoff frequency fc?
b. For a 16-bit signed A/D what would the cutoff frequency be?
c. If you could tolerate some aliasing between fc and the Nyquist rate, what is the high-
est fc possible for the 16-bit system in part b?
3. An acceptable resolution for a medical ultrasonic image is declared to be 1 mm. Assume
sound travels at 1500 m/s in the human body.
a. What is the absolute minimum A/D sample rate for a receiver if it is to detect echoes
from scatterers as close as 1 mm apart?
b. If the velocity of blood flow is to be measured in the range of ±1 m/s (we do not
need resolution here) using a 5 MHz ultrasonic sinusoidal burst, what is the minimum
required bandwidth and sample rate for an A/D convertor? (Hint: a Doppler-shifted
frequency fd can be determined by fd = f(1 + v/c), −c < v < +c; where f is the transmit-
ted frequency, c is the wave speed, and v is the velocity of the scatterer toward the
receiver.)
4. A microphone has a voltage sensitivity of 12 mV/Pa (1 Pascal = 1 Nt/m2). If a sinusoi-
dal sound of about 94 dB (approximately 1 Pa rms in the atmosphere) is to be digitally
recorded, how much gain would be needed to insure a “clean” recording for a 10 V 16-bit
signed A/D system?
5. A standard analog television in the United States has 525 vertical lines scanned in even
and odd frames 30 times/s.
a. If the vertical field of view covers a distance of 1.0 m, what is the size of the smallest
horizontal line thickness which would appear unaliased?
REFERENCES
1. N. S. Jayant and P. Noll, Digital Coding of Waveforms. Englewood Cliffs, NJ: Prentice-Hall, 1984.
2. K. Hwang, Computer Arithmetic. New York, NY: Wiley, 1979, p. 71.
3. A. Gill, Machine and Assembly Language Programming of the PDP-11. Englewood Cliffs, NJ: Prentice-
Hall, 1978.
4. A. V. Oppenheim and R. W. Schafer, Discrete-Time Signal Processing. Englewood Cliffs, NJ: Prentice-
Hall, 1973.
5. P. M. Aziz et al., An overview of sigma-delta converters, IEEE Sig Proc Mag, Jan 1996, pp. 61–83.
6. S. Park, Principles of Sigma–Delta Modulation for Analog-to-Digital Converters, Motorola Application
Notes. Schaumburg, IL: Motorola, Inc., 1999, /D, Rev 1.
2 z-Transform
Given a complete mathematical expression for a discrete time-domain signal, why transform it to
another domain? The main reason for time–frequency transforms is that many mathematical reduc-
tions are much simpler in one domain than the other [1]. The z-transform in the digital domain is the
counterpart to the Laplace transform in the analog domain. The z-transform is an extremely useful
tool for analyzing the stability of digital sequences, designing stable digital filters, and relating digi-
tal signal processing operations to the equivalent mathematics in the analog domain. The Laplace
transform provides a systematic method for solving analog systems described by differential equa-
tions. Both the z-transform and the Laplace transform map their respective finite-difference or dif-
ferential systems of equations in the time or spatial domain to much simpler algebraic systems in the
frequency or wavenumber domains, respectively. However, the relationship between the z-domain
and the s-domain of the Laplace transform is not linear, meaning that the digital filter designer will
have to decide whether to match the system poles, zeros, or impulse response. As will be seen later
in this chapter, one can warp the frequency axis to control where and how well the digital system
matches the analog system. We begin by assuming that time t increases as life progresses into the
future, and that a general signal of the form e^(st), s = σ + jω, is stable for σ ≤ 0. A plot of our general signal
is shown in Figure 2.1.
The quantity s = σ + jω is a complex frequency where the real part σ represents the damping of
the signal (σ = −10.0 Nepers/s and ω = 50π rad/s, or 25 Hz, in Figure 2.1). All signals, both digital
and analog, can be described in terms of sums of the general waveform shown in Figure 2.1. This
includes transient characteristics governed by σ. For σ = 0, one has a steady-state sinusoid. For
σ < 0 as shown in Figure 2.1, one has an exponentially decaying sinusoid. If σ > 0, the exponentially
increasing sinusoid is seen as unstable, since eventually it will become infinite in magnitude. Signals
which change levels over time can be mathematically described using piecewise sums of stable and
unstable complex exponentials for various periods of time as needed.
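The general signal is simple to generate. A short sketch using the Figure 2.1 parameters (σ = −10 nepers/s, ω = 50π rad/s; Python/NumPy rather than an m-script):

```python
import numpy as np

sigma, omega = -10.0, 50.0 * np.pi    # Figure 2.1 parameters (25 Hz tone)
t = np.linspace(0.0, 0.5, 5001)
x = np.exp((sigma + 1j * omega) * t)  # the general signal e^{st}

# sigma < 0 gives a decaying envelope |x| = exp(sigma*t), so the signal
# decays by a factor exp(-5) over the half-second window of Figure 2.1
print(abs(x[-1]))
```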
The same process of generalized signal modeling is applied to the signal responses of systems
such as mechanical or electrical filters, wave propagation “systems,” and digital signal processing
algorithms. We define a “linear system” as an operator which changes the amplitude and/or phase
(time delay) of an input signal to give an output signal with the same frequencies as the input, inde-
pendent of the input signal’s amplitude, phase, or frequency content. Linear systems can be disper-
sive, where some frequencies travel through them faster than others, as long as the same system
input–output response occurs independent of the input signal. Since there are an infinite number of
input signal types, we focus on one very special input signal type called an impulse. An impulse
waveform contains the same energy level at all frequencies including 0 Hz (direct current or con-
stant voltage), and is exactly reproducible. For a digital waveform, a digital impulse simply has only
one sample nonzero. The response of linear systems to the standard impulse input is called the
system impulse response. The impulse response is simply the system’s response to a Dirac delta
function (or the unity amplitude digital domain equivalent), when the system has zero initial condi-
tions. The impulse response for a linear system is unique and a great deal of useful information
about the system can be extracted from its analog or digital domain transform [2].
FIGURE 2.1 A “general” stable signal of the form e(σ+jω)t where σ ≤ 0 indicates a stable waveform for posi-
tive time.
The Laplace transform makes use of the kernel K(s,t) = e−st, which is also in the form of our
“general” signal as shown in Figure 2.1. We present the Laplace transform L { } as a pair of integral
transforms in Equation 2.2 relating the time “t” and frequency “s” domains.
Y(s) = L{y(t)} = ∫_0^(+∞) y(t) e^(−st) dt
                                                        (2.2)
y(t) = L^(−1){Y(s)} = (1/(2πj)) ∫_(σ−j∞)^(σ+j∞) Y(s) e^(st) ds
The corresponding z-transform pair for discrete signals is seen in Equation 2.3, where t is replaced
with nT and denoted as [n], and z = e^(sT).

Y[z] = Z{y[n]} = Σ_(n=0)^(+∞) y[n] z^(−n)
                                                        (2.3)
y[n] = Z^(−1){Y[z]} = (1/(2πj)) ∮_Γ Y[z] z^(n−1) dz
The closed contour Γ in Equation 2.3 must enclose all the poles of the function Y[z] zn−1. Both
Y(s) and Y[z] are, in the most general terms, ratios of polynomials where the zeros of the numera-
tor are also zeros of the system. Since the system response tends to diverge if excited near a
zero of the denominator polynomial, the zeros of the denominator are called the system poles.
The transforms in Equations 2.2 and 2.3 are applied to signals, but if these “signals” represent
system impulse or frequency responses, our subsequent analysis will refer to them as “systems,”
or “system responses.”
There are two key points which must be discussed regarding the Laplace and z-transforms. First,
we present what is called a “one-sided” or “causal” transform. This is seen in the time integral of
Equation 2.2 starting at t = 0, and the sum in Equation 2.3 starting at n = 0. Physically, this means
that the current system response is a result of the current and past inputs, and specifically not future
inputs. Conversely, a current system input can have no effect on previous system outputs. Only time
moves forward in the real physical world (at least as we know it in the twentieth century), and so a
distinction must be made in our mathematical models to represent this fact. Our positive time move-
ment mathematical convention has a critical role to play in designating stable and unstable signals
and systems mathematically. Second, in the Laplace transform’s s-plane (s = σ + jω), only signals
and system responses with σ ≤ 0 are mathematically stable in their causal response (time moving
forward). This means est is either of constant amplitude (σ = 0), or decaying amplitude (σ < 0) as
time increases. Therefore, system responses represented by values of s on the left-hand plane (jω is
the vertical Cartesian axis) are stable causal response systems. As will be seen below, the nonlinear
mapping from the s-plane (analog signals and systems) to z-plane (digital signals and systems) maps
the stable causal left-half s-plane to the region inside a unity radius circle on the z-plane, called the
unit circle.
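The mapping can be checked numerically. A sketch (assuming a 1 kHz sample rate as an illustration) evaluating z = e^(sT) for points left of, on, and right of the jω axis:

```python
import numpy as np

T = 1.0e-3  # assume a 1 kHz sample rate, so T = 1 ms
for sigma in (-100.0, 0.0, 100.0):
    s = sigma + 2j * np.pi * 250.0   # complex frequency with a 250-Hz part
    z = np.exp(s * T)                # the s-to-z mapping z = e^{sT}
    print(sigma, round(abs(z), 4))   # |z| = e^{sigma*T}: <1, =1, >1
```

Only the real part σ sets the magnitude of z, so the stable left half-plane maps inside the unit circle regardless of frequency.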
The comparison of the Laplace and z-transforms is most useful when considering the mapping
between the complex s-plane and the complex z-plane, where z = e^(sT), T being the time interval in
seconds between digital samples of the analog signal. The structure of this mapping depends on the
digital sample rate and whether real or complex samples are used. An understanding of this mapping
will allow one to easily design digital systems which model (or control) real physical systems in the
analog domain. Also, adaptive system modeling in the digital domain of real physical systems can
be quantitatively interpreted and related to other information processing in the adaptive system.
However, if we have an analytical expression for a signal or system in the frequency domain, it may
or may not be realizable as a stable causal signal or system response in the time domain (digital or
analog). Again, this is due to the obliviousness of time to positive or negative direction. If we are
mostly concerned with the magnitude response, we can generally adjust the phase (by adding time
delay) to realize any desired response as a stable causal system. Table 2.1 gives a partial listing of
some useful Laplace transforms and the corresponding z-transforms assuming regularly sampled
data every T seconds (fs = 1/T samples/s).
One of the subtler distinctions between the Laplace transforms and the corresponding z-transforms
in Table 2.1 is how some of the z-transform magnitudes scale with the sample interval T. It can be
seen that the result of the scaling is that the sampled impulse responses may not match the inverse
z-transform if a simple direct s-to-z mapping is used. Since adaptive digital signal processing can be
used to measure and model physical system responses, we must be diligent to eliminate digital
system responses where the amplitude depends on the sample rate. However, in Section 2.3, it will
be shown that careful consideration of the scaling for each system resonance or pole will yield a
very close match between the digital system and its analog counterpart. At this point in our presen-
tation of the z-transform, we compare the critical mathematical properties for linear time-invariant
systems in both the analog Laplace transform and the digital z-transform.
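As a concrete check of one Table 2.1 entry, the sketch below (Python/NumPy; the pole s0, sample interval T, and evaluation point are arbitrary choices) sums the defining series for the sampled exponential e^(s0 nT) and compares it with the closed form z/(z − e^(s0 T)):

```python
import numpy as np

s0, T = -50.0 + 2j * np.pi * 30.0, 1.0e-3  # arbitrary pole and sample interval
z0 = 1.5 * np.exp(0.3j)                    # evaluation point outside the pole radius

n = np.arange(2000)                        # enough terms for convergence
series = np.sum(np.exp(s0 * n * T) * z0 ** (-n))
closed = z0 / (z0 - np.exp(s0 * T))
print(np.allclose(series, closed))
```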
The Laplace transform and the z-transform have many mathematical similarities, the most
important of which are the properties of linearity and shift invariance. Linear shift-invariant system
modeling is essential to adaptive signal processing since most optimizations are based on a quadratic
squared output error minimization. But even more significantly, linear time-invariant physical
systems allow a wide range of linear algebra to apply for the straightforward analysis of such systems.
Most of the world around us is linear and time invariant, provided the responses we model are rela-
tively small in amplitude and quick in time. For example, the vibration response of a beam slowly
corroding due to weather and rust is linear and time invariant for small vibration amplitudes over a
period of, say, days or weeks. But, over a period of years the beam’s corrosion changes the vibration
response, thereby making it time varying in the frequency domain. If the forces on the beam approach
its yield strength, the stress–strain relationship is no longer linear and single-frequency vibration
TABLE 2.1
Some Useful Signal Transforms

Time Domain                        s Domain                       z Domain

1 for t ≥ 0, 0 for t < 0           1/s                            z/(z − 1)

e^(s0·t)                           1/(s − s0)                     z/(z − e^(s0·T))

t·e^(s0·t)                         1/(s − s0)^2                   T·z·e^(s0·T)/(z − e^(s0·T))^2

e^(−at)·sin(ω0·t)                  ω0/(s^2 + 2as + a^2 + ω0^2)    z·e^(−aT)·sin(ω0·T)/(z^2 − 2z·e^(−aT)·cos(ω0·T) + e^(−2aT))

1/(ab) + e^(−at)/(a(a − b))        1/(s(s + a)(s + b))            (Az + B)·z/((z − e^(−aT))(z − e^(−bT))(z − 1)), where
  + e^(−bt)/(b(b − a))                                            A = [b(1 − e^(−aT)) − a(1 − e^(−bT))]/(ab(b − a)) and
                                                                  B = [a·e^(−aT)(1 − e^(−bT)) − b·e^(−bT)(1 − e^(−aT))]/(ab(b − a))
inputs into the beam will yield nonlinear multiple frequency outputs. Nonlinear signals are rich in
physical information but require very complicated models. From a signal processing point of view, it
is extremely valuable to respect the physics of the world around us, which is only linear and time
invariant within specific physical constraints, and exploit linearity and time invariance wherever
possible. Nonlinear signal processing remains largely an area for future development. The following is a summary comparison of the Laplace and z-transforms.
Linearity: Both the Laplace and z-transforms are linear operators. The inverse Laplace and
z-transforms are also linear.
Delay Shift Invariance: Assuming one-sided signals f(t) = f [k] = 0 for t, k < 0 (no initial
conditions),
L{f(t − τ)} = e^{−sτ} F(s)
Z{f[k − N]} = z^{−N} F[z]  (2.5)
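The delay property is easy to verify numerically. The following short sketch (in Python rather than the book's MATLAB; the sequence and test point are arbitrary choices for illustration) evaluates the z-transform sum of a finite causal sequence before and after a delay of N samples:

```python
# Check the delay property Z{f[k - N]} = z^(-N) F[z] for a finite
# causal sequence by direct evaluation of the z-transform sum.

def ztrans(seq, z):
    """Evaluate sum_k seq[k] * z^(-k) for a finite causal sequence."""
    return sum(v * z ** (-k) for k, v in enumerate(seq))

f = [1.0, 0.5, 0.25, 0.125]    # arbitrary causal sequence f[k]
N = 2
f_delayed = [0.0] * N + f      # f[k - N]: N zeros shifted in at the front

z = 1.3 + 0.4j                 # arbitrary test point outside the unit circle
lhs = ztrans(f_delayed, z)
rhs = z ** (-N) * ztrans(f, z)
print(abs(lhs - rhs) < 1e-9)   # the two sides agree to rounding error
```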
z-Transform 23
Convolution: Linear shift-invariant systems have the following property: a multiplication of two
signals in one domain is equivalent to a convolution in the other domain.
L{f(t) ∗ g(t)} = L{ ∫_0^t f(τ) g(t − τ) dτ } = F(s)G(s)  (2.6)
A more detailed derivation of Equation 2.6 will be presented in the next section. In the digital
domain, the convolution integral becomes a simple summation.
Z{f[k] ∗ g[k]} = Z{ ∑_{k=0}^{m} f[k] g[m − k] } = F[z]G[z]  (2.7)
If f [k] is the impulse response of a system and g[k] is an input signal to the system, the system
output response to the input excitation g[k] is found in the time domain by the convolution of g[k]
and f [k]. However, the system must be both linear and shift invariant (a shift of k samples in the
input gives a shift of k samples in the output), for the convolution property to apply. Equation 2.7 is
fundamental to digital systems theory and will be discussed in great detail later.
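As a quick numerical illustration of the convolution property (a Python sketch with arbitrary example sequences, not the author's MATLAB code), we can convolve two short causal sequences and compare the z-transform of the result with the product of the individual transforms at a test point:

```python
# Convolution in the time domain equals multiplication in the z domain.

def ztrans(seq, z):
    return sum(v * z ** (-k) for k, v in enumerate(seq))

def conv(f, g):
    """Linear convolution of two finite causal sequences."""
    out = [0.0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] += fi * gj
    return out

f = [1.0, -0.5, 0.25]          # a short impulse response f[k] (illustrative)
g = [2.0, 1.0, 0.5, 0.25]      # an input signal g[k] (illustrative)

z = 1.2 - 0.3j                 # arbitrary test point
lhs = ztrans(conv(f, g), z)    # Z{f * g}
rhs = ztrans(f, z) * ztrans(g, z)
print(abs(lhs - rhs) < 1e-9)
```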
Initial Value: The initial value of a one-sided (causal) impulse response is found by taking the limit as s approaches infinity.

f(0+) = lim_{s→∞} sF(s)  (2.8)

The initial value of the digital impulse response can be found in an analogous manner.

f[0] = lim_{z→∞} F[z]  (2.9)
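A numerical sketch of the digital initial value theorem f[0] = lim_{z→∞} F[z], using the known transform pair f[k] = a^k with F[z] = z/(z − a) (the pole location here is an arbitrary illustrative choice):

```python
# Initial value theorem sketch: f[0] = lim_{z->inf} F[z].
# For f[k] = a^k the closed-form transform is F[z] = z/(z - a),
# so F evaluated at ever larger z should approach f[0] = 1.

a = 0.7                        # arbitrary pole inside the unit circle
F = lambda z: z / (z - a)

for z in (1e3, 1e6, 1e9):      # push z toward infinity
    print(z, F(z))             # values approach f[0] = 1
```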
Final Value: The final value of a causal impulse response can be used as an indication of the stability of a system as well as to determine any static offsets.

f(∞) = lim_{s→0} sF(s)  (2.10)

Equation 2.10 holds so long as sF(s) is analytic in the right half of the s-plane (no poles on the jω-axis and none for σ ≥ 0). F(s) is allowed to have one pole at the origin and still be stable at t = ∞. The final value in the digital domain is

f[∞] = lim_{z→1} (1 − z^{−1}) F[z]  (2.11)

where (1 − z^{−1})F[z] must also be analytic in the region on and outside the unit circle on the z-plane. The region |z| ≥ 1, on and outside the unit circle on the z-plane, corresponds to the region σ ≥ 0, on the jω-axis and in the right half of the s-plane. The s-plane pole that F(s) is allowed to have at s = 0 maps to a z-plane pole for F[z] at z = 1 since z = e^{sT}. The allowance of these poles is related to the
restriction of causality for one-sided transforms. The mapping between the s and z planes will be
discussed in some more detail in the following text.
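The digital final value theorem can be sketched numerically as well. For the illustrative sequence f[k] = 1 − 0.5^k, which settles to 1, the transform is F[z] = z/(z − 1) − z/(z − 0.5), and evaluating (1 − 1/z)F[z] ever closer to z = 1 approaches the final value:

```python
# Final value theorem sketch: f[inf] = lim_{z->1} (1 - 1/z) F[z].
# f[k] = 1 - 0.5^k settles to 1; F[z] = z/(z - 1) - z/(z - 0.5).

F = lambda z: z / (z - 1.0) - z / (z - 0.5)

for z in (1.1, 1.01, 1.001):   # approach z = 1 from outside the unit circle
    val = (1.0 - 1.0 / z) * F(z)
    print(z, val)              # approaches the final value 1
```

Note that F[z] here has the single allowed pole at z = 1, which the (1 − 1/z) factor cancels.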
Frequency Translation/Scaling: Multiplication of the analog time-domain signal by an exponen-
tial leads directly to a frequency shift.
L{e^{−at} f(t)} = F(s + a)  (2.12)
In the digital domain, multiplication of the sequence f [k] by a geometric sequence αk results in
scaling the frequency range.
Z{α^k f[k]} = ∑_{k=0}^{∞} f[k] (z/α)^{−k} = F[z/α]  (2.13)
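The frequency-scaling property can be verified for a finite sequence by comparing both sides at a test point (an illustrative Python sketch; the sequence, weighting, and test point are arbitrary):

```python
# Frequency scaling sketch: Z{alpha^k f[k]} evaluated at z equals F[z/alpha].

def ztrans(seq, z):
    return sum(v * z ** (-k) for k, v in enumerate(seq))

f = [1.0, 0.8, 0.6, 0.4, 0.2]  # arbitrary causal sequence
alpha = 0.9                    # geometric weighting alpha^k

scaled = [alpha ** k * v for k, v in enumerate(f)]
z = 1.5 + 0.2j                 # arbitrary test point
lhs = ztrans(scaled, z)
rhs = ztrans(f, z / alpha)
print(abs(lhs - rhs) < 1e-9)
```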
Differentiation: The Laplace transform of the derivative of the function f(t) is found using
integration by parts.
L{∂f/∂t} = sF(s) − f(0)  (2.14)
Carrying out integration by parts as in Equation 2.14 for higher-order derivatives yields the
general formula
L{∂^N f/∂t^N} = s^N F(s) − ∑_{k=0}^{N−1} s^{N−1−k} f^{(k)}(0)  (2.15)
where f (k)(0) is the kth derivative of f(t) at t = 0. The initial conditions for f(t) are necessary to its
Laplace transform just as they are necessary for the complete solution of an ordinary differential
equation. For the digital case, we must first employ a formula for carrying forward initial conditions
in the z-transform of a time-advanced signal.
Z{x[n + N]} = z^N X[z] − ∑_{k=0}^{N−1} z^{N−k} x[k]  (2.16)
For a causal sequence, Equation 2.16 can be easily proved from the definition of the z-transform.
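Alongside that proof, Equation 2.16 can be checked numerically for a finite causal sequence (a Python sketch with arbitrary sample values):

```python
# Time-advance sketch: Z{x[n + N]} = z^N X[z] - sum_{k=0}^{N-1} z^(N-k) x[k].

def ztrans(seq, z):
    return sum(v * z ** (-k) for k, v in enumerate(seq))

x = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0]   # arbitrary causal sequence
N = 2
advanced = x[N:]               # x[n + N] as a causal sequence

z = 1.4 + 0.1j                 # arbitrary test point
lhs = ztrans(advanced, z)
rhs = z ** N * ztrans(x, z) - sum(z ** (N - k) * x[k] for k in range(N))
print(abs(lhs - rhs) < 1e-8)
```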
Using an approximation based on the definition of the derivative, the first derivative of a digital sequence is

x^(1)[n + 1] = (1/T)(x[n + 1] − x[n])  (2.17)
where T is the sample increment. Applying the time-advance formula in Equation 2.16 gives the
z-transform of the first derivative.
Z{x^(1)[n + 1]} = (1/T){(z − 1)X[z] − z x[0]}  (2.18)
Delaying the sequence by one sample shows the z-transform of the first derivative of x[n] at
sample n.
Z{x^(1)[n]} = (1/T){(1 − z^{−1})X[z] − x[0]}  (2.19)
Similarly, the z-transform of the second derivative is

Z{x^(2)[n]} = (1/T^2){(1 − z^{−1})^2 X[z] − [(1 − 2z^{−1})x[0] + z^{−1}x[1]]}  (2.20)
The pattern of how the initial samples enter into the derivatives can be more easily seen in the
third derivative of x[n], where the polynomial coefficients weighting the initial samples can be seen
as fragments of the binomial polynomial created by the triple zero at z = 1.
Z{x^(3)[n]} = (1/T^3){(1 − z^{−1})^3 X[z] − (1 − 3z^{−1} + 3z^{−2})x[0] − (z^{−1} − 3z^{−2})x[1] − z^{−2}x[2]}  (2.21)
Putting aside the initial conditions on the digital-domain derivative, it is straightforward to show that the z-transform of the Nth derivative of x[n] simply has N zeros at z = 1, corresponding to the analogous N zeros at s = 0 in the analog domain.
Z{x^(N)[n]} = (1/T^N){(1 − z^{−1})^N X[z] − initial conditions}  (2.22)
Mapping between the s and z Planes: As with the aliased data in Section 1.1, the effect of sam-
pling can be seen as a mapping between the series of analog frequency bands and the digital base-
band defined by the sample rate and type (real or complex). To make sampling useful, one must
band limit the analog frequency response to a bandwidth equal to the sample rate for complex samples, or low-pass filter (LPF) to half the sample rate (called the Nyquist rate) for real samples. Consider the effect of replacing the analog time t in z^n = e^{st} with nT, where n is the sample number and T = 1/fs is the sampling interval in seconds.
z^n = e^{(σ + jω)nT} = e^{(σ/fs + j2πf/fs)n}  (2.23)
As in Equation 2.23, the analog frequency repeats every multiple of fs (a full fs Hz bandwidth is
available for complex samples). For real samples (represented by a phase-shifted sine or cosine
rather than a complex exponential), a fs Hz-wide frequency band will be centered about 0 Hz giving
an effective signal bandwidth of only fs/2 Hz for positive frequency. The real part of the complex
spectrum is symmetric for positive and negative frequencies while the imaginary part is skew sym-
metric (negative frequency amplitude is opposite in sign from positive frequency amplitude). This
follows directly from the imaginary part of ejθ being j sin θ. The amplitudes of the real and imagi-
nary parts of the signal spectrum are determined by the phase shift of the sine or cosine. For real
time-domain signals sampled at fs samples/s, the effective bandwidth of the digital signal is from 0
to fs/2 Hz. For σ ≤ 0, a strip within ±ωs/2 for the left-half of the complex s-plane maps into a region
inside a unit radius circle on the complex z-plane. For complex sampled systems, each multiple of
fs Hz on the s-plane corresponds to a complete trip around the unit circle on the z-plane. In other
words, the left-half of the s-plane is subdivided into an infinite number of parallel strips, each of which maps onto the interior of the unit circle on the z-plane.
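The mapping z = e^{sT} is easy to see numerically: damped s-plane points (σ < 0) land inside the unit circle, undamped points on the jω-axis land on it, and growing points land outside. An illustrative Python sketch (the sample rate, frequency, and damping values are arbitrary assumptions):

```python
# Sketch of the s-to-z mapping z = e^(sT) for points with different damping.

import cmath

fs = 1000.0                    # sample rate in Hz (arbitrary)
T = 1.0 / fs

for sigma in (-200.0, 0.0, 200.0):
    s = complex(sigma, 2 * cmath.pi * 100.0)   # 100 Hz, damping sigma
    z = cmath.exp(s * T)
    print(sigma, abs(z))       # |z| = e^(sigma*T): < 1, 1, > 1 respectively
```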