Signal
Processing for
Intelligent
Sensor Systems
with MATLAB ®

Second Edition

David C. Swanson

Boca Raton London New York

CRC Press is an imprint of the


Taylor & Francis Group, an informa business
MATLAB® is a trademark of The MathWorks, Inc. and is used with permission. The MathWorks does not warrant the
accuracy of the text or exercises in this book. This book’s use or discussion of MATLAB® software or related products
does not constitute endorsement or sponsorship by The MathWorks of a particular pedagogical approach or particular
use of the MATLAB® software.

CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
© 2012 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works


Version Date: 20110520

International Standard Book Number-13: 978-1-4398-7950-4 (eBook - PDF)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been
made to publish reliable data and information, but the author and publisher cannot assume responsibility for the valid-
ity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright
holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this
form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may
rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or uti-
lized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopy-
ing, microfilming, and recording, or in any information storage or retrieval system, without written permission from the
publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://
www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923,
978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For
organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for
identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
and the CRC Press Web site at
http://www.crcpress.com

This book is dedicated to all who aspire to deeply understand signal processing
for sensors, not just enough to pass an exam or assignment, or to complete a
project, but deep enough to experience the joy of natural revelation. This takes
more than just effort. You have to love the journey. This was best said by one
of America’s greatest inventors, George Washington Carver, in the quote
“Anything will give up its secrets if you love it enough…”
Contents
Preface.............................................................................................................................................. xiii
Acknowledgments................................................................................................................................ xv
Author................................................................................................................................................xvii

Part I Fundamentals of Digital Signal Processing

Chapter 1 Sampled Data Systems.................................................................................................. 3


1.1 A/D Conversion..................................................................................................3
1.2 Sampling Theory................................................................................................6
1.3 Complex Bandpass Sampling............................................................................. 9
1.4 Delta–Sigma Analog Conversion..................................................................... 12
1.5 MATLAB® Examples....................................................................................... 14
1.6 Summary, Problems, and References............................................................... 15
Problems...................................................................................................................... 16
References................................................................................................................... 17

Chapter 2 z-Transform.................................................................................................................. 19
2.1 Comparison of Laplace and z-Transforms........................................................ 19
2.2 System Theory.................................................................................................. 27
2.3 Mapping of s-Plane Systems to the Digital Domain........................................ 30
2.4 MATLAB® Examples....................................................................................... 39
2.5 Summary..........................................................................................................40
Problems...................................................................................................................... 41
References................................................................................................................... 41

Chapter 3 Digital Filtering........................................................................................................... 43


3.1 FIR Digital Filter Design................................................................................. 43
3.2 IIR Filter Design and Stability......................................................................... 47
3.3 Whitening Filters, Invertibility, and Minimum Phase..................................... 49
3.4 Filter Basis Polynomials................................................................................... 52
3.4.1 Butterworth Filters.............................................................................. 52
3.4.2 Chebyshev Type I Filters..................................................................... 55
3.4.3 Chebyshev Type II Filters................................................................... 56
3.4.4 Elliptical Filters................................................................................... 58
3.4.5 Bessel Filters....................................................................................... 59
3.4.6 High-Pass, Band-Pass, and Band-Stop Filter Transformations........... 59
3.4.7 MA Digital Integration Filter.............................................................. 59
3.5 MATLAB® Examples.......................................................................................60
3.6 Summary.......................................................................................................... 62
Problems...................................................................................................................... 63
References................................................................................................................... 63


Chapter 4 Digital Audio Processing............................................................................................ 65


4.1 Basic Room Acoustics...................................................................................... 65
4.2 Artificial Reverberation and Echo Generators................................................. 69
4.3 Flanging and Chorus Effects............................................................................ 72
4.4 Bass, Treble, and Parametric Filters................................................................. 74
4.5 Amplifier and Compression/Expansion Processors......................................... 76
4.6 Digital-to-Analog Reconstruction Filters.........................................................80
4.7 Audio File Compression Techniques................................................................ 82
4.8 MATLAB® Examples....................................................................................... 88
4.9 Summary.......................................................................................................... 91
Problems......................................................................................................................92
References...................................................................................................................92

Chapter 5 Linear Filter Applications........................................................................................... 95


5.1 State Variable Theory....................................................................................... 95
5.1.1 Continuous State Variable Formulation..............................................97
5.1.2 Discrete State Variable Formulation...................................................99
5.2 Fixed-Gain Tracking Filters........................................................................... 101
5.3 2D FIR Filters................................................................................................. 107
5.4 Image Upsampling Reconstruction Filters..................................................... 115
5.5 MATLAB® Examples..................................................................................... 117
5.6 Summary........................................................................................................ 119
Problems.................................................................................................................... 120
References................................................................................................................. 121

Part II Frequency Domain Processing

Chapter 6 Fourier Transform..................................................................................................... 127


6.1 Mathematical Basis for the Fourier Transform.............................................. 127
6.2 Spectral Resolution......................................................................................... 130
6.3 Fast Fourier Transform................................................................................... 135
6.4 Data Windowing............................................................................................. 138
6.5 Circular Convolution Issues........................................................................... 143
6.6 Uneven-Sampled Fourier Transforms............................................................ 146
6.7 Wavelet and Chirplet Transforms................................................................... 153
6.8 MATLAB® Examples..................................................................................... 162
6.9 Summary........................................................................................................ 165
Problems.................................................................................................................... 167
References................................................................................................................. 168

Chapter 7 Spectral Density........................................................................................................ 169


7.1 Spectral Density Derivation........................................................................... 169
7.2 Statistical Metrics of Spectral Bins................................................................ 172
7.2.1 Probability Distributions and PDFs.................................................. 173
7.2.2 Statistics of the NPSD Bin................................................................ 175
7.2.3 SNR Enhancement and the Zoom FFT............................................. 176


7.2.4 Conversion of Random Variables...................................................... 177


7.2.5 Confidence Intervals for Averaged NPSD Bins................................ 179
7.2.6 Synchronous Time Averaging........................................................... 180
7.2.7 Higher-Order Moments..................................................................... 181
7.2.8 Characteristic Function..................................................................... 182
7.2.9 Cumulants and Polyspectra............................................................... 182
7.3 Transfer Functions and Spectral Coherence................................................... 188
7.4 Intensity Field Theory.................................................................................... 199
7.4.1 Point Sources and Plane Waves.........................................................200
7.4.2 Acoustic Field Theory.......................................................................200
7.4.3 Acoustic Intensity.............................................................................. 203
7.4.4 Structural Intensity............................................................................206
7.4.5 Electromagnetic Intensity..................................................................208
7.5 Intensity Display and Measurement Techniques............................................209
7.5.1 Graphical Display of the Acoustic Dipole........................................209
7.5.2 Calculation of Acoustic Intensity from Normalized
Spectral Density................................................................................ 213
7.5.3 Calculation of Structural Intensity for Compressional and
Bending Waves.................................................................................. 215
7.5.4 Calculation of the Poynting Vector................................................... 217
7.6 MATLAB® Examples..................................................................................... 218
7.7 Summary........................................................................................................ 219
Problems.................................................................................................................... 220
References................................................................................................................. 221

Chapter 8 Wavenumber Transforms.......................................................................................... 223


8.1 Spatial Transforms.......................................................................................... 223
8.2 Spatial Filtering and Beamforming................................................................ 225
8.3 Image Enhancement Techniques.................................................................... 233
8.4 JPEG and MPEG Compression Techniques...................................................240
8.5 Computer-Aided Tomography........................................................................ 243
8.6 Magnetic Resonance Imaging........................................................................ 249
8.7 MATLAB® Examples..................................................................................... 254
8.8 Summary........................................................................................................ 258
Problems.................................................................................................................... 261
References................................................................................................................. 261

Part III Adaptive System Identification and Filtering

Chapter 9 Linear Least-Squared Error Modeling...................................................................... 265


9.1 Block Least Squares....................................................................................... 265
9.2 Projection-Based Least Squares..................................................................... 269
9.3 General Basis System Identification............................................................... 271
9.3.1 Mechanics of the Human Ear............................................................ 273
9.3.2 Least-Squares Curve Fitting............................................................. 275
9.3.3 Pole–Zero Filter Models.................................................................... 276
9.4 MATLAB® Examples..................................................................................... 279
9.5 Summary........................................................................................................280

Problems....................................................................................................................280
References................................................................................................................. 281

Chapter 10 Recursive Least-Squares Techniques........................................................................ 283


10.1 RLS Algorithm and Matrix Inversion Lemma..............................................284
10.1.1 Matrix Inversion Lemma..................................................................284
10.1.2 Approximations to RLS.................................................................... 286
10.2 LMS Convergence Properties......................................................................... 287
10.2.1 System Modeling Using Adaptive System Identification.................. 287
10.2.2 Signal Modeling Using Adaptive
Signal-Whitening Filters................................................................... 291
10.3 Lattice and Schur Techniques........................................................................ 295
10.4 Adaptive Least-Squares Lattice Algorithm.................................................... 301
10.4.1 Wiener Lattice...................................................................................307
10.4.2 Double/Direct Wiener Lattice........................................... 310
10.5 MATLAB® Examples..................................................................................... 312
10.6 Summary........................................................................................................ 314
Problems.................................................................................................................... 315
References................................................................................................................. 316

Chapter 11 Recursive Adaptive Filtering..................................................................................... 317


11.1 Adaptive Kalman Filtering............................................................................. 318
11.2 IIR Forms for LMS and Lattice Filters.......................................................... 332
11.3 Frequency Domain Adaptive Filters.............................................................. 347
11.4 MATLAB® Examples..................................................................................... 353
11.5 Summary........................................................................................................ 355
Problems.................................................................................................................... 357
References................................................................................................................. 357

Part IV Wavenumber Sensor Systems

Chapter 12 Signal Detection Techniques..................................................................................... 363


12.1 Rician PDF.....................................................................................................364
12.1.1 Time-Synchronous Averaging........................................................... 365
12.1.2 Envelope Detection of a Signal in Gaussian Noise........................... 367
12.2 RMS, CFAR Detection, and ROC Curves..................................................... 374
12.3 Statistical Modeling of Multipath................................................................... 381
12.3.1 Multisource Multipath....................................................................... 382
12.3.2 Coherent Multipath............................................................................ 383
12.3.3 Statistical Representation of Multipath............................................. 385
12.3.4 Random Variations in Refractive Index............................................ 388
12.4 MATLAB® Examples..................................................................................... 391
12.5 Summary........................................................................................................ 392
Problems.................................................................................................................... 394
References................................................................................................................. 394


Chapter 13 Wavenumber and Bearing Estimation....................................................................... 397


13.1 Cramer–Rao Lower Bound............................................................................. 398
13.2 Bearing Estimation and Beam Steering.........................................................403
13.2.1 Bearings from Phase Array Differences...........................................403
13.2.2 Multiple Angles of Arrival................................................................407
13.2.3 Wavenumber Filters........................................................................... 410
13.3 Field Reconstruction Techniques................................................................... 418
13.4 Wave Propagation Modeling.......................................................................... 428
13.5 MATLAB® Examples..................................................................................... 436
13.6 Summary........................................................................................................ 438
Problems.................................................................................................................... 439
References................................................................................................................. 439

Chapter 14 Adaptive Beamforming and Localization................................................................. 441


14.1 Array “Null-Forming”.................................................................................... 443
14.2 Eigenvector Methods of MUSIC and MVDR................................................ 447
14.3 Coherent Multipath Resolution Techniques...................................................460
14.3.1 Maximal Length Sequences.............................................................. 462
14.4 FMCW and Synthetic Aperture Processing................................................... 472
14.5 MATLAB® Examples..................................................................................... 476
14.6 Summary........................................................................................................ 478
Problems....................................................................................................................480
References................................................................................................................. 481

Part V Signal Processing Applications

Chapter 15 Noise Reduction Techniques..................................................................................... 485


15.1 Electronic Noise............................................................................................. 485
15.2 Noise Cancellation Techniques...................................................................... 497
15.3 Active Noise Attenuation................................................................................504
15.4 MATLAB® Examples..................................................................................... 519
15.5 Summary........................................................................................................ 520
Problems.................................................................................................................... 521
References................................................................................................................. 522

Chapter 16 Sensors and Transducers........................................................................................... 523


16.1 Simple Transducer Signals............................................................................. 524
16.2 Acoustic and Vibration Sensors..................................................................... 530
16.2.1 Electromagnetic Mechanical Transducer.......................................... 530
16.2.2 Electrostatic Transducer.................................................................... 537
16.2.3 Condenser Microphone.....................................................................546
16.2.4 Micro-Electromechanical Systems................................................... 549
16.2.5 Charge Amplifier............................................................................... 550
16.2.6 Reciprocity Calibration Technique................................................... 552

16.3 Chemical and Biological Sensors................................................................... 555


16.3.1 Detection of Small Chemical Molecules........................................... 556
16.3.2 Optical Absorption Chemical Spectroscopy..................................... 558
16.3.3 Raman Spectroscopy......................................................................... 560
16.3.4 Ion Mobility Spectroscopy................................................................ 562
16.3.5 Detecting Large Biological Molecules..............................................564
16.4 Nuclear Radiation Sensors............................................................................. 566
16.5 MATLAB® Examples..................................................................................... 569
16.6 Summary........................................................................................................ 570
Problems.................................................................................................................... 572
References................................................................................................................. 572

Chapter 17 Intelligent Sensor Systems........................................................................................ 575


17.1 Automatic Target Recognition Algorithms.................................................... 578
17.1.1 Statistical Pattern Recognition.......................................................... 578
17.1.2 Adaptive Neural Networks................................................................ 583
17.1.3 Syntactic Pattern Recognition........................................................... 590
17.2 Signal and Image Features............................................................................. 598
17.2.1 Basic Signal Metrics.......................................................................... 599
17.2.2 Pulse-Train Signal Models................................................................ 601
17.2.3 Spectral Features...............................................................................602
17.2.4 Monitoring Signal Distortion............................................................603
17.2.5 Amplitude Modulation......................................................................605
17.2.6 Frequency Modulation......................................................................607
17.2.7 Demodulation via Inverse Hilbert Transform...................................609
17.3 Dynamic Feature Tracking and Prediction.................................................... 618
17.4 Intelligent Sensor Agents................................................................................ 630
17.4.1 Internet Basics................................................................................... 631
17.4.2 IP Masquerading/Port Forwarding................................................... 632
17.4.3 Security versus Convenience............................................................. 632
17.4.4 Role of the DNS Server..................................................................... 633
17.4.5 Intelligent Sensors on the Internet.................................................... 633
17.4.6 XML Documents and Schemas for Sensors..................................... 636
17.4.7 Architectures for Net-Centric Intelligent Sensors............................. 639
17.5 MATLAB® Examples.....................................................................................640
17.6 Summary........................................................................................................640
Problems.................................................................................................................... 642
References................................................................................................................. 643

Preface
The second edition of Signal Processing for Intelligent Sensor Systems enhances many of the unique
features of the first edition with more answered problems, web access to a large collection of
MATLAB® scripts used throughout the book, and the addition of more audio engineering, transduc-
ers, and sensor networking technology. All of the key algorithms and development methodologies
have been kept from the first edition, and hopefully all of the typographical errors have been fixed.
The addition of a chapter on Digital Audio processing reflects a growing interest in digital surround
sound (5.1 audio) techniques for entertainment, home theaters, and virtual reality systems. Also,
new sections are added in the areas of sensor networking, use of meta-data architectures using
XML, and agent-based automated data mining and control. This latter information really ties large-
scale networks of intelligent sensors together as a network of thin file servers. Intelligent algorithms,
either resident in the sensor/file-server nodes, or run remotely across the network as intelligent
agents, can then provide an automated situational awareness. The many algorithms presented in
Signal Processing for Intelligent Sensor Systems can then be applied locally or network-based to
realize elegant solutions to very complex detection problems.
It was nearly 20 years ago that I was asked to consider writing a textbook on signal processing
for sensors. At the time I typically had over a dozen textbooks on my desk, each with just a few
small sections bookmarked for frequent reference. The genesis of this book was to bring together
all these key subjects into one text, summarize the salient information needed for design and appli-
cation, and organize the broad array of sensor signal processing subjects in a way to make it acces-
sible to engineers in school as well as those practicing in the field. The discussion herein is somewhat
informal and applied and in a tone of engineer-to-engineer, rather than professor-to-student. There
are many subtle nuggets of critical information revealed that should help most readers quickly
master the algorithms and adapt them to meet their requirements. This text is both a learning
resource and a field reference. In support of this, every data graph in the text has an accompanying
MATLAB m-script, and these m-scripts are kept simple, commented, and made available to
readers for download from the CRC Press website for the book (http://www.crcpress.com/product/
isbn/9781420043044). Taylor & Francis Group (CRC Press) acquired the rights to the first edition
and have been relentless in encouraging me to update it in this second edition. There were also a
surprising number of readers who found me online and encouraged me to make an updated second
edition. Given the high cost of textbooks and engineering education, we are excited to cut the price
significantly, make the book available electronically online, as well as for “rent” electronically which
should be extremely helpful to students on a tight budget. Each chapter has a modest list of solved
problems (answer book available from the publisher) and references for more information.
The second edition is organized into five parts, each of which could be used for a semester course
in signal processing, or to supplement a more focused course textbook. The first two parts,
“Fundamentals of Digital Signal Processing” and “Frequency Domain Processing,” are appropriate
for undergraduate courses in Electrical and/or Computer Engineering. Part III “Adaptive System
Identification and Filtering” can work for senior-level undergraduate or a graduate-level course, as
is Part IV on “Wave Number Sensor Systems” that applies the earlier techniques to beamforming,
image processing, and signal detection systems. If you look carefully at the chapter titles, you will
see these algorithm applications grouped differently from most texts. Rather than organizing these
subjects strictly by application, we organize them by the algorithm, which naturally spans several
applications. An example of this is the recursive least-squares algorithm, projection operator sub-
space decomposition, and Kalman filtering of state vectors, which all share the same basic recursive
update algorithm. Another example is in Chapter 13 where we borrow the two-dimensional FFT


usually reserved for image processing and compression and use it to explain available beam pattern
responses for various array shapes.
Part V of the book covers advanced signal processing applications such as noise cancellation,
transducers, features, pattern recognition, and modern sensor networking techniques using XML
messaging and automation. It covers the critical subjects of noise, sensors, signal features, pattern
matching, and automated logic association, and then creates generic data objects in XML so that all
this information can be found. The situation recognition logic emerges as a cloud application in the
network that automatically mines the sensor information organized in XML across the sensor nodes.
This keeps the sensors as generic websites and information servers and allows very agile develop-
ment of search engines to recognize situations, rather than just find documents. This is the current
trend for sensor system networks in homeland security, business, and environmental and demo-
graphic information systems. It is a nervous system for the planet, and to that end I hope this contri-
bution is useful.

MATLAB® is a registered trademark of The MathWorks, Inc. For product information, please
contact:

The MathWorks, Inc.


3 Apple Hill Drive
Natick, MA 01760-2098 USA
Tel: 508 647 7000
Fax: 508-647-7001
E-mail: info@mathworks.com
Web: www.mathworks.com

Acknowledgments
I am professionally indebted to all the research sponsors who supported my colleagues, students,
and me over the years on a broad range of sensor applications and network automation. It was
through these experiences and by teaching that I obtained the knowledge behind this textbook. The
Applied Research Laboratory at The Pennsylvania State University is one of the premier engineer-
ing laboratories in the world, and my colleagues there will likely never know how much I have
learnt from them and respect them. A special thanks goes to Mr. Arnim Littek, a great engineer in
the beautiful country of New Zealand, who thought enough of the first edition to send me a very
detailed list of typographical errors and suggestions for this edition. There were others, too, who
found me through the Internet, and I really loved the feedback which served as an inspiration to
write the second edition. Finally to my wife Nadine, and children Drew, Anya, Erik, and Ava, your
support means everything to me.

Author
David C. Swanson has over 30 years of experience with sensor electronics and signal processing
algorithms and 15 years of experience with networking sensors. He has been a professor in the
Graduate Program in Acoustics at The Pennsylvania State University since 1989 and has done
extensive research in the areas of advanced signal processing for acoustic and vibration sensors
including active noise and vibration control. In the late 1990s, his research shifted to rotating equip-
ment monitoring and failure prognostics, and since 1999 has again shifted into the areas of chemi-
cal, biological, and nuclear detection. This broad range of sensor signal processing applications
culminates in his book Signal Processing for Intelligent Sensor Systems, now in its second edition.
Dr. Swanson has written over 100 articles for conferences and symposia, dozens of journal articles
and patents, and three chapters in books other than his own. He has also worked in industry for
Hewlett-Packard and Textron Defense Systems, and has had many sponsored industrial research
projects. He is a fellow of the Acoustical Society of America, a board-certified member of the
Institute of Noise Control Engineering and a member of the IEEE. His current research is in the areas
of advanced biomimetic sensing for chemicals and explosives, ion chemistry signal processing, and
advanced materials for neutron detection. Dr. Swanson received a BEE (1981) from the University
of Delaware, Newark, and an MS (1984) and PhD (1986) from The Pennsylvania State University,
University Park, where he currently lives with his wife and four children. Dr. Swanson enjoys music,
football, and home brewing.

Part I
Fundamentals of Digital
Signal Processing
It was in the late 1970s that the author first learned about digital signal processing as a freshman
electrical engineering student. Digital signals were a new technology and generally only existed
inside computer programs and as hard disk files on cutting edge engineering projects. At the time,
and reflected in the texts of that time, much of the emphasis was on the mathematics of a sampled
signal, and how sampling made the signal different from the analog signal equivalent. Analog signal
processing is very much a domain of applied mathematics, and looking back over 40 years later, it
is quite remarkable how the equations we process easily today in a computer program were imple-
mented eloquently in analog electronic circuits. Today there is little controversy about the equiva-
lence of digital and analog signals except perhaps among audio extremists/purists. Our emphasis in
this part is on explaining how signals are sampled, compressed, and reconstructed, how to filter
signals, how to process signals creatively for images and audio, and how to process signal informa-
tion “states” for engineering applications. We present how to manage the nonlinearity of converting
a system defined mathematically in the analog s-plane to an equivalent system in the digital z-plane.
These nonlinearities become small in a given low-frequency range as one increases the digital
sample rate of the digital system, but numerical errors can become a problem if too much oversam-
pling is done. There are also options for warping the frequency scale between digital and analog
systems.
We present some interesting and useful applications of signal processing in the areas of audio
signal processing, image processing, and tracking filters. This provides for a first semester course to
cover the basics of digital signals and provide useful applications in audio and images in addition to
the concept of signal kinematic states that are used to estimate and control the dynamics of a signal
or system. Together these applications cover most of the signal processing people encounter in
everyday life. This should help make the material interesting and accessible to students new to the
field while avoiding too much theory and detailed mathematics. For example, we show frequency
response functions for digital filters in this part, but we do not go into spectral processing of signals
until Part II. This also allows some time for MATLAB® use to develop where students can get used
to making m-scripts and plots of simple functions. The application of fixed-gain tracking filters on
a rocket launch example will make detailed use of signal state estimation and prediction as well as
computer graphics in plotting multiple functions correctly. Also, using a digital photograph and
two-dimensional low- and high-pass filters provides an interesting introduction to image processing
using simple digital filters. Over 40 years ago, one could not imagine teaching signal processing
fundamentals while covering such a broad range of applications. However, any cell phone today has
all of these applications built in, such as sampling, filtering, and compression of the audio signal,
image capture and filtering, and even a global positioning system (GPS) for estimating location,
speed, and direction.

1 Sampled Data Systems
Figure 1.1 shows a basic general architecture that can be seen to depict most adaptive signal process-
ing systems. The number of inputs to the system can be very large, especially for image processing
sensor systems. Since an adaptive signal processing system is constructed using a computer, the
inputs generally fall into the categories of analog “sensor” inputs from the physical world and digital
inputs from other computers or human communication. The outputs also can be categorized into
digital information, such as identified patterns, and analog outputs that may drive actuators (active
electrical, mechanical, and/or acoustical sources) to instigate physical control over some part of the
outside world. In this chapter, we examine the basic constructs of signal input, processing using
digital filters, and output. While these very basic operations may seem rather simple compared to
the algorithms presented later in the text, careful consideration is needed to insure a high-fidelity
adaptive processing system. Figure 1.1 also shows how the adaptive processing can extract the
salient information from the signal and automatically arrange it into XML (eXtensible Markup
Language) databases, which allows broad use by network processes. Later in the book we will dis-
cuss this from the perspective of pattern recognition and web services for sensor networks. The next
chapter will focus on fundamental techniques for extracting information from the signals.
Consider a transducer system that produces a voltage in response to some electromagnetic or
mechanical wave. In the case of a microphone, the transducer sensitivity would have units of
volts/Pascal. For the case of a video camera pixel sensor, it would be volts per lumen/m², while
for an infrared imaging system the sensitivity might be given as volts per degree Kelvin. In any
case, the transducer voltage is conditioned by filtering and amplification in order to make the best
use of the analog-to-digital converter (ADC) system. While most adaptive signal processing sys-
tems use floating-point numbers for computation, the ADC converters generally produce fixed-
point (integer) digital samples. The integer samples from the ADC are further converted to
floating-point format by the signal processor chip before subsequent processing. This relieves the
algorithm developer from the problem of controlling numerical dynamic range to avoid underflow
or overflow errors in fixed-point processing unless less expensive fixed-point processors are
used. If the processed signals are to be output, then floating-point samples are simply reconverted
to ­integer and an analog voltage is produced using a digital-to-analog converter (DAC) system
and ­filtered and amplified.

1.1 A/D CONVERSION


Quite often, adaptive signal processing systems are used to dynamically calibrate and adjust input
and output gains of their respective ADC and DAC devices. This extremely useful technique requires
a clear understanding of how most data acquisition systems really work. Consider a generic succes-
sive approximation 8-bit ADC as seen in Figure 1.2. The operation of the ADC actually involves an
internal DAC that produces an analog voltage for the “current” decoded digital output. A DAC sim-
ply sums the appropriate voltages corresponding to the bits set to 1. If the analog input to the ADC
does not match the internal DAC output, the binary counter counts up or down to compensate. The
actual voltage from the transducer must be sampled and held constant (on a capacitor) while the
successive approximation completes. On completion, the least significant bit (LSB) of the digital
output number will randomly toggle between 0 and 1 as the internal D/A analog output voltage
converges about the analog input voltage. The “settling time” for this process increases with the
number of bits quantized in the digital output. The shorter the settling time, the faster the digital


FIGURE 1.1 A generic architecture for an adaptive signal processing system, including sensor inputs, control
outputs, and information formatting in XML databases for access through the Internet.

output sample rate may be. The toggling of the LSB as it approximates the analog input signal leads
to a low level of uniformly distributed (between 0 and 1) random noise in the digitized signal. This
is normal, expected, and not a problem as long as the sensor signal strengths are sufficiently large
that the quantization noise is small compared to signal levels. It is important to understand how
transducer and data acquisition systems work so that the adaptive signal processing algorithms can
exploit and control their operation.
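As a rough illustration of the conversion just described, the short MATLAB sketch below emulates a bit-by-bit successive-approximation search for the offset binary code nearest a held input voltage. It is only a sketch and not one of the book's m-scripts: the ±2.5 V range, the 8-bit word length, and the variable names are assumptions, and a hardware converter such as the counter-based one in Figure 1.2 reaches its code by counting up or down rather than by testing bits.

Vmin = -2.5; Vmax = 2.5;    % assumed full-scale range of the internal DAC
M    = 8;                   % number of bits (assumed)
Vin  = 1.234;               % sampled-and-held analog input voltage (illustrative)
code = 0;                   % offset-binary code built up bit by bit
for b = M-1:-1:0            % test bits from MSB down to LSB
    trial = code + 2^b;                              % tentatively set this bit
    Vdac  = Vmin + (Vmax - Vmin)*trial/(2^M - 1);    % internal DAC output for this code
    if Vdac <= Vin          % comparator: keep the bit if the DAC does not exceed the input
        code = trial;
    end
end
fprintf('input %.3f V quantizes to code %d (%s)\n', Vin, code, dec2bin(code, M));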
While there are many digital coding schemes, the binary number produced by the ADC is usu-
ally coded in either offset binary or in two’s complement formats [1]. Offset binary is used for either
all-positive or all-negative data such as absolute temperature. The internal DAC in Figure 1.2 is set
to produce a voltage Vmin that corresponds to the number 0, and Vmax for the biggest number or 255
(11111111), for the 8-bit ADC. The largest number produced by an M-bit ADC is therefore 2^M − 1.
The smallest number, or LSB, will actually be wrong about 50% of the time due to the approximation
process. Most data acquisition systems are built around either 8-, 12-, 16-, or 24-bit ADCs giving
maximum offset binary numbers of 255, 4095, 65535, and 16777215, respectively. If a “noise-less”
signal corresponds to a number of, say 1000, on a 12-bit A/D, the signal-to-noise ratio (SNR) of the
quantization is 1000:1, or approximately 60 dB.
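The 60 dB figure follows directly from 20 log10(1000), and the uniform character of the LSB toggling is easy to check numerically. The few MATLAB lines below are a hedged illustration only; the signal level and sample count are arbitrary choices, not values taken from the book.

level  = 1000;                       % signal amplitude expressed in LSB counts on a 12-bit ADC
snr_dB = 20*log10(level/1);          % quantization noise is on the order of 1 LSB
fprintf('approximate quantization SNR: %.1f dB\n', snr_dB);               % about 60 dB

q = rand(1, 1e5);                    % LSB toggling modeled as uniform noise between 0 and 1
fprintf('toggle noise: mean %.2f LSB, std %.2f LSB\n', mean(q), std(q));  % ~0.50 and ~0.29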
Signed numbers are generally encoded in two’s complement format where the most significant
bit (MSB) is 1 for negative numbers and 0 for positive numbers. This is the normal “signed integer”
format in programming languages such as “C.” If the MSB is 1 indicating a negative number, the

FIGURE 1.2 A generic successive approximation type 8-bit ADC showing the internal DAC converter to
compare the counter result to the input voltage.


magnitude of the negative binary number is found by complementing (changing 0 to 1 or 1 to 0) all of
the bits and adding 1. The reason for this apparently confusing coding scheme has to do with the
binary requirements of logic-based addition and subtraction circuitry in all of today’s computers
[2,3]. The logical simplicity of two’s complement arithmetic can be seen when considering that
the sum of 2 two’s complement numbers, N1 and N2, is done exactly the same as for offset binary
numbers, except any carryover from the MSB is simply ignored. Subtraction of N1 from N2 is done
simply by forming the two's complement of N1 (complementing the bits and adding 1), and then
adding the two numbers together ignoring any MSB carryover. An 8-, 12-, 16-, or 24-bit two's
complement ADC produces numbers over the ranges (+127, −128), (+2047, −2048), (+32767, −32768), and
(+8388607, −8388608), respectively.
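A quick way to see the "ignore the MSB carryover" rule is to note that it is ordinary arithmetic modulo 2^M. The MATLAB sketch below is an illustrative check, not the book's notation or code; the word length, operand values, and helper names (enc, dec, twos) are assumptions introduced here.

M    = 8;                                   % word length in bits (illustrative)
enc  = @(n) mod(n, 2^M);                    % signed integer -> M-bit two's complement code
dec  = @(c) c - 2^M*(c >= 2^(M-1));         % M-bit code -> signed integer
twos = @(c) mod((2^M - 1) - c + 1, 2^M);    % complement the bits and add 1

N1 = 2;  N2 = -3;                           % example operands
sum_code  = mod(enc(N1) + enc(N2), 2^M);        % add the codes, drop any MSB carry
diff_code = mod(enc(N2) + twos(enc(N1)), 2^M);  % N2 - N1 via the two's complement of N1
fprintf('N1+N2 = %d, N2-N1 = %d\n', dec(sum_code), dec(diff_code));   % prints -1 and -5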
Table 1.1 shows two’s complement binary for a 3-bit ±3.5 V A/D and shows the effect of sub­
tracting the number +2 (010 or +2.5 V) from each of the possible 3-bit numbers. Note that the
complement of +2 is (101) and adding 1 gives the “two’s complement” of (110), which is equal to
numerical −2 or −1.5 V in Table 1.1.
As can be seen in Table 1.1, the numbers and voltages with an asterisk are rather grossly in error.
This type of numerical error is the single most important reason to use floating-point rather than fixed-point
signal processors. It is true that fixed-point signal processor chips are very inexpensive, lower power,
and faster at fixed-point arithmetic. However, a great deal of attention must be paid to insuring that
no numerical errors of the type in Table 1.1 occur in a fixed-point processor. Fixed-point processing
severely limits the numerical dynamic range of the adaptive algorithms used. In particular, algo-
rithms involving many divisions, matrix operations, or transcendental functions such as logarithms
or trigonometric functions are generally not good candidates for fixed-point processing. All the
subtractions are off by at least 0.5 V, or half the LSB. A final point worth noting from Table 1.1 is
that while the analog voltages of the ADC are symmetric about 0 V, the coded binary numbers are
not, giving a small numerical offset from the two’s complement coding. In general, the design of
analog circuits with nearly zero offset voltage is a difficult enough task that one should always
assume some nonzero offset in all digitized sensor data.
The maximum M-bit two's complement positive number is 2^(M−1) − 1 and the minimum negative
number is −2^(M−1). This is because one of the bits is used to represent the sign of the number and one
number is reserved to correspond to zero. We want zero to be “digital zero” and we could just leave
it at that but it would make addition and subtraction logically more complicated. That is why two’s
complement format is used for signed integers. Even though the ADC and analog circuitry offset is
small, it is good practice in any signal processing system to numerically remove it. This is simply
done by recursively computing the mean of the A/D samples and subtracting this time-averaged
mean from each ADC sample.
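A minimal sketch of this recursive offset removal, using an exponentially weighted running mean, is given below. The smoothing constant, test tone, and offset value are illustrative assumptions rather than the book's own m-script.

fs = 1000; n = 0:fs-1;                  % one second of samples at 1 kHz (illustrative)
x  = cos(2*pi*75*n/fs) + 0.2;           % sensor samples with an assumed +0.2 offset
alpha = 0.01;                           % averaging constant: small alpha -> long time average
mu = 0; y = zeros(size(x));
for k = 1:length(x)
    mu   = (1 - alpha)*mu + alpha*x(k); % recursively updated time-averaged mean
    y(k) = x(k) - mu;                   % offset-corrected output sample
end
fprintf('estimated offset after 1 s: %.3f\n', mu);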

TABLE 1.1
Effect of Subtracting 2 from the Range of Numbers from a 3-bit Two's Complement A/D

Voltage N    Binary N    Binary N2    Voltage N2
 +3.5        011         001          +1.5
 +2.5        010         000          +0.5
 +1.5        001         111          −0.5
 +0.5        000         110          −1.5
 −0.5        111         101          −2.5
 −1.5        110         100          −3.5
 −2.5        101         011*         +1.5*
 −3.5        100         010*         +0.5*

1.2 SAMPLING THEORY


We now consider the effect of the periodic rate of ADC relative to the frequency of the waveform of
interest. There appear to be certain advantages to randomly spaced ADC conversions or “dithering”
[1], but this separate issue will not be addressed here. According to Fourier’s theorem, any waveform
can be represented as a weighted sum of complex exponentials of the form A_m e^{jω_m t}; −∞ < m < +∞.
A low-frequency waveform will have plenty of samples per wavelength and will be well represented
in the digital domain. However, as one considers higher-frequency components of the waveform
­relative to the sampling rate, the number of samples per wavelength declines. As will be seen below
for a real sinusoid, at least two equally spaced samples per wavelength are needed to adequately
represent the waveform in the digital domain. Consider the arbitrary waveform in equation

x(t) = A cos(ωt) = (A/2)e^(jωt) + (A/2)e^(−jωt).    (1.1)

We now sample x(t) every T seconds giving a sampling frequency of fs Hz (samples per second).
The digital waveform is denoted as x[n], where n refers to the nth sample in the digitized sequence
in equation

x[n] = x(nT) = A cos(ωnT) = A cos(2πf n / fs).    (1.2)

Equation 1.2 shows a “digital frequency” of Ω = 2πf/fs, which has the same period as an analog
waveform of frequency f so long as f is less than fs/2. Clearly, for the real sampled cosine waveform,
a digital frequency of 1.1π is basically indistinguishable from 0.9π, except that the period of the 1.1π
waveform will actually be longer than that of the analog waveform at frequency f! Figures 1.3 and 1.4
graphically illustrate this phenomenon, well known as aliasing. Figure 1.3 shows a 100-Hz analog waveform sampled
1000 times/s. Figure 1.4 shows a 950-Hz analog signal with the same 1000 Hz sample rate. Since the
periods of the sampled and analog signals match only when f ≤ fs/2, the frequency components of the
analog waveform are said to be unaliased, and adequately represented in the digital domain [4].
Restricting real analog frequencies to be less than fs/2 has become widely known as the Nyquist
sampling criterion. This restriction is generally implemented by a low-pass filter (LPF) with −3 dB
cutoff frequency in the range of 0.4 fs to ensure a wide margin of attenuation for frequencies above
fs/2. However, as will be discussed in the rest of this chapter, the “antialiasing” filters can have
environment-dependent frequency responses which adaptive signal processing systems can
intelligently compensate for.
It will be very useful for us to explore the mathematics of aliasing to fully understand the phe-
nomenon, and to take advantage of its properties in high-frequency bandlimited ADC systems.
Consider a complex exponential representation of the digital waveform in Equation 1.3 showing
both positive and negative frequencies

x[n] = A cos(Ωn) = (A/2)e^(+jΩn) + (A/2)e^(−jΩn).    (1.3)

While Equation 1.3 compares well with 1.1, there is a big difference due to the digital sampling.
Assuming that no antialiasing filters are used, the digital frequency of Ω = 2πf/fs (from the analog
waveform sampled every T seconds) could represent a multiplicity of analog frequencies

A cos(Ωn) = A cos((Ω ± 2πm)n);  m = 0, 1, 2, ….    (1.4)



FIGURE 1.3 A 75-Hz sinusoid (solid line) is sampled at 1 kHz (1 ms per sample) as seen by each asterisk (*)
showing that the digital signal accurately represents the frequency and amplitude of the analog signal.

For the real signal in Equation 1.3, both the positive and negative frequencies have images at
±2πm; m = 0, 1, 2, … . Therefore, if the analog frequency f is outside the Nyquist bandwidth of
0 − fs/2 Hz, one of the images of ±f will appear within the Nyquist bandwidth, but at the wrong
(aliased) frequency. Since we want the digital waveform to be a linear approximation to the original
analog waveform, the frequencies of the two must be equal. One must always suppress frequencies


FIGURE 1.4 A 950-Hz sinusoid sampled at 1 kHz clearly shows the aliasing effect as the digital samples (*)
appear as a 50-Hz signal.

outside the Nyquist bandwidth to be sure that no aliasing occurs. In practice, it is not possible to
make an analog signal filter that perfectly passes signals in the Nyquist band while completely
suppressing all frequencies outside this range. One should expect a transition zone near the
Nyquist band upper frequency where unaliased frequencies are attenuated and some aliased fre-
quency “images” are detectable. Most spectral analysis equipment will implement an antialias
filter with a −3 dB cutoff frequency of about 1/3 the sampling frequency. The frequency range
from 1/3 fs to 1/2 fs is usually not displayed as part of the observed spectrum so the user does not
notice the antialias filter’s transition region and the filter very effectively suppresses frequencies
above fs/2.
Figure 1.5 shows a graphical representation of the digital frequencies and images for a sample
rate of 1000 Hz and a range of analog frequencies including those of 100 and 950 Hz in Figures 1.3
and 1.4, respectively. When the analog frequency exceeds the Nyquist rate of fs/2 (π on the Ω axis),
one of the negative frequency images (dotted lines) appears in the Nyquist band with the wrong
(aliased) frequency, violating assumptions of system linearity.
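A small MATLAB sketch of this folding for real samples (a direct calculation rather than a toolbox routine) reproduces the apparent frequencies in Figure 1.5:

% Apparent (aliased) frequency of a real-sampled tone folded into 0 ... fs/2.
fs = 1000;                          % sample rate, Hz
f  = [100 300 495 600 950];         % analog tone frequencies from Figure 1.5, Hz
fa = abs(f - fs*round(f/fs));       % nearest image folded into the baseband
disp([f' fa'])                      % 600 Hz appears at 400 Hz and 950 Hz appears at 50 Hz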

[Figure 1.5 panels: 100, 300, and 495 Hz fall inside the shaded digital signal bandwidth unaliased, while
600 Hz appears aliased at 400 Hz and 950 Hz appears aliased at 50 Hz; each panel spans the Ω axis from −3π to +3π.]
FIGURE 1.5 A graphical view of 100, 300, 495, 600, and 950 Hz analog signals sampled at 1 kHz in the
frequency domain showing the aliased “images” of the positive and negative frequency components where the
shaded box represents the digital signal bandwidth.


1.3 COMPLEX BANDPASS SAMPLING


Bandpass sampling systems are extremely useful to adaptive signal processing systems which use
high-frequency sensor data but with a very narrow bandwidth of interest. Some excellent examples
of these systems are active sonar, radar, and ultrasonic systems for medical imaging or nondestruc-
tive testing and evaluation of materials. These systems in general require highly directional transmit
and receive transducers, which physically means that the wavelengths used must be much smaller than
the size of the transducers. The transmitted and received “beams” (comparable to a flashlight beam)
can then be used to scan a volume for echoes from relatively big objects (relative to wavelength) with
different impedances than the medium. The travel time from transmission to the received echo is
related to the object’s range by the wave speed.
Wave propagation speeds for active radar and sonar vary from a speedy 300 m/μs for electro-
magnetic waves, to 1500 m/s for sound waves in water, to a relatively slow 345 m/s for sound waves
in air at room temperature. Also of interest is the relative motion of the object along the beam. If the
object is approaching, the received echo will be shifted higher in frequency due to Doppler, and
lower in frequency if the object is moving away. The use of Doppler, time of arrival, and bearing of
arrival provide the basic target tracking inputs to active radar and sonar systems. Doppler radar has
also become a standard meteorological tool for observing wind patterns. Doppler ultrasound has
found important uses in monitoring fluid flow both in industrial processes and in the human cardio-
vascular system.
Given the sensor system’s need for high-frequency operation and relatively narrow signal band-
width, a digital data acquisition system can exploit the phenomenon of aliasing to drastically reduce
the Nyquist rate from twice the highest frequency of interest down to the bandwidth of interest. For
example, suppose a Doppler ultrasound system operates at 1 MHz to measure fluid flow of approxi-
mately ±0.15 m/s. If the speed of sound is approximately 1500 m/s, one might expect a Doppler
shift of only ±100 Hz. Therefore, if the received ultrasound is bandpass filtered from 999.9 kHz to
1.0001 MHz, it should be possible to extract the information using a sample rate on the order of
1 kHz rather than the over 2 MHz required to sample the full frequency range. From an information
point of view, bandpass sampling makes a lot of sense because only 0.01% of the 1.0001 MHz fre-
quency range is actually required.
We can show a straightforward example using real aliased samples for the above case of a 1-MHz
frequency with Doppler bandwidth of ±100 Hz. First, the analog signal is bandpass filtered attenuat-
ing all frequencies outside the 999.9 kHz to 1.0001 MHz frequency range of interest. By sampling
at a rate commensurate with the signal bandwidth rather than absolute frequency, one of the aliased
images will appear in the baseband between 0 Hz and the Nyquist rate. As seen in Figure 1.5, as the
analog frequency increases to the right, the negative images all move to the left. Therefore, one of
the positive images of the analog frequency is sought in the baseband. Figure 1.6 depicts the aliased
bands in terms of the sample rate fs.
Hence, if the 1 MHz, ±100 Hz signal is bandpass filtered from 999.9 kHz to 1.0001 MHz, we can
sample at a rate of 1000.75 Hz putting the analog signal in the middle of the 999th positive image
band. Therefore, one would expect to find a 1.0000 MHz signal aliased at 250.1875 Hz, 1.0001 MHz
aliased at 350.1875 Hz, and 999.9 kHz at 150.1875 Hz in the digital domain. The extra 150 Hz at the


FIGURE 1.6 Analog frequencies bandpass filtered in the mth band will naturally appear in the baseband
from 0 to fs/2 Hz, just shifted in frequency.

top and bottom of the digital baseband allow for a transition zone of the antialiasing filters. Practical
use of this technique requires precise bandpass filtering and selection of the sample rate. However,
Figure 1.6 should also raise concerns about the effects of high-frequency analog noise “leaking” into
digital signal processing systems at the point of ADC. The problem of aliased electronic noise is par-
ticularly acute in systems where many high-speed digital signal processors operate in close proximity
to high-impedance analog circuits and the ADC subsystem has a large number of ­resolution bits.
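A minimal MATLAB sketch of this undersampling idea (the analog bandpass filter is assumed ideal and is therefore omitted, and the tone and record length are illustrative) shows the 1 MHz carrier appearing low in the digital baseband:

% Bandpass (undersampled) acquisition of a narrowband signal near 1 MHz.
fs = 1000.75;                 % sample rate from the example above, Hz
f0 = 1.0e6;                   % carrier frequency, Hz
N  = 2^14; n = 0:N-1;         % number of samples and sample index
x  = cos(2*pi*f0*n/fs);       % ideal samples of the 1 MHz tone (bandpass filter assumed)
X  = abs(fft(x));             % spectrum magnitude of the sampled data
f  = (0:N-1)*fs/N;            % frequency axis of the digital baseband
[~, k] = max(X(1:N/2));       % strongest baseband component
disp(f(k))                    % the 1 MHz tone shows up a few hundred Hz into the baseband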
For the case of a very narrow bandwidth at a high frequency it is easy to see the numerical
savings, and it is relatively easy to pick a sample rate where only a little bandwidth is left unused.
However, for wider analog signal bandwidths a more general approach is needed where the band-
width of interest is not required to lie within a multiple of the digital baseband. To accomplish this
we must ensure that the negative images of the sampled data do not mix with the positive images for
some arbitrary bandwidth of interest. The best way to do this is to simply get rid of the negative
frequency and its images entirely by using complex (real plus imaginary) samples.
How can one obtain complex samples from the real output of the ADC? Mathematically, one can
describe a “cosine” waveform as the real part of a complex exponential. However, in the real world
where we live (at least most of us some of the time), the sinusoidal waveform is generally observed
and measured as a real quantity. Some exceptions to this are simultaneous measurement of spatially
orthogonal (e.g., horizontal and vertical polarized) wave components such as polarization of elec-
tromagnetic waves, surface Rayleigh waves, or orbital vibrations of rotating equipment, all of which
can directly generate complex digital samples. To generate a complex sample from a single real
ADC convertor, we must tolerate a signal-phase delay which varies with frequency. However, since
this phase response of the complex sampling process is known, one can easily remove the phase
effect in the frequency domain.
The usual approach is to gather the real part as before and to subtract in the imaginary part using
a T/4 delayed sample

x^R[n] = A cos(2πf nT + φ),
j x^I[n] = −A cos(2πf [nT + T/4] + φ).    (1.5)

The parameter φ in Equation 1.5 is just an arbitrary phase angle for generality. For the frequency
f = fs, Equation 1.5 reduces to

x^R[n] = A cos(2πn + φ),
j x^I[n] = −A cos(2πn + φ + π/2) = A sin(2πn + φ).    (1.6)

so that for this particular frequency, the phase of the imaginary part is actually correct. We now
have a usable bandwidth fs, rather than fs/2 as with real samples. However, each complex sample is
actually two real samples, keeping the total information rate (number of samples per second) con-
stant! As the frequency decreases toward 0, a phase error bias will increase toward a phase lag of
π/2. However, since we wish to apply complex sampling to high-frequency bandpass systems, the
phase bias can be changing very rapidly with frequency, but it will be fixed for the given sample
rate. The complex samples in terms of the digital frequency Ω and analog frequency f are

x^R[n] = A cos(Ωn + φ),
j x^I[n] = −A cos(Ωn + φ + πf/(2fs)),    (1.7)


giving a sampling phase bias (in the imaginary part only) of

Δθ = −(π/2)(1 − f/fs).    (1.8)

For adaptive signal processing systems that require phase information, usually two or more chan-
nels have their relative phases measured. Since the phase bias caused by the complex sampling is
identical for all channels, the phase bias can usually be ignored if relative channel phase is needed.
The scheme for complex sampling presented here is sometimes referred to as “quadrature sam-
pling” or even “Hilbert transform sampling” due to the mathematical relationship between the real
and imaginary parts of the sampled signal in the frequency domain.
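A brief MATLAB sketch of this quadrature sampling scheme (the tone frequency, amplitude, and phase are arbitrary illustrative values) forms the complex samples of Equation 1.5 and evaluates the phase bias of Equation 1.8:

% Complex (quadrature) samples built from a real waveform and its T/4 delayed samples.
fs = 1000; T = 1/fs;          % complex-pair sample rate, Hz
f  = 900;  phi = pi/6;        % illustrative tone frequency and phase
n  = 0:99;
xR =  cos(2*pi*f*(n*T)       + phi);   % real part, sampled at nT (Equation 1.5)
xI = -cos(2*pi*f*(n*T + T/4) + phi);   % imaginary part from the T/4 delayed sample
x  = xR + 1j*xI;              % complex samples, carrying the phase bias of Equation 1.8
bias = -(pi/2)*(1 - f/fs);    % predicted phase bias, about -0.16 rad for these values
disp(bias)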
Figure 1.7 shows how any arbitrary bandwidth can be complex sampled at a rate equal to the
bandwidth in Hertz, and then digitally “demodulated” into the Nyquist baseband. If the signal band-
width of interest extends from f1 to f 2 Hz, an analog bandpass filter is used to band limit the signal
and complex samples are formed as seen in Figure 1.7 at a sample rate of fs = f 2 − f1 samples per
second. To move the complex data with frequency f1 down to 0 Hz and the data at f 2 down to fs Hz,
all one needs to do is multiply the complex samples by e^(−jΩ1 n), where Ω1 is simply 2πf1/fs. Therefore,
the complex samples in Equation 1.5 are demodulated as seen in equation

x^R[n] = A cos(Ωn + φ) e^(−jΩ1 n),
j x^I[n] = −A cos(Ω[n + 1/4] + φ) e^(−jΩ1 n).    (1.9)

Analog signal reconstruction can be done by remodulating the real and imaginary samples by f1
in the analog domain. Two oscillators are needed, one for the cos(2πf1t) and the other for the
sin(2πf1t). A real analog waveform can be reconstructed from the analog multiplication of the DAC
real sample times the cosine minus the DAC imaginary sample times the sinusoid. As with the
complex sample construction, some phase bias will occur. However, the technique of modulation
and demodulation is well established in amplitude-modulated (AM) radio. In fact, one could have
just as easily demodulated (i.e., via an analog heterodyne circuit) a high-frequency signal, band-
limited it to a low-pass frequency range of half the sample rate, and ADC it as real samples.
Reconstruction would simply involve DAC, low-pass filtering, and remodulation by a cosine


FIGURE 1.7 An arbitrary high-frequency signal may be bandpass filtered and complex sampled and demod-
ulated to a meaningful baseband for digital processing.

waveform. In either case, the net signal information rate (number of total samples per second) is
constant for the same signal bandwidth. It is merely a matter of algorithm convenience and desired
analog circuitry complexity that determines how the system developer chooses to handle high-frequency
band-limited signals.
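As a short numerical sketch of the demodulation step in Equation 1.9 (assuming ideal complex samples for clarity, i.e., after any phase-bias correction, and using hypothetical band edges), the band from f1 to f2 is shifted down to the baseband by multiplying by e^(−jΩ1 n):

% Shift a complex-sampled band [f1, f2] down to the 0 ... fs digital baseband.
f1 = 999.9e3; f2 = 1000.1e3;      % hypothetical band edges, Hz
fs = f2 - f1;                     % complex sample rate equals the bandwidth, Hz
f  = 999.95e3;                    % a tone inside the band
n  = 0:499;
x  = exp(1j*2*pi*(f/fs)*n);       % ideal complex samples of the tone
W1 = 2*pi*f1/fs;                  % digital frequency of the lower band edge
y  = x.*exp(-1j*W1*n);            % demodulated samples: the tone now lies at f - f1 = 50 Hz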

1.4 DELTA–SIGMA ANALOG CONVERSION


A relatively new type of ADC emerged during the 1990s called a “Delta–Sigma” ADC, or DSC
(denoted here for brevity).* To the user, a DSC offers the profound advantage of not only eliminating
the antialiasing filters but also having the cutoff frequency of the antialiasing filter track a program-
mable sampling rate. However, the DSC also carries a latency delay for each conversion and has a
different effective number of bits (ENOB), both of which vary with the selected sample rate by
increasing at lower sample rates. There is also a minimum sample rate, usually around 8 kHz, below
which the DSC can still operate, but an external antialiasing filter is then required. In this section we explain
the subtleties of the DSC in a general way, but the reader should note that the many different manu-
facturers of DSCs have slightly different algorithms and proprietary circuitry that may differ from
our simplified presentation [5,6].
The first thing that is helpful in understanding the DSC is that one can oversample and integrate to increase
the ENOB and low-pass filter the signal at the same time. Suppose we have 8-bit signed (range +127 to −128) samples of
a 50-Hz sine wave sampled at 10 kHz. We can add every two successive samples together to give
9-bit samples (range +254 to −256) at a rate of 5 kHz. Repeat the process of halving the sample rate
for each bit added to the samples a few more times and you have 12-bit samples at a rate of 625 Hz,
and so on. The process of adding successive samples together is essentially a LPF. Frequencies in
the waveform near the Nyquist rate (near two-samples per wavelength) are nearly cancelled by the
successive sample adding process, while low frequencies are amplified. Assuming the LSB noise is
uniform and the analog signal electronic noise is zero mean Gaussian, the noise adds incoherently,
so that the zero mean stays zero while the signal is added. It can be seen that simple oversampling
and integrating samples gives a 6-dB improvement to the available SNR of the ADC for each halving of
the output sample rate of the process. This can be seen by using a simple equation for the maximum
possible SNR for an N-bit ADC based on the quantization noise being spread evenly over the signal
Nyquist bandwidth defined by fs/2, where fs is the sample rate.

SNR = (6.02 N + 1.76) dB. (1.10)

The "6.02" is 20 times the base-10 logarithm of 2, and 1.76 is 10 times the base-10 logarithm of
1.5, which is apparently added in to account for quantization noise in the LSB giving the correct bit
setting 50% of the time. Hence, for a 16-bit sample, one might use Equation 1.10 to say that the SNR
is over 97 dB, which is not correct. N should refer to the number of precision bits, which is 15 for
a 16-bit sample because the LSB is wrong 50% of the time. Therefore, for a single-ended 16-bit
sample the maximum SNR is approximately 92.06 dB. For signed integer (two's complement) samples,
where the SNR is measured for sinusoids in white noise, the maximum SNR is only 86.04 dB,
because one bit is used to represent the sign. The ENOB is simply the SNR divided by 6.02.
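A small MATLAB sketch of these relationships (the SNR values follow Equation 1.10, and the decimation loop is only an illustration of the oversample-and-add idea using a simulated low-frequency tone in LSB-scale noise):

% Ideal N-bit SNR from Equation 1.10 and the oversample-and-integrate idea.
N    = 14;                           % precision bits for a signed 16-bit sample
SNR  = 6.02*N + 1.76;                % about 86 dB
ENOB = SNR/6.02;                     % effective number of bits

fs = 10000; n = 0:fs-1;              % one second of 8-bit style samples
x  = round(100*sin(2*pi*50*n/fs) + randn(size(n)));   % 50 Hz tone plus LSB-scale noise
for stage = 1:4                      % each pass halves the rate and widens the sample word
    x = x(1:2:end-1) + x(2:2:end);   % add successive sample pairs
end
% x now has 1/16 the original sample rate and needs four extra bits of range; the
% low-frequency signal adds coherently while the zero-mean noise adds incoherently.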
The DSC actually gets a theoretical 9 dB SNR improvement for each halving of the sample rate due to
something called quantization noise shaping inherent in the delta modulator circuit, and by increas-
ing the number of bits in the binary sample by 1 with each addition in the digital filtering. The
integrator in the delta modulator and the feedback differencing operation have the effect of shifting
the quantization noise to higher frequencies while enhancing the signal more at lower frequencies.
Because of this, it makes sense to add a bit with each addition in the low-pass decimation filter,

* Also called a “Sigma–Delta” ADC.


giving three additional bits with each halving of the sample rate. Hence for a 6.4 MHz 1-bit sample
bitstream (12.8 MHz modulation clock), one gets 12-bit samples at a rate of 400 kHz. However, the
low-frequency signal enhancement means that the signal bandwidth is not flat, but rather rolls off
significantly near the Nyquist rate. Hence, most DSC designs also employ a cascade of digital filters
to correct this rolloff in the passband and enhance the filtering in the stopband. The additions in
these filters add 2 bits per halving of the sample rate and provide an undistorted waveform (linear
phase response) with a little added delay. The 12-bit samples at 400 kHz emerge delayed but with
16-bits at a 100 kHz sample rate and neatly filtered at a Nyquist cutoff frequency of 50  kHz. The
DSC has a built in low-pass antialiasing filter, usually a simple R-C filter with a cutoff around
100 kHz, which attenuates by about 36 dB at 6.4 MHz, six octaves higher at the 1-bit delta modula-
tor input. Any aliased signal images are therefore 72 dB attenuated back down at 100 kHz, and
more as you go lower in frequency. At 25 kHz, aliased signals are 84 dB attenuated, so for audio-
band recording with 16-bit samples there is effectively no aliasing problem.
At the heart of a DSC is a device called a “delta modulator” that can be seen depicted in Figure 1.8.
The delta modulator produces a 1-bit digital signal called a bitstream at a very high sample rate
where one can convert a frame of N-bits to a log2N-bit word. The analog voltage level at the end of
the frame will be a filtered sum of the bits within the frame. Hence, if the analog input in Figure 1.8
was very close to Vmax, the bitstream would be nearly all ones; if it were close to 0, the bitstream
would be nearly all zeros; and if it were near Vmax/2, about 50% of the bits within the frame would
be 1’s. The delta-modulated bitstream can be found today on “super audio DVD discs,” which typi-
cally have 24-bit samples at sample rates of 96 kHz, and sometimes even 192 kHz, much higher
resolution than the 16-bit 44.1 kHz samples of the standard compact disc.*
The DSC has some very interesting frequency response properties. The action of the integrator
and latch gives a transfer function which essentially filters out low-frequency quantization noise,
improving the theoretical SNR about 3 dB each time the bandwidth is halved. The quantization
noise attenuation allows one to keep additional bits from the filtering and summing, which yields a
theoretical 9 dB improvement overall each time the frame rate is halved. This makes generating
large sample words at lower frequencies very accurate. However, the noise-shaping effect also
makes the upper end of the signal frequency response roll off well below the Nyquist rate. DSC
manufacturers correct for this using a digital filter to restore the high-frequency response, but this
also brings a time delay to the DSC output sample. This will be discussed in more detail in Chapter 3
in the sections on digital filtering with finite impulse response (FIR) filters. For some devices this
delay can be on the order of 32 samples, and hence the designer must be careful with this detail for

[Figure 1.8 block diagram: the input signal (0 to Vmax) and a 1-bit DAC feedback enter a difference amplifier,
followed by an integrator, a comparator (1-bit ADC), and a sample-and-hold output latch driven by the modulation clock.]

FIGURE 1.8 A delta modulator is used to convert an analog voltage to a 1-bit “bitstream” signal where the
amplitude of the signal is proportional to the number of 0s and 1s in a given section of the bitstream.

* The author was very skeptical of this technology until he actually heard it. The oversampling and bigger bit-depth really
does make a difference since most movie and music recordings are compilations of sounds with a wide range of loudness
dynamics.

applications that require real-time signal inputs and outputs, such as control loop applications. The
maximum theoretical SNR of a DSC can be estimated by considering the noise shaping of the delta
modulator and the oversampling ratio (OSR).

SNR = (6.02 N + 1.76) + 10 log10 (OSR ), (1.11)

where OSR is the ratio of the 1-bit sample rate fs divided by the N-bit decimated sample rate fsN. This
SNR improvement is more of a marketing nuance than a useful engineering parameter because one
only has a finite dynamic range available based on the number of bits in the output samples. For our
6.4 MHz sampled bitstream processed down to 16-bit samples at 100 kHz, the theoretical SNR
from Equation 1.11 is 116.1 dB using N = 16 and 104.1 dB using N = 14 (1 bit for sign and ignoring
the LSB). What does all this marketing rhetoric mean? It means that the DSC does not introduce
quantization noise, and so the effective SNR is about 90 dB for 16-bit signed samples. However, by
using more elaborate filters some DSC will produce more useful bits because of this higher theoreti-
cal limit. It is common to see 24-bit samples from a DSC which have SNRs in the range of 120 dB
for audio bandwidths. The 24-bit sample word format conveniently maps to 3 bytes per sample, even
though the actual SNR is not using all 24 bits. An SNR of 120 dB is a ratio of about 1 million to 1.
Since most signals are recorded with a maximum of ±10 V or less, and the analog electronic noise
floor at room temperature is of the order of microvolts for audio bandwidths (unless one employs
cooling to reduce thermal noise in electronic devices), an ENOB of around 20 can be seen as
­adequate to exceed the dynamic range of most sensors and electronics. As such, using a 24-bit
DSC with effectively 20 bits of real SNR, one no longer needs to be concerned with setting the
­voltage gain to match the sensor signal to the ADC! For most applications where the signal is
­simply recorded and used, the DSC filter delay is not important either. As a result of the accuracy
and convenience of the DSC, it is now the most common ADC in use.
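A one-line MATLAB check of the theoretical limit quoted above for the 6.4 MHz bitstream decimated to 100 kHz (values taken directly from Equation 1.11):

% Theoretical DSC SNR from Equation 1.11 for a 6.4 MHz bitstream with 100 kHz output.
N = 16; OSR = 6.4e6/100e3;                 % output word size and oversampling ratio
SNR  = 6.02*N + 1.76 + 10*log10(OSR);      % about 116 dB
ENOB = SNR/6.02;                           % roughly 19 effective bits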

1.5 MATLAB® EXAMPLES


Throughout the second edition of this text, we present in each chapter a summary of useful
MATLAB® scripts for implementing the plots of the chapter figures and for further study. The book
website at www.taylorandfrancis.com and/or www.crcpress.com will have downloadable m-scripts
for the user to enjoy further study.* The main m-script used to make Figures 1.3 and 1.4 is called
“demoa2d.m” and contains some very simple statements to generate simple plots in MATLAB. This
is a good script to study if the user has no experience with MATLAB and needs to generate a simple
plot from an equation. Table 1.2 contains the complete m-script.
The m-script in Table 1.2 can be seen as an overlay of two digital sinusoid plots, one in black to
simulate the analog signal and the other using black asterisk “*” symbols to depict the digital
­signal. The “analog” is simply sampled at a much higher rate and drawn with smooth lines. The
statement “Ta = 0:T_analog:Tstop;” creates a row vector with elements [0, T_analog, 2*T_analog,
3*T_­analog, ... , Tstop]. The semicolon “;” at the end of each line in the script stops the MATLAB
execution from dumping the result on the command line as the script executes. The vector “Ta”
represents the sample times in seconds for each of the “analog waveform” samples to plot. We then
define a row vector “ya” the same size as Ta, but filled with zeros. The parameter “w0” depicts the
radian ­frequency where “pi” is by default set to 3.1415927... in MATLAB. The statement
“ya = cos(w0.*Ta);” is both really convenient and very confusing to new MATLAB users. When
you use “.*” for a ­multiply, MATLAB multiplies each element as you would in a “dot product.” In
our case the scalar “w0” multiplies each element of “Ta.” The built-in math functions (and there are
many of them) generally extend from scalar arguments to vectors, matrices, and even multidimen-
sional matrices. This is very powerful and saves the user from the drudgery of writing many nested

* Provided the user pledges to only use the author’s m-scripts for good, not evil.


TABLE 1.2
m-Script Example for Generating Simple Graphs of Sampled Sinusoids
% MATLAB m-file for Figures 1.3 and 1.4 A2D-Demo
fs = 1000; % sample rate
Ts = 1/fs; % sample time interval
fs_analog = 10000; % “our” display sample rate (analog signal points)
npts_analog = 200; % number of analog display points
T_analog = 1/fs_analog; % “our” display sample interval
f0 = 950; % use 75 Hz for Fig 1.3 and 950 Hz for Fig 1.4
Tstop = 0.015; % show 15 msecs of data
Ta = 0:T_analog:Tstop; % analog “samples”
Td = 0:Ts:Tstop; % digital samples
ya = zeros(size(Ta)); % zero out data vectors same length as time
yd = zeros(size(Td));
w0 = 2*pi*f0;
ya = cos(w0.*Ta); % note scalar by vector multiply (.*) gives vector in
% the cosine argument and a vector in the output ya
yd = cos(w0.*Td);
figure(1); % initialize a new figure window for plotting
plot(Ta,ya,'k'); % plot in black
hold on; % keep the current plot and add another layer
plot(Td,yd,'k*'); % plot in black "*"
hold off; % return figure to normal state
xlabel('Seconds');

for-loops. It also executes substantially faster than a for-loop and leaves a script that is very easily
read. The “.*” dot product extends to vectors and matrices. Conversely, one has to consider matrix
algebra rules when multiplying and dividing matrices and vectors. If “x” and “y” are both row
­vectors, the statement “x*y” will generate an error. Using the transpose operator on “y” will do a
Hermitian transpose (flip a row vector into a column and replace the elements with complex conju-
gates) so that "x*y′" will yield a scalar result. If you do not want the complex conjugate (it does not
matter for real signals), the correct syntax is "x*y.′". The "dot-transpose" means just transpose the
vector without the conjugate operation. Once one masters this "vector concept," the m-scripts
generating the plots of all the signal processing in this book will become very
straightforward. The "plot" statement has to have the x and y components defined as identical-sized
vectors to execute properly. The most common difficulty the author has seen is these vectors not
matching (rows–columns need to be flipped or vectors of different lengths) in functions like “plot”.
The statement “hold on” allows one to overlay plots, which can also be done by adding multiple
x − y vector pairs to the plot argument. On the MATLAB command line one can enter “help plot” to
get more details as well as through the help window. The reason MATLAB is part of this book is
that it has emerged as one of the most effective ways to quickly visualize and test signal processing
algorithms. The m-scripts are deliberately kept very simple for brevity and to expose the algorithm
coding details, but many users will embed the algorithms into very sophisticated MATLAB-based
graphical user interfaces (GUIs) or port the algorithms to other languages such as C, C++, C#,
Visual Basic, and Web-based script languages such as Java script or Flash script.
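As a short illustration of the vector conventions discussed above (the numbers are arbitrary), the following lines can be typed at the MATLAB command line:

% Element-wise versus matrix operations on row vectors.
x = [1 2 3];            % row vector
y = [4 5 6] + 1j;       % complex row vector
p  = x.*y;              % element-by-element product, a 1-by-3 row vector
s1 = x*y';              % inner product with the Hermitian transpose (conjugates y)
s2 = x*y.';             % inner product with the plain "dot-transpose" (no conjugate)
% x*y would produce an error because the inner matrix dimensions do not agree.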

1.6 SUMMARY, PROBLEMS, AND REFERENCES


This section has reviewed the basic process of analog waveform digitization and sampling. The
binary numbers from an ADC can be coded into offset-binary or into two’s complement formats for

use with unsigned or signed integer arithmetic, respectively. Floating-point digital signal processors
subsequently convert the integers from the ADC to their internal floating-point format for process-
ing, and then back to the appropriate integer format for DAC conversion. Even though floating-point
arithmetic has a huge numerical dynamic range, the limited dynamic range of the ADC and DAC
convertors must always be considered. Adaptive signal processing systems can, and should, adap-
tively adjust input and output gains while maintaining floating-point data calibration. This is much
less of an issue when using ADC and DAC with over 20 bits of precision. Adaptive signal calibration
is straightforwardly based on known transducer sensitivities, signal conditioning gains, and the
voltage sensitivity and number of bits in the ADC and DAC convertors. The LSB is considered to
be a random noise source both numerically for the ADC convertor and electronically for the DAC
convertor. Given a periodic rate for sampling analog data and reconstruction of analog data from
digital samples, analog filters must be applied before ADC and after DAC conversion to avoid
unwanted signal aliasing. The DSC has a built-in antialiasing filter, and one can alter the clock of
the device over a fairly wide range and still have high-fidelity samples down to a sample rate of
approximately 8 kHz. Below that, an external antialias filter is needed. For real digital data, the
sample rate must be at least twice the highest frequency which passes through the analog “antialias-
ing” filters. For complex samples, the complex-pair sample rate equals the bandwidth of interest,
which may be demodulated to baseband if the bandwidth of interest was in a high-frequency range.
The frequency response of DAC conversion as well as sophisticated techniques for analog signal
reconstruction will be discussed in Section 4.6 later in the text.

PROBLEMS
1. An accelerometer with sensitivity 10 mV/G (1.0 G is 9.801 m/s2) is subjected to a ±25 G
acceleration. The electrical output of the accelerometer is amplified by 11.5 dB before
A/D conversion with a 14-bit two’s complement encoder with an input sensitivity of
0.305  mV/bit.
a. What is the numerical range of the digitized data?
b. If the amplifier can be programmed in 1.5 dB steps, what would be the amplification
for maximum SNR? What is the SNR?
2. An 8-bit two’s complement A/D system is to have no detectable signal aliasing at a sample
rate of 100,000 samples per second. An eighth-order (−48 dB/octave) programmable cut-
off frequency LPF is available.
a. What is a possible cutoff frequency fc?
b. For a 16-bit signed A/D what would the cutoff frequency be?
c. If you could tolerate some aliasing between fc and the Nyquist rate, what is the high-
est fc possible for the 16-bit system in part b?
3. An acceptable resolution for a medical ultrasonic image is declared to be 1 mm. Assume
sound travels at 1500 m/s in the human body.
a. What is the absolute minimum A/D sample rate for a receiver if it is to detect echoes
from scatterers as close as 1 mm apart?
b. If the velocity of blood flow is to be measured in the range of ±1 m/s (we do not
need resolution here) using a 5 MHz ultrasonic sinusoidal burst, what is the minimum
required bandwidth and sample rate for an A/D convertor? (Hint: a Doppler-shifted
frequency fd can be determined by fd = f(1 + v/c), −c < v < +c; where f is the transmit-
ted frequency, c is the wave speed, and v is the velocity of the scatterer toward the
receiver.)
4. A microphone has a voltage sensitivity of 12 mV/Pa (1 Pascal = 1 Nt/m2). If a sinusoi-
dal sound of about 94 dB (approximately 1 Pa rms in the atmosphere) is to be digitally
recorded, how much gain would be needed to insure a “clean” recording for a 10 V 16-bit
signed A/D system?
5. A standard analog television in the United States has 525 vertical lines scanned in even
and odd frames 30 times/s.
a. If the vertical field of view covers a distance of 1.0 m, what is the size of the smallest
horizontal line thickness which would appear unaliased?


b. A high-definition television provides 1080 vertical lines of resolution. What is the


spatial resolution?
6. A new car television commercial is being produced where the wheels of the car have 12
stylish holes spaced every 30° around the rim. If the wheels are 0.7 m in diameter, how
fast can the car move before the wheels start appearing to be rotating backward?
7. Suppose a very low-frequency high SNR is being sampled at a high rate by a limited
dynamic range 8-bit signed A/D convertor. If one simply adds consecutive pairs of sam-
ples together one has 9-bit data at half the sample rate. Adding consecutive pairs of the
9-bit samples together gives 10-bit data at 1/4 the 8-bit sample rate, and so on.
a. If one continued on to get 16-bit data from the original 8-bit data sampled at 10,000 Hz,
what would the data rate be for the 16-bit data?
b. Suppose we had a very fast device that samples data using only 1-bit, 0 for negative
and 1 for positive. How fast would the 1-bit A/D have to sample to produce 16-bit data
at the standard digital audio rate of 44,100 samples per second?
8. A DSC with noise shaping and FIR filters to correct for frequency response roll-off adds
about two bits of precision for each halving of the internal sample rate.
a. If the delta modulator produces a bitstream at 6.4 MHz, what is the actual ENOB for
this device at 50 kHz sample rate?
b. What is the theoretical limit for the number of DSC bits available at 50 kHz?
9. An electronic sensor system produces a full-scale voltage of ±1 V and has a flat spectral
noise density specified at 50 nV/√Hz. If the bandwidth of the data acquisition is 20 kHz,
what is the SNR and the required number of bits for the ADC?

REFERENCES
1. N. S. Jayant and P. Noll, Digital Coding of Waveforms. Englewood Cliffs, NJ: Prentice-Hall, 1984.
2. K. Hwang, Computer Arithmetic. New York, NY: Wiley, 1979, p. 71.
3. A. Gill, Machine and Assembly Language Programming of the PDP-11. Englewood Cliffs, NJ: Prentice-
Hall, 1978.
4. A. V. Oppenheim and R. W. Schafer, Discrete-Time Signal Processing. Englewood Cliffs, NJ: Prentice-
Hall, 1973.
5. P. M. Aziz et al., An overview of sigma-delta converters, IEEE Sig Proc Mag, Jan 1996, pp. 61–83.
6. S. Park, Principles of Sigma–Delta Modulation for Analog-to-Digital Converters, Motorola Application
Notes. Schaumburg, IL: Motorola, Inc., 1999, /D, Rev 1.
2 z-Transform
Given a complete mathematical expression for a discrete time-domain signal, why transform it to
another domain? The main reason for time–frequency transforms is that many mathematical reduc-
tions are much simpler in one domain than the other [1]. The z-transform in the digital domain is the
counterpart to the Laplace transform in the analog domain. The z-transform is an extremely useful
tool for analyzing the stability of digital sequences, designing stable digital filters, and relating digi-
tal signal processing operations to the equivalent mathematics in the analog domain. The Laplace
transform provides a systematic method for solving analog systems described by differential equa-
tions. Both the z-transform and the Laplace transform map their respective finite-difference or dif-
ferential systems of equations in the time or spatial domain to much simpler algebraic systems in the
frequency or wavenumber domains, respectively. However, the relationship between the z-domain
and the s-domain of the Laplace transform is not linear, meaning that the digital filter designer will
have to decide whether to match the system poles, zeros, or impulse response. As will be seen later
in this chapter, one can warp the frequency axis to control where and how well the digital system
matches the analog system. We begin by assuming that time t increases as life progresses into the
future, and a general signal of the form e^(st), s = σ + jω, is stable for σ ≤ 0. A plot of our general signal
is shown in Figure 2.1.
The quantity s = σ + jω is a complex frequency where the real part σ represents the damping of
the signal (σ = −10.0 Nepers/s and ω = 50π rad/s, or 25 Hz, in Figure 2.1). All signals, both digital
and analog, can be described in terms of sums of the general waveform shown in Figure 2.1. This
includes transient characteristics governed by σ. For σ = 0, one has a steady-state sinusoid. For
σ < 0 as shown in Figure 2.1, one has an exponentially decaying sinusoid. If σ > 0, the exponentially
increasing sinusoid is seen as unstable, since eventually it will become infinite in magnitude. Signals
which change levels over time can be mathematically described using piecewise sums of stable and
unstable complex exponentials for various periods of time as needed.
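A minimal MATLAB sketch of this general signal (using the damping and frequency values quoted above for Figure 2.1, with an arbitrary fine time step) is:

% The "general" signal e^((sigma + j*omega)*t) of Figure 2.1, plotted as its real part.
sigma = -10;                           % damping, Nepers/s
w     = 2*pi*25;                       % 25 Hz expressed in rad/s
t     = 0:0.0005:0.5;                  % half a second of time
x     = real(exp((sigma + 1j*w)*t));   % exponentially decaying sinusoid
plot(t, x, 'k'); xlabel('Seconds');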
The same process of generalized signal modeling is applied to the signal responses of systems
such as mechanical or electrical filters, wave propagation “systems,” and digital signal processing
algorithms. We define a “linear system” as an operator which changes the amplitude and/or phase
(time delay) of an input signal to give an output signal with the same frequencies as the input, inde-
pendent of the input signal’s amplitude, phase, or frequency content. Linear systems can be disper-
sive, where some frequencies travel through them faster than others, as long as the same system
input–output response occurs independent of the input signal. Since there are an infinite number of
input signal types, we focus on one very special input signal type called an impulse. An impulse
waveform contains the same energy level at all frequencies including 0 Hz (direct current or con-
stant voltage), and is exactly reproducible. For a digital waveform, a digital impulse simply has only
one sample nonzero. The response of linear systems to the standard impulse input is called the
­system impulse response. The impulse response is simply the system’s response to a Dirac delta
function (or the unity amplitude digital domain equivalent), when the system has zero initial condi-
tions. The impulse response for a linear system is unique and a great deal of useful information
about the system can be extracted from its analog or digital domain transform [2].

2.1 COMPARISON OF LAPLACE AND z-TRANSFORMS


Equation 2.1 describes a general integral transform where y(t) is transformed to Y(s) using the
kernel K(s,t).



FIGURE 2.1 A "general" stable signal of the form e^((σ+jω)t) where σ ≤ 0 indicates a stable waveform for
positive time.

Y(s) = ∫_{−∞}^{+∞} K(s, t) y(t) dt    (2.1)

The Laplace transform makes use of the kernel K(s, t) = e^(−st), which is also in the form of our
“general” signal as shown in Figure 2.1. We present the Laplace transform L { } as a pair of integral
transforms in Equation 2.2 relating the time “t” and frequency “s” domains.
Y(s) = L{y(t)} = ∫_0^{+∞} y(t) e^(−st) dt
                                                        (2.2)
y(t) = L^(−1){Y(s)} = (1/(2πj)) ∫_{σ−j∞}^{σ+j∞} Y(s) e^(+st) ds

The corresponding z-transform pair for discrete signals is seen in Equation 2.3, where t is replaced
with nT and denoted as [n], and z = e^(sT).
Y[z] = Z{y[n]} = Σ_{n=0}^{+∞} y[n] z^(−n)
                                                        (2.3)
y[n] = Z^(−1){Y[z]} = (1/(2πj)) ∫_Γ Y[z] z^(n−1) dz

The closed contour Γ in Equation 2.3 must enclose all the poles of the function Y[z] z^(n−1). Both
Y(s) and Y[z] are, in the most general terms, ratios of polynomials where the zeros of the numera-
tor are also zeros of the system. Since the system response tends to diverge if excited near a
zero of the denominator polynomial, the zeros of the denominator are called the system poles.
The transforms in Equations 2.2 and 2.3 are applied to signals, but if these “signals” represent
system impulse or frequency responses, our subsequent analysis will refer to them as “systems,”
or “system responses.”


There are two key points which must be discussed regarding the Laplace and z-transforms. First,
we present what is called a “one-sided” or “causal” transform. This is seen in the time integral of
Equation 2.2 starting at t = 0, and the sum in Equation 2.3 starting at n = 0. Physically, this means
that the current system response is a result of the current and past inputs, and specifically not future
inputs. Conversely, a current system input can have no effect on previous system outputs. Only time
moves forward in the real physical world (at least as we know it in the twentieth century), and so a
distinction must be made in our mathematical models to represent this fact. Our positive time move-
ment mathematical convention has a critical role to play in designating stable and unstable signals
and systems mathematically. Second, in the Laplace transform’s s-plane (s = σ + jω), only signals
and system responses with σ ≤ 0 are mathematically stable in their causal response (time moving
forward). This means est is either of constant amplitude (σ = 0), or decaying amplitude (σ < 0) as
time increases. Therefore, system responses represented by values of s on the left-hand plane (jω is
the vertical Cartesian axis) are stable causal response systems. As will be seen below, the nonlinear
mapping from the s-plane (analog signals and systems) to z-plane (digital signals and systems) maps
the stable causal left-half s-plane to the region inside a unity radius circle on the z-plane, called the
unit circle.
The comparison of the Laplace and z-transforms is most useful when considering the mapping
between the complex s-plane and the complex z-plane, where z = e^(sT), T being the time interval in
seconds between digital samples of the analog signal. The structure of this mapping depends on the
digital sample rate and whether real or complex samples are used. An understanding of this ­mapping
will allow one to easily design digital systems which model (or control) real physical systems in the
analog domain. Also, adaptive system modeling in the digital domain of real physical systems can
be quantitatively interpreted and related to other information processing in the adaptive system.
However, if we have an analytical expression for a signal or system in the frequency domain, it may
or may not be realizable as a stable causal signal or system response in the time domain (digital or
analog). Again, this is due to the obliviousness of time to positive or negative direction. If we are
mostly concerned with the magnitude response, we can generally adjust the phase (by adding time
delay) to realize any desired response as a stable causal system. Table 2.1 gives a partial listing of
some useful Laplace transforms and the corresponding z-transforms assuming regularly sampled
data every T seconds (fs = 1/T samples/s).
One of the subtler distinctions between the Laplace transforms and the corresponding z-transforms
in Table 2.1 is how some of the z-transform magnitudes scale with the sample interval T. It can be
seen that the result of the scaling is that the sampled impulse responses may not match the inverse
z-transform if a simple direct s-to-z mapping is used. Since adaptive digital signal processing can be
used to measure and model physical system responses, we must be diligent to eliminate digital
­system responses where the amplitude depends on the sample rate. However, in Section 2.3, it will
be shown that careful consideration of the scaling for each system resonance or pole will yield a
very close match between the digital system and its analog counterpart. At this point in our presen-
tation of the z-transform, we compare the critical mathematical properties for linear time-invariant
systems in both the analog Laplace transform and the digital z-transform.
The Laplace transform and the z-transform have many mathematical similarities, the most
­important of which are the properties of linearity and shift invariance. Linear shift-invariant system
modeling is essential to adaptive signal processing since most optimizations are based on a quadratic
squared output error minimization. But even more significantly, linear time-invariant physical
systems allow a wide range of linear algebra to apply for the straightforward analysis of such systems.
Most of the world around us is linear and time invariant, provided the responses we model are rela-
tively small in amplitude and quick in time. For example, the vibration response of a beam slowly
corroding due to weather and rust is linear and time invariant for small vibration amplitudes over a
period of, say, days or weeks. But, over a period of years the beam’s corrosion changes the vibration
response, thereby making it time varying in the frequency domain. If the forces on the beam approach
its yield strength, the stress–strain relationship is no longer linear and single-frequency vibration

TABLE 2.1
Some Useful Signal Transforms

Time Domain                  |  s Domain                                        |  z Domain
1 for t ≥ 0, 0 for t < 0     |  1/s                                             |  z/(z − 1)
e^(s0 t)                     |  1/(s − s0)                                      |  z/(z − e^(s0 T))
t e^(s0 t)                   |  1/(s − s0)^2                                    |  T z e^(s0 T)/(z − e^(s0 T))^2
e^(−at) sin(ω0 t)            |  ω0/(s^2 + 2as + a^2 + ω0^2)                     |  z e^(−aT) sin(ω0 T)/(z^2 − 2z e^(−aT) cos(ω0 T) + e^(−2aT))
e^(−at) cos(ω0 t − θ)        |  [cos θ (s + a) + ω0 sin θ]/[(s + a)^2 + ω0^2]   |  [z cos θ (z − α) + z β sin θ]/[(z − α)^2 + β^2], where α = e^(−aT) cos(ω0 T) and β = e^(−aT) sin(ω0 T)
1/(ab) + e^(−at)/[a(a − b)] + e^(−bt)/[b(b − a)]  |  1/[s(s + a)(s + b)]        |  (Az + B)z/[(z − e^(−aT))(z − e^(−bT))(z − 1)], where A = [b(1 − e^(−aT)) − a(1 − e^(−bT))]/[ab(b − a)] and B = [a e^(−aT)(1 − e^(−bT)) − b e^(−bT)(1 − e^(−aT))]/[ab(b − a)]

inputs into the beam will yield nonlinear multiple frequency outputs. Nonlinear signals are rich in
physical information but require very complicated models. From a signal processing point of view, it
is extremely valuable to respect the physics of the world around us, which is only linear and time
invariant within specific physical constraints, and exploit linearity and time invariance wherever
­possible. Nonlinear signal processing is still something much to be developed in the future. Following
is a summary of comparison of Laplace and z-transforms.
Linearity: Both the Laplace and z-transforms are linear operators. The inverse Laplace and
z-transforms are also linear.

L{a f(t) + b g(t)} = a F(s) + b G(s)
                                                        (2.4)
Z{a f[k] + b g[k]} = a F[z] + b G[z]

Delay Shift Invariance: Assuming one-sided signals f(t) = f [k] = 0 for t, k < 0 (no initial
conditions),

L{f(t − τ)} = e^(−sτ) F(s)
                                                        (2.5)
Z{f[k − N]} = z^(−N) F[z]


Convolution: Linear shift-invariant systems have the following property: a multiplication of two
signals in one domain is equivalent to a convolution in the other domain.

L{f(t) * g(t)} = L{ ∫_0^t f(τ) g(t − τ) dτ } = F(s) G(s)    (2.6)

A more detailed derivation of Equation 2.6 will be presented in the next section. In the digital
domain, the convolution integral becomes a simple summation.

Z{f[k] * g[k]} = Z{ Σ_{k=0}^{m} f[k] g[m − k] } = F[z] G[z]    (2.7)

If f [k] is the impulse response of a system and g[k] is an input signal to the system, the system
output response to the input excitation g[k] is found in the time domain by the convolution of g[k]
and f [k]. However, the system must be both linear and shift invariant (a shift of k samples in the
input gives a shift of k samples in the output), for the convolution property to apply. Equation 2.7 is
fundamental to digital systems theory and will be discussed in great detail later.
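A quick MATLAB illustration of this property (using arbitrary short sequences) shows that convolving two sequences gives the same result as multiplying their z-transforms:

% Convolution in time corresponds to multiplication in the z-domain (Equation 2.7).
f = [1 2 3];                       % f[k], k = 0, 1, 2
g = [4 5 6 7];                     % g[k], k = 0, ..., 3
y = conv(f, g);                    % y[m] = sum over k of f[k]*g[m-k]
z = 0.9*exp(1j*0.3);               % an arbitrary test point on the z-plane
Fz = polyval(fliplr(f), 1/z);      % F[z] = sum of f[k]*z^(-k)
Gz = polyval(fliplr(g), 1/z);
Yz = polyval(fliplr(y), 1/z);
disp(abs(Yz - Fz*Gz))              % essentially zero, so Y[z] = F[z]G[z]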
Initial Value: The initial value of a one-sided (causal) impulse response is found by taking the
limit as s or z approaches infinity.

lim_{t→0} f(t) = lim_{s→∞} s F(s)    (2.8)

The initial value of the digital impulse response can be found in an analogous manner.

f[0] = lim_{z→∞} F[z]    (2.9)

Final Value: The final value of a causal impulse response can be used as an indication of the
stability of a system as well as to determine any static offsets.

lim_{t→∞} f(t) = lim_{s→0} s F(s)    (2.10)

Equation 2.10 holds so long as sF(s) is analytic in the right-half of the s-plane (no poles on the
jω-axis and for σ ≥ 0). F(s) is allowed to have one pole at the origin and still be stable at t = ∞. The
final value in the digital domain is

lim_{k→∞} f[k] = lim_{z→1} (1 − z^(−1)) F[z]    (2.11)

(1 − z^(−1))F[z] must also be analytic in the region on and outside the unit circle on the z-plane. The
region |z| ≥ 1, on and outside the unit circle on the z-plane, corresponds to the region σ ≥ 0, on the
jω-axis and on the right-hand s-plane. The s-plane pole that F(s) is allowed to have at s = 0 in Equation
2.10 maps to a z-plane pole for F[z] at z = 1, since z = e^(sT). The allowance of these poles is related to the

restriction of causality for one-sided transforms. The mapping between the s and z planes will be
discussed in some more detail in the following text.
Frequency Translation/Scaling: Multiplication of the analog time-domain signal by an exponen-
tial leads directly to a frequency shift.

L{e^(−at) f(t)} = F(s + a)    (2.12)

In the digital domain, multiplication of the sequence f[k] by a geometric sequence α^k results in
scaling the frequency range.

Z{α^k f[k]} = Σ_{k=0}^{∞} f[k] (z/α)^(−k) = F[z/α]    (2.13)

Differentiation: The Laplace transform of the derivative of the function f(t) is found using
­integration by parts.

L{∂f/∂t} = s F(s) − f(0)    (2.14)

Carrying out integration by parts as in Equation 2.14 for higher-order derivatives yields the
­general formula

L{∂^N f/∂t^N} = s^N F(s) − Σ_{k=0}^{N−1} s^(N−1−k) f^(k)(0)    (2.15)

where f^(k)(0) is the kth derivative of f(t) at t = 0. The initial conditions for f(t) are necessary to its
Laplace transform just as they are necessary for the complete solution of an ordinary differential
equation. For the digital case, we must first employ a formula for carrying forward initial conditions
in the z-transform of a time-advanced signal.

Z{x[n + N]} = z^N X[z] − Σ_{k=0}^{N−1} z^(N−k) x[k]    (2.16)

For a causal sequence, Equation 2.16 can be easily proved from the definition of the z-transform.
Using an approximation based on the definition of the derivative, the first derivative of a digital
sequence is

x^(1)[n + 1] = (1/T)(x[n + 1] − x[n])    (2.17)

where T is the sample increment. Applying the time-advance formula in Equation 2.16 gives the
z-transform of the first derivative.

Z{x^(1)[n + 1]} = (1/T){(z − 1)X[z] − z x[0]}    (2.18)


Delaying the sequence by one sample shows the z-transform of the first derivative of x[n] at
sample n.

Z{x^(1)[n]} = (1/T){(1 − z^(−1))X[z] − x[0]}    (2.19)

The second derivative can be seen to be

Z{x^(2)[n]} = (1/T^2){(1 − z^(−1))^2 X[z] − [(1 − 2z^(−1))x[0] + z^(−1)x[1]]}    (2.20)

The pattern of how the initial samples enter into the derivatives can be more easily seen in the
third derivative of x[n], where the polynomial coefficients weighting the initial samples can be seen
as fragments of the binomial polynomial created by the triple zero at z = 1.

Z{x^(3)[n]} = (1/T^3){(1 − z^(−1))^3 X[z] − (1 − 3z^(−1) + 3z^(−2))x[0] − (z^(−1) − 3z^(−2))x[1] − z^(−2)x[2]}    (2.21)

Putting aside the initial conditions on the digital domain derivative, it is straightforward to show
that the z-transform of the Nth derivative of x[n] simply has N zeros at z = 1, corresponding to the
analogous N zeros at s = 0 in the analog domain.

Z{x^(N)[n]} = (1/T^N){(1 − z^(−1))^N X[z] − initial conditions}    (2.22)
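As a brief MATLAB check of this structure (ignoring the initial conditions and using an arbitrary smooth test signal), the Nth derivative can be approximated by applying the (1 − z^(−1))/T filter N times:

% N-th backward-difference approximation to the derivative: ((1 - z^-1)/T)^N.
fs = 1000; T = 1/fs;
t  = (0:999)*T;
x  = sin(2*pi*5*t);              % smooth test signal
N  = 2;                          % order of the derivative
d  = x;
for k = 1:N
    d = filter([1 -1], 1, d)/T;  % one application of (1 - z^-1)/T
end
% After a brief start-up transient, d approximates -(2*pi*5)^2*sin(2*pi*5*t).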

Mapping between the s and z Planes: As with the aliased data in Section 1.1, the effect of sam-
pling can be seen as a mapping between the series of analog frequency bands and the digital base-
band defined by the sample rate and type (real or complex). To make sampling useful, one must
band limit the analog frequency response to a bandwidth equal to the sample rate for complex
samples, or low-pass filter to half the sample rate (called the Nyquist rate) for real samples. Consider the effect
of replacing the analog t in z^n = e^(st) with nT, where n is the sample number and T = 1/fs is the
sampling interval in seconds.

z^n = e^((σ + jω)nT) = e^((σ/fs + j 2πf/fs) n)    (2.23)

As in Equation 2.23, the analog frequency repeats every multiple of fs (a full fs Hz bandwidth is
available for complex samples). For real samples (represented by a phase-shifted sine or cosine
rather than a complex exponential), a fs Hz-wide frequency band will be centered about 0 Hz giving
an effective signal bandwidth of only fs/2 Hz for positive frequency. The real part of the complex
spectrum is symmetric for positive and negative frequencies while the imaginary part is skew sym-
metric (negative frequency amplitude is opposite in sign from positive frequency amplitude). This
follows directly from the imaginary part of e^(jθ) being j sin θ. The amplitudes of the real and imagi-
nary parts of the signal spectrum are determined by the phase shift of the sine or cosine. For real
time-domain signals sampled at fs samples/s, the effective bandwidth of the digital signal is from 0
to fs/2 Hz. For σ ≤ 0, a strip within ±ωs/2 for the left-half of the complex s-plane maps into a region
inside a unit radius circle on the complex z-plane. For complex sampled systems, each multiple of
fs Hz on the s-plane corresponds to a complete trip around the unit circle on the z-plane. In other
words, the left-half of the s-plane is subdivided into an infinite number of parallel strips, each
Exploring the Variety of Random
Documents with Different Content