
SIGNALS AND SYSTEMS IN BIOMEDICAL ENGINEERING

Signal Processing and Physiological Systems Modeling



TOPICS IN BIOMEDICAL ENGINEERING INTERNATIONAL BOOK SERIES

Series Editor: Evangelia Micheli-Tzanakou, Rutgers University

Piscataway, New Jersey

Signals and Systems in Biomedical Engineering:

Signal Processing and Physiological Systems Modeling Suresh R. Devasahayam

A Continuation Order Plan is available for this series. A continuation order will bring delivery of each new volume immediately upon publication. Volumes are billed only upon actual shipment. For further information please contact the publisher.

SIGNALS AND SYSTEMS IN BIOMEDICAL ENGINEERING

Signal Processing and Physiological Systems Modeling

Suresh R. Devasahayam

School of Biomedical Engineering, Indian Institute of Technology, Bombay, India

KLUWER ACADEMIC/PLENUM PUBLISHERS

New York, Boston, Dordrecht, London, Moscow

Library of Congress Cataloging-in-Publication Data Devasahayam, Suresh R.

Signals and systems in biomedical engineering: signal processing and physiological systems modeling / Suresh R. Devasahayam.

p. cm.

Includes bibliographical references and index. ISBN 0-306-46391-1

1. Biomedical engineering. 2. Signal processing. 3. Physiology--Mathematical models.

I. Title.

R856 .D476 2000 610'.28--dc21

00-036370

The software and programs for teaching signal processing will be found on the CD-ROM mounted inside the back cover.

Copyright © 2000 by Suresh R. Devasahayam.

The publisher makes no warranty of any kind, expressed or implied, with regard to the software reproduced on the CD-ROM or the accompanying documentation. The publisher shall not be liable in any event for incidental or consequential damages or loss in connection with, or arising out of, the furnishing, performance, or use of the software.

A PC and Microsoft Windows 95® or later operating system are required to use the CD-ROM.

The programs for teaching signal processing are in an installable form on the CD-ROM disk. Install by running the program "SETUP.EXE".

For further information, contact the author via e-mail at surdev@cc.iitb.ernet.in

ISBN 0-306-46391-1

©2000 Kluwer Academic/Plenum Publishers, New York 233 Spring Street, New York, N.Y. 10013

http://www.wkap.nl/

10 9 8 7 6 5 4 3 2

A C.I.P. record for this book is available from the Library of Congress.

All rights reserved

No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, microfilming, recording, or otherwise, without written permission from the Publisher

Printed in the United States of America

Series Preface

In the past few years Biomedical Engineering has received a great deal of attention as one of the emerging technologies in the last decade and for years to come, as witnessed by the many books, conferences, and their proceedings. Media attention, due to the applications-oriented advances in Biomedical Engineering, has also increased. Much of the excitement comes from the fact that technology is rapidly changing and new technological adventures become available and feasible every day. For many years the physical sciences contributed to medicine in the form of expertise in radiology and slow but steady contributions to other more diverse fields, such as computers in surgery and diagnosis, neurology, cardiology, vision and visual prosthesis, audition and hearing aids, artificial limbs, biomechanics, and biomaterials. The list goes on.

It is therefore hard for a person unfamiliar with a subject to separate the substance from the hype. Many of the applications of Biomedical Engineering are rather complex and difficult to understand even by those who are not novices in the field. Many of the hardware and software tools available are either too simplistic to be useful or too complicated to be understood and applied. In addition, the lack of a common language between engineers and computer scientists and their counterparts in the medical profession sometimes becomes a barrier to progress.

This series of books is initiated with the above in mind: it addresses the biomedical engineer, the students of biomedical engineering, the computer scientist, and any other technically oriented person who wants to learn about Biomedical Engineering, as well as anyone who wants to solve problems dealing with health, medicine, and engineering. It addresses the physician and the health professional as well, introducing them to engineering jargon. Medical practitioners face problems that need solutions, yet technological


advances and their complexity leave most of them in awe. The engineer, physicist, or computer scientist on the other hand does not have the medical knowledge needed to solve the problems at hand all of the time. It is through books, like the ones in this series, that the gap can be bridged, a common understanding of the problems can be achieved, and solutions come to light.

This series aims to attract books on topics that have not been addressed, written by experts in the field who never before had an incentive to write, books that will extend and complement the ones in existence.

We hope to bring this synergy to fruition through the books in the series.

Evangelia Micheli-Tzanakou
Series Editor

Preface

Biomedical Signal Processing involves the use of signal processing techniques for the interpretation of physiological measurements and the understanding of physiological systems. Although the analytical techniques of signal processing are obtained largely from developments in telecommunications and applied mathematics, the nature of physiological data requires substantial biological understanding for its interpretation. It is a well recognized idea that every instance of the use of signal processing techniques is predicated on explicit or implicit models. In the case of physiological data the interpretation of the data contains certain assumptions about the underlying physiological processes which we may call the model of the system. Whether one uses a model that corresponds to physical and chemical entities (a biophysical model) or simply a model defining an input-output relationship (a black-box model), the assumed model determines the nature of noise reduction or feature extraction that is performed.

The lecture notes that have formed this book were written for courses that I taught at IIT-Bombay on Signal Processing and Physiological Systems Modeling to graduate students in Biomedical Engineering. These courses have evolved over the years and at present they are taught over 1½ semesters in two courses called Signals and Systems, and Physiological Systems Modeling, which may be regarded as a single 1½-semester course. The class comprises students with engineering degrees as well as students with medical degrees. Therefore, it was something of a challenge to structure the course so that all the students would find it sufficiently engaging. The aim of the course is to introduce the students to physiological signal analysis with explicit understanding of the underlying conceptual models. The measurable goal of the course is to see that students can read a typical paper published in


the standard journals of biomedical engineering. Therefore, the last couple of weeks of this course consists of discussing two or three recent papers on physiological modeling and signal analysis. Although a number of books are available on signal processing, including several on Biomedical Signal Processing, I found that no single book or even a small set of books could satisfactorily serve as a text for this course. My solution was to use several books as reference texts supplemented with lecture notes and journal papers. I gave the students detailed programming exercises so that their understanding of the material would be firmly established. In the mid-1990s I found that my dilemma of a suitable text for Biomedical Signal Analysis was shared by many others, evidenced by the publication of several books on "Biomedical Signal Processing". However, these books treated the subject as a specialization of Signal Processing and Electronics Communications. In my opinion this subtracted from the principal biomedical engineering enterprise of being an interdisciplinary program which recognized the importance of model-based data interpretation. Therefore, my lecture notes grew with advances in the subject.

Beginning with a broad introduction to signals and systems the book proceeds to contemporary techniques in digital signal processing. While maintaining continuity of mathematical concepts, the emphasis is on practical implementation and applications. The book only presumes knowledge of college mathematics and is suitable for a beginner in the subject; however, a student with a previous course in analog and digital signal processing will find that less than a third of the book contains a bare treatment of classical signal processing.

Not surprisingly a lot of the examples and models that I use in teaching are informed by my own interests in skeletal muscle physiology and electrophysiology. Therefore, some of the modeling of muscles and myoelectric activity arose from data collected in my experimental work. In this book most of the diagrams of myoelectric activity are real data recorded in my lab; on the other hand, although I have recorded muscle force from isolated muscles, single motor units and also on human volunteers, I chose to use only simulated force data in this book. I have also expanded on models of other physiological systems mentioned in the literature to introduce the student to the rich variety of experimental and analytical techniques used in the study of physiological systems.

Suresh R. Devasahayam

Acknowledgments

I would first like to thank my proximate critics, the students who took my courses in physiological systems modeling and signal analysis; their questions and comments were invaluable in the enrichment of my own understanding and in the development of this book. I am indebted to my colleagues at IIT-Bombay for lively exchange of ideas in teaching and research. In particular, I would like to thank Rohit Manchanda with whom I have taught several courses over the last decade; his critical suggestions have been invaluable in the enhancement of my teaching methods including a lot of the material in this book. I would also like to thank Vikram Gadre for many discussions on signal processing. The evolution of my course material owes no small debt to the flexibility that I was allowed in designing courses and teaching them, for which I must thank Dr. Subir Kar and Dr. Rakesh Lal. I also wish to acknowledge the support received from the Ministry of Human Resources Development and the All India Council for Technical Education for projects that contributed to the development of teaching material. My specific interest in experimental physiology and signal analysis was molded by my graduate advisor Dr. Sandercock at the University of Illinois at Chicago who introduced me to physiological research and encouraged my ever widening interest in the subject.

Moving to the more personal, I wish to mention Andrew Krylow not only because he shared with me his youthful and exuberant love for scientific inquiry during my graduate student days, but also because his untimely death in 1998 has indeed bereaved me.

My interest in engineering and science I owe to both my parents who taught me by example the pleasure of working with my hands and using tools; my father also imparted to me his fascination with all gadgets and the


idea that they can be dismantled, understood and often fixed. I remember with fondness my grandfather whose desire to inculcate a love of reading rivaled my parents' and who knew most of my mathematics books more intimately than I did.

It is a pleasure to acknowledge the contribution of all these people to this work. I must, of course, add that the responsibility for any errors in this book is entirely mine.

There are also many others who have contributed not only directly to the contents of this book but also to the broader circumstances of writing it. In this respect my acknowledgment here is quite incomplete.

SRD

Contents

1. Introduction to Systems Analysis and Numerical Methods 1

1.1. THE SYSTEMS APPROACH TO PHYSIOLOGICAL ANALYSIS 1

1.1.1. Physiological Signals and Systems 2

1.1.2. Linear Systems Modeling in Physiology 3

1.2. NUMERICAL METHODS FOR DATA ANALYSIS AND SIMULATION 3

1.2.1. Numerical Integration and Differentiation 4

1.2.2. Graphical Display 7

1.3. EXAMPLES OF PHYSIOLOGICAL MODELS 8

EXERCISES 9

2. Continuous Time Signals and Systems 11

2.1. PHYSIOLOGICAL MEASUREMENT AND ANALYSIS 11

2.2. TIME SIGNALS 12

2.2.1. Examples of Physiological Signals 12

2.2.2. Operations on Time Signals 13

2.3. INPUT-OUTPUT SYSTEMS 16

2.3.1. Properties of Systems 16

2.3.2. Linear Time-Invariant Systems 18

2.3.3. Impulse Response of a Linear Time-Invariant System 20

2.3.4. The Convolution Integral 21

2.3.5. Properties of Convolution 22

EXERCISES 30

3. Fourier Analysis for Continuous Time Processes 31

3.1. DECOMPOSITION OF PERIODIC SIGNALS 31


3.1.1. Synthesis of an ECG Signal Using Pure Sinusoids 32

3.2. FOURIER CONVERSIONS 36

3.2.1. Periodic Continuous Time Signals: Fourier Series 36

3.2.2. Aperiodic Continuous Time Signals: Fourier Transform 42

3.2.3. Properties of the Fourier Domain 44

3.3. SYSTEM TRANSFER FUNCTION 46

3.3.1. The Laplace Transform 47

3.3.2. Properties of the Laplace Transform 47

3.3.3. Frequency Response of LTI Systems 52

3.3.4. Pole-Zero Plots and Bode Plots 53

3.3.5. Frequency Filters 62

3.3.6. Phase Shifts and Time Delays 64

3.4. SYSTEMS REPRESENTATION OF PHYSIOLOGICAL PROCESSES 64

EXERCISES 65

4. Discrete Time Signals and Systems 67

4.1. DISCRETIZATION OF CONTINUOUS-TIME SIGNALS 67

4.1.1. Sampling and Quantization 68

4.1.2. The Sampling Theorem 69

4.1.3. Reconstruction of a Signal from Its Sampled Version 73

4.1.4. Quantization of Sampled Data 74

4.1.5. Data Conversion Time - Sample and Hold 75

4.2. DISCRETE-TIME SIGNALS 77

4.2.1. Analogue to Digital Conversion 77

4.2.2. Operations on Discrete-Time Signals 77

4.3. DISCRETE-TIME SYSTEMS 78

4.3.1. The Impulse Response of a Discrete LTI System 79

4.3.2. The Convolution Sum 79

4.3.3. Properties of the Discrete Convolution Operation 80

4.3.4. Examples of the Convolution Sum 80

4.3.5. Frequency Filtering by Discrete-Time Systems 81

4.3.6. Determination of Impulse Response from I/O Relation 86

4.4. RANDOM SIGNALS 87

4.4.1. Statistical Descriptions of Random Signals 91

4.4.2. Ensemble Average and Time Average 92

4.4.3. Stationary Processes 94

4.4.4. Auto-correlation and Cross-Correlation of Discrete Signals 95

EXERCISES 97

PROGRAMMING EXERCISE 98


5. Fourier Analysis for Discrete-Time Processes 101

5.1. DISCRETE FOURIER CONVERSIONS 101

5.1.1. Periodic Discrete Time Signals: Discrete Fourier Series 101

5.1.2. Aperiodic Discrete-Time Signals: DTFT 102

5.1.3. Numerical Implementation of Fourier Conversion: DFT 106

5.1.4. Inter-Relations among Fourier Conversions 109

5.2. APPLYING THE DISCRETE FOURIER TRANSFORM 112

5.2.1. Properties of the DFT 112

5.2.2. Windowing 115

5.2.3. The Fast Fourier Transform 117

5.2.4. Convolution Using the FFT - Circular Convolution 120

5.3. THE Z-TRANSFORM 122

5.3.1. Properties of the Z-transform 123

5.3.2. The Bilinear Transformation 125

5.4. DISCRETE FOURIER TRANSFORM OF RANDOM SIGNALS 127

5.4.1. Estimating the Power Spectrum 127

5.4.2. Transfer Function Estimation or System Identification 129

EXERCISES 130

PROGRAMMING EXERCISES 132

6. Time-Frequency and Wavelet Analysis 139

6.1. TIME-VARYING PROCESSES 139

6.2. THE SHORT TIME FOURIER TRANSFORM 140

6.2.1. The Continuous Time STFT and the Gabor Transform 142

6.3. WAVELET DECOMPOSITION OF SIGNALS 143

6.3.1. Multi-Resolution Decomposition 144

6.3.2. Hierarchical Filter Bank for Wavelet Decomposition 145

6.3.3. The Daubechies 4-Coefficient Wavelet Filters 147

6.4. THE WAVELET TRANSFORM 152

6.4.1. Interpretation of the Wavelet Transform 157

6.4.2. The Inverse Wavelet Transform 158

6.5. COMPARISON OF FOURIER AND WAVELET TRANSFORMS 161

EXERCISES 164

7. Estimation of Signals in Noise 165

7.1. NOISE REDUCTION BY FILTERING 165

7.1.1. Mean Square Error Minimization 166

7.1.2. Optimal Filtering 169

7.2. TIME SERIES ANALYSIS 172

7.2.1. Systems with Unknown Inputs - Autoregressive Model 173

7.2.2. Time-Series Model Estimation 174


7.2.3. Recursive Identification of a Non-Stationary Model 177

7.2.4. Time-Series Modeling and Estimation in Physiology 179

EXERCISES 179

8. Feedback Systems 181

8.1. PHYSIOLOGICAL SYSTEMS WITH FEEDBACK 181

8.2. ANALYSIS OF FEEDBACK SYSTEMS 183

8.2.1. Advantages of Feedback Control 184

8.2.2. Analysis of Closed-Loop System Stability using Bode Plots 189

8.3. DIGITAL CONTROL IN FEEDBACK SYSTEMS 193

EXERCISES 195

9. Model Based Analysis of Physiological Signals 197

9.1. MODELING PHYSIOLOGICAL SYSTEMS 197

9.1.1. Biophysical Models and Black Box Models 197

9.1.2. Purpose of Physiological Modeling and Signal Analysis 198

9.1.3. Linearization of Nonlinear Models 198

9.1.4. Validation of Model Behavior against Experiment 200

9.2. MODEL BASED NOISE REDUCTION AND FEATURE EXTRACTION 200

9.2.1. Time Invariant System with Measurable Input-Output 201

9.2.2. Time-Invariant System with Unknown Input 203

9.2.3. Time Varying System with Measurable Input-Output 205

9.2.4. Time Varying System with Unknown Input 206

EXERCISES 207

10. Modeling the Nerve Action Potential 209

10.1. ELECTRICAL BEHAVIOR OF EXCITABLE TISSUE 209

10.1.1. Excitation of Nerves: The Action Potential 210

10.1.2. Extracellular and Intracellular Compartments 211

10.1.3. Membrane Potentials 211

10.1.4. Electrical Equivalent of the Nerve Membrane 213

10.2. THE VOLTAGE CLAMP EXPERIMENT 216

10.2.1. Opening the Feedback Loop of the Membrane 217

10.2.2. Results of the Hodgkin-Huxley Experiments 218

10.3. INTERPRETING THE VOLTAGE-CLAMP EXPERIMENTAL DATA 220

10.3.1. Step Responses of the Ionic Conductances 220

10.3.2. Hodgkin and Huxley's Nonlinear Model 221

10.3.3. The Voltage Dependent Membrane Constants 225

10.3.4. Simulation of the Hodgkin-Huxley Model 226

10.4. A MODEL FOR THE STRENGTH-DURATION CURVE 228

EXERCISES 230


PROGRAMMING EXERCISE 231

11. Modeling Skeletal Muscle Contraction 235

11.1. SKELETAL MUSCLE CONTRACTION 235

11.2. PROPERTIES OF SKELETAL MUSCLE 236

11.2.1. Isometric Properties of Skeletal Muscle 237

11.2.2. The Sliding Filament Hypothesis 240

11.2.3. The Sarcomere as the Unit of Muscle Contraction 241

11.3. THE CROSS-BRIDGE THEORY OF MUSCLE CONTRACTION 243

11.3.1. The Molecular Force Generator 245

11.3.2. Isotonic Experiments and the Force-Velocity curve 247

11.3.3. Huxley's Model of Isotonic Muscle Contraction 250

11.4. A LINEAR MODEL OF MUSCLE CONTRACTION 255

11.4.1. Linear Approximation of the Force-Velocity Curve 255

11.4.2. A Mechanical Analogue Model for Muscle 255

11.5. APPLICATIONS OF SKELETAL MUSCLE MODELING 260

11.5.1. A Model of Intrafusal Muscle Fibers 260

11.5.2. Other Applications of Muscle Modeling 262

EXERCISES 263

PROGRAMMING EXERCISE 264

12. Modeling Myoelectric Activity 267

12.1. ELECTROMYOGRAPHY 267

12.1.1. Functional Organization of Skeletal Muscle 268

12.1.2. Recording the EMG 268

12.2. A MODEL OF THE ELECTROMYOGRAM 272

12.2.1. Bipolar Recording Filter Function 275

12.2.2. The Motor Unit 279

12.2.3. The Interference EMG 283

EXERCISES 289

PROGRAMMING EXERCISE 291

13. System Identification in Physiology 295

13.1. BLACK BOX MODELING OF PHYSIOLOGICAL SYSTEMS 295

13.2. SENSORY RECEPTORS 295

13.2.1. Firing Rate - Demodulation of Frequency Coding 296

13.2.2. Estimating Receptor Transfer Function 301

13.3. PUPIL CONTROL SYSTEM 303

13.3.1. Opening the Loop 303

13.3.2. Estimating the Loop Transfer Function 306

13.3.3. Instability of the Pupil Controller 306


13.4. APPLICATIONS OF SYSTEM IDENTIFICATION IN PHYSIOLOGY 307

EXERCISES 307

14. Modeling the Cardiovascular System 309

14.1. THE CIRCULATORY SYSTEM 309

14.1.1. Modeling Blood Flow 311

14.1.2. Electrical Analogue of Flow in Vessels 311

14.1.3. Simple Model of Systemic Blood Flow 314

14.1.4. Modeling Coronary Circulation 317

14.2. OTHER APPLICATIONS OF CARDIOVASCULAR MODELING 318

EXERCISES 319

15. A Model of the Immune Response to Disease 321

15.1. BEHAVIOR OF THE IMMUNE SYSTEM 321

15.2. LINEARIZED MODEL OF THE IMMUNE RESPONSE 323

15.2.1. System Equations for the Immune Response 325

15.2.2. Stability of the System 326

15.2.3. Extensions to the Model 327

EXERCISES 328

APPENDIX 329

BIBLIOGRAPHY 331

Index 335

Chapter 1

Introduction to Systems Analysis and Numerical Methods

1.1 The Systems Approach to Physiological Analysis

The recording and analysis of physiological signals has permeated all aspects of the study of biological organisms, from basic science to clinical diagnosis. For example, the clinical recording of various biopotential signals has become an essential component in the diagnosis of all organs involving excitable tissue; information from pressure and flow signals is an important part of cardiovascular care; even in the diagnosis of digestive and excretory disorders signal analysis provides valuable assistance. Implicit in the analysis of these signals is an understanding of the mechanisms involved in their physiological generation. It is important to note that even if an explicit model is not postulated, the very act of using an analytic procedure implies a certain model for the physiological process. This fact is often ignored, to the detriment of the analysis. When explicit models are postulated, every measurement acts either to support or to weaken them. In fact, analyses predicated on explicitly defined models make for good science in that clear testable hypotheses are available.

Physiological modeling involves the development of mathematical, electrical, chemical or other analogues whose behavior closely approximates the behavior of a particular physiological system. It is, of course, desirable that every aspect of the model corresponds to features of the physiological system under consideration. Such models may be called biophysical models. However, in most physiological systems only a few of the features are observable. Therefore, a model based on empirical relations between these observable features often has as much utility as a detailed physical one. Such models may be termed black-box models since they make no attempt to


describe the internal mechanisms of these systems. To clinical practitioners whose main interest is in the classification of the status of any physiological system as normal or pathological, the black box model is not only sufficient but may even be preferable on account of its relative simplicity. However, for physiologists and other basic scientists, biophysical models are more interesting. It is, of course, very satisfying to the scientist when biophysical models can be shown to be reducible to utilitarian black-box models. We shall concentrate on black-box modeling using techniques of linear-systems analysis. However, the links between physiological components and model aspects are also emphasized and some physiological models are discussed in reasonable detail. In some cases the reduction of biophysical models to linear systems models is also shown.

1.1.1 Physiological Signals and Systems

Any physical quantity that varies as a function of an independent variable can be considered a signal. The independent variable is usually time, although not always so; space or distance is also frequently used as the independent variable. The variation of potentials, mechanical force or position, pressure, volume, etc., as a function of time are all commonly used physiological signals. These signals are generated by physiological processes following well defined physical laws. All physiological systems accept various inputs from other organs, the external environment, etc., and produce outputs in response to these stimuli. This concept of a physiological system producing one or more outputs in response to one or more inputs is the basic idea of systems modeling. The input-output relation is characteristic of the system. There are mathematical procedures for characterizing signals using analytical functions. We can also mathematically describe a system that acts on a set of input functions to produce a set of output functions. If the input and output are measured, then the system characteristics can be estimated. Such estimation of the system characteristics, when performed under diseased states as well as normal conditions, helps to characterize the disease quantitatively in comparison to the normal state. Alternatively, given the system description it should be possible to predict the system output for any arbitrary set of inputs. Thus the system description allows estimation of the output given the input or conversely estimation of the input if the output is known. The primary task in signal analysis is to characterize signals and the systems that produce or process them.


1.1.2 Linear Systems Modeling in Physiology

In general most real systems have complex properties and no simple characterization is possible. However, it is usually convenient to restrict the study of systems to some limited conditions where the system may be said to be linear. In this book we shall see some of the conditions under which physiological systems do and do not submit to linear systems analysis. Although we shall concentrate on analytical tools for linear systems, it is extremely important to understand when a linear model may be inadequate and even misleading. The advantage of linear systems analysis is that a very large set of analytical tools are available. Nonlinear techniques not only have to be tailored specifically to each situation, but also tend to be more complex and computationally tedious than linear techniques.

1.2 Numerical Methods for Data Analysis and Simulation

The availability of cheap computing power makes a lot of models available easily to physiologists and medical practitioners. Modern computers have not only good computational capabilities but also very good graphical displays, thereby making the output of models convenient for nonmathematical users. Of course, graphical presentation in itself uses visual analogy for physical behavior. Since modern computers are all discrete numerical machines while physiological systems are fundamentally continuous, some approximations are required in order to use discrete modeling for continuous-time systems. The issue of discretization is dealt with in some detail in the early part of the book, and subsequently a number of digital techniques for the analysis of signals and systems are discussed.

The earliest models of physiological systems were physical analogies.

Even now many students in high school are introduced to the ideas of respiration and blood flow using physical models involving air flow and water in tubes respectively. Physical models are useful in extending intuitive knowledge in one area to another. However, physical models are limited by constraints of implementation to rather simple systems. Mathematical models are also similarly limited by our ability to solve the necessary sets of equations, but this limitation is much less confining than the physical construction of analogous models.

Most mathematical descriptions of physiological systems use differential equations. Therefore, the analysis of these systems requires solving differential equations. Such solutions of differential equations can in principle be done analytically (i.e., on paper), physically (i.e., by building a physical analogue), or numerically (i.e., on a digital computer). In the early


days of computational models differential equations were solved using analogue computers. The analogue computers were electronic circuits whose behavior mimicked that of the system being modeled. Modern computer models use digital computers to solve the system equations. Contemporary digital computers are Turing machines that do not physically imitate the system behavior but instead "run programs" that make the general purpose computer imitate the system being modeled. A very important aspect of digital computers is that they inherently require discretized representations of the modeled system. This places some important constraints on the modeling process. The constraints are exemplified in the numerical implementation of integration and differentiation which are basic components of systems models.

1.2.1 Numerical Integration and Differentiation

A continuous function of time can be discretized by using a sequence of numbers corresponding to the value of the function at discrete points in time. Since most systems can be described using differential equations, numerical differentiation and integration are fundamental to computer implementation of mathematical models of systems.

The derivative of the function at a point in time t can be calculated from the discretized function in more than one way. By definition:

dx(t)/dt = lim(Δt→0) δx/Δt   (1.1)

where δx is the change in x(t) over the time interval Δt. However, when differentiation is implemented on a digital computer, Δt is necessarily finite. In the limit Δt→0 we can write

lim(Δt→0) [x(t+Δt) - x(t)]/Δt = lim(Δt→0) [x(t+Δt/2) - x(t-Δt/2)]/Δt = lim(Δt→0) [x(t) - x(t-Δt)]/Δt   (1.2)

If Δt is finite these are only approximations. Therefore, using a digital computer only an approximation to dx/dt is possible. In practice, with a finite discretization interval T=Δt, the following approximation may be used:

dx(t)/dt ≈ [x(t) - x(t-T)]/T   (1.3)


Since the time variable t is stored in a computer only at discrete values (in steps of interval T) it is common to write t=n·T where n is an integer. The derivative approximation of Eq. (1.3) will then be written as:

dx(t)/dt ≈ [x(n·T) - x((n-1)·T)]/T   (1.4)

Figure 1-1. Numerical derivative.

This approximation for the discrete time derivative is, of course, not unique and it is also possible to use the following two formulations:

dx(t)/dt ≈ [x((n+1)·T) - x(n·T)]/T   (1.5)

dx(t)/dt ≈ [x((n+1)·T) - x((n-1)·T)]/(2·T)   (1.6)

These three different approximations to differentiation are illustrated in Figure 1-1. The time axis is marked in steps of t/T. As can be noticed, each of the approximations yields a slightly different estimate of the derivative. The differences among these methods are more pronounced in a region where the rate of change of the derivative is large.
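To make the comparison concrete, the following short Python sketch (illustrative code, not part of the book's accompanying software) evaluates the three difference formulas of Eqs. (1.4)-(1.6) for x(t)=sin(t), whose exact derivative cos(t) is known, and prints the worst-case error of each. The central difference of Eq. (1.6) turns out to be the most accurate.

```python
import numpy as np

# Compare the backward (1.4), forward (1.5) and central (1.6) difference
# approximations on x(t) = sin(t), whose exact derivative is cos(t).
T = 0.1                                  # discretization interval
t = np.arange(0.0, 5.0, T)               # sample times t = n*T
x = np.sin(t)

backward = (x[1:-1] - x[:-2]) / T        # [x(nT) - x((n-1)T)] / T
forward = (x[2:] - x[1:-1]) / T          # [x((n+1)T) - x(nT)] / T
central = (x[2:] - x[:-2]) / (2 * T)     # [x((n+1)T) - x((n-1)T)] / (2T)

exact = np.cos(t[1:-1])                  # true derivative at interior points
for name, est in (("backward", backward), ("forward", forward), ("central", central)):
    print(name, "max error:", np.max(np.abs(est - exact)))
```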


Numerical integration can be obtained using similar procedures. The definite integral of a function x(t) over the range t1 to t2 is calculated as the area under the curve. Again we can consider that the function x(t) is discretized in the digital computer with time steps of T. The total area is the sum of the areas for each time step. If, for example, t1=(n-2)·T and t2=(n+1)·T, then:

∫(t1 to t2) x(t)·dt ≈ area of x(t) between (n-2)·T and (n+1)·T

Figure 1-2. Numerical integration.

This idea of numerical integration is illustrated in Figure 1-2. The simplest method of calculating the area is to assume that in the interval T the function x(t) changes little, so that x(t)dt in the interval from nT to (n+1)T can be approximated by a rectangle, x(nT)·T. Then the integration of the function x(t) from t1 to t2 is approximated as

∫(t1 to t2) x(t)·dt ≈ Σ(k=n-2 to n) x(k·T)·T   (1.7)


A better approximation is to assume that in each interval the function varies linearly between the sampled values, so that the area between adjacent points can be approximated by a trapezoid,

∫(t1 to t2) x(t)·dt ≈ Σ(k=n-2 to n) {[x(k·T) + x((k+1)·T)]/2}·T   (1.8)

The trapezoidal approximation connects adjacent values of x(t) by a straight line. Other higher-order interpolations between adjacent points can be used for improved approximation to the integral. Such approximations other than a trapezoid for the area can be used at some additional expense of computational power (i.e., by performing more calculations).

Figure 1-2 shows the strips between sampled points being approximated by a trapezoidal shape shown shaded. The total area of the shaded areas will be the required definite integral value.
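As a small illustration of Eqs. (1.7) and (1.8) (again a sketch, not the book's CD-ROM software), the following Python fragment integrates x(t)=sin(t) from 0 to π/2, where the exact value is 1, using both the rectangular and the trapezoidal approximations; the trapezoidal sum is visibly closer.

```python
import numpy as np

# Integrate x(t) = sin(t) from t1 = 0 to t2 = pi/2; the exact answer is 1.
t = np.linspace(0.0, np.pi / 2, 201)     # sample points k*T from t1 to t2
T = t[1] - t[0]                          # discretization interval
x = np.sin(t)

rectangular = np.sum(x[:-1]) * T                   # Eq. (1.7): x(kT)*T strips
trapezoidal = np.sum((x[:-1] + x[1:]) / 2.0) * T   # Eq. (1.8): trapezoidal strips

print("rectangular:", rectangular)   # slightly below 1.0
print("trapezoidal:", trapezoidal)   # much closer to 1.0
```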

1.2.2 Graphical Display

An important reason for the popularity of computer models of physiological systems is the availability of good quality graphical displays. It is possible to present a visual display that mimics, either symbolically or by means of analogy with another physical system, any system whose behavior is of interest. For example, in the study of atomic and molecular interactions, combinations of spheres are often used to provide a visual analogy. A computer system is very useful in providing time-varying displays that represent the physico-chemical actions. Such animations are becoming increasingly common in teaching concepts in science. Such visualization of physical analogy has always been popular in basic science teaching. The teaching of planetary motion or atomic theory in schools using spheres revolving around spheres is commonplace. Although animated pictures are especially useful in developing intuitions about complex systems at introductory levels they are often unwieldy in simulating more complex system behavior. Since calculation of mathematical functions obtained from any model must precede the visual display, it is usual to simply display the functions and have the student or experimenter imagine the visual imagery if desired.

The use of systems modeling produces various signals that can be observed practically in the physiological system as well as intermediate signals that may be inaccessible to observation. The signals calculated from the models can be plotted as functions of time on the computer screen. Many


signals can be plotted using different colors on multicolor monitors to make viewing convenient.

1.3 Examples of Physiological Models

In the last six chapters of this book we shall examine a number of physiological models. The experimental arrangements used for obtaining data are described briefly in order to put the models in proper perspective. The models are then developed in mathematical detail. If possible the models are linearized, since that will allow the underlying systems to be understood easily and intuitively.

First we shall consider the nerve action potential. This is a physiological system that has been extensively studied. The Hodgkin-Huxley model of the action potential is an excellent example of a biophysical model that is often employed for clinical understanding. Although this is a model that is commonly found in texts, we shall take a slightly different approach, by looking at the Hodgkin-Huxley experiments in terms of opening the feedback loop of the system. Although the membrane is a nonlinear system it can be simulated directly by solving the differential equations at each point in time. The implementation of the model is a good exercise in numerical simulation.

Next we shall look at muscle force production. Huxley's biophysical model of muscle contraction was proposed in 1957, and has been the subject of many enhancements as well as some controversies. Nevertheless, the original model remains the most widely discussed due to its elegance and simplicity. The linearization of Hill's force-velocity data, upon which Huxley's model is based, yields a very simple and versatile representation of muscle. Incidentally, the model of muscle as comprising elastic elements and damping elements was used by Hill to design his experiments. Functional neuromuscular stimulation can be analyzed in terms of a system with a linearized muscle model. When thinking about the linearized muscle model, it becomes obvious that artificial joints will work better with a damper incorporated in them. Natural joints with muscles are, of course, superior on account of having a continuously controllable damping coefficient.

Next we shall consider a model of the electromyogram. The considerations here are more of the geometry of the muscle and recording arrangement than with physiological entities. Numerical simulation of the model provides valuable insight into the electromyogram and its interpretation. Therefore, this is a model of particular clinical value.

System identification in physiology is a topic of particular interest to engineers and physicists. Many scientists like Norbert Wiener liked to regard biological systems in terms of control systems. This point of view provides a


framework for experimental design and modeling of many physiological systems.

Finally, we shall look at some miscellaneous models in physiology, namely of the immune system and the cardiovascular system. The immune system is not often included in models of physiology. But a simple model of it provides remarkable insight into disease processes and chronic and acute illnesses in terms of system stability. The cardiovascular system has been widely investigated and also modeled. In fact, its outstanding character is the number of different ways in which it can be looked at; which is not surprising when you take into account its centrality to the body and its easy analogy with many engineering systems. We shall mainly model blood flow through passive vessels. Such models have value in providing physiological insight as well as in understanding pathology of the circulatory system.

EXERCISES

1. Consider the following differential equation:

dx(t)/dt + 2x(t) - 1 = 0

This can be solved by integration. To solve it numerically using numerical integration we can calculate the values in time steps of Δt and write:

[x((n+1)·Δt) - x(n·Δt)]/Δt + 2x(n·Δt) - 1 = 0

The value of x(t+Δt)=x[(n+1)·Δt] is to be calculated repeatedly for n=1,2,3,4,... Calculate x(t) for t=0 to 5, using (i) Δt=0.1, (ii) Δt=0.2. Use the initial condition x(0)=0.

Compare the numerical solution with the analytical solution evaluated at different values of time.
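As a check on your answer, the following minimal Python sketch implements the iteration stated above; the analytical solution x(t) = (1 - e^(-2t))/2, used here only for comparison, follows from elementary integration of the differential equation.

```python
import numpy as np

# Forward-Euler iteration for dx/dt + 2x - 1 = 0, rearranged as
# x[(n+1)*dt] = x[n*dt] + dt*(1 - 2*x[n*dt]), with x(0) = 0.
def euler_solve(dt, t_end=5.0):
    n_steps = int(round(t_end / dt))
    x = np.zeros(n_steps + 1)
    for n in range(n_steps):
        x[n + 1] = x[n] + dt * (1.0 - 2.0 * x[n])   # one Euler step
    return x

for dt in (0.1, 0.2):
    x = euler_solve(dt)
    t = np.arange(len(x)) * dt
    exact = (1.0 - np.exp(-2.0 * t)) / 2.0          # analytical solution
    print("dt =", dt, " max |numerical - analytical| =", np.max(np.abs(x - exact)))
```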

2. The use of numerical calculations for derivatives and integrals involves not only discretizing the time variable t, but it also involves the use of finite sized representation of the numbers themselves. Discuss briefly how this "precision" of number representation will affect the results of numerical calculations.

Chapter 2

Continuous Time Signals and Systems

2.1 Physiological Measurement and Analysis

The goal of physiological measurement is to understand the working of a particular physiological system - like the cardiovascular system, the muscular system, the nervous system, the urinary system, etc. - and, in the case of disease, to diagnose it with reference to the healthy state. It is useful to regard the physiological system as a process (or set of processes) in which one or more physical parameters are amenable to measurement.

Figure 2-1. Block diagram of a general physiological measurement system: physiological process, analogue-to-digital conversion, and digital processing.

These physical parameters are then transduced by appropriate transducers in a measuring system to obtain electrical signals. Next, the electrical signals are subjected to various processing - by analogue circuitry as well as digital computers - and finally presented in some reduced form for interpretation by the investigator. During this examination and interpretation by the investigator some sort of relationship between these various stages is either


explicitly or implicitly assumed. A sample sequence of such relationships is illustrated symbolically in Figure 2-1.

In general, a signal is any physical quantity expressed as a function of time. Therefore, the electrical signal output by the transducer, the electrical quantity obtained from the analogue processing, the graphical output of the chart recorder, etc., are all signals.

2.2 Time Signals

2.2.1 Examples of Physiological Signals

A signal is any physical quantity that varies as a function of another independent variable. The most common signals are those that vary as a function of time. Signals may be described by certain features which characterize them; the most elementary of such characteristics is whether the signal is regularly repeating or not. Three examples are shown.

Figure 2-2. ECG signal over three cardiac cycles (time scale: 1 sec).

Figure 2-3. Three cycles of the aortic pressure waveform (time scale: 1 sec).

The ECG signal (electrocardiogram) obtained by recording the electrical activity of the heart is a regularly repeating signal, Figure 2-2. The aortic pressure signal obtained by continuously monitoring the intra-aortic pressure is also a regularly repeating signal, Figure 2-3. The EMG signal (electromyogram) obtained by recording the electrical activity of skeletal muscle is an irregular signal, Figure 2-4. The signals shown in Figure 2-2 and Figure 2-4 are voltage variations as a function of time; i.e., voltage is the


dependent variable and time is the independent variable; and the signal in Figure 2-3 is a pressure variation over time.

Figure 2-4. EMG signal recorded for about 1 s during voluntary movement.

We may write these three signals as x(t), where x refers to the ECG potential, aortic pressure or myoelectric potential; t is the time variable. This may be treated as any other mathematical function and mathematical operations may be performed on it. Although the majority of interesting physiological signals are functions of time, we must note that there are several that are functions of other independent variables. Many are functions of length; for example, pressure in a blood vessel may be expressed as a function of length along the vessel. In all cases similar mathematical operations may be performed on them.

2.2.2 Operations on Time Signals

Some common operations on time signals are discussed below. The signal, x(t), shown in Figure 2-5, shall be used to illustrate the operations. An arbitrary point in time is taken as the reference and denoted as time zero, t=0. Events before this are said to have occurred in the past, i.e., at t<0, and events after this are said to occur in the future, i.e., t>0.

Figure 2-5. Illustrative signal x(t).

Time shift: A signal x(t) delayed by 1 unit of time is represented as x(t-1) and if it occurs 1 unit of time earlier then it is x(t+1). An example of such


time shifting is when a recorded signal is played back after some time. In general, if x(t) is delayed by time τ it is expressed as x(t-τ), Figure 2-6a.

Figure 2-6. Operations: (a) time shift, and (b) time reversal.

Time reversal: If a signal, x(t), is presented reversed in time it is represented as x(-t), Figure 2-6b. For example, if a tape-recorded signal is played back by running the tape backwards, the signal will be time reversed.

Time scaling: A signal may be compressed or expanded in time. If the signal x(t) is compressed in time by a factor of two we get x(2t), Figure 2-7a. And if the signal is expanded by a factor of two then we obtain x(½t), Figure 2-7b. This can be achieved, for example, by playing back a recorded signal at twice the speed or half the speed respectively.

Figure 2-7. Operations: (a) time shrinking, and (b) time expansion.

Amplitude operations: A function may have algebraic operations performed on its amplitude; e.g., multiplication: 2·x(t), addition: 2+x(t), etc.

Even and Odd functions:

A function is said to be even if: x(t) = x(-t), and a function is said to be odd if: x(t) = -x(-t). Any function can be broken into a sum of two signals - one odd and the other even.


The even part of a function x(t) is

xe(t) = ½[x(t) + x(-t)]   (2.1)

The odd part of a function x(t) is

xo(t) = ½[x(t) - x(-t)]   (2.2)

Figure 2-8. Even part of x(t).

Figure 2-9. Odd part of x(t).

You can verify that the sum xe(t)+xo(t) from Figure 2-8 and Figure 2-9 will be equal to the signal x(t) in Figure 2-5. This ability to decompose any signal into an even part and an odd part is very useful, as we shall see later, since odd and even signals have some interesting properties.
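A minimal numerical sketch of Eqs. (2.1) and (2.2) is given below (illustrative Python, assuming a sampled signal on a time axis symmetric about t=0, so that the samples of x(-t) are simply the reversed sample array):

```python
import numpy as np

# Even/odd decomposition of a sampled signal, Eqs. (2.1)-(2.2).
t = np.linspace(-1.0, 1.0, 201)    # time axis symmetric about t = 0
x = np.exp(-t) * (t > 0)           # an arbitrary one-sided test signal

x_reversed = x[::-1]               # samples of x(-t)
x_even = 0.5 * (x + x_reversed)    # Eq. (2.1): even part
x_odd = 0.5 * (x - x_reversed)     # Eq. (2.2): odd part

assert np.allclose(x_even + x_odd, x)       # the two parts reconstruct x(t)
assert np.allclose(x_even, x_even[::-1])    # x_e(t) = x_e(-t)
assert np.allclose(x_odd, -x_odd[::-1])     # x_o(t) = -x_o(-t)
```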

Periodic signals: A signal x(t) is said to be periodic if for some time interval T, x(t)=x(t+kT) where k is any integer. For example, consider a sinusoid, sin(ωt), where ω=2πf and f=1/T. It is easy to verify


trigonometrically that sin(2πt/T)=sin[2π(t+T)/T]=sin[2π(t+2T)/T], etc. Thus, sin(2πft) is periodic and the period of the function is T=1/f.

Periodic signals are a particularly interesting type of functions, since they only need to be specified over any interval equal to one period for the entire signal to be known; the entire signal is simply an infinite repetition of the same periodic cycle. A large number of analytical techniques are available for the study of periodic signals. Since some especially elegant analytical tools are available for periodic signals it is sometimes convenient to treat signals that are not truly periodic as if they were periodic.

2.3 Input-Output Systems

Any set of processes that affect the nature of a signal may be called a system. A system may take one or more signals and produce one or more outputs. The simplest system takes one signal as its input and delivers one output signal, Figure 2-10.

Figure 2-10. A simple input-output system.

2.3.1 Properties of Systems

Memory: A system is said to have memory if its output depends on past values of the input. In other words, any system that retains some effect of past inputs is said to have memory. A clinical thermometer is a common instrument that retains past data and therefore has memory; a metal rod that remains hot after it is removed from a fire has memory of having been heated; etc.

Causality: A system is said to be causal if its output depends only on past and present values of the input. In other words causal systems are ignorant about what input will be given at a future point in time. All real systems are causal. However, many rather simple mathematical operations are inherently non-causal. For example, consider the derivative of a function at some point in time, t, which is the slope of the function at time t; this slope can be properly computed only by using values of the function before and after the time instant t. A computer that stores data and performs operations like differentiation can utilize data collected at different points in time and


therefore mimic non-causal behavior. For example, if data is recorded between t=2 and t=50, then in order to calculate the derivative at time t=10 the computer can utilize data for t<10 and t>10 since they are already stored; thus non-causal calculation (or "behavior") is effectively achieved.

Invertibility: A system is invertible if the input can be calculated using the output function. Many operations performed by systems are invertible. Some operations like the square operation are obviously not invertible.

For example, if we have a system that performs simple amplification (multiplies the input by a constant number) so that given an input x(t) the output y(t) is produced: y(t) = 3·x(t), then the input can be deduced from the output as: x(t)=y(t)/3.

If we have another system that performs a squaring operation so that given the input x(t) the output y(t) is: y(t) = x²(t), then the input cannot always be deduced from the output; the estimate x′(t)=√y(t) will not be equal to x(t) if the original x(t) was negative.

Stability: A system is said to be stable if, for any finite input the output is always finite. A popular example of instability in a system occurs in a public address system when the microphone is placed close to the loudspeaker. In this situation even a small whisper picked up by the microphone is amplified and the amplified output of the loudspeaker is again picked up by the microphone and repeatedly amplified. Thus a finite input results in unbounded growth of the output and is termed instability. In this case we see that a system that is normally stable becomes unstable when the output is fed back to the input in an uncontrolled manner.

Time-invariance: A system is said to be time-invariant if its properties do not change over time. Most physiological systems tend to be time-variant due to processes like adaptation. Similar to the concept of time-invariance is that of stationarity applied to signals. A signal is said to be stationary if its properties (statistical characteristics) do not change over time. A time-variant system or process will generate non-stationary signals.

Linearity: A Linear system has the properties of additivity and scaling. Additivity: A system is said to have the property of additivity if the output to the combination of two different signals is identical to the sum of the outputs obtained when the two inputs are applied independently, as illustrated in Figure 2-11.


x1(t) → y1(t); x2(t) → y2(t); x1(t)+x2(t) → y1(t)+y2(t)

Figure 2-11. Additivity property of a system.

Scaling: A system is said to have the property of scaling if when the input is multiplied by a constant then the resulting output is also magnified by the same constant, as illustrated in Figure 2-12.

x(t) → y(t); a·x(t) → a·y(t)

Figure 2-12. Scaling property of a system.

In general any real system is a non-linear system and therefore each of the blocks in any physiological process or recording/analysis set-up should be treated as such. However, since linear systems are much simpler to analyze and think about, it is common practice to treat these blocks as comprising linear building blocks. Within some limited range of operation it is reasonable to regard the blocks as being linear. It is possible to compensate for the non-linearities of the measurement system by suitable electronic circuitry or numerical methods; but the non-linearities of the physiological system under observation have to be explicitly noted during all analysis. Interpretation of the final output always requires proper understanding of the physiological processes.

Another important assumption of most processing is that the systems are invariant over time. This is a fairly good approximation with modern electronic systems, but is hardly true of physiological processes. Again it is convenient to assume that the physiological processes are time-invariant over some short period of time. The more sophisticated processing methods, however, are able to dispense with this assumption.

2.3.2 Linear Time-Invariant Systems

An important property of Linear Time-Invariant (LTI) systems is that an elegant mathematical relationship can be established between the input and


output. Knowing this relationship the output of the system to any arbitrary input can be determined; conversely if the output is observed the input can be deduced.

In order to understand this technique, consider that the response of the system is known for an input of a very brief impulse. Then an arbitrary input can be regarded as being made of a sequence of such brief impulses (these impulses may be of different amplitudes). The output in response to this arbitrary input can be computed using the known response to a single impulse. Obviously, this procedure can be applied only if the system is linear and time-invariant.

Figure 2-13. Representation of a unit impulse function.

Unit Impulse function or Delta function: The unit impulse function is a conceptually useful signal. It is defined to have a position in time but its duration is infinitesimal; this is achieved by assuming that the duration Δt tends to zero. The area of the function is unity and hence the name unit impulse. The normal unit impulse function is located at time t=0. There are slightly different definitions of the impulse function depending on the analytical convenience of the moment. (The most general analytical function defining the unit impulse is called the Dirac delta function after the physicist Paul Dirac.)

Here the unit impulse shall be defined as:

δ(t)·Δt = 1 for 0 < t < Δt, and 0 elsewhere   (2.3)

This function is referred to as the delta function because of the symbol, δ, used to represent it; the graphical symbol for the impulse function is shown in Figure 2-13.

Since δ(t) is located at time t=0, δ(t-τ) is a delta function delayed by time τ and located at time t=τ. It follows that a delta function scaled by a factor A and shifted by time τ is defined as:


A·δ(t-τ)·Δt = A for τ < t < τ+Δt, and 0 elsewhere   (2.4)

2.3.3 Impulse Response of a Linear Time-Invariant System

Since the unit impulse function is a kind of elemental function it is useful to describe the behavior of a linear time-invariant (LTI) system in terms of its response to a single unit impulse function. This is referred to as the impulse response of the system and is often represented by the symbol h(t).

Figure 2-14. Impulse response of a Linear Time-Invariant system.

Since the system is time-invariant, the response of the system to a time-shifted unit impulse δ(t-τ) will be h(t-τ); and, the system also being linear, its response to a scaled and time-shifted unit impulse A·δ(t-τ) will be A·h(t-τ), Figure 2-14.

2.3.3.1 Representing a signal in terms of scaled and shifted impulses

Provided that Δt is indeed very small, the value of a function, x(t), at time t=τ can be expressed as

x(τ)·δ(t-τ)·Δt = x(t) for τ < t < τ+Δt, and 0 elsewhere   (2.5)

If the time is divided into steps of width Δt, i.e., t=k·Δt (where k is any integer), then the function x(t) may be written as

x(k·Δt)·δ(t-k·Δt)·Δt = x(t) for k·Δt < t < (k+1)·Δt, and 0 elsewhere   (2.6)

The entire function x(t) can be expressed as follows:

Σ(k=-∞ to +∞) x(k·Δt)·δ(t-k·Δt)·Δt = x(t)   (2.7)


This is illustrated with a function x(t) shown in Figure 2-15; the function x(t) is represented as a set of narrow strips in Figure 2-16.

In the limit Δt→0, Eq. (2.7) becomes

∫(-∞ to +∞) x(τ)·δ(t-τ)·dτ = x(t)   (2.8)

Figure 2-15. A continuous time function, x(t).

Figure 2-16. Representing a continuous time function as a set of narrow strips.

2.3.4 The Convolution Integral

Consider a signal x(t) being input to a linear time-invariant system with impulse response h(t). If the signal is regarded as a composition of scaled and shifted delta functions as shown above, then the response of the LTI system to each of the scaled and shifted impulses will be the impulse response suitably scaled and shifted. That is, each δ(t-τ) at the input results in an h(t-τ) at the output. Therefore, given the input to the system x(t):

x(t) = ∫x(τ)·δ(t-τ)·dτ

the output will be


y(t) = ∫x(τ)·h(t-τ)·dτ   (2.9)

Figure 2-17. Response of an LTI system to an input x(t).

The convolution operation: Let h(t) be the response of the system to a single impulse δ(t). A signal x(t) input to the system can be written as ∫x(τ)·δ(t-τ)·dτ where the limits of integration are for all possible values of time, i.e., τ is in the range ±∞. Then the output y(t) is ∫x(τ)·h(t-τ)·dτ. This input-output relationship of an LTI system is represented in Figure 2-17. This is known as the convolution integral. For any linear time-invariant system with input x(t) and impulse response h(t) the output y(t) is expressed as follows (with '*' denoting the convolution operation):

y(t)=x(t)*h(t)

(2.10)
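Numerically, the convolution integral of Eq. (2.9) can be approximated by a sum over narrow strips of width Δt. The following Python sketch (an illustration only; the first-order impulse response h(t)=e^(-2t)·u(t) is an assumed example, not taken from the book) convolves a rectangular pulse with this h(t):

```python
import numpy as np

# Discrete approximation of y(t) = x(t)*h(t): y ~ sum_k x(k*dt)*h(t-k*dt)*dt.
dt = 0.001
t = np.arange(0.0, 5.0, dt)
x = (t < 1.0).astype(float)            # rectangular input pulse, 0 <= t < 1
h = np.exp(-2.0 * t)                   # samples of h(t) = exp(-2t)*u(t)

y = np.convolve(x, h)[:len(t)] * dt    # truncated convolution sum

# Analytically y(1) = (1 - exp(-2))/2, approximately 0.432.
print("y(1) =", y[int(1.0 / dt)])
```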

2.3.5 Properties of Convolution

The convolution operation has several properties which are useful in the treatment of complex systems with several interconnected blocks.

Convolution is commutative:

∫(-∞ to +∞) x(τ)·h(t-τ)·dτ = ∫(-∞ to +∞) x(t-τ)·h(τ)·dτ   (2.11)

x(t)*h(t) = h(t)*x(t)

Convolution is associative:

[x(t)*h1(t)]*h2(t) = x(t)*[h1(t)*h2(t)]   (2.12)

Convolution is distributive:

x(t)*[h1(t)+h2(t)] = x(t)*h1(t) + x(t)*h2(t)   (2.13)


In looking at complex systems it is useful to consider individual blocks as shown in Figure 2-17. If each block is linear and time-invariant then the total impulse response is the cumulative convolution of the individual blocks. This is illustrated in the following example.

EXAMPLE 2-1

Consider a system comprising several subsystems shown in Figure 2-18. The intermediate signals are:

x1(t) = x(t)*h1(t)
x2(t) = x1(t)*h2(t) = x(t)*h1(t)*h2(t)
x3(t) = x2(t)*h3(t)
x4(t) = x2(t)*h4(t)

the input to the last block h5(t) is x3(t)+x4(t); therefore,


Figure 2-18. Block diagram ofa system with several subsystems (Example 2-1).

The final output for the above set of blocks is:

y(t) = [x3(t) + x4(t)]*h5(t) = x(t)*h1(t)*h2(t)*[h3(t) + h4(t)]*h5(t)

In examining any recorded physiological signal it is useful to regard it as the output of a set of system blocks that have acted upon more primary signals. This is the essence of modeling in physiology. Such a picture provides an understanding of any changes in the observed data in terms of alterations in the preceding system blocks.

The impulse response of a measurement system can usually be determined easily by using several different techniques. Such characterization of measurement systems includes the response of components like the transducer, amplifier, filters, etc. Sometimes the interface between the transducer and the physiological system may be poorly controlled and introduce alterations of the signal. If each of the blocks is characterized properly then the original input, before modification by the various measurement system blocks, can be inferred.


Analysis of Signals and Impulse Response of Systems: Both signals and system impulse responses are simply functions of time. Therefore, they are often treated without distinction as far as the analysis is concerned.

2.3.5.1 Some analytically useful signals

Some mathematically simple signals are commonly used to describe more complex signals; in other words, these functions constitute a vocabulary to describe practical signals.


Figure 2-19. Unit step function.

The step function (Figure 2-19): The step function denoted as u(t) is defined as follows:

u(t) = { 0 for −∞ < t < 0; 1 for 0 ≤ t < ∞ }    (2.14)

If we wish to specify that a function, x(t), should be considered to exist only after time t=0, then it is common to write: x(t)·u(t), since the multiplication by u(t) sets the product to zero for time t<0 but leaves the product the same as x(t) for time t≥0.

The Rectangular function: The rectangular function has value of unity for some specified time interval and is zero for all other time, i.e.,

r(t) = { 1 for t1 ≤ t ≤ t2; 0 elsewhere }    (2.15)

We can also define the rectangular function in terms of step functions,

r(t) = u(t − t1) − u(t − t2)    (2.16)

Periodic rectangular waves: A periodic rectangular function repeats with period T, and needs to be defined only for one interval of duration T and this interval will be repeated infinitely.


Sinusoidal signals: Sinusoidal functions are the most widely used periodic functions. An important property of a sinusoid is that when differentiated the result is another sinusoid that is a scaled and time shifted version of the original. That is,

d/dt [sin(2πt/T)] = (2π/T)·sin(2πt/T + π/2)    (2.17)

where T is the period of the sinusoid.

Exponential signals: Positive exponential signals of the form x(t)=e^(kt) (with k>0) are not very common in practice, as the signal grows progressively and tends to infinity. Negative or decaying exponentials are extremely common in many physical systems. They are commonly defined to exist only from time t=0,

x(t) = { 0 for −∞ < t < 0; e^(−kt) for 0 ≤ t < ∞ } = e^(−kt)·u(t)    (2.18)

The complex exponential: The complex exponential is not a physically realizable signal, but is nevertheless a conceptually very useful function. Complex numbers, of course, involve the imaginary quantity j = √−1 and are hence abstract entities. It must be emphasized that although complex numbers have no direct physical correspondence, they are used to describe real phenomena. For example, when a complex number a+jb is used to describe a physical quantity, the values a and b do not correspond directly to any physical feature, but the same complex number expressed in terms of the magnitude and phase may often be seen to describe tangible physical features. If the functions e^(jθ), sin(θ) and cos(θ) are expanded into series, the following relationship can be immediately observed:

e^(jθ) = cos θ + j sin θ    (2.19)

This is known as Euler's relation and is extremely valuable in the mathematical treatment of signals and linear systems.
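Euler's relation is easy to confirm numerically. The short sketch below is in Python with the numpy library (an assumption of this illustration, not something the text prescribes); it simply compares the two sides of Eq. (2.19) over a range of angles.

    import numpy as np

    theta = np.linspace(0.0, 2 * np.pi, 100)
    lhs = np.exp(1j * theta)                     # e^(j*theta)
    rhs = np.cos(theta) + 1j * np.sin(theta)     # cos(theta) + j*sin(theta)
    print(np.max(np.abs(lhs - rhs)))             # zero to machine precision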


EXAMPLE 2-2

Find the output when a rectangular signal is input to a system that has a rectangular function as the impulse response.

x(t) = { 2 for 0 ≤ t ≤ T1; 0 elsewhere }    (2.20)

h(t) = { 2 for 0 ≤ t ≤ T2; 0 elsewhere }    (2.21)

Figure 2-20. Input signal and impulse response for Example 2-2.

The output of the system, y(t), is the evaluation of the convolution integral:

y(t) = ∫_{−∞}^{∞} x(τ)·h(t − τ)dτ

This integral can be interpreted as the area under the curve given by the product x(τ)·h(t−τ) plotted as a function of the variable τ. For this purpose we can draw x(τ) and h(t−τ) against the variable τ as shown in Figure 2-21. The function h(t−τ) plotted against the variable τ is the time-reversed version of h(τ) with a time shift t. Note that h(t−τ) is plotted for a particular value of t; therefore, t has a fixed value in this figure. The product x(τ)·h(t−τ) for the plots in Figure 2-21 will be non-zero only in the interval 0 ≤ τ ≤ t, and the value of the integral is equal to 4t. In this manner the value of y(t) can be calculated for all values of t. As the value of t increases the rectangular function h(t−τ) moves to the right of the plot in Figure 2-21; therefore, this graphical representation of convolution is often referred to as "time-reversing and sliding one function over the other and finding the area under the product".



Figure 2-21. The functions x(τ) and h(t−τ) for Example 2-2.

Since the functions in this integration contain discontinuities the calculation can be done piecewise as follows.

i) In the region t ≤ 0, the product x(τ)·h(t−τ) is zero and therefore,

y(t) = 0

(2.22)

ii) In the region 0 ≤ t ≤ T1, the two functions overlap up to τ = t, and

y(t) = ∫_0^t (2)·(2) dτ = 4t    (2.23)

iii) In the region T1 ≤ t ≤ T2 we have:

y(t) = ∫_0^{T1} (2)·(2) dτ = 4T1    (2.24)

iv) In the region T2 ≤ t ≤ T1+T2 we have:

y(t) = ∫_{t−T2}^{T1} (2)·(2) dτ = 4(T1 + T2 − t)    (2.25)

v) And finally, in the region T1+T2 ≤ t we have

y(t) = 0

(2.26)

Eqs.(2.22) to (2.26) completely describe the output of this system. The output is plotted in Figure 2-22.


[Plot of y(t) against t: a trapezoidal output spanning Regions 1-5, corresponding to Eqs. (2.22)-(2.26)]

Figure 2-22. Output signal, y(t), for Example 2-2.
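Example 2-2 can also be checked numerically by approximating the convolution integral with a discrete sum. The Python sketch below (numpy assumed; the durations T1 = 1 s and T2 = 2 s are hypothetical values chosen for the demonstration) reproduces the trapezoidal output, including the flat top of height 4·T1.

    import numpy as np

    dt = 0.001                       # integration step for the discrete approximation
    T1, T2 = 1.0, 2.0                # assumed pulse durations, T1 <= T2
    t = np.arange(0.0, 5.0, dt)

    x = np.where((t >= 0) & (t <= T1), 2.0, 0.0)   # x(t): amplitude 2, duration T1
    h = np.where((t >= 0) & (t <= T2), 2.0, 0.0)   # h(t): amplitude 2, duration T2

    # Discrete approximation of y(t) = integral of x(tau) h(t - tau) d(tau)
    y = np.convolve(x, h) * dt

    # In the flat region T1 <= t <= T2 the output should equal 4*T1 (Eq. (2.24))
    print(y[int(1.5 / dt)])          # approximately 4.0 for T1 = 1

Scaling np.convolve by the step dt is what turns the discrete sum into an approximation of the continuous integral.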

EXAMPLE 2-3

A step function is input to a system with a first order impulse response as defined below. Find the output signal.

x(t) = u(t)

(2.27)

h(t) = e^(−kt)·u(t)

(2.28)


Figure 2-23. Input signal and impulse response for Example 2-3.

The input, x(t), and the impulse response, h(t), are drawn in Figure 2-23. The output can be calculated using the convolution integral.

y(t) = x(t)*h(t) = h(t)*x(t) = ∫_{−∞}^{∞} h(τ)·x(t − τ)dτ


The two functions x(t−τ) and h(τ) are shown in Figure 2-24 plotted against τ. The overlap between the two is in the range 0 ≤ τ ≤ t, when t > 0.


Figure 2-24. The functions x(t−τ) and h(τ) plotted against the variable τ.

i) In the region t < 0 we have

y(t) = 0

(2.29)

ii) In the region t > 0 we have

y(t) = ∫_0^t e^(−kτ) dτ = (1/k)·(1 − e^(−kt))    (2.30)

[Plot of y(t) against t: zero in Region 1 (t<0), rising toward the asymptote 1/k in Region 2 (t>0)]

Figure 2-25. Output signal, y(t), for Example 2-3.

The output function is shown in Figure 2-25.
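The result of Example 2-3 can be verified in the same way; in the sketch below (Python with numpy assumed; the decay constant k = 2 is a hypothetical choice) the numerically convolved output is compared with the closed form of Eq. (2.30).

    import numpy as np

    dt = 0.001
    k = 2.0                                   # assumed decay constant
    t = np.arange(0.0, 5.0, dt)

    x = np.ones_like(t)                       # unit step u(t), t >= 0
    h = np.exp(-k * t)                        # impulse response e^(-kt) u(t)

    y = np.convolve(x, h)[:len(t)] * dt       # numerical convolution
    y_exact = (1.0 - np.exp(-k * t)) / k      # Eq. (2.30)

    print(np.max(np.abs(y - y_exact)))        # small discretization error (order dt)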


EXERCISES

1. From the definition of the convolution integral prove that the convolution operation is (i) commutative, (ii) associative and (iii) distributive.

2. For the following pairs of input signal, x(t), and system impulse response, h(t), determine the output signal, yet):

(a) x(t) = δ(t−1) + δ(t+1), h(t) = e^(−2t)·u(t)

(b) x(t) = u(t), h(t) = u(t) − u(t−1)

3. Obtain the odd and even parts of the following functions: (a) x(t) = e^(−2t)·u(t)

(b) x(t)=u(t)-u(t-2)

4. Two first order systems are cascaded as shown in Figure 2-26:

x(t) → [h1(t)] → [h2(t)] → y(t)

Figure 2-26. Cascaded system for Exercise 4.

Find the overall transfer function. Determine the output when a step function is input.

Chapter 3

Fourier Analysis for Continuous Time Processes

3.1 Decomposition of Periodic Signals

Any periodic function with a frequency f0, i.e., period T0=1/f0, not only repeats itself with period T0 but also in periods of 2T0, 3T0, 4T0, etc. Therefore, a set of periodic functions with frequencies Fp, 2Fp, 3Fp, 4Fp, ... kFp when combined (added) will be periodic with period Tp=1/Fp (which is the lowest common period). In other words periodic functions display the property that when a periodic function with a certain frequency (called the fundamental frequency) is added to one or more of its harmonics (functions with frequencies that are multiples of the fundamental frequency) a new periodic function with exactly the same period as the fundamental is obtained. For example, take a sinusoid of frequency 5 Hz (period of one cycle = 0.2 s) and two of its harmonics, namely, 10 Hz and 20 Hz,

x(t) = sin(2π(5)t) + 1.5·cos(2π(10)t) + 0.6·sin(2π(20)t)    (3.1)

The 3 sinusoidal terms on the right side of Eq.(3.1), at 5, 10 and 20 Hz, are shown graphically in the upper plot in Figure 3-1. The sum of the 3 sinusoids, resulting in x(t), is shown in the lower plot in Figure 3-1. The resulting composite signal is periodic with period 0.2 s although it is non-sinusoidal. Thus we can synthesize periodic waveforms of arbitrary shape by combining sinusoids. The shape of such a synthesized periodic waveform depends on two sets of parameters, namely, the relative amplitudes of the harmonics and the phase shifts. It is now easy to imagine the reverse process whereby the periodic signal, x(t), with period 0.2 s can be decomposed into 3 sinusoids with frequencies that are multiples of 5 Hz.


Thus, an important property of periodic signals is that they can be decomposed into a set of periodic functions whose frequencies are multiples of the fundamental frequency of the original signal. The set of periodic functions can be sinusoidal waves, square waves, etc. Sinusoidal waves are especially popular for the representation of periodic functions on account of various special properties of sinusoids like differentiation (i.e., sinusoids are always differentiable and also the derivative of a sinusoid is another similar-looking sinusoid). This decomposition of periodic functions into sinusoidal components is called Fourier analysis; the complementary process whereby a set of sinusoidal functions are combined to form a new periodic function is called Fourier synthesis.


Figure 3-1. Periodic signal, x(t), formed as the sum of 3 sinusoids, x1(t), x2(t) & x3(t).

3.1.1 Synthesis of an ECG Signal Using Pure Sinusoids

This idea can be well illustrated by the example of a physiological signal.

We'll see that the electrocardiogram (ECG) which is a common physiological signal can be obtained from the combination of a set of sinusoids. That is, we'll synthesize an ECG signal from a set of sinusoids.

The electrocardiographic signal in a normal human being at rest is roughly a periodic signal with period T=0.833 s, corresponding to a heart rate of 72 beats per minute. The fundamental frequency of such an ECG


signal is therefore 1.2 Hz. Consider the addition of a sinusoid of 1.2 Hz and its first three harmonics expressed as

e(t) = Σ_{k=1}^{4} p_k·cos(2π·f_k·t + φ_k)    (3.2)

where f_k = k·f0, with f0 being the fundamental frequency (1.2 Hz in this case), and φ_k is the phase shift of each sinusoid. The values of p_k and φ_k (k=1 to 4) are given in Table 3-1. The sum e(t) is shown in Figure 3-2.


Figure 3-2. Sum of four sinusoids - a first approximation to ECG synthesis.

This sum of four sinusoids exhibits a repeating large peak similar to the R-wave in an ECG but the other ECG components are absent and a small ripple is seen instead. If ten sinusoids of appropriate amplitudes and phase shifts are added instead of four we obtain a more prominent peak like the R-wave as well as smaller significant components similar to the P-wave and T-wave, although the ripple remains. This sum of ten sinusoids is expressed in Eq.(3.3) and plotted in Figure 3-3. See Table 3-1 for the values of p_k and φ_k (k=1 to 10).

e'(t) = Σ_{k=1}^{10} p_k·cos(2π·f_k·t + φ_k)    (3.3)

A much better construction of the ECG signal is obtained by the combination of 30 sinusoids (a 1.2 Hz sinusoid and 29 harmonics) with the amplitudes (p_k) and phase shifts (φ_k) of the sinusoids as given in Table 3-1.


E1(t) = Σ_{k=1}^{30} p_k·cos(2π·f_k·t + φ_k)    (3.4)


Figure 3-3. Sum of ten sinusoids - a better approximation to the ECG.

The set of sinusoids with frequencies that are multiples of the fundamental 1.2 Hz sinusoid given by Eq.(3.4) combine to yield a new periodic function with period 0.833 s. It is often necessary to add a constant value or d.c. component to the set of sinusoids to obtain the required signal; this d.c. component may be regarded as a sinusoid with frequency 0 Hz. To the above ECG signal the addition of a constant value p0 produces a signal with a baseline of zero. The synthesis expression for this signal including the d.c. component is

E_s(t) = p0 + Σ_{k=1}^{30} p_k·cos(2π·f0·k·t + φ_k)
       = Σ_{k=0}^{30} p_k·cos(2π·f_k·t + φ_k)
       = Σ_{k=0}^{30} p_k·cos(ω_k·t + φ_k)    (3.5)

This is the general expression for the synthesis of a signal from sinusoidal components. The ECG signal synthesized with the components in Table 3-1 (fundamental frequency f0 = 1.2 Hz) is illustrated in Figure 3-4.

Table 3-1. Sinusoids for ECG synthesis
Sinusoid number k    Frequency f_k = k·f0 (Hz)    Magnitude p_k (mV)    Phase φ_k (radians)
0 0.0 8.000 0.000
1 1.2 12.434 0.912
2 2.4 10.775 1.769
3 3.6 9.121 2.513
4 4.8 8.770 3.165
5 6.0 10.053 3.902
6 7.2 12.066 4.800
7 8.4 13.924 5.799
8 9.6 15.069 0.553
9 10.8 15.143 1.593
10 12.0 14.021 2.610
11 13.2 11.911 3.585
12 14.4 9.361 4.489
13 15.6 7.112 5.295
14 16.8 5.732 6.029
15 18.0 5.121 0.531
16 19.2 4.701 1.459
17 20.4 4.115 2.519
18 21.6 3.300 3.670
19 22.8 2.283 4.886
20 24.0 1.129 6.261
21 25.2 0.555 2.713
22 26.4 1.489 4.555
23 27.6 2.250 5.745
24 28.8 2.540 0.545
25 30.0 2.349 1.579
26 31.2 1.815 2.582
27 32.4 1.150 3.573
28 33.6 0.549 4.660
29 34.8 0.250 0.302
30 36.0 0.494 1.979

Just as a set of sinusoids can be combined to form a new periodic function, it can be shown that any periodic signal or function can be decomposed into a set of sinusoidal components. This is the basis of Fourier conversions. As seen in the above example not only the amplitude but also the phase angle of each sinusoidal component is important. Therefore, in general this decomposition will involve sine and cosine functions, or alternatively complex exponential functions. We shall see that a set of complex exponentials is equivalent to a set of sinusoids and subsequently use only complex exponential functions as they are somewhat easier to handle mathematically. There are several different variations of the basic Fourier conversion, the use of each of which depends on whether the signal in question is continuous or discrete, and whether it is periodic or not.


Figure 3-4. Sum of 30 sinusoids providing a good representation of the ECG.
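Eq.(3.5) can be evaluated directly from Table 3-1. The Python sketch below (numpy assumed) synthesizes the 30-sinusoid ECG with the tabulated magnitudes p_k and phases φ_k; plotting the array E against t reproduces Figure 3-4.

    import numpy as np

    f0 = 1.2                                   # fundamental frequency (Hz)
    p = np.array([8.000, 12.434, 10.775, 9.121, 8.770, 10.053, 12.066,
                  13.924, 15.069, 15.143, 14.021, 11.911, 9.361, 7.112,
                  5.732, 5.121, 4.701, 4.115, 3.300, 2.283, 1.129, 0.555,
                  1.489, 2.250, 2.540, 2.349, 1.815, 1.150, 0.549, 0.250,
                  0.494])                      # p_k (mV), k = 0..30, from Table 3-1
    phi = np.array([0.000, 0.912, 1.769, 2.513, 3.165, 3.902, 4.800,
                    5.799, 0.553, 1.593, 2.610, 3.585, 4.489, 5.295,
                    6.029, 0.531, 1.459, 2.519, 3.670, 4.886, 6.261,
                    2.713, 4.555, 5.745, 0.545, 1.579, 2.582, 3.573,
                    4.660, 0.302, 1.979])      # phi_k (radians), from Table 3-1

    t = np.arange(0.0, 2.5, 0.001)             # about three heartbeats
    k = np.arange(31).reshape(-1, 1)           # harmonic numbers as a column
    E = np.sum(p.reshape(-1, 1) *
               np.cos(2 * np.pi * f0 * k * t + phi.reshape(-1, 1)), axis=0)
    print(len(E), E.max())                     # one sample per time point; R-wave peak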

3.2 Fourier Conversions

3.2.1 Periodic Continuous Time Signals: Fourier Series

As shown in the above discussion, a periodic function x(t) can be formed using a number of scaled and phase shifted sinusoidal components. These components have frequencies that are multiples of the fundamental frequency, ω0 = 2π·f0, where T0 = 1/f0 is the period of the signal x(t). Although in practice we may wish to use only a finite number of sinusoidal components, for generality the number of components is said to be infinite, and the synthesis of a general periodic function may be written as

x(t) = Σ_{k=0}^{∞} p_k·cos(2π·k·f0·t + φ_k)    (3.6)

where p_k and φ_k are sets of constants. By trigonometric expansion of the cosine we may rewrite Eq.(3.6) (let c_k = p_k·cos(φ_k) and d_k = −p_k·sin(φ_k)):

x(t) = Σ_{k=0}^{∞} [c_k·cos(2π·f_k·t) + d_k·sin(2π·f_k·t)]    (3.7)

We can rearrange Eq.(3.7) and write it in another form,


x(t) = Σ_{k=0}^{∞} { ½(c_k − j·d_k)·[cos(ω_k·t) + j·sin(ω_k·t)] + ½(c_k + j·d_k)·[cos(ω_k·t) − j·sin(ω_k·t)] }    (3.8)

The constant j in the expression above is the imaginary quantity √−1. Although the function x(t) is real valued, it is expressed using imaginary parts for convenient mathematical operations as we will see. (Verify by expanding the right hand side that the imaginary terms cancel out). Noting that cos(ω_k·t) = cos(−ω_k·t) and sin(ω_k·t) = −sin(−ω_k·t) we can rewrite Eq.(3.8) as

x(t) = Σ_{k=−∞}^{∞} a_k·[cos(ω_k·t) + j·sin(ω_k·t)]    (3.9)

where a_k is a complex number equal to (c_k − j·d_k)/2 for k > 0 and (c_k + j·d_k)/2 for k < 0. Finally using Euler's relation we can write Eq.(3.9) as

x(t) = Σ_{k=−∞}^{∞} a_k·e^(+j·k·ω0·t)    (3.10)

The coefficients a_k are complex numbers with magnitude |a_k| = √(c_k² + d_k²)/2 = p_k/2, and phase ∠a_k = φ_k. For notational consistency with some techniques to be introduced later we shall write a_k as a[k], and the above synthesis equation for periodic functions becomes

x(t) = Σ_{k=−∞}^{∞} a[k]·e^(+j·k·ω0·t)    (3.11)

This is the Fourier synthesis equation for periodic continuous time functions. In the above equations we also note that a[−k] is the complex conjugate of a[k]; this is true for all purely real x(t), which is all that we shall be concerned about.

Conversely, a periodic continuous function can be decomposed into the sum of an infinite number of complex exponentials. For a periodic function x(t) with period T0, the coefficients of these sinusoids or complex exponentials are calculated by the following formula:

a[k] = (1/T0)·∫_{<T0>} x(t)·e^(−j·k·ω0·t) dt,   k = ..., −2, −1, 0, 1, 2, ...    (3.12)


This is the Fourier analysis equation. Note that k has only integer values and a[k] is a discrete function. In general, a[k] is a complex number. The limit of integration is over any one period T0.

In order to obtain the Fourier series, the function or signal x(t) must satisfy certain conditions called the Dirichlet conditions:

• the function must be absolutely integrable over the interval T0,

• it must have only a finite number of maxima and minima in period T0,

• it must have only a finite number of discontinuities in period T0.

EXAMPLE 3-1

Consider the periodic continuous time signal, x(t), with repetition period T0=2, shown in Figure 3-5. Defined in the period −1 to +1,

x(t) = { 1 for −½ ≤ t ≤ ½; 0 elsewhere in the interval −1 < t < 1 }    (3.13)

The Fourier coefficients a[k] can be calculated for different values of k, viz., k = ±1, ±2, ±3, ±4, ... etc.,

a[k] = (1/2)·∫_{−1}^{1} x(t)·e^(−j·k·ω0·t) dt   (with ω0 = 2π/T0 = π)
     = (1/2)·∫_{−½}^{½} 1·e^(−j·k·π·t) dt
     = sin(kπ/2)/(kπ)   for k ≠ 0    (3.14)

a[0] = (1/2)·∫_{−½}^{½} 1·e^0 dt = 0.5

a[1] = 1/π = a[−1]
a[2] = 0 = a[−2]
a[3] = −1/(3π) = a[−3]
a[4] = 0 = a[−4]
a[5] = 1/(5π) = a[−5]   etc.



Figure 3-5. Periodic rectangular function centered about t=0.

In this case the a[k]'s are purely real, with the imaginary part being zero.

The calculated a[k]'s can either be tabulated or plotted as in Figure 3-6. It is often preferable to plot the coefficients. If the coefficients are not purely real (unlike in this case) they can be plotted either as (a) real and imaginary plots, or (b) magnitude and phase plots.

Reconstructing the original signal: The inverse conversion formula, the synthesis equation, can be used to recover x(t) using the a[k]'s.

x(t) = Σ_{k=−∞}^{∞} a[k]·e^(+j·k·π·t)
     = Σ_{k=−∞}^{∞} a[k]·{cos(kπt) + j·sin(kπt)}
     = Σ_{k=−∞}^{∞} a[k]·cos(kπt)    (3.15)

If this series is approximated using only a[−3] to a[3], then the reconstructed x(t) is as shown in Figure 3-7. If a better approximation is used to reconstruct x(t) using more sinusoids, i.e., with a[−11] to a[11], then x(t) is as shown in Figure 3-8. Therefore, as the number of terms used for reconstruction is increased the approximation to the original time function x(t) improves. However, in the case of a discontinuous function as in this example, for any finite number of terms the amplitude of the ripples remains constant although they become narrower and concentrated near the discontinuity with increasing number of terms. This behavior of the Fourier series for discontinuous functions is known as the Gibbs phenomenon.



Figure 3-6. Fourier series coefficients of a periodic rectangular function.

Figure 3-7. Reconstruction of the rectangular function from the first few Fourier coefficients.


Figure 3-8. Reconstruction of the rectangular function using more Fourier coefficients.
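The coefficients of Eq.(3.14) and the finite reconstructions of Figures 3-7 and 3-8 can be reproduced numerically. In the Python sketch below (numpy assumed) the integral of Eq.(3.12) is approximated by a simple rectangle-rule sum over one period.

    import numpy as np

    T0 = 2.0
    w0 = 2 * np.pi / T0                        # fundamental angular frequency (= pi)
    dt = 0.0005
    t = np.arange(-1.0, 1.0, dt)               # one period
    x = np.where(np.abs(t) <= 0.5, 1.0, 0.0)   # rectangular wave of Eq. (3.13)

    def a(k):
        # Rectangle-rule approximation of Eq. (3.12)
        return np.sum(x * np.exp(-1j * k * w0 * t)) * dt / T0

    print(a(0).real, a(1).real, a(3).real)     # approx 0.5, 1/pi, -1/(3*pi)

    # Finite reconstruction, a[-11] to a[11], as in Figure 3-8
    x_rec = sum(a(k) * np.exp(1j * k * w0 * t) for k in range(-11, 12)).real
    print(x_rec.max())                         # overshoot above 1: Gibbs phenomenon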

EXAMPLE 3-2

Consider another periodic continuous time signal similar to the one in Example 3-1 but shifted in time by ½, as shown in Figure 3-9. It has a period of 2 and is defined in the period −1 to +1 as


x(t) = { 1 for 0 ≤ t ≤ 1; 0 for −1 < t < 0 }    (3.16)


Figure 3-9. Periodic rectangular function not centered about t=0.

a[k] = (1/2)·∫_0^1 1·e^(−j·k·π·t) dt
     = [sin(kπ/2)/(kπ)]·e^(−j·k·π/2)   for k ≠ 0    (3.17)

a[0] = (1/2)·∫_0^1 1·e^0 dt = 0.5

The Fourier coefficients calculated in Eq.(3.17) have real and imaginary parts. The magnitude and phase are,

|a[k]| = sin(kπ/2)/(kπ),   ∠a[k] = −kπ/2    (3.18)

The Fourier coefficients a[k] can be calculated for different values of k, viz., k = ±1, ±2, ±3, ±4, ... etc.,

|a[1]| = 1/π = |a[−1]|,   ∠a[1] = −π/2 radians = −∠a[−1]
|a[2]| = 0 = |a[−2]|
|a[3]| = 1/(3π) = |a[−3]|,   ∠a[3] = −π/2 radians = −∠a[−3]


The magnitudes of a[k] are identical to those in the previous example. The only difference in the Fourier coefficients is the phase angle. This illustrates the fact that if a signal is simply shifted in time then only the Fourier phase angle is changed.

3.2.2 Aperiodic Continuous Time Signals: Fourier Transform

Most real signals are not periodic. Even a seemingly periodic signal like the cardiac cycle is non-periodic. The very notion of mortality implies non-periodicity. Many other biological signals like the EEG and EMG are more obviously non-periodic. Since these signals do not have a finite period, one may think of them as having a period of infinity. The fundamental frequency in this case is zero. Hence, whereas the Fourier series for a periodic function yields coefficients at discrete frequencies f0, 2f0, 3f0, ..., etc., the Fourier transform of a non-periodic function yields a continuous function since the space between the frequencies is zero. This then leads to the definition of the Fourier transform, in which the function is integrated over infinity. The Fourier transform and its inverse are

X(ω) = ∫_{−∞}^{∞} x(t)·e^(−jωt) dt    (3.19)

x(t) = (1/2π)·∫_{−∞}^{∞} X(ω)·e^(+jωt) dω    (3.20)

Note that ω is continuous and X(ω) is a continuous function.

EXAMPLE 3-3

Consider the non-periodic function shown in Figure 3-10.

x(t) = { 1 for −½ < t < ½; 0 elsewhere }    (3.21)


Figure 3-10. Rectangular pulse centered about t=0.

The Fourier transform is calculated as

X(ω) = ∫_{−∞}^{∞} x(t)·e^(−jωt) dt
     = ∫_{−½}^{½} 1·e^(−jωt) dt
     = [e^(−jωt)/(−jω)]_{−½}^{½}
     = (1/jω)·(e^(jω/2) − e^(−jω/2))
     = (2/ω)·sin(ω/2)    (3.22)

Figure 3-11. Fourier magnitude of a rectangular pulse.

This X(ω), shown graphically in Figure 3-11, is called a sinc function. The mathematical as well as graphical form of the sinc function are worth remembering as the function recurs frequently in signal processing. The sinc function is usually written as


X(ω) = sin(ω/2)/(ω/2) = sinc(ω/2)    (3.23)

Fourier transform in operator form: The Fourier transform of a signal x(t) is conventionally denoted with the corresponding capital letter as X(ω) and is often written in operator form in one of the following ways:

F{x(t)} = X(ω),   x(t) = F⁻¹{X(ω)},   x(t) ↔ X(ω)    (3.24)

3.2.3 Properties of the Fourier Domain

Linearity: The Fourier transform is a linear operation.

F{a·x1(t) + b·x2(t)} = a·F{x1(t)} + b·F{x2(t)}    (3.25)

Time shift: If a time function, x(t), is shifted by time τ, then in the Fourier domain it becomes a phase shift of ωτ radians.

F{x(t−τ)} = e^(−jωτ)·X(ω)    (3.26)

Time derivative: The derivative of a time function is transformed to multiplication by jω in the Fourier domain. This is an important property which is used to solve time differential equations in the frequency domain.

F{dx(t)/dt} = jω·X(ω)    (3.27)

Time integral: This is the complement of the above property - the time-integral of a function becomes division by jω in the Fourier domain.

F{∫_{−∞}^{t} x(τ)dτ} = X(ω)/(jω)    (3.28)


When the mean value of x(t) is non-zero, an additional term is introduced,

F{∫_{−∞}^{t} x(τ)dτ} = X(ω)/(jω) + π·X(0)·δ(ω)    (3.29)

Time domain convolution: The convolution operation in the time domain is transformed into a simple multiplication in the frequency domain.

F{x(t)*h(t)} = X(ω)·H(ω)    (3.30)

Thus if x(t) is input to a system with impulse response h(t), the output y(t) can be calculated in the Fourier transform domain as the product of the Fourier transforms of x(t) and h(t).

x(t) → [LTI system, h(t)] → y(t) = x(t)*h(t)
X(ω) → [LTI system, H(ω)] → Y(ω) = X(ω)·H(ω)

Figure 3-12. Convolution property of the Fourier transform.

This is a very important property since it suggests an alternative method for performing convolutions of time domain signals. Furthermore, deconvolution is also possible using this property with the system impulse response being calculable from known input and output signals,

h(t) = F⁻¹{H(ω)} = F⁻¹{Y(ω)/X(ω)}    (3.31)
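For sampled signals the same two properties can be illustrated with the discrete Fourier transform, which approximates X(ω) on a grid of frequencies. The Python sketch below (numpy assumed; the two decaying exponentials are arbitrary choices, not from the text) multiplies transforms to perform a convolution, Eq.(3.30), and divides them to recover the impulse response, Eq.(3.31). Zero-padding to length n makes the product correspond to linear rather than circular convolution.

    import numpy as np

    dt = 0.01
    t = np.arange(0.0, 10.0, dt)
    x = np.exp(-2.0 * t)                       # arbitrary input signal
    h = np.exp(-3.0 * t)                       # arbitrary impulse response

    n = len(x) + len(h) - 1                    # pad length for linear convolution
    X = np.fft.fft(x, n)
    H = np.fft.fft(h, n)
    Y = X * H                                  # Y(w) = X(w) H(w), Eq. (3.30)

    y_fft = np.fft.ifft(Y).real * dt
    y_direct = np.convolve(x, h) * dt
    print(np.max(np.abs(y_fft - y_direct)))    # essentially zero

    h_rec = np.fft.ifft(Y / X).real            # deconvolution, Eq. (3.31)
    print(np.max(np.abs(h_rec[:len(h)] - h)))  # close to zero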

Time domain multiplication: Time domain multiplication is transformed into convolution in the frequency domain. This is termed modulation.

F{x(t)·h(t)} = (1/2π)·X(ω)*H(ω)    (3.32)

Time scaling: Time expansion results in frequency shrinking and vice versa. Time-scaling x(t) by a factor a results in frequency-scaling of its Fourier transform by the factor a⁻¹:


F{x(at)} = (1/|a|)·X(ω/a)    (3.33)

Symmetry: If the time domain signal, x(t), is a real signal (has no imaginary component), then its Fourier transform is symmetrical about the vertical axis. (Superscript asterisk is used to denote the complex conjugate).

X(ω) = X*(−ω)
Re{X(ω)} = Re{X(−ω)}
Im{X(ω)} = −Im{X(−ω)}
|X(ω)| = |X(−ω)|
∠X(ω) = −∠X(−ω)    (3.34)

Parseval's relation: The total energy of the signal in the time domain is equal to the total energy of its frequency domain representation.

∫_{−∞}^{∞} |x(t)|² dt = (1/2π)·∫_{−∞}^{∞} |X(ω)|² dω    (3.35)

|X(ω)|² = X(ω)·X*(ω) is the energy density spectrum or power spectrum.

3.3 System Transfer Function

The impulse response of a linear time-invariant system is also a time function and can therefore be Fourier transformed. The Fourier transform of a system impulse response is referred to as the Transfer Function of the system. The transfer function is conceptually important especially since multiplication of a Fourier transformed signal with the transfer function of a system yields the frequency domain representation of the output (i.e., the Fourier transform of the output). Therefore, input-output calculations on a system are more easily done in the Fourier domain or "frequency domain". In fact, it is common practice to take the Fourier transform of the input signal as well as the transform of the impulse response of a system, multiply them and take the inverse Fourier transform of the product to obtain the system output.


3.3.1 The Laplace Transform

The Laplace transform may be regarded as a general form of the Fourier transform. The definition of the Laplace transform is similar to that of the Fourier transform with the difference that the Laplace transform uses a complex variable s. The Laplace variable is defined as s = σ + jω, and the Laplace transform is identical to the Fourier transform when s = jω. The Fourier transform, X(ω), is sometimes written as X(jω) to indicate this substitution of s = jω and also to emphasize that it is a complex function.

The Laplace transform of a time function x(t) is calculated as

L{x(t)} = X(s) = ∫_{−∞}^{∞} x(t)·e^(−st) dt    (3.36)

The time function x(t) can be obtained from its Laplace transform using the inverse Laplace formula,

x(t) = L⁻¹{X(s)} = (1/2πj)·∫ X(s)·e^(st) ds    (3.37)

The contour of integration is over some path where it converges.

3.3.2 Properties of the Laplace Transform

The Laplace transform of a signal has properties similar to the Fourier transform and they are summarized in Table 3-2 for convenience. Substituting s = jω will give the corresponding Fourier transform property.

Table 3-2. Summary of Laplace Transform Properties

Operation         Time domain signal    Laplace domain
Time shift        x(t−τ)                e^(−sτ)·X(s)
Time derivative   dx(t)/dt              s·X(s)
Time integral     ∫x(t) dt              s⁻¹·X(s)
Convolution       x(t)*h(t)             X(s)·H(s)
Modulation        x(t)·w(t)             (1/2πj)·X(s)*W(s)
Scaling           x(at)                 (1/|a|)·X(s/a)


EXAMPLE 3-4

Calculate the Laplace transform of the following function x(t), where k is a real constant:

x(t) = { 0 for −∞ < t < 0; e^(kt) for 0 ≤ t < ∞ }    (3.38)

X(s) = ∫_0^∞ e^(kt)·e^(−st) dt = ∫_0^∞ e^((k−s)·t) dt = [e^((k−s)·t)/(k−s)]_0^∞    (3.39)

This function converges if the exponent is finite, that is, if the real part of (k−s) is negative. Thus, if (k−s) < 0, i.e., (s−k) > 0 or s > k, we have

X(s) = (0 − 1)/(k − s) = 1/(s − k)    (3.40)

In this example the integral in Eq.(3.39) converges only if k−s has a negative real part, i.e., only if s−k > 0, i.e., s > k. The values of s where the Laplace transform converges can be represented graphically in the complex s-plane as shown in Figure 3-13. The regions where the function converges are indicated by shading. The right boundary of the region of convergence is at Re{s}=∞ and the left boundary is at Re{s}=k. For any causal system the region of convergence will be to the right of the last pole. In this graphical representation of the complex s-plane, the Fourier transform is on the Im{s} axis, or the jω axis (i.e., where s=jω). Therefore, if the region of convergence of the Laplace transform includes the Im{s} axis or the jω axis, then the Fourier transform of the function converges and exists. Thus we see that the Laplace transform is a more general expression that can be evaluated even when the Fourier transform cannot be evaluated.


The Laplace transform is often easily calculated by performing the integration as in Example 3-4. Computation of the line integral for the inverse transform is usually avoided and instead the inverse Laplace transformation is obtained by simply using a table of time-functions and their Laplace transforms. Such a list of functions and their Laplace transforms is given in Table 3-3. A combination of this table and the Laplace transform properties can be used to obtain the inverse transform (and also the direct transform) of a very large class of functions.


Figure 3-13. Regions of convergence of the function in Example 3-4 for two different values of the constant k, (a) k<0, (b) k>0. Shaded regions indicate values of s where the function converges.

In Table 3-3 the time functions are defined for t ≥ 0; a is an arbitrary real constant and n is an integer constant. If the Laplace transform of a system output is obtained by multiplying the input signal transform and the transfer function, it may be factorized so that each factor is one of the functions in Table 3-3. The output signal can then be calculated directly from this table.

Table 3-3. Laplace Transforms of common functions

Time function, x(t) for t ≥ 0        Laplace Transform, X(s)
δ(t)                                 1
u(t)                                 1/s
e^(−at)                              1/(a+s)
t·e^(−at)                            1/(a+s)²
t^(n−1)·e^(−at)/(n−1)!               1/(a+s)ⁿ
(1/a)·(1 − e^(−at))                  1/[s(a+s)]


EXAMPLE 3-5

Consider a signal x(t) input to an LTI system with impulse response h(t) as follows:

x(t) = e^(−bt)·u(t)
h(t) = e^(−ct)·u(t)    (3.41)

where b and c are constants.

Using Laplace transforms calculate the output y(t). Verify the result using direct convolution.

Laplace transform method: The Laplace transform of the output is

Y(s) = X(s)·H(s) = 1/[(b+s)(c+s)]    (3.42)

Using partial fractions Y(s) may be written as

Y(s) = [1/(c−b)]/(b+s) + [1/(b−c)]/(c+s)    (3.43)

Since b and c are constants, the inverse transform can be written directly by looking at the table. y(t) may be written compactly as

y(t) = [1/(c−b)]·(e^(−bt) − e^(−ct))·u(t)    (3.44)

Convolution method: y(t) = x(t)*h(t)

y(t) = ∫_{−∞}^{∞} x(τ)·h(t−τ) dτ

Region 1: when t < 0: x(τ)·h(t−τ) = 0 and,

y(t) = 0

(3.45)


Region 2: when t ≥ 0: x(τ)·h(t−τ) ≠ 0 when 0 ≤ τ ≤ t and,

y(t) = ∫_0^t e^(−bτ)·e^(−c(t−τ)) dτ
     = e^(−ct)·∫_0^t e^(−τ(b−c)) dτ
     = e^(−ct)·[e^(−(b−c)t) − 1]/(c−b)
     = [1/(c−b)]·(e^(−bt) − e^(−ct))    (3.46)
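Both methods of Example 3-5 can be cross-checked numerically. In the Python sketch below (numpy assumed; b = 1 and c = 4 are hypothetical constants with b ≠ c) the closed form of Eq.(3.46) is compared with a direct numerical convolution.

    import numpy as np

    dt = 0.001
    b, c = 1.0, 4.0                            # assumed constants, b != c
    t = np.arange(0.0, 8.0, dt)

    x = np.exp(-b * t)                         # x(t) = e^(-bt) u(t)
    h = np.exp(-c * t)                         # h(t) = e^(-ct) u(t)

    y_num = np.convolve(x, h)[:len(t)] * dt
    y_closed = (np.exp(-b * t) - np.exp(-c * t)) / (c - b)   # Eq. (3.46)

    print(np.max(np.abs(y_num - y_closed)))    # small discretization error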

Linear systems described by differential equations: A linear system with input, x(t), and output, y(t), may be described by an ordinary differential equation. The Laplace transform of the equation can be rearranged as Y(s)/X(s). This ratio is by definition the transfer function of the system. Thus, any linear system may be described by a differential equation, the system impulse response or the transfer function; and each of them is easily translated into the others.

Order of a System: The order of a linear system is the order of the differential equation used to describe it; alternatively, it is the order of the polynomials in the transfer function.

EXAMPLE 3-6

An LTI system with input x(t) and impulse response h(t) is described by the following differential equation:

dy(t)/dt + 3y(t) = 3x(t)    (3.47)

Obtain the transfer function and the impulse response of the system. For the transfer function plot (i) real and imaginary parts, (ii) magnitude and phase against frequency as the independent variable.

Taking the Laplace transform of the equation we get:

sY(s) + 3Y(s) = 3X(s)

(3.48)


Rearranging Eq.(3.48),

Y(s)[s+ 3] = 3X(s)

(3.49)

The transfer function of the system is

H(s) = Y(s)/X(s) = 3/(s+3)    (3.50)

The magnitude and phase of this function are plotted in Figure 3-14.


Figure 3-14. Magnitude and phase plots of the transfer function of Example 3-6.

The impulse response of the system is

h(t) = L⁻¹{H(s)} = 3e^(−3t)·u(t)    (3.51)
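One way to reproduce the plots of Figure 3-14 numerically is with Python's scipy.signal module (an assumption of this sketch; the text itself works analytically). The corner of H(s) = 3/(s+3) is at ω = 3 rad/s, where the gain is about −3 dB and the phase about −45 degrees.

    import numpy as np
    from scipy import signal

    system = signal.TransferFunction([3.0], [1.0, 3.0])   # H(s) = 3/(s + 3)
    w = np.logspace(-2, 2, 200)                           # frequencies in rad/s
    w, mag_db, phase_deg = signal.bode(system, w)

    i = np.argmin(np.abs(w - 3.0))                        # index nearest the corner
    print(mag_db[i], phase_deg[i])                        # about -3 dB and -45 deg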

3.3.3 Frequency Response of LTI Systems

An LTI system may be represented in the time domain or frequency domain as shown in Figure 3-15, where the Laplace transform (or Fourier transform) of the system impulse response is referred to as the transfer function of the LTI system. H(s=jω) represents the frequency response of the system. Since H(jω) is a complex function its magnitude and phase can be calculated. |H(jω)| will indicate the amount of gain or attenuation that different frequency components of signals will experience when passing through the system. Correspondingly, ∠H(jω) indicates the amount of phase shift that the different frequency components of signals will experience when passing through the system. The phase shift can be translated into a time shift since by definition the phase shift at any frequency is 2π times the


fractional time shift, i.e., phase shift φ = 2πτ/T radians, where τ is the time shift and T is the period of the frequency. For example, a phase shift of π/2 radians at 10 Hz corresponds to a time shift of 25 ms, while at 25 Hz a phase shift of π/2 corresponds to a time shift of 10 ms.

x(t) → [LTI system, h(t)] → y(t) = x(t)*h(t)
X(ω) → [LTI system, H(ω)] → Y(ω) = X(ω)·H(ω)

Figure 3-15. Time and frequency domain representations of an LTI system.

3.3.4 Pole-Zero Plots and Bode Plots

The transfer function of a system may be represented in the complex s-plane to illustrate its region of convergence. In the case of linear time-invariant, causal systems with stable responses, the magnitude and phase frequency responses can be plotted.

In general a transfer function, H(s), can be expressed as the ratio of two polynomials. These polynomials may be factorizable into first order or second order terms (or higher order terms),

H(s) = K·(s − a1)(s − a2)··· / [(s − b1)(s − b2)···]    (3.52)

The terms in the numerator are called zeros because if s=a1, or s=a2, etc., then the function H(s)=0; therefore, the transfer function H(s) is said to have zeros at a1, a2, etc. The terms in the denominator are called poles because if s=b1, or s=b2, etc., then the function H(s)=∞; therefore, the transfer function H(s) is said to have poles at b1, b2, etc. The poles of a function are of particular importance because they determine the boundary of the region of convergence. Since the Laplace transform of a function does not converge at the poles (where the function is infinite) the region of convergence must exclude the pole. In the case of causal systems, the region of convergence is always to the right of the last pole. Since we are interested in the analysis of real systems we shall be primarily interested in causal systems. But it is worth remembering that it is possible to synthesize systems that are effectively non-causal. When a causal function comprises several poles, the region of convergence will be the common region of convergence for all the factors and will be to the right of the right-most pole. This is illustrated in the following example where the function is plotted in the complex s-plane.


EXAMPLE 3-7

Draw the following transfer function of a causal system in the complex s-plane, with a=−1, b=−4 and c=−3, and indicate the region of convergence.

H(s) = (s − a)/[(s − b)(s − c)]    (3.53)

Figure 3-16. Transfer function of Example 3-7 plotted in the complex s-plane. The region of convergence for each pole is to its right. The overlapping region which is the region of convergence for the transfer function is shown in dark gray.

The transfer function is plotted in Figure 3-16, with the pole locations indicated by a cross, "x", and the zero location indicated by an open circle, "o". Such plots are called pole-zero plots. The overlapping region of convergence is shaded in dark gray. The zeros are not of particular concern here since the function converges always at the zero. However, sometimes the occurrence of a zero at the same place as a pole can affect the convergence of the function.

EXAMPLE 3-8

Consider the following transfer function:

H(s) = 101/(s² + 2s + 101)    (3.54)

Draw the poles in the s-plane and indicate the region of convergence.

The denominator of this function cannot be factorized into simple first order terms, since the roots are complex. The roots of the second order polynomial in the denominator are


s = [−2 ± √(4 − 404)]/2 = −1 ± 10j    (3.55)

The poles are marked in Figure 3-17 and the region of convergence shown shaded is to the right of the poles.

Figure 3-17. Transfer function of Example 3-8 and its region of convergence.

Usually, the frequency response of an LTI system is presented as graphical plots called Bode Plots. Bode plots allow the quick calculation of a system transfer function from a few easily remembered primitives. The transfer function (Laplace or Fourier transform of the impulse response) of the system is written as the ratio of polynomials. The numerator and denominator are factorized and the primitive plots of each factor are used to obtain the overall frequency response. In order to allow easy combination of the primitive plots, in a Bode plot (i) the frequency is drawn to a logarithmic scale, (ii) the magnitude is plotted in decibels (dB). Since the transfer function usually gives the voltage gain of electric circuits, the power gain is proportional to the square of the voltage gain. Thus the logarithmic gain in Bels is log₁₀|H²(s)| = 2·log₁₀|H(s)|. The gain in decibels (a tenth of a Bel) is calculated as 20·log₁₀|H(s)|. If the asymptotic Bode plots for a term that is a pole or a zero are known (i.e., the primitive plots) then the plot for the entire transfer function H(s) can be easily plotted graphically as follows:

(a) the magnitude plots in decibels of all the terms can be added since

20·log₁₀|H1(s)·H2(s)| = 20·log₁₀|H1(s)| + 20·log₁₀|H2(s)|    (3.56)

(b) the phase plots of all the terms can be added since

∠[H1(s)·H2(s)] = ∠H1(s) + ∠H2(s)    (3.57)


Since Bode plots are just asymptotic approximations to the actual frequency plots the graphical addition of the terms is quite simple. The advantage of these asymptotic plots is that the response of any system can be plotted very quickly.

Bode plot primitives: A Bode plot primitive is a plot for a factor, F(s), that cannot be reduced further.

1. Constant term: F(s) = K, or F(jω) = K.

The gain is independent of frequency, and in dB equal to 20·log₁₀|K| at all frequencies. The phase is equal to zero because there is no imaginary part. These are shown in Figure 3-18.


Figure 3-18. Bode plots of a constant term.

2. Zero at s=0: F(s) = s/k, or F(jω) = jω/k.

|F(jω)| = ω/k, ∠F(jω) = tan⁻¹(∞) = 90°.

Table 3-4. Zero at s=0

Frequency ω (rad/s)   Magnitude (dB)   Phase (degrees)
k/100                 −40              90
k/10                  −20              90
k                     0                90
10k                   +20              90
100k                  +40              90

Table 3-4 shows the magnitude and phase at a few representative frequencies for the zero at s=0. The magnitude plot is a straight line with a slope of +20 dB/decade, Figure 3-19. A decade refers to a change in frequency by a factor of 10. At ω=k, the gain = 1 and in dB, gain = 0 dB.



Figure 3-19. Bode plots of a zero at the origin.

3. Pole at s=0: F(s) = k/s, or F(jω) = k/(jω).

|F(jω)| = k/ω, ∠F(jω) = tan⁻¹(−∞) = −90°.

Table 3-5. Pole at s=0

Frequency ω (rad/s)   Magnitude (dB)   Phase (degrees)
k/100                 +40              −90
k/10                  +20              −90
k                     0                −90
10k                   −20              −90
100k                  −40              −90

Table 3-5 shows the magnitude and phase at a few frequencies for the pole at s=0; the magnitude plot is a straight line with a slope of −20 dB/decade. It is interesting to note that a nice symmetry exists between the pole at s=0 and the zero at s=0 in these logarithmic plots of magnitude in dB versus the frequency on a log scale; the plots are both straight lines with only a sign change in the slopes. This symmetry won't be seen if simple linear plots are used. This is another reason for the use of logarithmic scales in Bode plots.

Figure 3-20. Bode plots for a pole at the origin.

4. Simple zero at s=−a: F(s) = (a+s)/a, or F(jω) = (a+jω)/a.

|F(jω)| = √(1 + ω²/a²), and ∠F(jω) = tan⁻¹(ω/a).


Table 3-6. Simple zero at s=−a

Frequency ω (rad/s)   Magnitude (dB)   Phase (degrees)
a/100                 0                0
a/10                  0                6
a                     3                45
10a                   +20              84
100a                  +40              90

Note that in the magnitude a break point occurs at ω=a, Table 3-6 and Figure 3-21; when ω≪a, |F(jω)| ≈ 1, and when ω≫a, |F(jω)| ≈ ω/a. In the Bode plot, Figure 3-21, this means that when ω≪a, the gain = 0 dB, and when ω≫a, the gain increases at the rate of +20 dB/decade. The phase is almost zero when ω < a/10, and π/2 when ω > 10a. In the range 0.1a < ω < 10a, the phase change is approximately linear.

Figure 3-21. Bode plots for a zero at s=−a.

5. Simple pole at s=−b: F(s) = b/(b+s), or F(jω) = b/(b+jω).

|F(jω)| = 1/√(1 + ω²/b²), and ∠F(jω) = tan⁻¹(−ω/b).

Table 3-7. Simple pole at s=−b

Frequency ω (rad/s)   Magnitude (dB)   Phase (degrees)
b/100                 0                0
b/10                  0                −6
b                     −3               −45
10b                   −20              −84
100b                  −40              −90

There is a break point in the gain magnitude at ω=b, Table 3-7 and Figure 3-22. Note when ω≪b, |F(jω)| ≈ 1, and when ω≫b, |F(jω)| ≈ b/ω. In the Bode plot this means that when ω≪b, the gain = 0 dB, and when ω≫b, the gain decreases at the rate of −20 dB/decade. The phase is almost


zero when ω < b/10, and −π/2 when ω > 10b; when ω = b, the phase = −π/4. In the range 0.1b < ω < 10b, the phase change is approximately linear.


Figure 3-22. Bode plots for a pole at s=−b.

EXAMPLE 3-9

Draw the magnitude and phase Bode plots of the following transfer function:

H(s) = 300s/(s² + 103s + 300)    (3.58)

We can rewrite H(s) in terms of factors convenient for using the above Bode plot primitives,

H(s) = 300s/[(s+3)(s+100)] = (s/1)·[3/(s+3)]·[100/(s+100)]    (3.59)


Figure 3-23. Bode plots for the first factor in Example 3-9.


Figure 3-23 shows the plot for the zero at s=0, crossing the 0 dB point at ω=1 rad/s. The pole at s=−3 has a break point at ω=3 rad/s (Figure 3-24) and the pole at s=−100 has a break point at ω=100 rad/s (Figure 3-25). Therefore, the sum of these three primitives yields a magnitude plot that rises with a slope of +20 dB/decade crossing the 0 dB point at ω=1 rad/s. At ω=3 rad/s the sum of the slopes becomes 0 dB/decade until ω=100 rad/s when the sum of the slopes becomes −20 dB/decade. The phase plot is similarly drawn with the slopes of the lines being added until a new break point in any one primitive plot is reached when the slope changes.

The graphical sum of the three factors yields the frequency response plots of the entire system and is shown in Figure 3-26. The overall frequency response of this transfer function is that of a band-pass system with frequencies below 3 rad/s suffering attenuation, and frequencies above 100 rad/s also suffering progressive attenuation.

Figure 3-24. Bode plots for the second factor in Example 3-9.

Figure 3-25. Bode plots for the third factor in Example 3-9.

It may be emphasized here that the Bode plots being asymptotic plots are accurate only away from the break points, and in the vicinity of the break points the actual frequency response does not undergo sharp changes as shown here (in the case of the magnitude plots). At the break frequencies, the magnitude plots can be better represented by a smooth curve. The phase


plots are approximations over a larger range, from the vicinity of a tenth of the break frequency to ten times the break frequency. The approximation error of the Bode plots becomes more pronounced when more than one factor has a break point around the same frequency. This is illustrated in the next example.

Figure 3-26. Bode plots of the entire transfer function, Example 3-9.

EXAMPLE 3-10

Draw the frequency response plots for the following transfer function.

H(s) = 101/(s² + 2s + 101)    (3.60)

The denominator of this function has complex roots and therefore we cannot use first order Bode primitives. However, at very low frequencies as well as at very high frequencies, the response is similar to that of any transfer function with two poles; i.e., the response is flat at low frequencies (0 dB/decade) and decreases at high frequencies with a slope of −40 dB/decade. The deviation in the nature of the frequency response of a transfer function with complex roots occurs at frequencies close to these poles. The present transfer function has complex poles at s = −1 ± 10j. Therefore, at frequencies close to 10 rad/s the frequency response is different from that of a transfer function with two simple poles. The second order Bode plots are shown in Figure 3-27. At 10 rad/s, the magnitude plot shows an increased gain, and this is called the resonant frequency of the system. The closer these poles are to the imaginary axis, the larger is the magnitude of resonance. Correspondingly, the phase plot also deviates from that for two simple poles; and the deviation increases as the complex poles move closer to the imaginary axis, or towards the right side of the complex s-plane.


The transfer function of the second order system with complex poles is often written in an alternative form that better indicates the amount of resonance,

H(s) = ω_n²/(s² + 2ζ·ω_n·s + ω_n²)    (3.61)

In Eq.(3.61) the constant ω_n is the resonant frequency and is also called the natural frequency of the system. The constant ζ is called the damping factor. The smaller the damping factor the greater is the magnitude of resonance. If ζ < 1 then the system is said to be underdamped and it has a resonant peak in the frequency plot and tends to oscillate at the resonant frequency in the time domain. If ζ > 1 the system is said to be overdamped and there is no resonant peak. ζ = 1 is termed critically damped. The phase plots of underdamped second order systems also deviate from that of second order systems with simple poles, with the amount of deviation increasing with decrease in damping. In the present example, ω_n = √101, and ζ = 1/√101.


Figure 3-27. Second order Bode plots for the transfer function of Example 3-10.
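The resonance of Example 3-10 can also be located numerically; in the sketch below (Python with scipy assumed) the peak of the exact magnitude response falls close to 10 rad/s, consistent with the complex poles at s = −1 ± 10j.

    import numpy as np
    from scipy import signal

    # H(s) = 101 / (s^2 + 2 s + 101); wn = sqrt(101), zeta = 1/sqrt(101)
    system = signal.TransferFunction([101.0], [1.0, 2.0, 101.0])
    w = np.logspace(-1, 3, 2000)
    w, mag_db, phase_deg = signal.bode(system, w)

    i = np.argmax(mag_db)
    print(w[i], mag_db[i])      # peak near 10 rad/s, roughly +14 dB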

3.3.5 Frequency Filters

A very commonly encountered component of linear systems is the frequency filter. Frequency filters or just filters as they are usually called,


act so as to allow signals of certain frequencies to pass, but impede (or attenuate) the passage of other signals. The frequency range in which signals are passed without any diminution or attenuation is called the pass-band and the range of frequencies where the signal is attenuated or stopped is called the stop-band. A given system may have more than one stop-band and pass-band. Simple frequency filters are classified according to their specific characteristics.

a) Low-Pass filters (LPF): A low pass filter passes low frequency signals and impedes or attenuates high frequency signals above a cutoff frequency.

b) High-Pass filters (HPF): A high pass filter passes signals above its cutoff frequency and attenuates signals with frequency less than the cutoff.

c) Band-Pass filters (BPF): A band pass filter passes signals with frequency between a lower cutoff point and an upper cutoff point. Signals with frequency less than the lower cutoff as well as signals above the upper cutoff are attenuated.

d) Band-Stop filters (BSF): A band stop filter or band reject filter attenuates signals with frequency between a lower cutoff and an upper cutoff point. Signals with frequency less than the lower cutoff as well as signals above the upper cutoff frequency are passed.

Filters that pass and attenuate signals in several different bands are also possible. Among systems with frequency filtering properties, in addition to the frequencies that are passed or attenuated, the amount of attenuation and gain also vary from system to system. The plot of the magnitude versus frequency, of course, indicates all these aspects graphically.

Ideal filters: It is conceptually useful to sometimes consider perfect filters which pass all frequencies in the pass-band perfectly (i.e., with gain = 1) and attenuate all frequencies in the stop-band completely (i.e., with gain = 0). However, practical filters have (i) a finite attenuation value in the stop-band, (ii) some variation of the gain with frequency in the pass-band, and (iii) a transition band between the pass-band and stop-band where neither good attenuation nor good transmission occurs.

Filter cutoff: The cutoff of a practical filter is defined as that frequency at which the power gain is one-half of the power gain at the pass-band frequencies. This corresponds to a decrease in 3 dB, therefore, the cutoff frequency is also called the 3 dB point.

There are many natural systems that act as frequency filters, since, they preferentially respond to some frequencies - for example, a tuning fork is a band-pass filter in that its output (mechanical vibration) contains only a narrow band of frequencies, even when the input (applied mechanical force) may contain several frequencies. There are also many artificial systems


designed to act as frequency filters - a shock absorber in a vehicle prevents high frequency inputs from appearing at the output. Electronic circuits to produce any of the above filter characteristics are also readily available.

3.3.6 Phase Shifts and Time Delays

As noted earlier a time shift of τ at a frequency f_a corresponds to a phase shift of φ_a = ω_a·τ = 2π·f_a·τ. A positive phase shift is termed lead and a negative phase shift is termed lag. A system that introduces a constant time delay at all frequencies will have a phase plot that is a straight line; thus a system that introduces a constant time delay independent of frequency is called a linear phase system. A system whose transfer function is a ratio of two polynomials in s (the Laplace variable) and does not contain a pure time delay will have a maximum phase shift that is simply related to the order of the system; such systems are called minimum phase systems. A pure time delay introduces a phase shift without affecting the magnitude. Therefore, the presence of a time delay can make the phase shift greater than what is expected from the order of the system. Most biological systems contain pure time delays and are therefore non-minimum phase systems.

3.4 Systems Representation of Physiological Processes

The following examples illustrate the use of systems representation in physiology. Although each block is in general neither linear nor time-invariant, it is often convenient to approximate them as LTI systems.

Illustration 1: Skeletal muscle contraction: The block diagram in Figure 3-28 shows the generation of skeletal muscle force beginning with the nerve action potential (nerve A.P.), proceeding to the muscle, the excitation-contraction coupling (E-C coupling) and finally force generation by the molecular force generator, the actin-myosin chemical dynamics.


Figure 3-28. Block diagram representation of muscle contraction.

Illustration 2: Pupil control:

The function of the pupil control system is to adjust the size of the pupil so that if the external light is too bright or too dim the pupil becomes smaller


or larger so as to allow just the desired amount of light (the desired amount of light being determined by the higher brain centers), Figure 3-29.


Figure 3-29. Block diagram representation of pupil control of the eye.

EXERCISES

1. Obtain the Fourier series or transform of the following functions:

(a) x(t) = sin(2t) + cos(3t)

(b) x(t) = u(t−1) − u(t−2)

(c) x(t) = e^(−2t) − e^(−4t)

2. Show that, if x(t) is real (i.e., Re{x(t)}=x(t), Im{x(t)}=0), then:

(i) Re{X(ω)} = Re{X(−ω)}

(ii) Im{X(ω)} = −Im{X(−ω)}

3. Show that the Fourier magnitude of x(t) is identical to the Fourier magnitude of x(t−τ), where τ is a constant.

4. Show that if X(ω) is the Fourier transform of x(t) then the Fourier transform of dx(t)/dt is jω·X(ω).

5. Skeletal muscle can be represented as a linear model as shown below in Figure 3-30:


Impulse train (action potentials) → [Calcium transients] → [Cross-bridge dynamics] → Force output

Figure 3-30. Exercise 5.

The input to this system is a sequence of impulses which represent the action potential excitation. The first block, which is the active state, has an impulse response h1(t), and the second block, which is the cross-bridge dynamics, has an impulse response h2(t).

Calculate the function describing the twitch force of the muscle model, (i.e., calculate the overall impulse response of the system).

6. Draw the Bode plots for the following transfer functions:

(a) H(s) = 5/[(s + 15)(s + 60)]

(b) H(s) = s(s + 100)/[(s + 3)(s + 50)(s + 300)]

Chapter 4

Discrete Time Signals and Systems

4.1 Discretization of Continuous-Time Signals

All the signals considered so far are continuous-time signals in the sense that they are defined for every possible value of time; in other words a function x(t) is defined for all real values of time. However, in order to subject signals to numerical analysis we must have finite lists of numbers, which can be obtained by sampling the continuous-time signal at a finite number of points in time. This means that the value of x(t) at discrete points in time is obtained. The resulting discrete-time signal, x[n], can be stored as a sequence of numbers in a computer and analyzed. In order to store x[n] as a sequence of numbers a finite resolution of representation must necessarily be chosen; this is the process of quantization. In practice sampling as well as quantization is done by electronic analogue-to-digital converter circuits. The two main considerations in analogue to digital (A/D) conversion are (i) the rate of data collection or the sampling frequency, and (ii) the resolution of data representation or quantization. During the theoretical analysis of discrete-time signals it is convenient to separate the issues of sampling and quantization. Since the effects of sampling are usually more critical we will primarily deal with the sampled signal, x[n], assuming that the effects of quantization are absent. Quantization, which on a digital computer depends on the number of digital bits used to represent the numbers, will be discussed briefly; but for most physiological signals it is found that 8 bits, 12 bits or 16 bits of data resolution is adequate for representing the signals. During the analysis of signals poor numerical resolution can cause accumulation of erroneous results. This is usually taken care of by using more than 16 bits (32 bits or 64 bits) in the processing of signals. Thus, although the input data itself may be acquired with 12 bit resolution, higher resolution has to be used

67

68

Chapter 4

in subsequent calculations to prevent propagation of rounding-off errors. We may note here that the terms analogue signal and continuous-time signal are' synonymous. The terms discrete-time signal and sampled signal are synonymous and refer to signals discretized by sampling; if the signals are quantized in addition to being sampled then they are referred to as digital data or digitized signals.

4.1.1 Sampling and Quantization

The process of digitization for the purpose of obtaining discrete data from an analogue signal comprises sampling and quantization, as shown in Figure 4-1. After the data has been digitized, it is put into a numerical machine for digital computations. Finally, the data has to be reconstructed into an analogue signal for use in the physical world.

Figure 4-1. Discretization of an analogue signal, numerical computation and reconstruction of the output analogue signal.

Sampling a continuous-time signal x(t) at intervals of Δt seconds yields a discrete signal x[nΔt], where the new variable n is the sample number and takes integer values. The sampling rate Fs is the reciprocal of the sampling interval: Fs = 1/Δt. Since the sampling interval Δt is usually fixed for all the signals in a particular situation, it is usual to simply refer to the sampled signal as x[n]. However, it must be remembered that for any particular x[n] the sampling interval Δt must be known to relate it to the original x(t). Since discretization of a continuous signal involves sampling it at a discrete number of time instants, the time interval between samples, or the rate of sampling, is of primary concern. If the sampling is very infrequent, then only a few samples are obtained, and important features of the signal like sudden changes, spikes and other transients may be missed. On the other hand, if a very high rate of sampling is used, then the amount of digital data obtained is very large, with the attendant problems of storage, computational expense, etc. Therefore, the issue of sampling pivots on the critical question of the minimum sampling rate which will assure that all the features of the continuous-time signal are faithfully recorded, i.e., that all the information contained in the continuous-time signal is preserved through the sampling process. This question may be rephrased as: what is the minimum sampling rate for a signal x(t) which will guarantee complete reconstruction of the original signal from the sampled version x[n]? This is the essence of the sampling theorem.

4.1.2 The Sampling Theorem

The Sampling Theorem (or Nyquist Sampling Theorem or Shannon Theorem) obtained from control theory and information theory states that:

When digitizing a continuous-time signal, it must be sampled at a frequency that is at least twice the signal's highest frequency component. The discrete signal thus formed can be used to completely reconstruct the original analogue continuous-time signal. This minimum sampling frequency is called the Nyquist sampling rate, or the Nyquist sampling frequency.

Aliasing: If a signal is sampled at a rate less than twice its highest frequency, i.e., less than its Nyquist rate, then the signal will be misrepresented, with the frequencies above half the sampling rate appearing indistinguishable from frequencies below half the sampling rate. This phenomenon is called aliasing. A discrete-time signal obtained by undersampling a continuous-time signal cannot be used to properly reconstruct the original signal.

4.1.2.1 Sampling renders the signal periodic in the Fourier domain

To understand this phenomenon of aliasing it is useful to look at the sampled signal in the frequency domain. We shall later show mathematically that the transform of any sampled signal is periodic. The periodicity of the frequency transform of sampled signals may be illustrated with an example.

EXAMPLE 4-1

Consider the sampling of three sinusoidal signals, a1(t), a2(t), a3(t), using a sampling rate of Fs = 4 samples/s (Δt = 0.25 s). Let the frequencies of the sinusoids be 1.2 Hz, 5.2 Hz and 9.2 Hz respectively, i.e., a1(t) = sin(2π·1.2t), a2(t) = sin(2π·5.2t) and a3(t) = sin(2π·9.2t). These three signals are shown in the top three plots of Figure 4-2. At the chosen sample intervals, 0, 0.25, 0.5, etc., all three signals have the same amplitude. Therefore, the sampled signals a1[n], a2[n] and a3[n] are identical.

a1[n] = sin(2π·1.2n/4)
a2[n] = sin(2π·5.2n/4) = sin(2π(4 + 1.2)n/4) = sin(2π·1.2n/4)    (4.1)
a3[n] = sin(2π·9.2n/4) = sin(2π(8 + 1.2)n/4) = sin(2π·1.2n/4)

This sampled signal, a[n] = a1[n] = a2[n] = a3[n], results from sampling a 1.2 Hz sinusoid, a 5.2 Hz sinusoid or a 9.2 Hz sinusoid. Since a[n] is simultaneously the discrete version of a1(t), a2(t) and a3(t), the Fourier transform of a[n] must also simultaneously exist at all three frequencies 1.2 Hz, 5.2 Hz and 9.2 Hz. In fact, as we will see later, the transform of a[n] repeats every 4 Hz (it repeats in the frequency domain with a spacing equal to the sampling frequency).
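This sample-level identity is easy to verify numerically. The fragment below is a Python/NumPy sketch added for illustration (it is not part of the original text); it evaluates the three sinusoids at the sample instants and confirms that the samples coincide:

import numpy as np

Fs = 4.0                          # sampling rate (samples/s)
t = np.arange(8) / Fs             # sample instants n*Δt, Δt = 0.25 s
a1 = np.sin(2*np.pi*1.2*t)        # 1.2 Hz sinusoid
a2 = np.sin(2*np.pi*5.2*t)        # 5.2 Hz sinusoid
a3 = np.sin(2*np.pi*9.2*t)        # 9.2 Hz sinusoid
print(np.allclose(a1, a2), np.allclose(a1, a3))    # True True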

Figure 4-2. Three different sinusoids sampled to obtain the same discrete function.


Figure 4-3. Frequency domain representation of the signals in the previous figure.

The Fourier transform of a[n] is shown in Figure 4-3. For discrete-time signals the Fourier transform uses the variable Ω since its properties are somewhat different from those of the Fourier transform for continuous-time signals. The Fourier transform magnitude, |A(Ω)|, shows frequency components at 1.2 Hz, 5.2 Hz and 9.2 Hz as well as other frequencies. Since the Fourier magnitude of a real signal is an even function, the negative frequency components are also repeated and are shown in Figure 4-3 by dotted lines.

The above illustration can be extended to any sinusoidal signal of frequency f0 sampled at a sampling rate of Fs samples per second; the sampled signal will be identical to that obtained by sampling any sinusoid of frequency f0 ± kFs, where k is any integer. Further, using the symmetry property of the Fourier transform of a real signal we can say that sampling a sinusoid of frequency f0 at Fs samples per second will yield a discrete-time signal identical to that obtained by sampling any sinusoid of frequency ±f0 ± k·Fs, where k is any integer. This concept is applicable to any signal, including non-sinusoids, since the signal can be regarded as simply a combination of sinusoids by Fourier analysis. Thus, any signal sampled at Fs samples per second will yield a discrete-time signal whose Fourier domain representation repeats with a spacing of Fs hertz.
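This folding of frequencies into the band 0 to Fs/2 can be captured in a small helper function. The sketch below is an added illustration, not from the original text: the apparent frequency of an undersampled sinusoid is found by reducing f modulo Fs and reflecting about Fs/2.

def apparent_frequency(f, Fs):
    """Frequency (between 0 and Fs/2) at which a sampled f-Hz sinusoid appears."""
    f = f % Fs                    # remove whole multiples of the sampling rate
    return min(f, Fs - f)         # fold about Fs/2

print(apparent_frequency(5.2, 4.0))   # 1.2 (up to floating-point rounding)
print(apparent_frequency(9.2, 4.0))   # 1.2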

ILLUSTRATION OF SAMPLING AND RECONSTRUCTION

Let us consider the sampling of a more realistic signal. Figure 4-4 shows a continuous-time signal x(t) and Figure 4-5 shows x[n] obtained by sampling x(t) at 25 Hz.

Figure 4-4. Continuous-time signal x(t).

We shall now look at the signal in the frequency domain, applying some of the concepts already developed. As seen in the case of the Fourier transform of continuous-time signals, the Fourier transform of a real signal results in a complex function with real and imaginary parts, the real part being an even function and the imaginary part an odd function. Therefore, mathematically the frequency domain representation is symmetrical about the ordinate and has both positive and negative frequencies. When illustrating the Fourier transform it is common to show the magnitude alone, the phase being shown only when necessary. Figure 4-6 shows the magnitude of X(ω), which is the Fourier transform of the signal x(t) in Figure 4-4. As can be seen in Figure 4-6, the magnitude is an even function and is symmetrical about the ordinate.

Figure 4-5. Discrete-time signal formed by sampling x(t) in the previous figure at a rate of 25 samples/second.

Figure 4-6. Fourier magnitude of x(t).

As demonstrated above, when a signal is sampled at a frequency Fs, the Fourier transform of the sampled signal is periodic with interval Fs. The Fourier transform, X(Ω), of the sampled signal x[n] is shown in Figure 4-7. Again only the magnitude is shown. It is clear from this figure that if the original signal has frequency components greater than Fs/2 then X(Ω), being formed by repeating X(ω), will be altered in the region of Fs/2, since the frequency components will overlap. If the sampling frequency, Fs, is reduced further, there will be substantial overlap between the adjacent periods of the Fourier transform, X(Ω). This overlapping is the phenomenon of aliasing. The problem is avoided if the original signal contains components only below Fs/2; conversely, the sampling rate, Fs, must be greater than twice the highest frequency component of the signal being sampled. This condition is the essence of the Sampling Theorem. The meaning of the sampling theorem becomes clearer when we consider methods of recovering the original x(t) from x[n].

Figure 4-7. Fourier magnitude of x[n].

4.1.3 Reconstruction of a Signal from its Sampled Version

The best test of the adequacy of the sampling rate is to examine whether the original signal x(t) can be reproduced from the sampled signal x[n]. If the process of sampling has involved no loss of information, complete reconstruction should be possible. The simplest method of reconstructing x(t) from x[n] is the zero-order hold, in which the analogue value is held at the sample value for one sample interval, Δt, and then changed to the new sample value. Another method, used commonly in computer displays, is first-order or linear interpolation, where adjacent points are joined by a straight line. Obviously, neither of these reproduces the continuous-time signal properly. The ideal method of reconstruction becomes evident when you consider the difference between the Fourier transform of the continuous-time signal, X(ω) (Figure 4-6), and the Fourier transform of the discrete-time signal, X(Ω) (Figure 4-7). X(ω) is the transform of the signal that we wish to reconstruct, and X(Ω) is the transform of the available sampled signal, x[n]. The difference between X(ω) and X(Ω) is that while X(Ω) repeats periodically over all Ω (i.e., −∞ < Ω < ∞), X(ω) is zero for |f| > fmax (where fmax is the highest frequency component of the signal). If fmax < Fs/2, then discarding X(Ω) at frequencies corresponding to |f| > Fs/2 will make it identical to X(ω). Therefore, a low pass filter that truncates the transform, X(Ω), to contain only one cycle and rejects the rest (higher frequencies) will effect the best possible reconstruction of the signal. This is illustrated in Figure 4-8, where an ideal low pass filter with cutoff Fc = Fs/2 is shown. Obviously, the reconstruction is adequate if and only if the original signal had no component greater than Fs/2.

Figure 4-8. Applying an ideal low pass filter in the frequency domain.

The difficulty with this ideal reconstruction is that practical low pass filters do not approach ideal characteristics. However, a good high order LPF will perform excellent reconstruction for all practical purposes. The most important point to be noted is that theoretically it is possible to recover the original signal completely if the condition of the sampling theorem has been observed during sampling.

Such a reconstruction low pass filter with cutoff at Fs/2 performs interpolation in the time domain that is "ideal" and therefore much better than either simple zero-order hold or linear interpolation. If the signal is sampled at a rate several times greater than the Nyquist rate, then the reconstruction is achieved much more easily, without any excessive concern about the method of interpolation. This becomes obvious when we observe that using a high sampling rate will allow even a simple low pass filter with a not very sharp cutoff to easily separate adjacent cycles of X(Ω).
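In the time domain the ideal low pass filter corresponds to sinc interpolation of the samples, x(t) = Σ x[n]·sinc((t − nΔt)/Δt). The sketch below is an added illustration, not from the original text (np.sinc is the normalized sinc, sin(πx)/(πx)); it reconstructs a band-limited signal from its samples:

import numpy as np

def sinc_reconstruct(x, Fs, t):
    """Ideal (sinc) interpolation of uniformly spaced samples x[n] at times t."""
    n = np.arange(len(x))
    return np.sum(x[:, None] * np.sinc(Fs*t[None, :] - n[:, None]), axis=0)

Fs = 25.0
x = np.sin(2*np.pi*2.0*np.arange(50)/Fs)      # 2 Hz sinusoid sampled at 25 Hz
t = np.linspace(0.5, 1.5, 201)                # interior instants, away from record edges
err = sinc_reconstruct(x, Fs, t) - np.sin(2*np.pi*2.0*t)
print(np.max(np.abs(err)))                    # small; exact only for an infinitely long record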

4.1.4 Quantization of Sampled Data

Analogue data in principle has infinite resolution, which means that it can be amplified to any extent to see finer details. Digital data, on the other hand, has finite resolution depending on the number of bits used to represent each data point. The above discussion of sampling has assumed that infinite resolution is available for data representation; that is, any numerical value of the sample can be correctly recorded with no loss in precision. However, in practice this is not true. Commercially available A/D converters usually have 8, 12, 16 or 24 bit data representation.

If an analogue signal is digitized by an 8-bit A/D converter with an input range of −5 V to +5 V, then the resolution is: Q = 10 V/2^8 = 39 mV. If the A/D converter has the same input range but 16-bit data, the resolution is: Q = 10 V/2^16 = 0.153 mV. The input to the A/D converter being an analogue signal, the range can easily be modified using a low noise amplifier (or attenuator) so that the full range of the analogue signal matches the range of the digital representation, as assumed here. The choice of the number of bits in the A/D converter is constrained by two factors: (i) the time taken to perform the analogue-to-digital conversion of each sample, and (ii) the level of noise in the system. The first condition also implies that the conversion should obtain the samples exactly at the time points n/Fs.
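These resolution figures follow directly from Q = (full-scale range)/2^n; a one-line check (an added illustration, not from the book):

def quantization_step(v_range, bits):
    """Quantization step Q = full-scale range / 2**bits (volts)."""
    return v_range / 2**bits

print(quantization_step(10.0, 8))    # 0.0390625 V, i.e. 39 mV
print(quantization_step(10.0, 16))   # 0.000152587890625 V, i.e. 0.153 mV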

4.1.5 Data Conversion Time - Sample & Hold

A/D conversion is normally controlled by a clock whose frequency can be set to any required value. In such an A/D system, in order to have equally spaced samples the A/D conversion must be complete before the input signal changes to another quantization level, i.e., the digital data should correspond to the analogue value at that exact point in time and should not accidentally appear even one quantization level above or below. If the conversion of the analogue data is done directly, the conversion will have to be extremely fast, as illustrated in the example below:

EXAMPLE 4-2

A 100 Hz sinusoidal analogue signal, sin(2π·100·t), has to be converted by an 8-bit A/D converter whose full input range is covered by the signal (i.e., −1 to +1 volt). How quickly must each A/D conversion be done? What if the signal is 1 kHz? What if a 16-bit A/D converter is used?

We should consider the case where the signal is changing fastest, and ensure that the A/D converter will sample the signal correctly. The resolution of the 8-bit converter is:

Resolution = range/number of levels: Q = 2/256 = 7.8 mV

The maximum rate of change of the sinusoidal signal occurs when it crosses zero, and at this point the 100 Hz signal changes at the rate of

(d/dt) sin(ωt) at t = 0 = 2π(100) volts/second = 628 V/s    (4.2)

Therefore, if the A/D conversion is to remain accurate to the 8-bit resolution, it should be completed in less than the time taken for the signal to change by one quantization level, Q, i.e., T = 0.0078/628 = 12 μs.

If a 1 kHz signal is to be digitized by this A/D converter, the conversion should take no more than 1.2 μs. If 16-bit A/D conversion is used for a 100 Hz sinusoidal signal, then the conversion should take no more than 0.048 μs.
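The arithmetic of Example 4-2 generalizes to T = Q/(maximum slew rate); the sketch below (an added illustration, not from the original text) reproduces all three figures:

import numpy as np

def max_conversion_time(f_hz, v_range, bits):
    """Longest allowed conversion time: one quantization step / max slope of the sinusoid."""
    Q = v_range / 2**bits                  # quantization step (V)
    slew = 2*np.pi*f_hz*(v_range/2)        # peak slope of A*sin(2*pi*f*t), A = v_range/2 (V/s)
    return Q / slew

for f, b in ((100, 8), (1000, 8), (100, 16)):
    print(f, b, max_conversion_time(f, 2.0, b)*1e6, "microseconds")   # ~12.4, ~1.24, ~0.049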


The most common types of commercial A/D converters (successive approximation A/D, and dual slope A/D) require about 50 μs for 16-bit conversion; in these A/D converters the conversion time increases with increasing number of bits. Therefore, an A/D converter used directly cannot digitize very fast signals; alternatively, for fast signals only a small digital data size, of 4 to 8 bits, can be used. Fortunately, a very simple solution is available in the form of sample-and-hold circuits.

Sample and hold: A sample-and-hold circuit driven by clock pulses simply holds the input signal constant from the time of a clock pulse until the next clock pulse; therefore, the output value is constant for the entire clock period and equal to the input signal at the time of the clock pulse. Thus, the A/D conversion can be done at any time during one clock period. In the case of the example above, where the signal frequency is 1 kHz, the sampling rate needs to be above 2 kHz, which means that ½ ms is available for the A/D conversion (compare this with the time required without the sample and hold). This sequential process of sample & hold followed by A/D conversion driven by clock pulses is shown schematically in Figure 4-9.

Figure 4-9. A sample-and-hold circuit used with an A/D converter.

Quantization resolution and noise levels: In the above discussion it is assumed that the analogue signal contains information with infinite resolution and that every level of the digital data is significant. However, in practice analogue signals as well as the analogue circuitry in the A/D converter have finite noise levels and finite signal-to-noise ratio (SNR). It is quite straightforward to see that if every change in level of the digital data is to reflect changes in the signal and not noise, then the noise itself should not cause a change of one quantization level, Q; therefore, the noise should be less than ±½Q. Furthermore, in order to avoid aliasing, if the sampling rate is Fs, then the magnitude of the total input signal, including noise, should be less than ½Q at frequencies above Fs/2.

Effect of the number of bits on fidelity: Practical considerations require that a limited number of bits be chosen for data representation. In a noise-free system, increasing the number of bits will increase the accuracy of data representation. Reducing the number of bits is equivalent to increasing the effective noise content in the data. In other words, if you have 8-bit data representation and noise equivalent to 7 bits, the amount of signal information obtained is the same as having only 1 bit of conversion resolution.

4.2 Discrete-Time Signals

Discrete-time signals are obtained from continuous time signals by using an analogue-to-digital converter, which usually includes a sample and hold circuit. The conversions are controlled by a precision clock ensuring that the samples are equally spaced in time.

4.2.1 Analogue to Digital Conversion

The process of obtaining discrete time signals from continuous time ones can be summarized as follows:

a) The minimum sampling rate is specified by the Nyquist Sampling Theorem, but for purposes of display and ease of reconstruction higher rates may be used.

b) The number of useful levels of quantization (= 2^n, where n is the number of bits of data representation) is determined by the noise in the continuous-time signal. The signal voltage corresponding to a change of one quantization level, Q = (Full Scale Range)/2^n, should be twice the magnitude of the noise (see the quantizer sketch after this list). If a sampling rate of Fs is used, the magnitude of all signal variation (including noise) above the frequency Fs/2 should be less than ½Q.

c) An adequately sampled signal can be used to reconstruct the original signal by applying a low pass filter to the sampled data (scaled impulses at the sample points) to remove the frequencies above Fs/2, where Fs is the sampling rate.
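The uniform quantizer of point (b) can be sketched in a few lines of code. This is an illustrative fragment added here, not part of the original text; it assumes the signal stays inside the converter range:

import numpy as np

def quantize(x, full_scale, bits):
    """Round samples to the nearest of 2**bits levels spanning +/- full_scale/2."""
    Q = full_scale / 2**bits
    return np.round(x / Q) * Q                    # assumes |x| stays inside the range

x = 0.9*np.sin(2*np.pi*2.0*np.arange(100)/25.0)   # 2 Hz sinusoid sampled at 25 Hz
xq = quantize(x, 2.0, 8)
print(np.max(np.abs(x - xq)))                     # at most Q/2 = (2/2**8)/2 ≈ 3.9 mV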

4.2.2 Operations on Discrete-Time Signals

All the operations that may be performed on continuous-time signals may also be performed on discrete-time signals. The most useful operations are:

Time shift: A signal x[n] that is delayed by k units of time is represented as x[n−k], and if it occurs k units of time earlier then we write x[n+k].

Time reversal: A signal, x[n], may be presented reversed in time, and it is then represented as x[−n].

Time scaling: A signal may be compressed in time by an integer factor. For example, x[2n] is the signal x[n] compressed in time by a factor of two. This is identical to reducing the sampling rate by a factor of two by discarding alternate samples, as the fragment below shows. Time expansion of the signal is not really possible with discrete-time functions, since that would require knowledge of values in-between samples. Although such values lying between samples can be obtained if the Nyquist sampling criterion has been satisfied, simple time expansion is not possible.
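In array terms this compression is simple decimation; a minimal sketch (added for illustration):

import numpy as np

x = np.arange(10.0)    # stand-in for a sampled signal x[n]
x2 = x[::2]            # x[2n]: every second sample, i.e. the sampling rate halved
print(x2)              # [0. 2. 4. 6. 8.]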

Even and Odd functions: A function is said to be even if x[n] = x[−n], and a function is said to be odd if x[n] = −x[−n]. Any discrete-time function can be broken into a sum of two signals, one odd and the other even.

The even part of a function x[n] is: xe[n] = ½{x[n] + x[−n]}
The odd part of a function x[n] is: xo[n] = ½{x[n] − x[−n]}
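For a finite record stored with n = 0 at the center of the array, this decomposition can be sketched as follows (an added illustration; the symmetric storage convention is an assumption of this fragment):

import numpy as np

def even_odd_parts(x):
    """Split x[n], n = -N..N (n = 0 at the array center), into even and odd parts."""
    xr = x[::-1]                       # x[-n]
    return 0.5*(x + xr), 0.5*(x - xr)

x = np.array([1.0, 4.0, 2.0, -3.0, 5.0])   # samples for n = -2..2
xe, xo = even_odd_parts(x)
print(np.allclose(xe + xo, x))             # True: x[n] = xe[n] + xo[n]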

Periodic signals: A signal x[n] is said to be periodic if, for some number of samples N, x[n] = x[n+kN] where k is any integer. A periodic continuous-time function x(t) when sampled will not necessarily yield a periodic discrete-time function. The discrete-time function x[n] will be periodic only if the period, T, of x(t) is a rational multiple of the sampling interval, Δt (in the simplest case, an integer multiple).

4.3 Discrete-Time Systems

As in the case of continuous-time systems, any set of processes that affects the nature of a signal may be called a system. A system may take one or more signals and produce one or more outputs. The simplest system takes one signal as its input and delivers one output signal. All the properties of continuous-time systems, viz., memory, causality, invertibility, stability, time-invariance and linearity, can be defined in the same manner for discrete-time systems. The last two, being particularly important, are repeated here.

Time Invariance: A discrete-time system is said to be time-invariant if its input-output characteristics do not change over time. This means that time-shifting the input by a constant amount of time results in the output being time-shifted by the same constant amount of time.

Linearity: A discrete-time system is said to be linear if, (i) scaling the input by a constant scales the output by the same constant, and (ii) addition of two or more input signals results in addition of the individual output responses.

Unit Impulse function or Delta function: The definition of the discrete-time unit impulse function is quite simple compared to the definition of the continuous-time impulse function.


δ[n] = { 1, n = 0
         0, n ≠ 0        (4.3)

The discrete impulse is also called the Kronecker delta function.

4.3.1 The Impulse Response of a Discrete LTI System

As in the case of continuous-time systems, the impulse response of a linear time-invariant discrete-time system is defined as the output of the system when a unit delta function is input to it, Figure 4-10.

Figure 4-10. Impulse response of a discrete-time system: δ[n] → LTI system → h[n].

Representing a discrete-time signal by scaled and shifted impulses: Analogous to the continuous-time case, discrete-time signals may be represented in terms of shifted and scaled delta functions. A unit amplitude event at any instant k can be denoted by a time-shifted impulse δ[n−k]. At instant k the magnitude of a signal, x[n], is x[k], and may be written as x[k]δ[n−k]. Therefore, a signal, x[n], may be expressed in general as

x[n] = Σ_{k=−∞}^{∞} x[k] δ[n−k]    (4.4)

4.3.2 The Convolution Sum

For a linear time-invariant system, since the response to a scaled and shifted delta function is the correspondingly scaled and shifted impulse response, the output of the system to x[n] will be

y[n] = Σ_{k=−∞}^{∞} x[k] h[n−k]    (4.5)

This convolution operation is denoted as

y[n]=x[n]*h[n]

(4.6)



4.3.3 Properties of the Discrete Convolution Operation

The convolution sum has the same primary properties as the convolution integral:

1. Commutative:

y[n] = x[n] * h[n] = h[n] * x[n]    (4.7)

2. Associative:

{x[n] * h1[n]} * h2[n] = x[n] * {h1[n] * h2[n]}    (4.8)

3. Distributive:

x[n] * {h1[n] + h2[n]} = x[n] * h1[n] + x[n] * h2[n]    (4.9)

4.3.4 Examples of the Convolution Sum

EXAMPLE 4-3

Find the output y[n] for a system with the input, x[n], and impulse response, h[n], shown in Figure 4-11.

Figure 4-11. Signals for Example 4-3: x[n] (amplitudes 3, 2, 1 at n = 0, 1, 2) and h[n] (amplitudes 1, 1 at n = 0, 1).

x[n] = 3δ[n] + 2δ[n−1] + δ[n−2]    (4.10)

h[n] = δ[n] + δ[n−1]    (4.11)

The output calculation is illustrated in Figure 4-12.

y[n] = 3h[n] + 2h[n−1] + h[n−2]    (4.12)

Discrete Time Signals and Systems

81

~I LTI t
o[n] system 3h[n]

20[n-1] , LTI I ~ 2h[n-1J
system

,I LTt I ~ 1 h[n-2]
1 0[n-2] system

Figure 4-12. Output calculation for Example 4-3. y[O]=3, y[I]=5, y[2]=3, y[3]=I, all other values are zero
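The same result is obtained by a direct evaluation of the convolution sum. The fragment below is an added illustration using NumPy's convolve, not part of the original text:

import numpy as np

x = np.array([3.0, 2.0, 1.0])    # x[n] = 3δ[n] + 2δ[n-1] + δ[n-2]
h = np.array([1.0, 1.0])         # h[n] = δ[n] + δ[n-1]
y = np.convolve(x, h)            # the convolution sum of Eq. (4.5)
print(y)                         # [3. 5. 3. 1.]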

EXAMPLE 4-4

Find the output, y[n], of the following system with a unit step function as the impulse response and an input which is a decreasing geometric series.

x[n] = a^n u[n], with 0 < a < 1    (4.13)

h[n] = u[n]    (4.14)

y[n] = Σ_{k=−∞}^{∞} h[n−k] x[k]
     = Σ_{k=0}^{n} a^k
     = (1 − a^(n+1)) / (1 − a)    (4.15)

If we have a = ½, then y[n] = 2 − (½)^n.
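Equation (4.15) can be checked numerically; convolving x[n] with a unit step amounts to a running sum of x (an added sketch, not from the original text):

import numpy as np

a = 0.5
n = np.arange(10)
x = a**n                                        # x[n] = a^n u[n]
y = np.cumsum(x)                                # convolution with u[n] = running sum
print(np.allclose(y, (1 - a**(n+1))/(1 - a)))   # True; for a = 1/2 this is 2 - (1/2)^n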

4.3.5 Frequency Filtering by Discrete-Time Systems

It is possible to devise simple LTI discrete-time systems that selectively permit the passage of some signals while attenuating others. A simple example will illustrate the behavior of such filters.

EXAMPLE 4-5

In this example we will examine a set of sinusoidal signals of known frequency being passed through an LTI system.


Consider a sinusoidal signal, x(t) = sin(2πf0t), sampled at a rate of 4 samples/second, i.e., a sample interval of ¼ s. This sampled signal x[n] is

x[n] = x(nΔt) = sin(2πf0nΔt) = sin(½πf0n)    (4.16)

This sampled signal x[n] is then passed through one of two systems, h1[n] and h2[n]. In order to study the response of the systems to sinusoids of different frequencies we'll use three different frequencies for the sinusoidal signals: signal A with a frequency of ¼ Hz, signal B with a frequency of ½ Hz and signal C with a frequency of 1 Hz.

System 1 (Figure 4-13):

h1[n] = ½δ[n] + ½δ[n−2]

y1[n] = Σ_{m=−∞}^{∞} x[n−m]·h1[m] = ½x[n] + ½x[n−2]    (4.17)

Figure 4-13. Impulse response of discrete-time system-1.

System 2 (Figure 4-14):

h2[n] = ½δ[n] − ½δ[n−2]

y2[n] = Σ_{m=−∞}^{∞} x[n−m]·h2[m] = ½x[n] − ½x[n−2]    (4.18)

Figure 4-14. Impulse response of discrete-time system-2.

Signal A (Figure 4-15): f0 = ¼ Hz, xa(t) = sin(2π·¼·t), xa[n] = sin(nπ/8)    (4.19)

Figure 4-15. Signal A.

Signal B (Figure 4-16): f0 = ½ Hz, xb(t) = sin(2π·½·t), xb[n] = sin(nπ/4)    (4.20)

Figure 4-16. Signal B.

Signal C (Figure 4-17): f0 = 1 Hz, xc(t) = sin(2π·1·t), xc[n] = sin(nπ/2)    (4.21)


Figure 4-17. Signal C.



Case A1: Signal xa[n] is passed through system 1:
y[n] = xa[n]*h1[n] = ½·sin(nπ/8) + ½·sin((n−2)π/8)
y[n] = 0.92·sin(nπ/8 − π/8)    (4.22)

Case B1: Signal xb[n] is passed through system 1:
y[n] = xb[n]*h1[n] = ½·sin(nπ/4) + ½·sin((n−2)π/4)
y[n] = 0.7·sin(nπ/4 − π/4)    (4.23)

Case C1: Signal xc[n] is passed through system 1:
y[n] = xc[n]*h1[n] = ½·sin(nπ/2) + ½·sin((n−2)π/2)
y[n] = 0    (4.24)

Figure 4-18. Frequency response of system-1: gain magnitude (0 to 1.0) and phase (+90° to −90°) plotted against frequency from 0 to 1 Hz.

The response of system-1 is plotted in Figure 4-18.
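The gains and phases found in Cases A1 to C1 follow from evaluating the frequency response H1(Ω) = Σ h1[n]·e^(−jΩn) at Ω = 2πf/Fs. A sketch, added here for illustration and not part of the original text:

import numpy as np

Fs = 4.0
h1 = np.array([0.5, 0.0, 0.5])                # h1[n] = ½δ[n] + ½δ[n-2]
for f in (0.25, 0.5, 1.0):
    w = 2*np.pi*f/Fs                          # digital frequency (rad/sample)
    H = np.sum(h1 * np.exp(-1j*w*np.arange(3)))
    print(f, np.abs(H), np.angle(H))          # gains 0.92, 0.71, 0.00; phases -π/8, -π/4, ...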

Case A2: Signal xa[n] is passed through system 2:
y[n] = xa[n]*h2[n] = ½·sin(nπ/8) − ½·sin((n−2)π/8)
y[n] = 0.38·sin(nπ/8 + 3π/8)    (4.25)

Case B2: Signal xb[n] is passed through system 2:
y[n] = xb[n]*h2[n] = ½·sin(nπ/4) − ½·sin((n−2)π/4)
y[n] = 0.7·sin(nπ/4 + π/4)    (4.26)

Case C2: Signal xc[n] is passed through system 2:
y[n] = xc[n]*h2[n] = ½·sin(nπ/2) − ½·sin((n−2)π/2)
