© Presses polytechniques et universitaires romandes, 2008 All rights reserved 
The present text evolved from course notes developed over a period of a dozen years teaching undergraduates the basics of signal processing for communications. The students had mostly a background in electrical engineering, computer science or mathematics, and were typically in their third year of studies at Ecole Polytechnique Fédérale de Lausanne (EPFL), with an interest in communication systems. Thus, they had been exposed to signals and systems, linear algebra, elements of analysis (e.g. Fourier series) and some complex analysis, all of this being fairly standard in an undergraduate program in engineering sciences.
The notes having reached a certain maturity, including examples, solved problems and exercises, we decided to turn them into an easy-to-use text on signal processing, with a look at communications as an application. But rather than writing one more book on signal processing, of which many good ones already exist, we introduced the following variations, which we think will make the book appealing as an undergraduate text.
While mathematical rigor is not the emphasis, we made sure to be precise, and thus the text is not approximate in its use of mathematics. We regard signal processing as mathematics applied to a fun topic, not as mathematics for its own sake, nor as a set of applications without foundations.
Certainly, the masterpiece in this regard is C. Shannon’s 1948 foundational paper on “The Mathematical Theory of Communication”. It completely revolutionized the way communication systems are designed and built, and, still today, we mostly live in its legacy. Not surprisingly, one of the key results of signal processing is the sampling theorem for bandlimited functions (often attributed to Shannon, since it appears in the above-mentioned paper), the theorem which single-handedly enabled the digital revolution. To a mathematician, this is a simple corollary to Fourier series, and he/she might suggest many other ways to represent such particular functions. However, the strength of the sampling theorem and its variations (e.g. oversampling or quantization) is that it is an operational theorem, robust, and applicable to actual signal acquisition and reconstruction problems.
In order to showcase such powerful applications, the last chapter is entirely devoted to developing an end-to-end communication system, namely a modem for communicating digital information (or bits) over an analog channel. This real-world application (which is present in all modern communication devices, from mobile phones to ADSL boxes) nicely brings together many of the concepts and designs studied in the previous chapters.
Being less formal, more abstract and application-driven seems almost like moving simultaneously in several and possibly opposite directions, but we believe we came up with the right balancing act. Ultimately, of course, the readers and students are the judges!
A last and very important issue is the online access to the text and supplementary material. A full HTML version, together with the unavoidable errata and other complementary material, is available at www.sp4comm.org. A solution manual is available to teachers upon request.
As a closing word, we hope you will enjoy the text, and we welcome your feedback. Let signal processing begin, and be fun!
Martin Vetterli and Paolo Prandoni
Spring 2008, Paris and Grandvaux
The current book is the result of several iterations of a yearly signal processing undergraduate class and the authors would like to thank the students in Communication Systems at EPFL who survived the early versions of the manuscript and who greatly contributed with their feedback to improving and refining the text over the years. Invaluable help was also provided by the numerous teaching assistants who not only volunteered constructive criticism but came up with a lot of the exercises which appear at the end of each chapter (and their solutions). In no particular order: Andrea Ridolfi provided insightful mathematical remarks and also introduced us to the wonders of PsTricks while designing figures. Olivier Roy and Guillermo Barrenetxea have been indefatigable ambassadors between teaching and student bodies, helping shape exercises in a (hopefully) more user-friendly form. Ivana Jovanovic, Florence Bénézit and Patrick Vandewalle gave us a set of beautiful ideas and pointers thanks to their recitations on choice signal processing topics. Luciano Sbaiz always lent an indulgent ear and an insightful answer to all the doubts and worries which plague scientific writers. We would also like to express our personal gratitude to our families and friends for their patience and their constant support; unfortunately, to do so in a proper manner, we should resort to a lyricism which is sternly frowned upon in technical textbooks and therefore we must confine ourselves to a simple “thank you”.
A signal, technically yet generally speaking, is a formal description of a phenomenon evolving over time or space; by signal processing we denote any manual or “mechanical” operation which modifies, analyzes or otherwise manipulates the information contained in a signal. Consider the simple example of ambient temperature: once we have agreed upon a formal model for this physical variable – Celsius degrees, for instance – we can record the evolution of temperature over time in a variety of ways and the resulting data set represents a temperature “signal”. Simple processing operations can then be carried out even just by hand: for example, we can plot the signal on graph paper as in Figure 1.1, or we can compute derived parameters such as the average temperature in a month.
Conceptually, it is important to note that signal processing operates on an abstract representation of a physical quantity and not on the quantity itself. At the same time, the type of abstract representation we choose for the physical phenomenon of interest determines the nature of a signal processing unit. A temperature regulation device, for instance, is not a signal processing system as a whole. The device does however contain a signal processing core in the feedback control unit which converts the instantaneous measure of the temperature into an ON/OFF trigger for the heating element. The physical nature of this unit depends on the temperature model: a simple design is that of a mechanical device based on the dilation of a metal sensor; more likely, the temperature signal is a voltage generated by a thermocouple and in this case the matched signal processing unit is an operational amplifier.
Finally, the adjective “digital” derives from digitus, the Latin word for finger: it concisely describes a world view where everything can be ultimately represented as an integer number. Counting, first on one’s fingers and then in one’s head, is the earliest and most fundamental form of abstraction; as children we quickly learn that counting does indeed bring disparate objects (the proverbial “apples and oranges”) into a common modeling paradigm, i.e. their cardinality. Digital signal processing is a flavor of signal processing in which everything, including time, is described in terms of integer numbers; in other words, the abstract representation of choice is a one-size-fits-all countability. Note that our earlier “thought experiment” about ambient temperature fits this paradigm very naturally: the measuring instants form a countable set (the days in a month) and so do the measures themselves (imagine a finite number of ticks on the thermometer’s scale). In digital signal processing the underlying abstract representation is always the set of natural numbers regardless of the signal’s origins; as a consequence, the physical nature of the processing device will also always remain the same, that is, a general digital (micro)processor. The extraordinary power and success of digital signal processing derives from the inherent universality of its associated “world view”.
Probably the earliest recorded example of digital signal processing dates back to the 25th century BC. At the time, Egypt was a powerful kingdom reaching over a thousand kilometers south of the Nile’s delta. For all its latitude, the kingdom’s populated area did not extend for more than a few kilometers on either side of the Nile; indeed, the only inhabitable areas in an otherwise desert expanse were the river banks, which were made fertile by the yearly flood of the river. After a flood, the banks would be left covered with a thin layer of nutrient-rich silt capable of supporting a full agricultural cycle. The floods of the Nile, however, were^{(1)} a rather capricious meteorological phenomenon, with scant or absent floods resulting in little or no yield from the land. The pharaohs quickly understood that, in order to preserve stability, they would have to set up a grain buffer with which to compensate for the unreliability of the Nile’s floods and prevent potential unrest in a famished population during “dry” years. As a consequence, studying and predicting the trend of the floods (and therefore the expected agricultural yield) was of paramount importance in order to determine the operating point of a very dynamic taxation and redistribution mechanism. The floods of the Nile were meticulously recorded by an array of measuring stations called “nilometers” and the resulting data set can indeed be considered a full-fledged digital signal defined on a time base of twelve months. The Palermo Stone, shown in the left panel of Figure 1.2, is a faithful record of the data in the form of a table listing the name of the current pharaoh alongside the yearly flood level; a more modern representation of a flood data set is shown on the right of the figure: bar the references to the pharaohs, the two representations are perfectly equivalent.
The Nile’s behavior is still an active area of hydrological research today and it would be surprising if the signal processing operated by the ancient Egyptians on their data had been of much help in anticipating droughts. Yet, the Palermo Stone is arguably the first recorded digital signal which is still of relevance today.

“Digital” representations of the world such as those depicted by the Palermo Stone are adequate for an environment in which quantitative problems are simple: counting cattle, counting bushels of wheat, counting days and so on. As soon as the interaction with the world becomes more complex, so necessarily do the models used to interpret the world itself. Geometry, for instance, is born of the necessity of measuring and subdividing land property. In the act of splitting a certain quantity into parts we can already see the initial difficulties with an integer-based world view;^{(2)} yet, until the Hellenic period, western civilization considered natural numbers and their ratios all that was needed to describe nature in an operational fashion. In the 6th century BC, however, a devastated Pythagoras realized that the side and the diagonal of a square are incommensurable, i.e. that their ratio (what we now call √2) is not a simple fraction. The discovery of what we now call irrational numbers “sealed the deal” on an abstract model of the world that had already appeared in early geometric treatises and which today is called the continuum. Heavily steeped in its geometric roots (i.e. in the infinity of points in a segment), the continuum model postulates that time and space are an uninterrupted flow which can be divided arbitrarily many times into arbitrarily (and infinitely) small pieces. In signal processing parlance, this is known as the “analog” world model and, in this model, integer numbers are considered primitive entities, as rough and awkward as a set of sledgehammers in a watchmaker’s shop.
In the continuum, the infinitely big and the infinitely small dance together in complex patterns which often defy our intuition and which required almost two thousand years to be properly mastered. This is of course not the place to delve deeper into this extremely fascinating epistemological domain; suffice it to say that the apparent incompatibility between the digital and the analog world views appeared right from the start (i.e. from the 5th century BC) in Zeno’s works; we will appreciate later the immense import that this has on signal processing in the context of the sampling theorem.
Zeno’s paradoxes are well known and they underscore this unbridgeable gap between our intuitive, integer-based grasp of the world and a model of the world based on the continuum. Consider for instance the dichotomy paradox; Zeno states that if you try to move along a line from point A to point B you will never in fact be able to reach your destination. The reasoning goes as follows: in order to reach B, you will have to first go through point C, which is located midway between A and B; but, even before you reach C, you will have to reach D, which is the midpoint between A and C; and so on ad infinitum. Since there is an infinity of such intermediate points, Zeno argues, moving from A to B requires you to complete an infinite number of tasks, which is humanly impossible. Zeno of course was well aware of the empirical evidence to the contrary but he was brilliantly pointing out the extreme trickery of a model of the world which had not yet formally defined the concept of infinity. The complexity of the intellectual machinery needed to solidly counter Zeno’s argument is such that even today the paradox is food for thought. A first-year calculus student may be tempted to offhandedly dismiss the problem by stating
 ∑_{n=1}^{∞} 2^{−n} = 1  (1.1)
but this is just a void formalism begging the initial question if the underlying notion of the continuum is not explicitly worked out.^{(3)} In reality Zeno’s paradoxes cannot be “solved”, since they cease to be paradoxes once the continuum model is fully understood.
The two competing models for the world, digital and analog, coexisted peacefully for quite a few centuries, one as the tool of the trade for farmers, merchants, bankers, the other as an intellectual pursuit for mathematicians and astronomers. Slowly but surely, however, the increasing complexity of an expanding world spurred the more practically-oriented minds to pursue science as a means to solve very tangible problems besides describing the motion of the planets. Calculus, brought to its full glory by Newton and Leibniz in the 17th century, proved to be an incredibly powerful tool when applied to eminently practical concerns such as ballistics, ship routing, mechanical design and so on; such was the faith in the power of the new science that Leibniz envisioned a not-too-distant future in which all human disputes, including problems of morals and politics, could be worked out with pen and paper: “gentlemen, calculemus”. If only.
As Cauchy unsurpassably explained later, everything in calculus is a limit and therefore everything in calculus is a celebration of the power of the continuum. Still, in order to apply the calculus machinery to the real world, the real world has to be modeled as something calculus understands, namely a function of a real (i.e. continuous) variable. As mentioned before, there are vast domains of research well behaved enough to admit such an analytical representation; astronomy is the first one to come to mind, but so is ballistics, for instance. If we go back to our temperature measurement example, however, we run into the first difficulty of the analytical paradigm: we now need to model our measured temperature as a function of continuous time, which means that the value of the temperature should be available at any given instant and not just once per day. A “temperature function” as in Figure 1.3 is quite puzzling to define if all we have (and if all we can have, in fact) is just a set of empirical measurements reasonably spaced in time. Even in the rare cases in which an analytical model of the phenomenon is available, a second difficulty arises when the practical application of calculus involves the use of functions which are only available in tabulated form. The trigonometric and logarithmic tables are a typical example of how a continuous model needs to be made countable again in order to be put to real use. Algorithmic procedures such as series expansions and numerical integration methods are other ways to bring the analytic results within the realm of the practically computable. These parallel tracks of scientific development, the “Platonic” ideal of analytical results and the slide rule reality of practitioners, have coexisted for centuries and they have found their most durable mutual peace in digital signal processing, as will appear shortly.
One of the fundamental problems in signal processing is to obtain a permanent record of the signal itself. Think back to the ambient temperature example, or to the floods of the Nile: in both cases a description of the phenomenon was gathered by a naive sampling operation, i.e. by measuring the quantity of interest at regular intervals. This is a very intuitive process and it reflects the very natural act of “looking up the current value and writing it down”. Manually, this operation is clearly quite slow, but it is conceivable to speed it up mechanically so as to obtain a much larger number of measurements per unit of time. Our measuring machine, however fast, will still never be able to take an infinite number of samples in a finite time interval: we are back in the clutches of Zeno’s paradoxes and one would be tempted to conclude that a true analytical representation of the signal is impossible to obtain.
At the same time, the history of applied science provides us with many examples of recording machines capable of providing an “analog” image of a physical phenomenon. Consider for instance a thermograph: this is a mechanical device in which temperature deflects an ink-tipped metal stylus in contact with a slowly rolling paper-covered cylinder. Thermographs like the one sketched in Figure 1.4 are still in use in some simple weather stations and they provide a chart in which a temperature function as in Figure 1.3 is duly plotted. Incidentally, the principle is the same in early sound recording devices: Edison’s phonograph used the deflection of a steel pin connected to a membrane to impress a “continuous-time” sound wave as a groove on a wax cylinder. The problem with these analog recordings is that they are not abstract signals but a conversion of a physical phenomenon into another physical phenomenon: the temperature, for instance, is converted into the amount of ink on paper while the sound pressure wave is converted into the physical depth of the groove. The advent of electronics did not change the concept: an audio tape, for instance, is obtained by converting a pressure wave into an electrical current and then into a magnetic deflection. The fundamental consequence is that, for analog signals, a different signal processing system needs to be designed explicitly for each specific form of recording.
Consider for instance the problem of computing the average temperature over a certain time interval. Calculus provides us with the exact answer if we know the elusive “temperature function” f(t) over an interval [T_{0},T_{1}] (see Figure 1.5, top panel):
 C = (1∕(T_{1} − T_{0})) ∫_{T_{0}}^{T_{1}} f(t) dt  (1.2)
We can try to reproduce the integration with a “machine” adapted to the particular representation of temperature we have at hand: in the case of the thermograph, for instance, we can use a planimeter as in Figure 1.6, a manual device which computes the area of a drawn surface; in a more modern incarnation in which the temperature signal is given by a thermocouple, we can integrate the voltage with the RC network in Figure 1.7. In both cases, in spite of the simplicity of the problem, we can instantly see the practical complications and the degree of specialization needed to achieve something as simple as an average for an analog signal.
Now consider the case in which all we have is a set of daily measurements c_{1},c_{2},…,c_{D} as in Figure 1.1; the “average” temperature of our measurements over D days is simply:
 Ĉ = (1∕D) ∑_{n=1}^{D} c_{n}  (1.3)
(as shown in the bottom panel of Figure 1.5) and this is an elementary sum of D terms which anyone can carry out by hand and which does not depend on how the measurements have been obtained: wickedly simple! So, obviously, the question is: “How different (if at all) is Ĉ from C?” In order to find out, we can remark that, if we accept the existence of a temperature function f(t), then the measured values c_{n} are samples of the function taken one day apart:
 c_{n} = f(nT_{s}),  n = 1, 2, …, D
(where T_{s} is the duration of a day). In this light, the sum (1.3) is just the Riemann approximation to the integral in (1.2) and the question becomes an assessment of how good an approximation that is. Another way to look at the problem is to ask ourselves how much information we are discarding by keeping only samples of a continuous-time function.
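The closeness of the two averages is easy to check numerically. The following sketch uses a hypothetical smooth "temperature function" (our own assumption, purely for illustration) and compares the plain sum of daily samples with a very fine Riemann approximation of the integral average:

```python
import math

# Hypothetical smooth "temperature function": a constant plus a slow
# sinusoidal drift over a 30-day month (an assumption for illustration only).
def f(t):
    return 10.0 + 5.0 * math.sin(2 * math.pi * t / 30.0)

D = 30      # number of daily measurements
Ts = 1.0    # sampling interval: one day

# Discrete average as in (1.3): a plain sum of the samples c_n = f(n*Ts)
c = [f(n * Ts) for n in range(D)]
avg_discrete = sum(c) / D

# Integral average as in (1.2), approximated here by a very fine Riemann sum
N = 100_000
dt = D * Ts / N
avg_integral = sum(f(k * dt) for k in range(N)) * dt / (D * Ts)

print(avg_discrete, avg_integral)  # the two agree closely for a slow f(t)
```

For a signal this slow, thirty samples already capture the average essentially exactly; the interesting question, addressed by the sampling theorem, is how slow is slow enough.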
The answer, which we will study in detail in Chapter 9, is that in fact the continuous-time function and the set of samples are perfectly equivalent representations – provided that the underlying physical phenomenon “doesn’t change too fast”. Let us put the proviso aside for the time being and concentrate instead on the good news: first, the analog and the digital world can perfectly coexist; second, we actually possess a constructive way to move between worlds: the sampling theorem, discovered and rediscovered by many at the beginning of the 20th century^{(4)}, tells us that the continuous-time function can be obtained from the samples as
 f(t) = ∑_{n} c_{n} sinc((t − nT_{s})∕T_{s})  (1.4)
So, in theory, once we have a set of measured values, we can build the continuous-time representation and use the tools of calculus. In reality things are even simpler: if we plug (1.4) into our analytic formula for the average (1.2) we can show that the result is a simple sum like (1.3). So we don’t need to explicitly go “through the looking glass” back to continuous-time: the tools of calculus have a discrete-time equivalent which we can use directly.
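To make the reconstruction formula (1.4) concrete, here is a small sketch (the function names are ours) that rebuilds a value of a continuous-time signal from a finite set of its samples via truncated sinc interpolation. With only finitely many samples the sum in (1.4) must be truncated, so the result is accurate away from the edges of the sample set:

```python
import math

def sinc(x):
    # Normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, Ts, t):
    # Sampling-theorem interpolation (1.4): a sum of shifted sincs,
    # truncated here to the finite set of available samples.
    return sum(c * sinc((t - n * Ts) / Ts) for n, c in enumerate(samples))

# A test signal well below the Nyquist frequency 1/(2*Ts)
Ts = 1.0
f0 = 0.1   # cycles per sampling interval; 0.1 < 0.5, so sampling is adequate
samples = [math.sin(2 * math.pi * f0 * n * Ts) for n in range(200)]

# Evaluate the reconstruction between two samples, far from the edges
t = 100.5 * Ts
approx = reconstruct(samples, Ts, t)
exact = math.sin(2 * math.pi * f0 * t)
print(approx, exact)  # the two agree to within the truncation error
```

The slow 1/x decay of the sinc makes the truncation error shrink only gradually with the number of samples, which is one reason practical systems use faster-decaying interpolators; the ideal formula (1.4) remains the theoretical reference.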
The equivalence between the discrete and continuous representations only holds for signals which are sufficiently “slow” with respect to how fast we sample them. This makes a lot of sense: we need to make sure that the signal does not do “crazy” things between successive samples; only if it is smooth and well behaved can we afford to have such sampling gaps. Quantitatively, the sampling theorem links the speed at which we need to repeatedly measure the signal to the maximum frequency contained in its spectrum. Spectra are calculated using the Fourier transform which, interestingly enough, was originally devised as a tool to break periodic functions into a countable set of building blocks. Everything comes together.
While it appears that the time continuum has been tamed by the sampling theorem, we are nevertheless left with another pesky problem: the precision of our measurements. If we model a phenomenon as an analytical function, not only is the argument (the time domain) a continuous variable but so is the function’s value (the codomain); a practical measurement, however, will never achieve an infinite precision and we have another paradox on our hands. Consider our temperature example once more: we can use a mercury thermometer and decide to write down just the number of degrees; maybe we can be more precise and note the halfdegrees as well; with a magnifying glass we could try to record the tenths of a degree – but we would most likely have to stop there. With a more sophisticated thermocouple we could reach a precision of one hundredth of a degree and possibly more but, still, we would have to settle on a maximum number of decimal places. Now, if we know that our measures have a fixed number of digits, the set of all possible measures is actually countable and we have effectively mapped the codomain of our temperature function onto the set of integer numbers. This process is called quantization and it is the method, together with sampling, to obtain a fully digital signal.
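A minimal uniform quantizer can be sketched in a few lines; the mid-rise style and the parameter choices below are our own assumptions, chosen to mirror the half-degree thermometer example:

```python
def quantize(x, x_min, x_max, n_bits):
    """Map x onto one of 2**n_bits uniformly spaced levels in [x_min, x_max]
    and return the reconstructed (quantized) value.  A uniform mid-rise
    quantizer, chosen here purely for illustration."""
    levels = 2 ** n_bits
    step = (x_max - x_min) / levels
    # index of the quantization bin, clipped to the valid range
    k = min(int((x - x_min) / step), levels - 1)
    k = max(k, 0)
    # reconstruction at the midpoint of the bin
    return x_min + (k + 0.5) * step

# Temperature readings quantized to half-degree precision
# (5 bits over the range 0..16 C gives a step of 0.5 C)
readings = [3.14, 7.77, 12.01]
q = [quantize(r, 0.0, 16.0, 5) for r in readings]
print(q)
# Each quantized value lies within half a step (0.25 C) of the original.
```

Once the codomain is mapped onto a finite set of bin indices like this, a measurement is just an integer, which is exactly the sense in which sampling plus quantization yields a fully digital signal.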
In a way, quantization deals with the problem of the continuum in a much “rougher” way than in the case of time: we simply accept a loss of precision with respect to the ideal model. There is a very good reason for that and it goes under the name of noise. The mechanical recording devices we just saw, such as the thermograph or the phonograph, give the illusion of analytical precision but are in practice subject to severe mechanical limitations. Any analog recording device suffers from the same fate and even if electronic circuits can achieve an excellent performance, in the limit the unavoidable thermal agitation in the components constitutes a noise floor which limits the “equivalent number of digits”. Noise is a fact of nature that cannot be eliminated, hence our acceptance of a finite (i.e. countable) precision.
Noise is not just a problem in measurement but also in processing. Figure 1.8 shows the two archetypal types of analog and digital computing devices; while technological progress may have significantly improved the speed of each, the underlying principles remain the same for both. An analog signal processing system, much like the slide rule, uses the displacement of physical quantities (gears or electric charge) to perform its task; each element in the system, however, acts as a source of noise so that complex or, more importantly, cheap designs introduce imprecisions in the final result (good slide rules used to be very expensive). On the other hand the abacus, working only with integer arithmetic, is a perfectly precise machine^{(5)} even if it’s made with rocks and sticks. Digital signal processing works with countable sequences of integers so that in a digital architecture no processing noise is introduced. A classic example is the problem of reproducing a signal. Before mp3 existed and file sharing became the bootlegging method of choice, people would “make tapes”. When someone bought a vinyl record he would allow his friends to record it on a cassette; however, a “peer-to-peer” dissemination of illegally taped music never really took off because of the “second-generation noise”, i.e. because of the ever increasing hiss that would appear in a tape made from another tape. Basically only first-generation copies of the purchased vinyl were of acceptable quality on home equipment. With digital formats, on the other hand, duplication is really equivalent to copying down a (very long) list of integers and even very cheap equipment can do that without error.
Finally, a short remark on terminology. The amplitude accuracy of a set of samples is entirely dependent on the processing hardware; in current parlance this is indicated by the number of bits per sample of a given representation: compact disks, for instance, use 16 bits per sample while DVDs use 24. Because of its “contingent” nature, quantization is almost always ignored in the core theory of signal processing and all derivations are carried out as if the samples were real numbers; therefore, in order to be precise, we will almost always use the term discrete-time signal processing and leave the label “digital signal processing” (DSP) to the world of actual devices. Neglecting quantization will allow us to obtain very general results but care must be exercised: in practice, actual implementations will have to deal with the effects of finite precision, sometimes with very disruptive consequences.
Signals in digital form provide us with a very convenient abstract representation which is both simple and powerful; yet this does not shield us from the need to deal with an “outside” world which is probably best modeled by the analog paradigm. Consider a mundane act such as placing a call on a cell phone, as in Figure 1.9: humans are analog devices after all and they produce analog sound waves; the phone converts these into digital format, does some digital processing and then outputs an analog electromagnetic wave on its antenna. The radio wave travels to the base station, where it is demodulated and converted to digital format to recover the voice signal. The call, as a digital signal, continues through a switch and then is injected into an optical fiber as an analog light wave. The wave travels along the network and then the process is inverted until an analog sound wave is generated by the loudspeaker at the receiver’s end.
Communication systems are in general a prime example of sophisticated interplay between the digital and the analog world: while all the processing is undoubtedly best done digitally, signal propagation in a medium (be it the air, the electromagnetic spectrum or an optical fiber) is the domain of differential (rather than difference) equations. And yet, even when digital processing must necessarily hand over control to an analog interface, it does so in a way that leaves no doubt as to who’s boss, so to speak: for, instead of transmitting an analog signal which is the reconstructed “real” function as per (1.4), we always transmit an analog signal which encodes the digital representation of the data. This concept is really at the heart of the “digital revolution” and, just like in the cassette tape example, it has to do with noise.
Imagine an analog voice signal s(t) which is transmitted over a (long) telephone line; a simplified description of the received signal is
 s_{r}(t) = αs(t) + n(t)
where the parameter α, with α < 1, is the attenuation that the signal incurs and where n(t) is the noise introduced by the system. The noise function is obviously unknown (it is often modeled as a Gaussian process, as we will see) and so, once it’s added to the signal, it’s impossible to eliminate it. Because of attenuation, the receiver will include an amplifier with gain G to restore the voice signal to its original level; with G = 1∕α we will have
 ŝ(t) = Gs_{r}(t) = s(t) + Gn(t)
Unfortunately, in regenerating the analog signal we have also amplified the noise G times; clearly, if G is large (i.e. if there is a lot of attenuation to compensate for) the voice signal ends up buried in noise. The problem is exacerbated if many intermediate amplifiers have to be used in cascade, as is the case in long submarine cables.
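The noise accumulation in a cascade of analog repeaters is easy to simulate; in the following sketch (all parameter values are our own, chosen arbitrarily for illustration) each attenuate-and-amplify stage contributes its own amplified copy of the noise, so the total noise power grows with the number of stages:

```python
import math
import random

random.seed(0)

# A toy "voice" signal (assumption: any zero-mean waveform works here)
N = 1000
s = [math.sin(0.05 * k) for k in range(N)]

alpha = 0.1    # per-segment attenuation
sigma = 0.01   # per-segment noise standard deviation
stages = 5     # number of cable segments, each followed by an amplifier

x = list(s)
for _ in range(stages):
    # each cable segment attenuates the signal and adds noise ...
    x = [alpha * v + random.gauss(0, sigma) for v in x]
    # ... and each repeater amplifies everything back, noise included
    G = 1 / alpha
    x = [G * v for v in x]

err = [a - b for a, b in zip(x, s)]
noise_rms = math.sqrt(sum(e * e for e in err) / N)
print(noise_rms)
# The accumulated noise RMS is roughly sqrt(stages) * sigma / alpha:
# every stage leaves behind an amplified copy of its own noise.
```

With five stages the noise floor is already more than twenty times the per-segment noise level, which is exactly the degradation mechanism the text describes for long submarine cables.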
Consider now a digital voice signal, that is, a discrete-time signal whose samples have been quantized over, say, 256 levels: each sample can therefore be represented by an 8-bit word and the whole speech signal can be represented as a very long sequence of binary digits. We now build an analog signal as a two-level signal which switches for a few instants between, say, −1 V and +1 V for every “0” and “1” bit in the sequence respectively. The received signal will still be
 s_{r}(t) = αs(t) + n(t)
but, to regenerate it, instead of linear amplification we can use nonlinear thresholding:
 ŝ(t) = sgn(s_{r}(t))
Figure 1.10 clearly shows that, as long as the magnitude of the noise is less than α, the two-level signal can be regenerated perfectly; furthermore, the regeneration process can be repeated as many times as necessary with no overall degradation.
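This regeneration process can be simulated directly; in the sketch below (signal levels and noise figures are our own assumptions, with the noise kept well below α) the bit sequence survives twenty noisy cable segments completely intact, whereas a chain of linear amplifiers would have drowned it in accumulated noise:

```python
import random

random.seed(1)

bits = [random.randint(0, 1) for _ in range(1000)]
s = [1.0 if b else -1.0 for b in bits]   # two-level encoding: -1 V / +1 V

alpha = 0.1    # per-segment attenuation
sigma = 0.01   # per-segment noise level, well below alpha

x = list(s)
for _ in range(20):                      # twenty regeneration stages
    # channel segment: attenuate and add noise
    x = [alpha * v + random.gauss(0, sigma) for v in x]
    # nonlinear regeneration: threshold instead of amplifying
    x = [1.0 if v >= 0 else -1.0 for v in x]

recovered = [1 if v > 0 else 0 for v in x]
print(recovered == bits)
# -> True: each stage regenerates the signal exactly as long as the
#    noise magnitude stays below the attenuated signal level alpha.
```

The crucial point is that thresholding discards the noise at every stage instead of carrying it along, so the number of stages no longer matters.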

In reality of course things are a little more complicated and, because of the nature of noise, it is impossible to guarantee that some of the bits won’t be corrupted. The answer is to use error-correcting codes which, by introducing redundancy in the signal, make the sequence of ones and zeros robust to the presence of errors; a scratched CD can still play flawlessly because of the Reed-Solomon error-correcting codes used for the data. Data coding is the core subject of Information Theory and it is behind the stellar performance of modern communication systems; interestingly enough, the most successful codes have emerged from group theory, a branch of mathematics dealing with the properties of closed sets of integer numbers. Integers again! Digital signal processing and information theory have been able to join forces so successfully because they share a common data model (the integer) and therefore they share the same architecture (the processor). Computer code written to implement a digital filter can dovetail seamlessly with code written to implement error correction; linear processing and nonlinear flow control coexist naturally.
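Reed-Solomon codes are beyond a quick sketch, but the underlying idea, redundancy buying robustness, is already visible in the simplest possible code: a 3× repetition code with majority-vote decoding (a toy stand-in of our own, not what CDs actually use):

```python
def encode(bits, r=3):
    # Repetition code: each bit is transmitted r times
    # (a toy example, not the Reed-Solomon codes used on CDs)
    return [b for b in bits for _ in range(r)]

def decode(received, r=3):
    # Majority vote over each group of r copies
    out = []
    for i in range(0, len(received), r):
        group = received[i:i + r]
        out.append(1 if sum(group) > r // 2 else 0)
    return out

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = encode(msg)

# Corrupt one copy in every group of three: the majority vote
# still recovers every single bit of the original message.
corrupted = list(coded)
for i in range(0, len(corrupted), 3):
    corrupted[i] ^= 1   # flip the first copy in each triple

print(decode(corrupted) == msg)   # -> True
```

The price of this robustness is throughput: three channel bits per data bit. Practical codes such as Reed-Solomon achieve far better trade-offs, but the principle, deliberate redundancy plus integer arithmetic at the decoder, is the same.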
A simple example of the power unleashed by digital signal processing is the performance of transatlantic cables. The first operational telegraph cable from Europe to North America was laid in 1858 (see Fig. 1.11); it worked for about a month before being irrecoverably damaged.^{(6)} From then on, new materials and rapid progress in electrotechnics boosted the performance of each subsequent cable; the key events in the timeline of transatlantic communications are shown in Table 1.1. The first transatlantic telephone cable was laid in 1956 and more followed in the next two decades, with increasing capacity due to multicore cables and better repeaters; the invention of the echo canceler further improved the number of voice channels on already deployed cables. In 1968 the first experiments in PCM digital telephony were successfully completed and a quantum leap was around the corner: by the end of the 1970s, cables were carrying over an order of magnitude more voice channels than in the 1960s. Finally, the deployment of the first fiber-optic cable in 1988 opened the door to staggering capacities (and enabled the dramatic growth of the Internet).
Finally, it’s impossible not to mention the advent of data compression in this brief review of communication landmarks. Again, digital processing allows the coexistence of standard processing with sophisticated decision logic; this enables the implementation of complex datadependent compression techniques and the inclusion of psychoperceptual models in order to match the compression strategy to the characteristics of the human visual or auditory system. A music format such as mp3 is perhaps the first example to come to mind but, as shown in Table 1.2, all communication domains have been greatly enhanced by the gains in throughput enabled by data compression.


This book tries to build a largely self-contained development of digital signal processing theory from within discrete time, while the relationship to the analog model of the world is tackled only after all the fundamental “pieces of the puzzle” are already in place. Historically, modern discrete-time processing started to consolidate in the 1950s, when mainframe computers became powerful enough to handle the effective simulation of analog electronic networks. By the end of the 1970s the discipline had by all standards reached maturity; so much so, in fact, that the major textbooks on the subject still in use today had basically already appeared by 1975. Because of its ancillary origin with respect to the problems of the day, however, discrete-time signal processing has long been presented as a tributary to much more established disciplines such as Signals and Systems. While historically justifiable, that approach is no longer tenable today for three fundamental reasons: first, the pervasiveness of digital storage for data (from CDs to DVDs to flash drives) implies that most devices today are designed for discrete-time signals to start with; second, the trend in signal processing devices is to move the analog-to-digital and digital-to-analog converters to the very beginning and the very end of the processing chain, so that even “classically analog” operations such as modulation and demodulation are now performed in discrete time; third, the availability of numerical packages like Matlab provides a testbed for signal processing experiments (both academically and just for fun) which is far more accessible and widespread than an electronics lab (not to mention more affordable).
The idea therefore is to introduce discrete-time signals as a self-standing entity (Chap. 2), much in the natural way of a temperature sequence or a series of flood measurements, and then to remark that the mathematical structures used to describe discrete-time signals are one and the same with the structures used to describe vector spaces (Chap. 3). Equipped with the geometrical intuition afforded to us by the concept of vector space, we can proceed to “dissect” discrete-time signals with the Fourier transform, which turns out to be just a change of basis (Chap. 4). The Fourier transform opens the passage between the time domain and the frequency domain and, thanks to this dual understanding, we are ready to tackle the concept of processing as performed by discrete-time linear systems, also known as filters (Chap. 5). Next comes the very practical task of designing a filter to order, with an eye to the subtleties involved in filter implementation; we will mostly consider FIR filters, which are unique to discrete time (Chaps. 6 and 7). After a brief excursion into the realm of stochastic sequences (Chap. 8) we will finally build a bridge between our discrete-time world and the continuous-time models of physics and electronics with the concepts of sampling and interpolation (Chap. 9); and digital signals will be completely accounted for after a study of quantization (Chap. 10). We will finally go back to purely discrete time for the final topic, multirate signal processing (Chap. 11), before putting it all together in the final chapter: the analysis of a commercial voiceband modem (Chap. 12).
The Bible of digital signal processing was and remains Discrete-Time Signal Processing, by A. V. Oppenheim and R. W. Schafer (Prentice-Hall, last edition in 1999); exceedingly exhaustive, it is a must-have reference. For background in signals and systems, the eponymous Signals and Systems, by Oppenheim, Willsky and Nawab (Prentice-Hall, 1997) is a good start.
Most of the historical references mentioned in this introduction can be integrated by simple web searches. Other comprehensive books on digital signal processing include S. K. Mitra’s Digital Signal Processing (McGraw-Hill, 2006) and Digital Signal Processing, by J. G. Proakis and D. K. Manolakis (Prentice-Hall, 2006). For a fascinating excursus on the origins of calculus, see E. Hairer and G. Wanner, Analysis by Its History (Springer-Verlag, 1996). A more than compelling epistemological essay on the continuum is Everything and More, by David Foster Wallace (Norton, 2003), which manages to be both profound and hilarious in an unprecedented way.
Finally, the very prolific literature on current signal processing research is published mainly by the Institute of Electrical and Electronics Engineers (IEEE) in several of its transactions, such as the IEEE Transactions on Signal Processing, IEEE Transactions on Image Processing and IEEE Transactions on Speech and Audio Processing.
In this Chapter we define more formally the concept of a discrete-time signal and establish an associated basic taxonomy used in the remainder of the book. Historically, discrete-time signals have often been introduced as the discretized version of continuous-time signals, i.e. as the sampled values of analog quantities, such as the voltage at the output of an analog circuit; accordingly, many of the derivations proceeded within the framework of an underlying continuous-time reality. In truth, the discretization of analog signals is only part of the story, and a rather minor one nowadays. Digital signal processing, especially in the context of communication systems, is much more concerned with the synthesis of discrete-time signals than with sampling. That is why we choose to introduce discrete-time signals from an abstract and self-contained point of view.
A discrete-time signal is a complex-valued sequence. Remember that a sequence is defined as a complex-valued function of an integer index n, with n ∈ Z; as such, it is a two-sided, infinite collection of values. A sequence can be defined analytically in closed form, as for example:
 (2.1) 
shown as the “triangular” waveform plotted in Figure 2.1; or
 x[n] = e^{j(2π∕40)n}  (2.2) 
which is a complex exponential of period 40 samples, plotted in Figure 2.2. An example of a sequence drawn from the real world is
 (2.3) 
plotted in Figure 2.3 from year 1900 to 2002. Another example, this time of a random sequence, is
 (2.4) 
a realization of which is plotted in Figure 2.4.
A few notes are in order:
While analytical forms of discrete-time signals such as the ones above are useful to illustrate the key points of signal processing, and are absolutely necessary in the mathematical abstractions which follow, they are nonetheless just that: abstract examples. How does the notion of a discrete-time signal relate to the world around us? A discrete-time signal, in fact, captures our necessarily limited ability to take repeated accurate measurements of a physical quantity. We might be keeping track of the stock market index at the end of each day to draw a pencil-and-paper chart; or we might be measuring the voltage level at the output of a microphone 44,100 times per second (obviously not by hand!) to record some music via the computer’s soundcard. In both cases we need “time to write down the value” and are therefore forced to neglect everything that happens between measuring times. This “look and write down” operation is what is normally referred to as sampling. There are real-world phenomena which lend themselves very naturally and very intuitively to a discrete-time representation: the daily Dow Jones index, for example, sunspots, the yearly floods of the Nile, and so on. There seems to be no irrecoverable loss in this neglect of intermediate values. But what about music, or radio waves? At this point it is not altogether clear from an intuitive point of view how a sampled measurement of these phenomena entails no loss of information. The mathematical proof of this will be shown in detail when we study the sampling theorem; for the time being let us say that “the proof of the pudding is in the eating”: just listen to your favorite CD!
The important point to make here is that, once a real-world signal is converted to a discrete-time representation, the underlying notion of “time between measurements” becomes completely abstract. All we are left with is a sequence of numbers, and all signal processing manipulations, with their intended results, are independent of the way the discrete-time signal is obtained. The power and the beauty of digital signal processing lie, in part, in its invariance with respect to the underlying physical reality. This is in stark contrast with the world of analog circuits and systems, which must be realized in a version specific to the physical nature of the input signals.
The following sequences are fundamental building blocks for the theory of signal processing.
Impulse. The discrete-time impulse (or discrete-time delta function) is probably the simplest discrete-time signal; it is shown in Figure 2.5(a) and is defined as
 δ[n] = 1 for n = 0, and δ[n] = 0 otherwise  (2.5) 
Unit Step. The discrete-time unit step is shown in Figure 2.5(b) and is defined by the following expression:
 u[n] = 1 for n ≥ 0, and u[n] = 0 for n < 0  (2.6) 
The unit step can be obtained via a discretetime integration of the impulse (see eq. (2.16)).
Exponential Decay. The discrete-time exponential decay is shown in Figure 2.5(c) and is defined as
 x[n] = a^n u[n],  a ∈ C  (2.7) 
The exponential decay is, as we will see, the free response of a discrete-time first-order recursive filter. Exponential sequences are well-behaved only for values of a smaller than one in magnitude; sequences in which |a| > 1 are unbounded and represent an unstable behavior (their energy and power are both infinite).
Complex Exponential. The discrete-time complex exponential has already been shown in Figure 2.2 and is defined as
 x[n] = e^{j(ω_{0}n + φ)}  (2.8) 
Special cases of the complex exponential are the realvalued discretetime sinusoidal oscillations:
x[n]  = sin(ω_{0}n + φ)  (2.9) 
x[n]  = cos(ω_{0}n + φ)  (2.10) 
With respect to the oscillatory behavior captured by the complex exponential, a note on the concept of “frequency” is in order. In the continuous-time world (the world of textbook physics, to be clear), where time is measured in seconds, the usual unit of measure for frequency is the hertz, equivalent to 1∕second. In the discrete-time world, where the index n represents dimensionless time, “digital” frequency is expressed in radians, itself a dimensionless quantity.^{(1)} The best way to appreciate this is to consider an algorithm to generate successive samples of a discrete-time sinusoid at a digital frequency ω_{0}:

At each iteration,^{(2)} the argument of the trigonometric function is incremented by ω_{0} and a new output sample is produced. With this in mind, it is easy to see that the highest frequency manageable by a discrete-time system is ω_{max} = 2π; for any frequency larger than this, the inner 2π-periodicity of the trigonometric functions “maps back” the output values to a frequency between 0 and 2π. This can be expressed as an equation:
 e^{j(ω_{0} + 2kπ)n} = e^{jω_{0}n}  (2.11) 
for all values of k ∈ Z. This 2π-equivalence of digital frequencies is a pervasive concept in digital signal processing and it has many important consequences, which we will study in detail in the next Chapters.
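The iterative sample generator described above, together with the 2π-equivalence of (2.11), can be illustrated with a short Python sketch (the book gives no code here; the function name and parameters are ours):

```python
import math

def sinusoid(omega0, phi=0.0, n_samples=8):
    """Generate successive samples of cos(omega0*n + phi): at each
    iteration the argument of the trigonometric function grows by omega0,
    mimicking the iterative algorithm described in the text."""
    return [math.cos(omega0 * n + phi) for n in range(n_samples)]

# Eq. (2.11): the digital frequencies omega0 and omega0 + 2*pi*k produce
# exactly the same sequence of samples, for any integer k.
w = 0.3
a = sinusoid(w)
b = sinusoid(w + 2 * math.pi)
print(all(abs(x - y) < 1e-9 for x, y in zip(a, b)))  # True
```

No discrete-time system can ever “see” a frequency outside one 2π-wide interval, which is the point of the wagon-wheel example later in the chapter.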
In this Section we present some elementary operations on sequences.
Shift. A sequence x[n] shifted by an integer k is simply:
 y[n] = x[n − k]  (2.12) 
If k is positive, the signal is shifted “to the right”, meaning that the signal has been delayed; if k is negative, the signal is shifted “to the left”, meaning that the signal has been advanced. The delay operator can be indicated by the following notation:
 D_{k}{x[n]} = x[n − k] 
Scaling. A sequence x[n] scaled by a factor α ∈ C is
 y[n] = α x[n]  (2.13) 
If α is real, then the scaling represents a simple amplification or attenuation of the signal (when |α| > 1 and |α| < 1, respectively). If α is complex, amplification and attenuation are compounded with a phase shift.
Sum. The sum of two sequences x[n] and w[n] is their term-by-term sum:
 y[n] = x[n] + w[n]  (2.14) 
Please note that sum and scaling are linear operators. Informally, this means that scaling and sum behave “intuitively”:
 α(x[n] + w[n]) = αx[n] + αw[n] 
or
 (α + β)x[n] = αx[n] + βx[n] 
Product. The product of two sequences x[n] and w[n] is their term-by-term product:
 y[n] = x[n] · w[n]  (2.15) 
Integration. The discrete-time equivalent of integration is expressed by the following running sum:
 y[n] = ∑_{k=−∞}^{n} x[k]  (2.16) 
Intuitively, integration computes a nonnormalized running average of the discretetime signal.
Differentiation. A discrete-time approximation to differentiation is the first-order difference:^{(3)}
 y[n] = x[n] − x[n − 1]  (2.17) 
With respect to Section 2.1.2, note how the unit step can be obtained by applying the integration operator to the discrete-time impulse; conversely, the impulse can be obtained by applying the differentiation operator to the unit step.
The signal reproducing formula is a simple application of the basic signals and signal properties that we have just seen; it states that
 x[n] = ∑_{k=−∞}^{∞} x[k] δ[n − k]  (2.18) 
Any signal can be expressed as a linear combination of suitably weighted and shifted impulses. In this case, the weights are nothing but the signal values themselves. While self-evident, this formula will reappear in a variety of fundamental derivations, since it captures the “inner structure” of a discrete-time signal.
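For a finite-support signal the sum in (2.18) has only finitely many nonzero terms, so the formula is easy to check directly; a small Python sketch (our own illustrative values):

```python
def delta(n):
    # discrete-time impulse, eq. (2.5)
    return 1 if n == 0 else 0

# Reproducing formula (2.18): x[n] = sum over k of x[k] * delta[n - k].
# We store only the nonzero samples; x[n] = 0 outside {0, 1, 2}.
x = {0: 3.0, 1: -1.0, 2: 2.5}

def reproduce(n):
    # each term x[k]*delta[n-k] survives only when k == n
    return sum(xk * delta(n - k) for k, xk in x.items())

print([reproduce(n) for n in range(-1, 4)])  # [0.0, 3.0, -1.0, 2.5, 0.0]
```

The weights in the linear combination are exactly the signal values, as the text says.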
We define the energy of a discrete-time signal as
 E_{x} = ‖x‖² = ∑_{n=−∞}^{∞} |x[n]|²  (2.19) 
(where the squared-norm notation will become clearer in the next Chapter). This definition is consistent with the idea that, if the values of the sequence represent a time-varying voltage, the above sum would express the total energy (in joules) dissipated over a 1 Ω resistor. Obviously, the energy is finite only if the above sum converges, i.e. if the sequence x[n] is square-summable. A signal with this property is sometimes referred to as a finite-energy signal. For a simple example of the converse, note that a periodic signal which is not identically zero is not square-summable.
We define the power of a signal as the usual ratio of energy over time, taking the limit over the number of samples considered:
 P_{x} = lim_{N→∞} (1∕(2N + 1)) ∑_{n=−N}^{N} |x[n]|²  (2.20) 
Clearly, signals whose energy is finite have zero total power (i.e. their energy dilutes to zero over an infinite time duration). Exponential sequences which are not decaying (i.e. those for which |a| > 1 in (2.7)) possess infinite power (which is consistent with the fact that they describe an unstable behavior). Note, however, that many signals whose energy is infinite do have finite power; this is the case, in particular, for periodic signals (such as sinusoids and combinations thereof). Due to their periodic nature, however, the above limit is undetermined; we therefore define their power to be simply the average energy over a period. Assuming that the period is N samples, we have
 P_{x} = (1∕N) ∑_{n=0}^{N−1} |x̃[n]|²  (2.21) 
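Definitions (2.19) and (2.21) translate directly into code; a quick Python sketch (our own example signals) showing a finite-energy signal versus a finite-power periodic signal:

```python
def energy(x):
    # eq. (2.19): sum of squared magnitudes over the (finite) support
    return sum(abs(v) ** 2 for v in x)

def power_periodic(one_period):
    # eq. (2.21): average energy over one period of length N
    return energy(one_period) / len(one_period)

# A (truncated) decaying exponential: its energy converges...
finite = [0.5 ** n for n in range(50)]
print(round(energy(finite), 6))  # 1.333333, approaching 1/(1 - 0.25) = 4/3

# ...while a periodic square wave has infinite energy but finite power.
period = [1.0, -1.0, 1.0, -1.0]
print(power_periodic(period))  # 1.0
```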
The examples of discrete-time signals in (2.1) and (2.2) are two-sided, infinite sequences. Of course, in the practice of signal processing, it is impossible to deal with infinite quantities of data: for a processing algorithm to execute in a finite amount of time and to use a finite amount of storage, the input must be of finite length; even for algorithms that operate on the fly, i.e. algorithms that produce an output sample for each new input sample, an implicit limitation on the input data size is imposed by the necessarily limited life span of the processing device.^{(4)} This limitation is all too apparent in our attempts to plot infinite sequences such as those in Figure 2.1 or 2.2: what the diagrams show, in fact, is just a meaningful and representative portion of the signals; as for the rest, the analytical description remains the only reference. When a discrete-time signal admits no closed-form representation, as is basically always the case with real-world signals, its finite time support arises naturally from the finite time spent recording the signal: every piece of music has a beginning and an end, and so does every phone conversation. In the case of the sequence representing the Dow Jones index, for instance, we basically cheated, since the index did not even exist before 1884 and its value tomorrow is certainly not known – so the signal is not really a sequence, although it can be arbitrarily extended to one.
More importantly (and more often), the finiteness of a discrete-time signal is explicitly imposed by design, since we are interested in concentrating our processing efforts on a small portion of an otherwise longer signal; in a speech recognition system, for instance, the practice is to cut up a speech signal into small segments and try to identify the phonemes associated with each one of them.^{(5)} A special case is that of periodic signals; even though these are bona fide infinite sequences, it is clear that all the information about them is contained in just one period. By describing one period (graphically or otherwise) we are, in fact, providing a full description of the sequence. The complete taxonomy of the discrete-time signals used in the book is the subject of the next Sections and is summarized in Table 2.1.
As we just mentioned, a finite-length discrete-time signal of length N is just a collection of N complex values. To introduce a point that will reappear throughout the book, a finite-length signal of length N is entirely equivalent to a vector in C^{N}. This equivalence is of immense import, since all the tools of linear algebra become readily available for describing and manipulating finite-length signals. We can represent an N-point finite-length signal using the standard vector notation
 x = [x[0] x[1] … x[N − 1]]^{T} 
Note the transpose operator, which declares x as a column vector; this is the customary practice in the case of complex-valued vectors. Alternatively, we can (and often will) use a notation that mimics the one used for proper sequences:
 x[n],  n = 0, 1, …, N − 1 
Here we must remember that, although we use the notation x[n], x[n] is not defined for values outside its support, i.e. for n < 0 or for n ≥ N. Note that we can always obtain a finite-length signal from an infinite sequence by simply dropping the sequence values outside the indices of interest. Vector and sequence notations are equivalent and will be used interchangeably according to convenience; in general, the vector notation is useful when we want to stress the algorithmic or geometric nature of certain signal processing operations, while the sequence notation is useful in stressing their algebraic structure.
Finite-length signals are extremely convenient entities: their energy is always finite and, as a consequence, no stability issues arise in processing. From the computational point of view, they are not only a necessity but often the cornerstone of very efficient algorithmic design (as we will see, for instance, in the case of the FFT); one could say that all “practical” signal processing lives in C^{N}. It would be extremely awkward, however, to develop the whole theory of signal processing only in terms of finite-length signals; the asymptotic behavior of algorithms and transformations for infinite sequences is also extremely valuable, since a stability result proven for a general sequence will hold for all finite-length signals too. Furthermore, infinite sequences derive great notational flexibility from their function-like definition: we can immediately recognize and understand the expression x[n − k] as a k-point shift of a sequence x[n]; but, in the case of finite-support signals, how are we to define such a shift? We would have to explicitly take into account the finiteness of the signal and the associated “border effects”, i.e. the behavior of operations at the edges of the signal. For this reason, in most derivations which involve finite-length signals, these signals will be embedded into proper sequences, as we will see shortly.
Aperiodic Signals. The most general type of discrete-time signal is represented by a generic infinite complex sequence. Although, as previously mentioned, such sequences lie beyond our processing and storage capabilities, they are invaluable as a generalization in the limit. As such, they must be handled with some care when it comes to their properties. We will see shortly that two of the most important properties of infinite sequences concern their summability: this can take the form of either absolute summability (the stronger condition) or square summability (the weaker condition, corresponding to finite energy).

Periodic Signals. A periodic sequence with period N is one for which
 x̃[n] = x̃[n + kN],  for all k ∈ Z  (2.22) 
The tilde notation x̃[n] will be used whenever we need to explicitly stress a periodic behavior. Clearly an N-periodic sequence is completely defined by its N values over a period; that is, a periodic sequence “carries no more information” than a finite-length signal of length N.
Periodic Extensions. Periodic sequences are infinite in length, and yet all their information is contained within a finite number of samples. In this sense, periodic sequences represent a first bridge between finite-length signals and infinite sequences. In order to “embed” a finite-length signal x[n], n = 0, …, N − 1, into a sequence, we can take its periodized version:
 x̃[n] = x[n mod N]  (2.23) 
this is called the periodic extension of the finite-length signal x[n]. This type of extension is the “natural” one in many contexts, for reasons which will become apparent later when we study the frequency-domain representation of discrete-time signals. Note that an arbitrary shift of the periodic sequence now corresponds to the periodization of a circular shift of the original finite-length signal. A circular shift by k ∈ Z is easily visualized by imagining a shift register; if we are shifting towards the right (k > 0), the values which pop out of the rightmost end of the shift register are pushed back in at the other end.^{(6)} The relationship between the circular shift of a finite-length signal and the linear shift of its periodic extension is depicted in Figure 2.6. Finally, the energy of a periodic extension becomes infinite, while its power is simply the energy of the original finite-length signal scaled by 1∕N.
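The correspondence between linear shifts of the periodic extension and circular shifts of the underlying finite-length signal (the content of Figure 2.6) can be checked in a few lines of Python (our own sketch; function names are ours):

```python
def periodic_extension(x, n):
    # eq. (2.23): x_tilde[n] = x[n mod N]
    return x[n % len(x)]

def circular_shift(x, k):
    # right circular shift by k: values that fall off one end of the
    # "shift register" re-enter at the other end
    N = len(x)
    return [x[(n - k) % N] for n in range(N)]

x = [0, 1, 2, 3]
k = 1
# Shifting the periodic extension by k equals periodizing the
# circularly shifted finite-length signal.
shifted_ext = [periodic_extension(x, n - k) for n in range(8)]
ext_of_shift = [periodic_extension(circular_shift(x, k), n) for n in range(8)]
print(shifted_ext == ext_of_shift)  # True
```

Python’s `%` operator returns a nonnegative residue even for negative indices, which is exactly the `n mod N` needed by (2.23).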
Finite-Support Signals. An infinite discrete-time sequence x̄[n] is said to have finite support if its values are zero for all indices outside of an interval; that is, there exist M, N ∈ Z such that
 x̄[n] = 0  for n < M and for n ≥ M + N 
Note that, although x̄[n] is an infinite sequence, the knowledge of M and of the N potentially nonzero values of the sequence completely specifies the entire signal. This suggests another approach to embedding a finite-length signal x[n], n = 0, …, N − 1, into a sequence, i.e.
 x̄[n] = x[n] for 0 ≤ n ≤ N − 1, and x̄[n] = 0 otherwise  (2.24) 
where we have chosen M = 0 (but any other choice of M could be used). Note that here, in contrast to the periodic extension of x[n], we are actually adding arbitrary information in the form of the zero values outside of the support interval. This is not without consequences, as we will see in the following Chapters. In general, we will use the bar notation x̄[n] for sequences defined as the finite-support extension of a finite-length signal. Note that, now, a shift of the finite-support extension gives rise to a zero-padded shift of the signal within the locations between M and M + N − 1; the dynamics of the shift are shown in Figure 2.7.

Example 2.1: Discrete time in the Far West
The fact that the “fastest” digital frequency is 2π can be readily appreciated in old western movies. In classic scenarios there is always a sequence showing a stagecoach leaving town. We can see the spoked wagon wheels starting to turn forward faster and faster, then stopping, and then starting to turn backwards. In fact, each frame in the movie is a snapshot of a spinning disk with increasing angular velocity. The filming process therefore transforms the wheel’s movement into a sequence of discrete-time positions depicting a circular motion of increasing frequency. When the speed of the wheel is such that the time between frames covers a full revolution, the wheel appears to be stationary: this corresponds to the fact that the maximum digital frequency ω = 2π is indistinguishable from the slowest frequency ω = 0. As the speed of the real wheel increases further, the wheel on film starts to move backwards, which corresponds to a negative digital frequency. This is because a displacement of 2π − α between successive frames is interpreted by the brain as a negative displacement of −α: our intuition always privileges the most economical explanation of natural phenomena.
Example 2.2: Building periodic signals
Given a discrete-time signal x[n] and an integer N > 0 we can always formally write
 ỹ[n] = ∑_{k=−∞}^{∞} x[n − kN] 
The signal ỹ[n], if it exists, is an N-periodic sequence. The periodic signal ỹ[n] is “manufactured” by superimposing infinite copies of the original signal x[n], spaced N samples apart. We can distinguish three cases:
1) if x[n] is a finite-support signal whose support is smaller than or equal to N, the copies in the sum do not overlap and ỹ[n] is just a juxtaposition of replicas of x[n]; 2) if x[n] is a finite-support signal whose support is larger than N, the copies overlap and each period of ỹ[n] is the sum of a finite number of terms; 3) if x[n] has infinite support, each sample of ỹ[n] is the sum of an infinite number of terms and the periodization exists only if the sum converges.
The first two cases are illustrated in Figure 2.8. In practice, the periodization of short sequences is an effective method to synthesize the sound of string instruments such as a guitar or a piano; used in conjunction with simple filters, the technique is known as the KarplusStrong algorithm.
As an example of the last type, take for instance the signal x[n] = α^{n} u[n], with |α| < 1. The periodization formula leads to
 ỹ[n] = ∑_{k=−∞}^{∞} α^{n−kN} u[n − kN] = ∑_{k=−∞}^{⌊n∕N⌋} α^{n−kN} 
and, writing n = mN + i with i = n mod N,
 ỹ[n] = ∑_{k=−∞}^{m} α^{(m−k)N+i} = α^{i} ∑_{h=0}^{∞} α^{hN} 
so that, finally,
 ỹ[n] = α^{n mod N} ∕ (1 − α^{N})  (2.25) 
which is indeed N-periodic. An example is shown in Figure 2.9.
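The closed form (2.25) can be verified numerically by truncating the periodization sum; a small Python sketch (our own check, with arbitrary values of α and N):

```python
# Numerically check eq. (2.25): periodizing x[n] = a^n u[n] with period N
# gives y_tilde[n] = a^(n mod N) / (1 - a^N), for |a| < 1.
a, N = 0.8, 4

def y_tilde(n, terms=2000):
    # truncated periodization sum: u[n - kN] is nonzero only for
    # k <= floor(n / N), so we sum downwards from that index
    return sum(a ** (n - k * N) for k in range(n // N, -terms, -1))

closed = lambda n: a ** (n % N) / (1 - a ** N)

# agreement over positive and negative indices alike
ok = all(abs(y_tilde(n) - closed(n)) < 1e-9 for n in range(-3, 12))
print(ok)  # True
```

Note that Python’s floor division `//` and modulo `%` give exactly the ⌊n∕N⌋ and n mod N of the derivation, even for negative n.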

For more discussion on discrete-time signals, see Discrete-Time Signal Processing, by A. V. Oppenheim and R. W. Schafer (Prentice-Hall, last edition in 1999), in particular Chapter 2.
Other books of interest include: B. Porat, A Course in Digital Signal Processing (Wiley, 1997) and R. L. Allen and D. W. Mills’ Signal Analysis (IEEE Press, 2004).
Exercise 2.1: Review of complex numbers.
Exercise 2.2: Periodic signals. For each of the following discretetime signals, state whether the signal is periodic and, if so, specify the period:
In the 17th century, algebra and geometry started to interact in a fruitful synergy which continues to the present day. Descartes’s original idea of translating geometric constructs into algebraic form spurred a new line of attack in mathematics; soon, a series of astonishing results was produced for a number of problems which had long defied geometrical solutions (such as, famously, the trisection of the angle). It also spearheaded the notion of vector space, in which a geometrical point could be represented as an n-tuple of coordinates; this, in turn, readily evolved into the theory of linear algebra. Later, the concept proved useful in the opposite direction: many algebraic problems could benefit from our innate geometrical intuition once they were cast in vector form; starting from the easy three-dimensional visualization of concepts such as distance and orthogonality, more complex algebraic constructs could be brought within the realm of intuition. The final leap of imagination came with the realization that the concept of vector space could be applied to much more abstract entities, such as infinite-dimensional objects and functions. In so doing, however, spatial intuition is of limited help and, for this reason, the notion of vector space had to be formalized in much more rigorous terms; we will see that the definition of Hilbert space is one such formalization.
Most of the signal processing theory in this book can be usefully cast in terms of vector notation, and the advantages of this approach are exactly those we have just delineated. First of all, the standard machinery of linear algebra becomes immediately available and applicable; this greatly simplifies the formalism used in the mathematical proofs which follow and, at the same time, fosters a good intuition with respect to the underlying principles which are being put in place. Furthermore, the vector notation creates a frame of thought which seamlessly links the more abstract results involving infinite sequences to the algorithmic reality involving finite-length signals. Finally, on the practical side, vector notation is the standard paradigm for numerical analysis packages such as Matlab; signal processing algorithms expressed in vector notation translate to working code with very little effort.
In the previous Chapter, we established the basic notation for the different classes of discretetime signals which we will encounter time and again in the rest of the book and we hinted at the fact that a tight correspondence can be established between the concept of signal and that of vector space. In this Chapter, we pursue this link further, firstly by reviewing the familiar Euclidean space in finite dimensions and then by extending the concept of vector space to infinitedimensional Hilbert spaces.
Euclidean geometry is a straightforward formalization of our spatial sensory experience; hence its cornerstone role in developing a basic intuition for vector spaces. Everybody is (or should be) familiar with Euclidean geometry and the natural “physical” spaces like R^{2} (the plane) and R^{3} (the threedimensional space). The notion of distance is clear; orthogonality is intuitive and maps to the idea of a “right angle”. Even a more abstract concept such as that of basis is quite easy to contemplate (the standard coordinate concepts of latitude, longitude and height, which correspond to the three orthogonal axes in R^{3}). Unfortunately, immediate spatial intuition fails us for higher dimensions (i.e. for R^{N} with N > 3), yet the basic concepts introduced for R^{3} generalize easily to R^{N} so that it is easier to state such concepts for the higherdimensional case and specialize them with examples for N = 2 or N = 3. These notions, ultimately, will be generalized even further to more abstract types of vector spaces. For the moment, let us review the properties of R^{N}, the Ndimensional Euclidean space.
Vectors and Notation. A point in R^{N} is specified by an N-tuple of coordinates:^{(1)}
 x = [x_{0} x_{1} … x_{N−1}]^{T} 
where x_{i} ∈ R, i = 0, 1, …, N − 1. We call this set of coordinates a vector and the N-tuple will be denoted synthetically by the symbol x; coordinates are usually expressed with respect to a “standard” orthonormal basis.^{(2)} The vector
 0 = [0 0 … 0]^{T} 
i.e. the null vector, is considered the origin of the coordinate system.
The generic nth element in vector x is indicated by the subscript x_{n}. In the following we will often consider a set of M arbitrarily chosen vectors in R^{N}, and this set will be indicated by the notation {x^{(k)}}_{k=0,…,M−1}. Each vector in the set is indexed by the superscript ·^{(k)}. The nth element of the kth vector in the set is indicated by the notation x_{n}^{(k)}.
Inner Product. The inner product between two vectors x, y ∈ R^{N} is defined as

⟨x, y⟩ = Σ_{n=0}^{N−1} x_{n} y_{n}    (3.1)

We say that x and y are orthogonal when their inner product is zero:

x ⊥ y  ⟺  ⟨x, y⟩ = 0    (3.2)
Norm. The norm of a vector is defined in terms of the inner product as

∥x∥_{2} = √⟨x, x⟩ = (Σ_{n=0}^{N−1} x_{n}^{2})^{1/2}    (3.3)

It is easy to visualize geometrically that the norm of a vector corresponds to its length, i.e. to the distance between the origin and the point identified by the vector’s coordinates. A remarkable property linking the inner product and the norm is the Cauchy-Schwarz inequality (the proof of which is nontrivial); given x, y ∈ R^{N}, we can state that

|⟨x, y⟩| ≤ ∥x∥_{2} ∥y∥_{2}
Distance. The concept of norm is used to introduce the notion of Euclidean distance between two vectors x and y:

d(x, y) = ∥x − y∥_{2} = (Σ_{n=0}^{N−1} (x_{n} − y_{n})^{2})^{1/2}    (3.4)

From this, we can easily derive the Pythagorean theorem for N dimensions: if two vectors are orthogonal, x ⊥ y, and we consider the sum vector z = x + y, we have

∥z∥^{2} = ∥x∥^{2} + ∥y∥^{2}    (3.5)
The above properties are graphically shown in Figure 3.1 for R^{2}.
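The definitions above translate directly into a few lines of code. The following is a minimal numerical sketch (assuming Python with NumPy as the numerical package; the text itself prescribes no particular language) that computes the inner product (3.1), the norm (3.3) and the distance (3.4), and checks the Pythagorean theorem (3.5) on a pair of orthogonal vectors:

```python
import numpy as np

# Two vectors in R^3 (an arbitrary example; any real N-tuples work).
x = np.array([1.0, 2.0, 2.0])
y = np.array([2.0, -1.0, 0.0])

inner = np.dot(x, y)             # <x, y> as in (3.1)
norm_x = np.sqrt(np.dot(x, x))   # ||x|| as in (3.3); equals np.linalg.norm(x)
dist = np.linalg.norm(x - y)     # Euclidean distance (3.4)

# x and y happen to be orthogonal here, so the Pythagorean theorem (3.5) holds:
z = x + y
assert np.isclose(inner, 0.0)
assert np.isclose(np.linalg.norm(z)**2,
                  np.linalg.norm(x)**2 + np.linalg.norm(y)**2)
```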
Bases. Consider a set of M arbitrarily chosen vectors in R^{N}: {x^{(k)}}_{k=0,…,M−1}. Given such a set, a key question is that of completeness: can any vector in R^{N} be written as a linear combination of vectors from the set? In other words, we ask ourselves whether, for any z ∈ R^{N}, we can find a set of M coefficients α_{k} ∈ R such that z can be expressed as

z = Σ_{k=0}^{M−1} α_{k} x^{(k)}    (3.6)
Clearly, M needs to be greater than or equal to N, but what conditions must a set of vectors {x^{(k)}}_{k=0,…,M−1} satisfy so that (3.6) holds for any z ∈ R^{N}? The set of M vectors must span R^{N}, and it can be shown that this is equivalent to saying that the set must contain at least N linearly independent vectors. In turn, N vectors {y^{(k)}}_{k=0,…,N−1} are linearly independent if the equation

Σ_{k=0}^{N−1} β_{k} y^{(k)} = 0    (3.7)

is satisfied only when all the β_{k}’s are zero. A set of N linearly independent vectors for R^{N} is called a basis and, amongst bases, the ones with mutually orthogonal vectors of norm equal to one are called orthonormal bases. For an orthonormal basis {y^{(k)}} we therefore have

⟨y^{(k)}, y^{(h)}⟩ = δ[k − h]    (3.8)
Figure 3.2 reviews the above concepts in low dimensions.

The standard orthonormal basis for R^{N} is the canonical basis {δ^{(k)}}_{k=0,…,N−1} with

δ_{n}^{(k)} = δ[n − k]

The orthonormality of such a set is immediately apparent. Another important orthonormal basis is the normalized Fourier basis {w^{(k)}}_{k=0,…,N−1}, for which

w_{n}^{(k)} = (1/√N) e^{j(2π/N)nk}
The orthonormality of the basis will be proved in the next Chapter.
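Even before the formal proof, the orthonormality claim is easy to check numerically. A minimal sketch (assuming Python with NumPy) builds the normalized Fourier basis for N = 8 and verifies that the Gram matrix of all pairwise inner products is the identity:

```python
import numpy as np

N = 8
n = np.arange(N)
# Normalized Fourier basis vectors w^(k), k = 0..N-1 (complex-valued,
# so the inner product must conjugate the first argument).
W = np.array([np.exp(2j * np.pi * n * k / N) / np.sqrt(N) for k in range(N)])

G = W.conj() @ W.T   # Gram matrix: G[k, h] = <w^(k), w^(h)>
assert np.allclose(G, np.eye(N))   # orthonormality: <w^(k), w^(h)> = delta[k-h]
```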
The purpose of the previous Section was to briefly review the elementary notions and spatial intuitions of Euclidean geometry. A thorough study of vectors in R^{N} and C^{N} is the subject of linear algebra; yet, the idea of vectors, orthogonality and bases is much more general, the basic ingredients being an inner product and the use of a square norm as in (3.3).
While the analogy between vectors in C^{N} and length-N signals is readily apparent, the question now is how to generalize the above concepts to the class of infinite sequences. Intuitively, for instance, we could let N grow to infinity and obtain C^{∞} as the Euclidean space for infinite sequences; in this case, however, much care must be exercised with expressions such as (3.1) and (3.3), which can diverge for sequences as simple as x[n] = 1 for all n. In fact, the proper generalization of C^{N} to an infinite number of dimensions is a particular type of vector space called a Hilbert space; the structure of this kind of vector space imposes a set of constraints on its elements so that divergence problems, such as the one we just mentioned, no longer bother us. When we embed infinite sequences into a Hilbert space, these constraints translate to the condition that the corresponding signals have finite energy – which is a mild and reasonable requirement.
Finally, it is important to remember that the notion of Hilbert space is applicable to much more general vector spaces than C^{N}; for instance, we can easily consider spaces of functions over an interval or over the real line. This generality is actually the cornerstone of a branch of mathematics called functional analysis. While we will not pursue these generalizations in great depth, we will certainly point out a few of them along the way. The space of square integrable functions, for instance, will turn out to be a marvelous tool a few Chapters from now when, finally, the link between continuous- and discrete-time signals is explored in detail.
A word of caution: we are now starting to operate in a world of complete abstraction. Here a vector is an entity per se and, while analogies and examples in terms of Euclidean geometry can be useful visually, they are, by no means, exhaustive. In other words: vectors are no longer just Ntuples of numbers; they can be anything. This said, a Hilbert space can be defined in incremental steps starting from a general notion of vector space and by supplementing this space with two additional features: the existence of an inner product and the property of completeness.
Vector Space. Consider a set of vectors V and a set of scalars S (which can be either R or C for our purposes). A vector space H(V, S) is completely defined by the existence of a vector addition operation and a scalar multiplication operation which satisfy the following properties for any x, y, z ∈ V and any α, β ∈ S:

x + y = y + x    (3.9)

(x + y) + z = x + (y + z)    (3.10)

α(x + y) = αx + αy    (3.11)

(α + β)x = αx + βx    (3.12)

α(βx) = (αβ)x    (3.13)

there exists a null vector 0 ∈ V such that x + 0 = 0 + x = x    (3.14)

for every x ∈ V there exists (−x) ∈ V such that x + (−x) = 0    (3.15)

1 · x = x    (3.16)
Inner Product Space. What we have so far is the simplest type of vector space; the next ingredient we consider is the inner product, which is essential to build a notion of distance between elements in a vector space. A vector space with an inner product is called an inner product space. An inner product for H(V, S) is a function from V × V to S which satisfies the following properties for any x, y, z ∈ V:

⟨x + y, z⟩ = ⟨x, z⟩ + ⟨y, z⟩    (3.17)

⟨x, y + z⟩ = ⟨x, y⟩ + ⟨x, z⟩    (3.18)

⟨x, αy⟩ = α⟨x, y⟩    (3.19)

⟨αx, y⟩ = α^{*}⟨x, y⟩    (3.20)

⟨x, y⟩^{*} = ⟨y, x⟩    (3.21)

⟨x, x⟩ ≥ 0, with ⟨x, x⟩ = 0 if and only if x = 0    (3.22)
From this definition of the inner product, a series of additional definitions and properties can be derived. First of all, orthogonality between two vectors is defined with respect to the inner product: we say that two nonzero vectors x and y are orthogonal, written x ⊥ y, if and only if

⟨x, y⟩ = 0    (3.23)

From the inner product, we can also define the norm of a vector (from now on we omit the subscript 2 from the norm symbol):

∥x∥ = √⟨x, x⟩    (3.24)
In turn, the norm satisfies the Cauchy-Schwarz inequality

|⟨x, y⟩| ≤ ∥x∥ ∥y∥    (3.25)

with equality if and only if x = αy. The norm also satisfies the triangle inequality

∥x + y∥ ≤ ∥x∥ + ∥y∥    (3.26)

with equality if and only if x = αy and α ∈ R^{+}. For orthogonal vectors, the triangle inequality becomes the famous Pythagorean theorem:

∥x + y∥^{2} = ∥x∥^{2} + ∥y∥^{2}  for x ⊥ y    (3.27)
Hilbert Space. An inner product space H(V, S) becomes a Hilbert space when we add the requirement of completeness. This is a slightly more technical notion, which essentially implies that convergent sequences of vectors in V have a limit that is also in V. To gain intuition, think of the set of rational numbers Q versus the set of real numbers R. The set of rational numbers is incomplete, because there are convergent sequences in Q which converge to irrational numbers. The set of real numbers contains these irrational numbers and is, in that sense, the completion of Q. Completeness is usually hard to prove in the case of infinite-dimensional spaces; in the following it will be tacitly assumed and the interested reader can easily find the relevant proofs in advanced analysis textbooks. Finally, we will only consider separable Hilbert spaces, which are the ones that admit countable orthonormal bases.
Finite Euclidean Spaces. The vector space C^{N}, with the “natural” definition for the sum of two vectors z = x + y as

z_{n} = x_{n} + y_{n},  n = 0, …, N − 1    (3.28)

and the definition of the inner product as

⟨x, y⟩ = Σ_{n=0}^{N−1} x_{n}^{*} y_{n}    (3.29)
is a Hilbert space.
Polynomial Functions. An example of a “functional” Hilbert space is the vector space P_{N}[0,1] of polynomial functions on the interval [0,1] with maximum degree N. It is a good exercise to show that P_{∞}[0,1] is not complete; consider, for instance, the sequence of polynomials

p_{n}(t) = Σ_{k=0}^{n} t^{k}/k!

This sequence converges as p_{n}(t) → e^{t}, and the exponential is not a polynomial.
Square Integrable Functions. Another interesting example of a functional Hilbert space is the space of square integrable functions over a finite interval. For instance, L_{2}[−π, π] is the space of real or complex functions on the interval [−π, π] which have a finite norm. The inner product over L_{2}[−π, π] is defined as
⟨f, g⟩ = ∫_{−π}^{π} f^{*}(t) g(t) dt    (3.30)

so that the norm of f(t) is

∥f∥ = (∫_{−π}^{π} |f(t)|^{2} dt)^{1/2}    (3.31)

For f(t) to belong to L_{2}[−π, π], it must satisfy ∥f∥ < ∞.
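The inner product (3.30) can be approximated numerically by discretizing the interval. A minimal sketch (assuming Python with NumPy and a simple Riemann sum; any quadrature rule would do) checks that sin and cos are orthogonal in L_{2}[−π, π] and that the squared norm of sin is π:

```python
import numpy as np

# Dense grid over [-pi, pi]; the integral (3.30) is approximated by a sum.
t = np.linspace(-np.pi, np.pi, 20001)
dt = t[1] - t[0]

def inner_L2(f_vals, g_vals):
    # Discrete approximation of the L2[-pi, pi] inner product (3.30).
    return np.sum(np.conj(f_vals) * g_vals) * dt

# sin and cos are orthogonal over [-pi, pi]; the squared norm of sin is pi.
assert abs(inner_L2(np.sin(t), np.cos(t))) < 1e-8
assert abs(inner_L2(np.sin(t), np.sin(t)) - np.pi) < 1e-4
```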
The inner product is a fundamental tool in a vector space since it allows us to introduce a notion of distance between vectors. This is a typical instance in which a geometric construct helps us generalize a basic idea to much more abstract scenarios. Indeed, take the simple Euclidean space R^{N} and a given vector x; for any vector y ∈ R^{N}, the inner product ⟨x, y⟩ is the measure of the orthogonal projection of y onto x. We know that the orthogonal projection defines the point on x which is closest to y, and therefore this indicates how well we can approximate y by a simple scaling of x. To see this, note that

⟨x, y⟩ = ∥x∥ ∥y∥ cos θ

where θ is the angle between the two vectors (you can work out the expression in R^{2} to easily convince yourself of this; the result generalizes to any other dimension). Clearly, if the vectors are orthogonal, the cosine is zero and no approximation is possible. Since the inner product depends on the angular separation between the vectors, it represents a first, rough measure of similarity between x and y; in broad terms, it provides a measure of the difference in shape between vectors.
In the context of signal processing, this is particularly relevant since, most of the time, we are interested in the difference in “shape” between signals. As we have said before, discrete-time signals are vectors; the computation of their inner product will assume different names according to the processing context in which we find ourselves: it will be called filtering when we are trying to approximate or modify a signal, or correlation when we are trying to detect one particular signal amongst many. Yet, in all cases, it will still be an inner product, i.e. a qualitative measure of similarity between vectors. In particular, orthogonality between signals implies that the signals are perfectly distinguishable or, in other words, that their shapes are completely different.
The need for a quantitative measure of similarity in some applications calls for the introduction of the Euclidean distance, which is derived from the inner product as

d(x, y) = ∥x − y∥    (3.32)

In particular, for C^{N} the Euclidean distance is defined by the expression

d(x, y) = (Σ_{n=0}^{N−1} |x_{n} − y_{n}|^{2})^{1/2}    (3.33)

whereas for L_{2}[−π, π] we have

d(f, g) = (∫_{−π}^{π} |f(t) − g(t)|^{2} dt)^{1/2}    (3.34)
In the practice of signal processing, the Euclidean distance is referred to as the root mean square error;^{(4)} this is a global, quantitative goodness-of-fit measure when trying to approximate signal y with x.
Incidentally, there are other types of distance measures which do not rely on a notion of inner product; for example, in C^{N} we could define

d_{∞}(x, y) = max_{n} |x_{n} − y_{n}|    (3.35)

This distance is based on the supremum norm, usually indicated by ∥·∥_{∞}; however, it can be shown that there is no inner product from which this norm can be derived, and therefore no Hilbert space can be constructed in which ∥·∥_{∞} is the natural norm. Nonetheless, this norm will reappear later, in the context of optimal filter design.
Now that we have defined the properties of Hilbert space, it is only natural to start looking at the consequent inner structure of such a space. The best way to do so is by introducing the concept of basis. You can think of a basis as the “skeleton” of a vector space, i.e. a structure which holds everything together; yet, this skeleton is flexible and we can twist it, stretch it and rotate it in order to highlight some particular structure of the space and facilitate access to particular information that we may be seeking. All this is accomplished by a linear transformation called a change of basis; to anticipate the topic of the next Chapter, we will see shortly that the Fourier transform is an instance of basis change.
Sometimes, we are interested in exploring in more detail a specific subset of a given vector space; this is accomplished via the concept of subspace. A subspace is, as the name implies, a restricted region of the global space, with the additional property that it is closed under the usual vector operations. This implies that, once in a subspace, we can operate freely without ever leaving its confines; just like a full-fledged space, a subspace has its own skeleton (i.e. the basis) and, again, we can exploit the properties of this basis to highlight the features that interest us.
Assume H(V,S) is a Hilbert space, with V a vector space and S a set of scalars (i.e. C).
Subspace. A subspace of V is defined as a subset P ⊆ V that satisfies the following properties:

x + y ∈ P for all x, y ∈ P    (3.36)

αx ∈ P for all x ∈ P and all α ∈ S    (3.37)

Clearly, V is a subspace of itself.
Span. Given an arbitrary set of M vectors W = {x^{(m)}}_{m=0,1,…,M−1}, the span of these vectors is defined as

span(W) = { Σ_{m=0}^{M−1} α_{m} x^{(m)},  α_{m} ∈ S }    (3.38)

i.e. the span of W is the set of all possible linear combinations of the vectors in W. The set of vectors W is called linearly independent if the following holds:

Σ_{m=0}^{M−1} α_{m} x^{(m)} = 0  ⟺  α_{m} = 0 for m = 0, 1, …, M − 1    (3.39)
Basis. A set of K vectors W = {x^{(k)}}_{k=0,1,…,K−1} from a subspace P is a basis for that subspace if the vectors in W are linearly independent and if they span the subspace, i.e. span(W) = P. The last condition affirms that any y ∈ P can be written as a linear combination of {x^{(k)}}_{k=0,1,…,K−1}, i.e. that, for all y ∈ P, there exist K coefficients α_{k} such that

y = Σ_{k=0}^{K−1} α_{k} x^{(k)}    (3.40)

which is equivalently expressed by saying that the set W is complete in P.
Orthogonal/Orthonormal Basis. An orthonormal basis for a subspace P is a set of K basis vectors W = {x^{(k)}}_{k=0,1,…,K−1} for which

⟨x^{(k)}, x^{(h)}⟩ = δ[k − h]    (3.41)

which means mutual orthogonality across vectors and unit norm. Sometimes, a set of vectors can be orthogonal but not normal (i.e. the norms of the vectors are not unitary). This is not a problem, provided that we remember to include the appropriate normalization factors in the analysis and/or synthesis formulas. Alternatively, a linearly independent set of vectors can be orthonormalized via the Gram-Schmidt procedure, which can be found in any linear algebra textbook.
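As an illustration, here is a minimal Gram-Schmidt sketch (assuming Python with NumPy; note that np.vdot conjugates its first argument, matching the inner product convention used in this Chapter) which orthonormalizes three linearly independent vectors of R^{3}:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent set (given as rows)."""
    basis = []
    for v in vectors:
        w = v.astype(complex)
        for u in basis:
            w = w - np.vdot(u, w) * u   # subtract the projection onto u
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)

V = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])   # linearly independent, not orthogonal
Q = gram_schmidt(V)
assert np.allclose(Q.conj() @ Q.T, np.eye(3))   # resulting set is orthonormal
```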
Among all bases, orthonormal bases are the most “beautiful” in a way, because of their structure and their properties. One of the most important properties holds in finite dimensions: in an N-dimensional space, any set of N mutually orthogonal vectors is necessarily a basis. In other words, in finite dimensions, once we find a full set of orthogonal vectors, we are sure that the set spans the space.
Let W = {x^{(k)}}_{k=0,1,…,K−1} be an orthonormal basis for a (sub)space P. Then the following properties (all of which are easily verified) hold:
Analysis Formula. The coefficients in the linear combination (3.40) are obtained simply as

α_{k} = ⟨x^{(k)}, y⟩    (3.42)

The coefficients {α_{k}} are called the Fourier coefficients^{(5)} of the orthonormal expansion of y with respect to the basis W, and (3.42) is called the Fourier analysis formula; conversely, Equation (3.40) is called the synthesis formula.
Parseval’s Identity. For an orthonormal basis, there is a norm conservation property given by Parseval’s identity:

∥y∥^{2} = Σ_{k=0}^{K−1} |α_{k}|^{2}    (3.43)

For physical quantities, the norm is dimensionally equivalent to a measure of energy; accordingly, Parseval’s identity is also known as the energy conservation formula.
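The analysis/synthesis pair (3.42), (3.40) and Parseval's identity (3.43) can be verified numerically. A minimal sketch (assuming Python with NumPy) uses the normalized Fourier basis of C^{16} as the orthonormal set:

```python
import numpy as np

N = 16
n = np.arange(N)
# Orthonormal Fourier basis for C^N; row k holds the basis vector x^(k).
B = np.exp(2j * np.pi * np.outer(np.arange(N), n) / N) / np.sqrt(N)

rng = np.random.default_rng(0)
y = rng.standard_normal(N) + 1j * rng.standard_normal(N)

alpha = B.conj() @ y   # analysis formula (3.42): alpha_k = <x^(k), y>
y_hat = B.T @ alpha    # synthesis formula (3.40): sum of alpha_k * x^(k)

assert np.allclose(y_hat, y)                     # perfect reconstruction
assert np.isclose(np.sum(np.abs(alpha)**2),
                  np.linalg.norm(y)**2)          # Parseval's identity (3.43)
```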
Bessel’s Inequality. The generalization of Parseval’s identity is Bessel’s inequality. In our subspace P, consider a set G = {g^{(l)}}_{l=0,1,…,L−1} of L orthonormal vectors (a set which is not necessarily a basis, since possibly L < K); then the norm of any vector y ∈ P is lower bounded as

∥y∥^{2} ≥ Σ_{l=0}^{L−1} |⟨g^{(l)}, y⟩|^{2}    (3.44)

with equality for all y if and only if the set G is complete, that is, if it is an orthonormal basis for P.
Best Approximations. Assume P is a subspace of V; if we try to approximate a vector y ∈ V by a linear combination of basis vectors from the subspace P, we are led to the concepts of least squares approximation and orthogonal projection. First of all, we define the best linear approximation ŷ ∈ P of a general vector y ∈ V to be the approximation which minimizes the norm ∥y − ŷ∥. Such an approximation is easily obtained by projecting y onto an orthonormal basis for P, as shown in Figure 3.3. With W as our usual orthonormal basis for P, the projection is given by

ŷ = Σ_{k=0}^{K−1} ⟨x^{(k)}, y⟩ x^{(k)}    (3.45)

Define the approximation error as the vector d = y − ŷ; it can easily be shown that

⟨d, x^{(k)}⟩ = 0,  k = 0, 1, …, K − 1    (3.46)

i.e. the error is orthogonal to the approximation subspace.
This approximation with an orthonormal basis has a key property: it can be successively refined. Assume you have the approximation obtained with the first m terms of the orthonormal basis:

ŷ_{m} = Σ_{k=0}^{m−1} ⟨x^{(k)}, y⟩ x^{(k)}    (3.47)

and now you want to compute the (m + 1)-term approximation. This is simply given by

ŷ_{m+1} = ŷ_{m} + ⟨x^{(m)}, y⟩ x^{(m)}    (3.48)
While this seems obvious, it is actually a small miracle, since it does not hold for more general, nonorthonormal bases.
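A small numerical sketch (assuming Python with NumPy) illustrates the projection (3.45), the orthogonality of the error (3.46) and the successive refinement (3.47)-(3.48) for the plane spanned by two canonical vectors inside R^{3}:

```python
import numpy as np

# Orthonormal basis for a 2-dimensional subspace of R^3 (the x0-x1 plane).
e0 = np.array([1.0, 0.0, 0.0])
e1 = np.array([0.0, 1.0, 0.0])
y = np.array([3.0, 4.0, 5.0])

y1 = np.dot(e0, y) * e0          # 1-term approximation, as in (3.47)
y2 = y1 + np.dot(e1, y) * e1     # refined by adding one term, as in (3.48)

d = y - y2                       # approximation error
assert np.isclose(np.dot(d, e0), 0.0)    # (3.46): error orthogonal to basis
assert np.isclose(np.dot(d, e1), 0.0)
assert np.allclose(y2, [3.0, 4.0, 0.0])  # orthogonal projection onto the plane
```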

Considering the examples of Section 3.2.2, we have the following:
Finite Euclidean Spaces. For the simplest Hilbert space, namely C^{N}, orthonormal bases are also the most intuitive, since they contain exactly N mutually orthogonal vectors of unit norm. The classical example is the canonical basis {δ^{(k)}}_{k=0,…,N−1} with

δ_{n}^{(k)} = δ[n − k]    (3.49)

but we will soon study more interesting bases, such as the normalized Fourier basis {w^{(k)}}, for which

w_{n}^{(k)} = (1/√N) e^{j(2π/N)nk}

In C^{N}, the analysis and synthesis formulas (3.42) and (3.40) take a particularly neat form. For any set {x^{(k)}} of N orthonormal vectors, one can arrange the conjugates of the basis vectors^{(6)} as the successive rows of an N × N square matrix M, so that each matrix element is the conjugate of the nth element of the mth basis vector:

M_{mn} = (x_{n}^{(m)})^{*}
M is called a change of basis matrix. Given a vector y, the set of expansion coefficients {α_{k}}_{k=0,…,N−1} can itself be written as a vector^{(7)} α ∈ C^{N}. Therefore, we can rewrite the analysis formula (3.42) in matrix-vector form as

α = My    (3.50)
The reconstruction formula (3.40) for y from the expansion coefficients becomes, in turn,

y = M^{H}α    (3.51)

where the superscript ^{H} denotes the Hermitian transpose (transposition and conjugation of the matrix). The previous equation shows that y is a linear combination of the columns of M^{H}, which, of course, are the vectors x^{(k)}. The orthonormality of the basis takes the following matrix forms:
M^{H}M  = I  (3.52) 
MM^{H}  = I  (3.53) 
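The matrix relations (3.50)-(3.53) are easy to check numerically. A minimal sketch (assuming Python with NumPy) uses the normalized Fourier basis of C^{4} as a concrete orthonormal basis:

```python
import numpy as np

N = 4
n = np.arange(N)
# Rows of X are the orthonormal (normalized Fourier) basis vectors x^(m);
# the change of basis matrix M holds their conjugates as rows.
X = np.exp(2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
M = X.conj()

assert np.allclose(M.conj().T @ M, np.eye(N))   # M^H M = I  (3.52)
assert np.allclose(M @ M.conj().T, np.eye(N))   # M M^H = I  (3.53)

y = np.arange(N, dtype=complex)
alpha = M @ y                                   # analysis (3.50)
assert np.allclose(M.conj().T @ alpha, y)       # synthesis (3.51): y = M^H alpha
```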
Polynomial Functions. A basis for P_{N}[0,1] is the set of monomials {t^{k}}_{k=0,…,N}. This basis, however, is not orthonormal. It can be transformed into an orthonormal basis by the standard Gram-Schmidt procedure; the basis vectors thus obtained are called the Legendre polynomials.
Square Integrable Functions. An orthonormal basis for L_{2}[−π, π] is the set {(1/√(2π)) e^{jnt}}_{n∈Z}. This is the classic Fourier basis for functions on an interval. Note that here, as opposed to the previous examples, the number of basis vectors is infinite. The orthogonality of these basis vectors is easily verifiable; their completeness, however, is rather hard to prove and this, unfortunately, is very much the rule for all nontrivial, infinite-dimensional basis sets.
We are now in a position to formalize our intuitions so far with respect to the equivalence between discrete-time signals and vector spaces, with a particularization for the three main classes of signals that we introduced in the previous Chapter. Note that, in the following, we will liberally interchange the notations x and x[n] to denote a sequence as a vector embedded in its appropriate Hilbert space.
The correspondence between the class of finite-length, length-N signals and C^{N} should be so immediate at this point that it needs no further comment. As a reminder, the canonical basis is {δ^{(k)}}_{k=0,…,N−1}; the kth canonical basis vector is often expressed in signal form as

δ^{(k)}[n] = δ[n − k],  n, k = 0, 1, …, N − 1
As we have seen, N-periodic signals are equivalent to length-N signals. The space of N-periodic sequences is therefore isomorphic to C^{N}. In particular, the sum of two sequences considered as vectors is the standard pointwise sum of the elements:

(x + y)[n] = x[n] + y[n]    (3.54)

while, for the inner product, we extend the summation over a single period only:

⟨x, y⟩ = Σ_{n=0}^{N−1} x^{*}[n] y[n]    (3.55)
The canonical basis for the space of N-periodic sequences is the canonical basis for C^{N}, because of the isomorphism; in general, any basis for C^{N} is also a basis for the space of N-periodic sequences. Sometimes, however, we will also consider an explicitly periodized version of the basis. For the canonical basis, in particular, the periodized basis is composed of N infinite-length vectors {δ̃^{(k)}}_{k=0,…,N−1} with

δ̃^{(k)}[n] = Σ_{i∈Z} δ[n − k − iN]

Such a sequence is called a pulse train. Note that here we are abandoning mathematical rigor, since the norm of each of these basis vectors is infinite; yet the pulse train, if handled with care, can be a useful tool in formal derivations.
In the case of infinite sequences, whose “natural” Euclidean space would appear to be C^{∞}, the situation is rather delicate. While the sum of two sequences can be defined in the usual way, by extending the sum for C^{N} to C^{∞}, care must be taken when evaluating the inner product. We have already pointed out that the formula

⟨x, y⟩ = Σ_{n∈Z} x^{*}[n] y[n]    (3.56)

can diverge even for simple constant sequences such as x[n] = y[n] = 1. A way out of this impasse is to restrict ourselves to ℓ_{2}(Z), the space of square summable sequences, for which

Σ_{n∈Z} |x[n]|^{2} < ∞    (3.57)
This is the space of choice for all the theoretical derivations involving infinite sequences. Note that these sequences are often called “of finite energy”, since the square norm corresponds to the definition of energy as given in (2.19).
The canonical basis for ℓ_{2}(Z) is simply the set {δ^{(k)}}_{k∈Z}; in signal form:

δ^{(k)}[n] = δ[n − k]    (3.58)

This is an infinite set, and actually an infinite set of linearly independent vectors, since the equation

δ^{(k)}[n] = Σ_{m≠k} a_{m} δ^{(m)}[n]    (3.59)

has no solution for any k.
has no solution for any k. Note that, for an arbitrary signal x[n] the analysis formula gives
so that the reconstruction formula becomes
which is the reproducing formula (2.18). The Fourier basis for ℓ_{2}(Z) will be introduced and discussed at length in the next Chapter.
As a last remark, note that the space of all finite-support signals, which is clearly a subset of ℓ_{2}(Z), does not form a Hilbert space. The space is closed under addition and scalar multiplication, and the canonical inner product is well behaved, since all sequences have only a finite number of nonzero values. However, the space is not complete; to see this, consider the following family of finite-support signals:

y_{k}[n] = 1/n for 1 ≤ n ≤ k, and y_{k}[n] = 0 otherwise

For k growing to infinity, the sequence of signals converges as y_{k}[n] → y[n], where y[n] = 1/n for n ≥ 1 and zero otherwise; y[n] is indeed in ℓ_{2}(Z), since

Σ_{n=1}^{∞} 1/n^{2} = π^{2}/6 < ∞

but y[n] is clearly not a finite-support signal.
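The incompleteness argument can be made concrete numerically. A minimal sketch (assuming Python with NumPy, and assuming truncated sequences of the form y_k[n] = 1/n for 1 ≤ n ≤ k, zero elsewhere) shows that the squared norms of the finite-support truncations form an increasing sequence converging to π^{2}/6, the squared norm of the infinite-support limit:

```python
import numpy as np

def norm_sq(k):
    # Squared l2 norm of the finite-support truncation y_k[n] = 1/n, 1 <= n <= k.
    n = np.arange(1, k + 1)
    return np.sum(1.0 / n**2)

norms = [norm_sq(k) for k in (10, 100, 10000)]

# The norms converge (the truncations form a Cauchy sequence), but the limit
# signal 1/n has infinite support: the finite-support space is incomplete.
assert norms[0] < norms[1] < norms[2] < np.pi**2 / 6
assert abs(norms[2] - np.pi**2 / 6) < 1e-3
```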
A comprehensive review of linear algebra, which contains all the concepts of Hilbert spaces but in finite dimensions, is the classic by G. Strang, Linear Algebra and Its Applications (Brooks Cole, 2005). For an introduction to Hilbert spaces, there are many mathematics books; we suggest N. Young, An Introduction to Hilbert Space (Cambridge University Press, 1988). As an alternative, a more intuitive and engineeringmotivated approach is in the classic book by D. G. Luenberger, Optimization by Vector Space Methods (Wiley, 1969).
Exercise 3.1: Elementary operators. An operator is a transformation of a given signal and is indicated by the notation

H{x[n]} = y[n]

For instance, the delay operator is indicated as

D{x[n]} = x[n − 1]

and the one-step difference operator is indicated as

Δ{x[n]} = x[n] − x[n − 1]    (3.60)

A linear operator is one for which the following holds:

H{αx[n] + βy[n]} = αH{x[n]} + βH{y[n]}
In C^{N}, any linear operator on a vector x can be expressed as a matrix-vector multiplication for a suitable N × N matrix A. In C^{N}, we define the delay operator as the circular shift of a vector:

D{x} = [x_{N−1}  x_{0}  x_{1}  …  x_{N−2}]^{T}

Assume N = 4 for convenience; it is easy to see that

D = [ 0 0 0 1
      1 0 0 0
      0 1 0 0
      0 0 1 0 ]

so that D is the matrix associated with the delay operator.
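The matrix-operator correspondence is easy to experiment with. A minimal sketch (assuming Python with NumPy; np.roll serves as an independent reference for the circular shift):

```python
import numpy as np

N = 4
# One concrete matrix realization of the circular delay in C^4:
# output row n picks element (n - 1) mod N of the input.
D = np.array([[0, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]])

x = np.array([10, 20, 30, 40])
# Applying D delays every sample by one step, circularly.
assert np.array_equal(D @ x, np.roll(x, 1))
```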
Which operator do you think it is associated with? What does the operator do?
Exercise 3.2: Bases. Let {x^{(k)}}_{k=0,…,N−1} be a basis for a subspace S.
Prove that any vector z S is uniquely represented in this basis.
Hint: prove by contradiction.
Exercise 3.3: Vector spaces and signals.
and the multiplication by a scalar:
form a vector space. Give its dimension and find a basis.
Exercise 3.4: The Haar basis. Consider the following change of basis matrix in C^{8}, with respect to the standard orthonormal basis:
Note the pattern in the first four rows, in the next two, and in the last two.
The basis described by H is called the Haar basis and it is one of the most celebrated cornerstones of a branch of signal processing called wavelet analysis. To get a feeling for its properties, consider the following set of numerical experiments (you can use Matlab or any other numerical package):
and compute its coefficients in the Haar basis.
and compute its coefficients in the Haar basis.
Fourier theory has a long history, from J. Fourier’s early work on the transmission of heat to recent results on nonharmonic Fourier series. Fourier theory is a branch of harmonic analysis, and in that sense, a topic in pure and applied mathematics. At the same time, because of its usefulness in practical applications, Fourier analysis is a key tool in several engineering branches, and in signal processing in particular.
Why is Fourier analysis so important? To understand this, let us take a short philosophical detour. Interesting signals are time-varying quantities: think, for instance, of the voltage level at the output of a microphone or of the measured level of the tide at a particular location; in all cases, the variation of a signal over time implies that a transfer of energy is happening somewhere, and ultimately this is what we want to study. Now, a time-varying value which only increases over time is not only a physical impossibility but a recipe for disaster for whatever system is supposed to deal with it: fuses will blow, wires will melt, and so on. Oscillations, on the other hand, are nature’s and man’s way of keeping things in motion without trespassing any physical bounds; from Maxwell’s wave equation to the mechanics of the vocal cords, from the motion of an engine to the ebb and flow of the tide, oscillatory behavior is the recurring theme. Sinusoidal oscillations are the purest form of such constrained motion and, in a nutshell, Fourier’s immense contribution was to show that (at least mathematically) one can express any given phenomenon as the combined output of a number of sinusoidal “generators”.
Sinusoids have another remarkable property which justifies their ubiquitous presence: any linear time-invariant transformation of a sinusoid is a sinusoid at the same frequency. We express this by saying that sinusoidal oscillations are eigenfunctions of linear time-invariant systems. This is a formidable tool for the analysis and design of signal processing structures, as we will see in much detail in the context of discrete-time filters.
The purpose of the present Chapter is to introduce and analyze some key results on Fourier series and Fourier transforms in the context of discrete-time signal processing. As we mentioned in the previous Chapter, the Fourier transform of a signal is a change of basis in an appropriate Hilbert space. While this notion constitutes an extremely useful unifying framework, we also point out the peculiarities of its specialization to the different classes of signals. In particular, for finite-length signals we highlight the eminently algebraic nature of the transform, which leads to efficient computational procedures; for infinite sequences, we analyze some of its interesting mathematical subtleties.
The Fourier transform of a signal is an alternative representation of the data in the signal. While a signal lives in the time domain,^{(1)} its Fourier representation lives in the frequency domain. We can move back and forth at will from one domain to the other using the direct and inverse Fourier operators, since these operators are invertible.
In this Chapter we study three types of Fourier transform, which apply to the three main classes of signals that we have seen so far: the Discrete Fourier Transform (DFT) for finite-length signals; the Discrete Fourier Series (DFS) for periodic sequences; and the Discrete-Time Fourier Transform (DTFT) for infinite sequences. The frequency representation of a signal (given by a set of coefficients in the case of the DFT and DFS, and by a frequency distribution in the case of the DTFT) is called the spectrum.
The basic ingredient of all the Fourier transforms which follow is the discrete-time complex exponential; this is a sequence of the form

x[n] = A e^{j(ωn + φ)}

A complex exponential represents an oscillatory behavior: A ∈ R is the amplitude of the oscillation, ω is its frequency and φ is its initial phase. Note that a discrete-time complex exponential sequence is not always periodic; it is periodic only if ω = 2π(M/N) for some values of M, N ∈ Z. The power of a complex exponential is equal to A^{2}, irrespective of frequency.
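The periodicity condition is easy to test numerically. A minimal sketch (assuming Python with NumPy) compares a rational multiple of 2π with a frequency of 1 rad/sample, which is not:

```python
import numpy as np

n = np.arange(0, 64)

# omega = 2*pi*(3/16): a rational multiple of 2*pi -> periodic with period 16.
x = np.exp(1j * 2 * np.pi * (3 / 16) * n)
assert np.allclose(x[:16], x[16:32])

# omega = 1 rad/sample: not a rational multiple of 2*pi -> the sequence
# never exactly repeats, even though it lives on the unit circle.
y = np.exp(1j * 1.0 * n)
assert not np.allclose(y[:16], y[16:32])
```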
In the introduction, we hinted at the fact that Fourier analysis allows us to decompose a physical phenomenon into oscillatory components. However, it may seem odd that we have chosen complex oscillations for the analysis of real-world signals. It may seem even odder that these oscillations can have a negative frequency and that, as we will soon see in the context of the DTFT, the spectrum extends over the negative axis.
The starting point in answering these legitimate questions is to recall that the use of complex exponentials is essentially a matter of convenience. One could develop a complete theory of frequency analysis for real signals using only the basic trigonometric functions. You may have noticed this if you are familiar with the Fourier series of a real function; yet the notational overhead is undoubtedly heavy, since it involves two separate sets of coefficients for the sine and cosine basis functions, plus a distinct term for the zero-order coefficient. The use of complex exponentials elegantly unifies these separate series into a single complex-valued sequence. Yet, one may ask again, what does it mean for the spectrum of a musical sound to be complex? Simply put, the complex nature of the spectrum is a compact way of representing two concurrent pieces of information which uniquely define each spectral component: its magnitude and its phase. These two values form a two-element vector in R^{2} but, since R^{2} is isomorphic to C, we use complex numbers for their mathematical convenience.
With respect to negative frequencies, first consider, yet again, a basic complex exponential sequence such as x[n] = e^{jωn}. We can visualize its evolution over discrete time as a series of points on the unit circle in the complex plane. At each step, the angle increases by ω, defining a counterclockwise circular motion. It is easy to see that a complex exponential sequence of frequency −ω is just the same series of points, with the difference that the points move clockwise instead; this is illustrated in detail in Figure 4.1. When we decompose a real signal into complex exponentials, we will show that, for any given frequency value, the phases of the positive- and negative-frequency components are always opposite in sign; as the two oscillations move in opposite directions along the unit circle, their imaginary parts cancel out exactly, thus returning a purely real signal.^{(2)}

The final step in developing a comfortable feeling for complex oscillations comes from the realization that, in the synthesis of discrete-time signals (and especially in the case of communication systems), it is actually more convenient to work with complex-valued signals themselves. Although the transmitted signal of a device like an ADSL box is a real signal, the internal representation of the underlying sequences is complex; therefore, complex oscillations become a necessity.
We now develop a Fourier representation for finite-length signals; to do so, we need to find a set of oscillatory signals of length N which contain a whole number of periods over their support. We start by considering a family of finite-length sinusoidal signals (indexed by an integer k) of the form

w_{k}[n] = e^{jω_{k}n},  n = 0, 1, …, N − 1    (4.1)
where all the ω_{k}’s are distinct frequencies which fulfill our requirements. To determine these frequency values, note that, in order for w_{k}[n] to contain a whole number of periods over N samples, it must conform to
e^{jω_{k}n} = e^{jω_{k}(n+N)}  for all n
which translates to
e^{jω_{k}N} = 1
The above equation has N distinct solutions, which are the N roots of unity e^{j2πm∕N}, m = 0, …, N − 1; if we define the complex number
W_{N} = e^{−j(2π∕N)}
then the family of N signals in (4.1) can be written as
w_{k}[n] = W_{N}^{−nk},  n = 0, …, N − 1  (4.2)
for each value of k = 0, …, N − 1. We can represent these N signals as a set of vectors {w^{(k)}}_{k=0,…,N−1} in C^{N} with
w^{(k)} = [1  W_{N}^{−k}  W_{N}^{−2k}  …  W_{N}^{−(N−1)k}]^{T}  (4.3)
The real and imaginary parts of w^{(k)} for N = 32 and for some values of k are plotted in Figures 4.2 to 4.5.
We can verify that {w^{(k)}}_{k=0,…,N−1} is a set of N orthogonal vectors, and therefore a basis for C^{N}; indeed we have (noting that (W_{N}^{k})^{*} = W_{N}^{−k}):
⟨w^{(k)}, w^{(h)}⟩ = ∑_{n=0}^{N−1} W_{N}^{n(k−h)} = { N  for k = h;  (1 − W_{N}^{N(k−h)})∕(1 − W_{N}^{k−h}) = 0  for k ≠ h }  (4.4)
since W_{N}^{iN} = 1 for all i ∈ Z. In more compact notation we can therefore state that
⟨w^{(k)}, w^{(h)}⟩ = N δ[k − h]  (4.5)
The vectors {w^{(k)}}_{k=0,…,N−1} are the Fourier basis for C^{N}, and therefore for the space of length-N signals. It is immediately evident that this basis is not orthonormal, since ‖w^{(k)}‖² = N, but it could be made orthonormal simply by scaling the basis vectors by 1∕√N. In signal processing practice, however, it is customary to keep the normalization factor explicit in the change of basis formulas; this is mostly for computational reasons, as we will see later, and, for the sake of consistency with the mainstream literature, we follow this convention as well.
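The orthogonality relation and the squared norm N of the basis vectors can be verified directly; a short numerical sketch (N = 8 is an arbitrary choice):

```python
import numpy as np

# Verify <w^(k), w^(h)> = N * delta[k - h] for the Fourier basis of C^N.
N = 8
W = np.exp(-2j * np.pi / N)                 # W_N as defined in the text
n = np.arange(N)
basis = [W ** (-n * k) for k in range(N)]   # w^(k)[n] = W_N^{-nk}

# Gram matrix of all pairwise inner products (vdot conjugates its first arg)
G = np.array([[np.vdot(wk, wh) for wh in basis] for wk in basis])
assert np.allclose(G, N * np.eye(N))        # orthogonal, squared norm N
```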
The Discrete Fourier Transform (DFT) analysis and synthesis formulas can now be easily expressed in the familiar matrix notation of Section 3.3.3: define an N × N square matrix W by stacking the conjugates of the basis vectors, i.e. W_{nk} = e^{−j(2π∕N)nk} = W_{N}^{nk}; from this we can state, for all vectors x ∈ C^{N}:
X = Wx  (4.6)
x = (1∕N) W^{H}X  (4.7)
‖X‖² = N ‖x‖²  (4.8)
(once again, note the explicit normalization factor).
It is very common in the literature to explicitly write out the inner products in (4.6) and (4.7); this is both for historical reasons and to underscore the highly structured form of this transformation which, as we will see, is the basis for very efficient computational procedures. In detail, we have the analysis formula
X[k] = ∑_{n=0}^{N−1} x[n] W_{N}^{nk},  k = 0, …, N − 1  (4.9)
and the dual synthesis formula
x[n] = (1∕N) ∑_{k=0}^{N−1} X[k] W_{N}^{−nk},  n = 0, …, N − 1  (4.10)
where we have used the standard convention of “lumping” the normalizing factor 1∕N entirely within the reconstruction sum (4.10).
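The analysis and synthesis formulas translate almost literally into code; the sketch below is a deliberately naive O(N²) implementation, checked against NumPy’s FFT, which follows the same convention:

```python
import numpy as np

def dft(x):
    # Analysis formula (4.9): X[k] = sum_n x[n] * W_N^{nk}
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.exp(-2j * np.pi * n * k / N))
                     for k in range(N)])

def idft(X):
    # Synthesis formula (4.10), with the 1/N factor in the reconstruction sum
    N = len(X)
    k = np.arange(N)
    return np.array([np.sum(X * np.exp(2j * np.pi * k * m / N))
                     for m in range(N)]) / N

rng = np.random.default_rng(0)
x = rng.standard_normal(16)
assert np.allclose(dft(x), np.fft.fft(x))   # same convention as NumPy
assert np.allclose(idft(dft(x)), x)         # perfect reconstruction
```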
To return to the physical interpretation of the DFT, what we have just obtained is the decomposition of a finite-length signal into a set of N sinusoidal components; the magnitude and initial phase of each oscillator are given by the coefficients X[k] in (4.9) (or, equivalently, by the vector elements X_{k} in (4.6)^{(3)}). To stress the point again: the synthesis formula describes a bank of N complex oscillators at frequencies (2π∕N)k, each scaled and offset in phase by the corresponding coefficient X[k]. The first N output values of this “machine” are exactly x[n].
If we look at this from the opposite end, each X[k] shows “how much” oscillatory behavior at frequency (2π∕N)k is contained in the signal; this is consistent with the fact that an inner product is a measure of similarity. The coefficients X[k] are referred to as the spectrum of the signal. The squared magnitude |X[k]|² is a measure (up to a scale factor N) of the signal’s energy at the frequency (2π∕N)k; the coefficients X[k], therefore, show exactly how the global energy of the original signal is distributed in the frequency domain, while Parseval’s equality (4.8) guarantees that the result is consistent. The phase of each Fourier coefficient, indicated by ∡X[k], specifies the initial phase of each oscillator in the reconstruction formula, i.e. the relative alignment of each complex exponential at the onset of the signal. While this does not influence the energy distribution in frequency, it does have a significant effect on the shape of the signal in the discrete-time domain, as we will shortly see in more detail.
Some examples for signals in C^{64} are plotted in Figures 4.6–4.9. Figure 4.6 shows one of the simplest cases: indeed, the signal x[n] is a sinusoid whose frequency coincides with that of one of the basis vectors (precisely, with that of w^{(4)}) and, as a consequence of the orthogonality of the basis, only X[4] and X[60] are nonzero (this can be easily verified by decomposing the sinusoid into the sum of two appropriate basis functions). Figure 4.7 shows the same signal, but this time with a phase offset.
The magnitude DFT does not change, but the phase offset appears in the phase of the transform. Figure 4.8 shows the transform of a sinusoid whose frequency does not coincide with any of the basis frequencies. As a consequence, all of the basis vectors are needed to reconstruct the signal. Clearly, the magnitude is larger for frequencies close to that of the original signal (6π∕64 and 7π∕64 in this case); yet, to reconstruct x[n] exactly, we need the contribution of the entire basis. Finally, Figure 4.9 shows the DFT of a step signal. It can be shown (with a few trigonometric manipulations) that the DFT of a step signal is
X[k] = ∑_{n=0}^{M−1} W_{N}^{nk} = e^{−j(π∕N)(M−1)k} sin(πMk∕N) ∕ sin(πk∕N)
where N is the length of the signal and M is the length of the step (in Figure 4.9, N = 64 and M = 5, for instance).
Consider the reconstruction formula in (4.10); what happens if we let the index n roam outside of the [0, N − 1] interval? Since W_{N}^{(n+iN)k} = W_{N}^{nk} for all i ∈ Z, we note that x[n + iN] = x[n]. In other words, the reconstruction formula in (4.10) implicitly defines a periodic sequence of period N. This is the reason why, earlier, we stated that periodic sequences are the natural way to embed a finite-length signal into a sequence: their Fourier representation is formally identical. This is not surprising since a) we have already established a correspondence between C^{N} and C̃^{N} and b) we are actually expressing a length-N sequence as a combination of N-periodic basis signals.
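The implicit periodicity of the reconstruction formula is easy to observe numerically; in the sketch below the synthesis sum is evaluated at indices outside [0, N − 1] (the test signal is arbitrary):

```python
import numpy as np

# The synthesis sum of (4.10), evaluated at an arbitrary integer index n,
# reproduces the N-periodic extension of the signal.
N = 8
x = np.arange(N, dtype=float)
X = np.fft.fft(x)

def synth(n):
    k = np.arange(N)
    return np.sum(X * np.exp(2j * np.pi * k * n / N)).real / N

assert np.isclose(synth(3), x[3])
assert np.isclose(synth(3 + N), x[3])    # x[n + N] = x[n]
assert np.isclose(synth(-1), x[N - 1])   # periodic extension to n < 0
```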
The Fourier representation of periodic sequences is called the Discrete Fourier Series (DFS), and its explicit analysis and synthesis formulas are the exact equivalent of (4.9) and (4.10), modified only with respect to the range of the indices. We have already seen that in (4.10), the reconstruction formula, n is now in Z. Symmetrically, due to the N-periodicity of W_{N}^{k}, we can let the index k in (4.9) assume any value in Z too; in this way, the DFS coefficients become an N-periodic sequence themselves and the DFS becomes a change of basis in C̃^{N}, using the definition of inner product given in Section 3.4.2 and the formal periodic basis for C̃^{N}:
X̃[k] = ∑_{n=0}^{N−1} x̃[n] W_{N}^{nk},  k ∈ Z  (4.11)
x̃[n] = (1∕N) ∑_{k=0}^{N−1} X̃[k] W_{N}^{−nk},  n ∈ Z  (4.12)
We now consider a Fourier representation for infinite non-periodic sequences. From a purely mathematical point of view, the Discrete-Time Fourier Transform of a sequence x[n] is defined as
X(e^{jω}) = ∑_{n=−∞}^{∞} x[n] e^{−jωn}  (4.13)
The DTFT is therefore a complex-valued function of the real argument ω and, as can be easily verified, it is periodic in ω with period 2π. The somewhat odd notation X(e^{jω}) is quite standard in the signal processing literature and offers several notational advantages.
The DTFT, when it exists, can be inverted via the integral
x[n] = (1∕2π) ∫_{−π}^{π} X(e^{jω}) e^{jωn} dω  (4.14)
as can be easily verified by substituting (4.13) into (4.14) and using
(1∕2π) ∫_{−π}^{π} e^{jω(n−m)} dω = δ[n − m]
In fact, due to the 2π-periodicity of the DTFT, the integral in (4.14) can be computed over any 2π-wide interval on the real line (between 0 and 2π, for instance). The relation between a sequence x[n] and its DTFT X(e^{jω}) will be indicated in the general case by
x[n] ⟷ X(e^{jω})
While the DFT and DFS were signal transformations involving only a finite number of quantities, both the infinite summation and the real-valued argument appearing in the DTFT can create an uneasiness which overshadows the conceptual similarities between the transforms. In the following, we start by defining the mathematical properties of the DTFT and we try to build an intuitive feeling for this Fourier representation, both with respect to its physical interpretation and to its conformity to the “change of basis” framework that we used for the DFT and DFS.
Mathematically, the DTFT is a transform operator which maps discrete-time sequences onto the space of 2π-periodic functions. Clearly, for the DTFT to exist, the sum in (4.13) must converge, i.e. the limit for M → ∞ of the partial sum
X_{M}(e^{jω}) = ∑_{n=−M}^{M} x[n] e^{−jωn}  (4.15)
must exist and be finite. Convergence of the partial sum in (4.15) is very easy to prove for absolutely summable sequences, that is for sequences satisfying
∑_{n=−∞}^{∞} |x[n]| < ∞  (4.16)
since, according to the triangle inequality,
|X_{M}(e^{jω})| ≤ ∑_{n=−M}^{M} |x[n] e^{−jωn}| = ∑_{n=−M}^{M} |x[n]| ≤ ∑_{n=−∞}^{∞} |x[n]| < ∞  (4.17)
For this class of sequences it can also be proved that the convergence of X_{M}(e^{jω}) to X(e^{jω}) is uniform and that X(e^{jω}) is continuous. While absolute summability is a sufficient condition, it can be shown that the sum in (4.15) also converges for all square-summable sequences, i.e. for sequences whose energy is finite; this is very important to us with respect to the discussion in Section 3.4.3, where we defined the Hilbert space ℓ_{2}(Z). In the case of square summability only, however, the convergence of (4.15) is no longer uniform but takes place only in the mean-square sense, i.e.
lim_{M→∞} ∫_{−π}^{π} |X(e^{jω}) − X_{M}(e^{jω})|² dω = 0  (4.18)
Convergence in the mean-square sense implies that, while the total energy of the error signal becomes zero, the pointwise values of the partial sum may never approach the values of the limit. One manifestation of this odd behavior is the Gibbs phenomenon, which has important consequences for our approach to filter design, as we will see later. Furthermore, in the case of square-summable sequences, X(e^{jω}) is no longer guaranteed to be continuous.
As an example, consider the sequence
x[n] = { 1  for |n| ≤ N;  0  otherwise }  (4.19)
Its DTFT can be computed as the sum^{(4)}
X(e^{jω})  = ∑_{n=−N}^{N} e^{−jωn}
  = ∑_{n=1}^{N} e^{jωn} + ∑_{n=0}^{N} e^{−jωn}
  = (e^{jω} − e^{jω(N+1)}) ∕ (1 − e^{jω}) + (1 − e^{−jω(N+1)}) ∕ (1 − e^{−jω})
  = (e^{jω(N+1∕2)} − e^{jω∕2}) ∕ (e^{jω∕2} − e^{−jω∕2}) + (e^{jω∕2} − e^{−jω(N+1∕2)}) ∕ (e^{jω∕2} − e^{−jω∕2})
  = (e^{jω(N+1∕2)} − e^{−jω(N+1∕2)}) ∕ (e^{jω∕2} − e^{−jω∕2})
  = sin(ω(2N+1)∕2) ∕ sin(ω∕2)
The DTFT of this particular signal turns out to be real (we will see later that this is a consequence of the signal’s symmetry) and it is plotted in Figure 4.10. When, as is very often the case, the DTFT is complex-valued, the usual way to represent it graphically is to plot the magnitude and the phase separately. The DTFT is always a 2π-periodic function and the standard convention is to plot the interval from −π to π. Larger intervals can be considered if the periodicity needs to be made explicit; Figure 4.11, for instance, shows five full periods of the same function.
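The closed form just derived can be checked against a direct evaluation of the sum; a quick sketch (the value N = 5 and the frequency grid are arbitrary choices):

```python
import numpy as np

# Check: sum_{n=-N}^{N} e^{-j w n} = sin((2N+1) w / 2) / sin(w / 2)
N = 5
n = np.arange(-N, N + 1)
w = np.linspace(0.01, np.pi, 200)   # avoid the removable singularity at w = 0

direct = np.array([np.sum(np.exp(-1j * wi * n)) for wi in w])
closed = np.sin((2 * N + 1) * w / 2) / np.sin(w / 2)
assert np.allclose(direct, closed)  # the DTFT is real, as noted in the text
```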

A way to gain some intuition about the structure of the DTFT formulas is to consider the DFS of periodic sequences with larger and larger periods. Intuitively, as we look at the structure of the Fourier basis for the DFS, we can see that the number of basis vectors in (4.9) grows with the length N of the period and, consequently, the frequencies of the underlying complex exponentials become “denser” between 0 and 2π. We want to show that, in the limit, we end up with the reconstruction formula of the DTFT.
To do so, let us restrict ourselves to the domain of absolutely summable sequences; for these sequences, we know that the sum in (4.13) exists. Now, given an absolutely summable sequence x[n], we can always build an N-periodic sequence x̃[n] as
x̃[n] = ∑_{i=−∞}^{∞} x[n + iN]  (4.20)
for any value of N (see Example 2.2); this is guaranteed by the fact that the above sum converges for all n ∈ Z (because of the absolute summability of x[n]), so that all values of x̃[n] are finite. Clearly, there is overlap between successive copies of x[n]; the intuition, however, is the following: since in the end we will consider very large values of N and since x[n], because of absolute summability, decays rather fast with n, the resulting overlap of “tails” will be negligible. This can be expressed as
lim_{N→∞} x̃[n] = x[n]
Now consider the DFS of x̃[n]:
X̃[k] = ∑_{n=0}^{N−1} x̃[n] W_{N}^{nk} = ∑_{i=−∞}^{∞} ∑_{n=0}^{N−1} x[n + iN] e^{−j(2π∕N)(n+iN)k}  (4.21)
where in the last term we have used (4.20), interchanged the order of summation and exploited the fact that e^{−j(2π∕N)(n+iN)k} = e^{−j(2π∕N)nk}. We can see that, for every value of i in the outer sum, the argument n + iN of the inner sum varies between iN and iN + N − 1, i.e. over non-overlapping intervals, so that the double summation can be simplified to
X̃[k] = ∑_{m=−∞}^{∞} x[m] e^{−j(2π∕N)mk}  (4.22)
and therefore
X̃[k] = X(e^{j(2π∕N)k})  (4.23)
This already gives us a noteworthy piece of intuition: the DFS coefficients for the periodized signal are a discrete set of values of its DTFT (here considered solely as a formal operator) computed at multiples of 2π∕N. As N grows, the spacing between these frequency intervals narrows more and more so that, in the limit, the DFS converges to the DTFT.
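This sampling relationship can be illustrated numerically; the sketch below periodizes the absolutely summable sequence x[n] = a^{|n|} (the values of a and N are arbitrary, and the infinite sums are truncated to a long finite support) and compares the DFS coefficients with samples of the DTFT sum:

```python
import numpy as np

a, N = 0.5, 16
n = np.arange(-200, 201)          # long enough that the tails are negligible
x = a ** np.abs(n)                # absolutely summable test sequence

def dtft(w):                      # truncated version of the DTFT sum (4.13)
    return np.sum(x * np.exp(-1j * w * n))

# Periodization (4.20): x_tilde[m] = sum_i x[m + i*N]
x_tilde = np.array([np.sum(x[(n % N) == m]) for m in range(N)])
X_dfs = np.fft.fft(x_tilde)       # DFS coefficients of the periodized signal

samples = np.array([dtft(2 * np.pi * k / N) for k in range(N)])
assert np.allclose(X_dfs, samples)  # DFS = DTFT sampled at (2*pi/N)k
```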
To check that this assertion is consistent, we can now write the DFS reconstruction formula using the DFS values given to us by inserting (4.23) into (4.10):
x̃[n] = (1∕N) ∑_{k=0}^{N−1} X(e^{j(2π∕N)k}) e^{j(2π∕N)nk}  (4.24)
By defining Δ = 2π∕N, we can rewrite the above expression as
x̃[n] = (1∕2π) ∑_{k=0}^{N−1} X(e^{jkΔ}) e^{jkΔn} Δ  (4.25)
and the summation is easily recognized as the Riemann sum with step Δ approximating the integral of f(ω) = X(e^{jω}) e^{jωn} between 0 and 2π. As N goes to infinity (and therefore x̃[n] → x[n]), we can therefore write
x[n] = (1∕2π) ∫_{0}^{2π} X(e^{jω}) e^{jωn} dω  (4.26)
which is indeed the DTFT reconstruction formula (4.14).^{(5)}
We now show that, if we are willing to sacrifice mathematical rigor, the DTFT can be cast in the same conceptual framework we used for the DFT and DFS, namely as a basis change in a vector space. The following formulas are to be taken as nothing more than a set of purely symbolic derivations, since the mathematical hypotheses under which the results are well defined are far from obvious and are completely hidden by the formalism. It is only fair to say, however, that the following expressions represent a very handy and intuitive toolbox to grasp the essence of the duality between the discretetime and the frequency domains and that they can be put to use very effectively to derive quick results when manipulating sequences.
One way of interpreting Equation (4.13) is to see that, for any given value ω_{0}, the corresponding value of the DTFT is the inner product in ℓ_{2}(Z) of the sequence x[n] with the sequence e^{jω_{0}n}; formally, at least, we are still performing a projection in a vector space akin to C^{∞}:
X(e^{jω_{0}}) = ⟨e^{jω_{0}n}, x[n]⟩ = ∑_{n=−∞}^{∞} x[n] e^{−jω_{0}n}
Here, however, the set of “basis vectors” {e^{jωn}}_{ω∈R} is indexed by the real variable ω and is therefore uncountable. This uncountability is mirrored in the inversion formula (4.14), in which the usual summation is replaced by an integral; in fact, the DTFT operator maps ℓ_{2}(Z) onto L_{2}[−π, π], which is a space of 2π-periodic, square-integrable functions. This interpretation preserves the physical meaning given to the inner products in (4.13) as a way to measure the frequency content of the signal at a given frequency; in this case, the number of oscillators is infinite and their frequency separation becomes infinitesimally small.
To complete the picture of the DTFT as a change of basis, we want to show that, at least formally, the set {e^{jωn}}_{ω∈R} constitutes an orthogonal “basis” for ℓ_{2}(Z).^{(6)} In order to do so, we need to introduce a quirky mathematical entity called the Dirac delta functional; this is defined in an implicit way by the following formula
∫_{−∞}^{∞} f(t) δ(t − t_{0}) dt = f(t_{0})  (4.27)
where f(t) is an arbitrary function, integrable on the real line; in particular,
∫_{−∞}^{∞} δ(t) dt = 1  (4.28)
While no ordinary function satisfies the above equation, δ(t) can be interpreted as a shorthand for a limiting operation. Consider, for instance, the family of parametric functions^{(7)}
r_{k}(t) = { k∕2  for |t| ≤ 1∕k;  0  otherwise }  (4.29)
which are plotted in Figure 4.12. For any continuous function f(t) we can write
∫_{−∞}^{∞} r_{k}(t) f(t) dt = (k∕2) ∫_{−1∕k}^{1∕k} f(t) dt = f(τ)  for some τ ∈ [−1∕k, 1∕k]  (4.30)
where we have used the Mean Value theorem. Now, as k goes to infinity, the integral converges to f(0); hence we can say that the sequence of functions r_{k}(t) converges to the Dirac delta. As already stated, the delta cannot be considered a proper function, so the expression δ(t) outside of an integral sign has no mathematical meaning; it is customary, however, to associate an “idea” of a function to the delta, and we can think of it as being zero for t ≠ 0 and of having a value of ∞ at t = 0. This interpretation, together with (4.27), defines the so-called sifting property of the Dirac delta; this property allows us to write (outside of the integral sign):
f(t) δ(t − t_{0}) = f(t_{0}) δ(t − t_{0})  (4.31)
The physical interpretation of the Dirac delta is related to quantities expressed as continuous distributions, of which the most familiar example is probably the probability density function (pdf). These functions represent a value which makes physical sense only over an interval of nonzero measure; the punctual value of a distribution is only an abstraction. The Dirac delta is the operator that extracts this punctual value from a distribution, in a sense capturing the essence of considering smaller and smaller observation intervals.
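The limiting behavior of the functions r_{k}(t) can be observed numerically; the sketch below integrates r_{k} against a smooth test function (here f(t) = cos t, an arbitrary choice) with a simple Riemann sum and watches the result approach f(0):

```python
import numpy as np

# r_k(t): a box of width 2/k and height k/2 (unit area, as in the text).
# Integrating r_k against a smooth f approaches f(0) as k grows.
def apply_rk(f, k, m=100001):
    t = np.linspace(-1.0 / k, 1.0 / k, m)   # support of r_k
    dt = t[1] - t[0]
    return np.sum((k / 2) * f(t)) * dt      # simple Riemann approximation

f = np.cos
errors = [abs(apply_rk(f, k) - f(0.0)) for k in (1, 10, 100)]
assert errors[0] > errors[1] > errors[2]    # converging towards f(0) = 1
```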
To see how the Dirac delta applies to our basis expansion, note that Equation (4.27) is formally identical to an inner product over the space of functions on the real line; by using the definition of such an inner product, we can therefore write
f(t) = ∫_{−∞}^{∞} f(τ) δ(t − τ) dτ  (4.32)
which is, in turn, formally identical to the reconstruction formula of Section 3.4.3. In reality, we are interested in the space of 2π-periodic functions, since that is where DTFTs live; this is easily accomplished by building a 2π-periodic version of the delta as
δ̃(ω) = 2π ∑_{k=−∞}^{∞} δ(ω − 2πk)  (4.33)
where the leading 2π factor is for later convenience. The resulting object is called a pulse train, similarly to what we built for the case of periodic sequences in C̃^{N}. Using the pulse train and given any 2π-periodic function f(ω), the reconstruction formula (4.32) becomes
f(σ) = (1∕2π) ∫_{−π}^{π} f(ω) δ̃(ω − σ) dω  (4.34)
for any σ ∈ R.
Now that we have the delta notation in place, we are ready to start. First of all, we show the formal orthogonality of the basis functions {e^{jωn}}_{ω∈R}. We can write
(1∕2π) ∫_{−π}^{π} δ̃(ω − ω_{0}) e^{jωn} dω = e^{jω_{0}n}  (4.35)
The left-hand side of this equation has the exact form of the DTFT reconstruction formula (4.14); hence we have found the fundamental relationship
e^{jω_{0}n} ⟷ δ̃(ω − ω_{0})  (4.36)
Now, the DTFT of a complex exponential e^{jσn} is, in our change of basis interpretation, simply the inner product ⟨e^{jωn}, e^{jσn}⟩; because of (4.36) we can therefore express this as
⟨e^{jωn}, e^{jσn}⟩ = δ̃(ω − σ)  (4.37)
which is formally equivalent to the orthogonality relation in (4.5).
We now recall for the last time that the delta notation subsumes a limiting operation: the DTFT pair (4.36) should be interpreted as shorthand for the limit of the partial sums
s_{k}(ω) = ∑_{n=−k}^{k} e^{−jωn} = sin(ω(2k+1)∕2) ∕ sin(ω∕2)
(where we have chosen ω_{0} = 0 for the sake of example). Figure 4.13 plots s_{k}(ω) for increasing values of k (we show only the [−π, π] interval, although of course the functions are 2π-periodic). The family of functions s_{k}(ω) is exactly equivalent to the family of functions r_{k}(t) we saw in (4.29); they too become increasingly narrow while keeping a constant area (which turns out to be 2π). That is why we can simply state that s_{k}(ω) → δ̃(ω).
From (4.36) we can easily obtain other interesting results: by setting ω_{0} = 0 and by exploiting the linearity of the DTFT operator, we can derive the DTFT of a constant sequence:
x[n] = 1 ⟷ δ̃(ω)  (4.38)
or, using Euler’s formulas, the DTFTs of sinusoidal functions:
cos(ω_{0}n + φ) ⟷ (e^{jφ}∕2) δ̃(ω − ω_{0}) + (e^{−jφ}∕2) δ̃(ω + ω_{0})  (4.39)
sin(ω_{0}n + φ) ⟷ (e^{jφ}∕2j) δ̃(ω − ω_{0}) − (e^{−jφ}∕2j) δ̃(ω + ω_{0})  (4.40)
We can now show that, thanks to the delta formalism, the DTFT is the most general type of Fourier transform for discretetime signals. Consider a lengthN signal x[n] and its N DFT coefficients X[k]; consider also the sequences obtained from x[n] either by periodization or by building a finitesupport sequence. The computation of the DTFTs of these sequences highlights the relationships linking the three types of discretetime transforms that we have seen so far.
Periodic Sequences. Given a length-N signal x[n], n = 0, …, N − 1, consider the associated N-periodic sequence x̃[n] = x[n mod N] and its N DFS coefficients X̃[k]. If we write the analysis DTFT formula for x̃[n], we have
X̃(e^{jω})  = ∑_{n=−∞}^{∞} x̃[n] e^{−jωn}
  = ∑_{n=−∞}^{∞} ( (1∕N) ∑_{k=0}^{N−1} X̃[k] e^{j(2π∕N)nk} ) e^{−jωn}  (4.41)
  = (1∕N) ∑_{k=0}^{N−1} X̃[k] DTFT{ e^{j(2π∕N)nk} }  (4.42)
  = (1∕N) ∑_{k=0}^{N−1} X̃[k] δ̃(ω − (2π∕N)k)  (4.43)
which is the relationship between the DTFT and the DFS. If we restrict ourselves to the [−π, π] interval, we can see that the DTFT of a periodic sequence is a series of regularly spaced deltas placed at the N roots of unity, whose amplitudes are proportional to the DFS coefficients of the sequence. In other words, the DTFT is uniquely determined by the DFS and vice versa.
Finite-Support Sequences. Given a length-N signal x[n], n = 0, …, N − 1 and its N DFT coefficients X[k], consider the associated finite-support sequence
x̄[n] = { x[n]  for 0 ≤ n ≤ N − 1;  0  otherwise }
from which we can easily derive the DTFT of x̄[n] as
X̄(e^{jω}) = ∑_{n=0}^{N−1} x[n] e^{−jωn} = (1∕N) ∑_{k=0}^{N−1} X[k] Λ(ω − (2π∕N)k)  (4.44)
with
Λ(ω) = ∑_{n=0}^{N−1} e^{−jωn} = e^{−jω(N−1)∕2} sin(ωN∕2) ∕ sin(ω∕2)
What the above expression means is that the DTFT of the finite-support sequence x̄[n] is again uniquely defined by the N DFT coefficients of the finite-length signal x[n], and it can be obtained from them by a type of Lagrangian interpolation. As in the previous case, the values of the DTFT at the roots of unity are equal to the DFT coefficients; note, however, that the transform of a finite-support sequence is very different from the DTFT of a periodized sequence. The latter, in accordance with the definition of the Dirac delta, is defined only in the limit and for a finite set of frequencies; the former is just a (smooth) interpolation of the DFT.
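The fact that the DTFT of the finite-support extension passes exactly through the DFT coefficients at the roots of unity can be checked directly (the random test signal and its length are arbitrary):

```python
import numpy as np

# The DTFT of the finite-support extension, sampled at omega = (2*pi/N)k,
# coincides with the N DFT coefficients.
N = 12
rng = np.random.default_rng(1)
x = rng.standard_normal(N)
X = np.fft.fft(x)

n = np.arange(N)
# DTFT sum is finite here, since the sequence is zero outside [0, N-1]
samples = np.array([np.sum(x * np.exp(-2j * np.pi * k / N * n))
                    for k in range(N)])
assert np.allclose(samples, X)
```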
The DTFT possesses the following properties.
Symmetries and Structure. The DTFT of a time-reversed sequence is
x[−n] ⟷ X(e^{−jω})  (4.45)
while, for the complex conjugate of a sequence, we have
x^{*}[n] ⟷ X^{*}(e^{−jω})  (4.46)
For the very important case of a real sequence x[n] ∈ R, property (4.46) implies that the DTFT is conjugate-symmetric:
X(e^{jω}) = X^{*}(e^{−jω})  (4.47)
which leads to the following special symmetries for real signals:
|X(e^{jω})| = |X(e^{−jω})|  (4.48)
∡X(e^{jω}) = −∡X(e^{−jω})  (4.49)
Re{X(e^{jω})} = Re{X(e^{−jω})}  (4.50)
Im{X(e^{jω})} = −Im{X(e^{−jω})}  (4.51)
Finally, if x[n] is real and symmetric (x[n] = x[−n]), then the DTFT is real:
Im{X(e^{jω})} = 0  (4.52)
while, for real antisymmetric signals (x[n] = −x[−n]), we have that the DTFT is purely imaginary:
Re{X(e^{jω})} = 0  (4.53)
Linearity and Shifts. The DTFT is a linear operator:
αx[n] + βy[n] ⟷ αX(e^{jω}) + βY(e^{jω})  (4.54)
A shift in the discrete-time domain leads to multiplication by a phase term in the frequency domain:
x[n − n_{0}] ⟷ e^{−jωn_{0}} X(e^{jω})  (4.55)
while multiplication of the signal by a complex exponential (i.e. signal modulation by a complex “carrier” at frequency ω_{0}) leads to
e^{jω_{0}n} x[n] ⟷ X(e^{j(ω−ω_{0})})  (4.56)
which means that the spectrum is shifted by ω_{0}. This last result is known as the modulation theorem.
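Both theorems can be verified numerically for a finite-support sequence, for which the DTFT sums are finite (all the numerical values below are arbitrary):

```python
import numpy as np

N, n0, w0 = 16, 3, 0.7
rng = np.random.default_rng(2)
x = rng.standard_normal(N)
n = np.arange(N)

def dtft(sig, idx, w):            # finite DTFT sum over the given support
    return np.sum(sig * np.exp(-1j * w * idx))

w = 1.234
# shift theorem: x[n - n0]  <->  e^{-j w n0} X(e^{jw})
assert np.isclose(dtft(x, n + n0, w), np.exp(-1j * w * n0) * dtft(x, n, w))
# modulation theorem: e^{j w0 n} x[n]  <->  X(e^{j(w - w0)})
assert np.isclose(dtft(np.exp(1j * w0 * n) * x, n, w), dtft(x, n, w - w0))
```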
Energy Conservation. The DTFT satisfies the Plancherel-Parseval equality:
⟨x, y⟩ = (1∕2π) ⟨X, Y⟩  (4.57)
or, using the respective definitions of the inner product for ℓ_{2}(Z) and L_{2}[−π, π]:
∑_{n=−∞}^{∞} x^{*}[n] y[n] = (1∕2π) ∫_{−π}^{π} X^{*}(e^{jω}) Y(e^{jω}) dω  (4.58)
(note the explicit normalization factor 1∕2π). The above equality specializes into Parseval’s theorem as
∑_{n=−∞}^{∞} |x[n]|² = (1∕2π) ∫_{−π}^{π} |X(e^{jω})|² dω  (4.59)
which establishes the conservation of energy property between the time and the frequency domains.
The DTFT properties we have just seen extend easily to the Fourier Transform of periodic signals. The easiest way to obtain the particularizations which follow is to apply relationship (4.43) to the results of the previous Section.
Symmetries and Structure. The DFS of a time-reversed sequence is
x̃[−n] ⟷ X̃[−k]  (4.60)
while, for the complex conjugate of a sequence, we have
x̃^{*}[n] ⟷ X̃^{*}[−k]  (4.61)
For real periodic sequences, the following special symmetries hold (see (4.47)–(4.53)):
X̃[k] = X̃^{*}[−k]  (4.62)
|X̃[k]| = |X̃[−k]|  (4.63)
∡X̃[k] = −∡X̃[−k]  (4.64)
Re{X̃[k]} = Re{X̃[−k]}  (4.65)
Im{X̃[k]} = −Im{X̃[−k]}  (4.66)
Finally, if x̃[n] is real and symmetric, then the DFS is real:
Im{X̃[k]} = 0  (4.67)
while, for real antisymmetric signals, we can state that the DFS is purely imaginary.
Linearity and Shifts. The DFS is a linear operator, since it can be expressed as a matrix-vector product. A shift in the discrete-time domain leads to multiplication by a phase term in the frequency domain:
x̃[n − n_{0}] ⟷ W_{N}^{n_{0}k} X̃[k]  (4.68)
while multiplication of the signal by a complex exponential whose frequency is a multiple of 2π∕N leads to a shift in frequency:
e^{j(2π∕N)k_{0}n} x̃[n] ⟷ X̃[k − k_{0}]  (4.69)
Energy Conservation. We have already seen the energy conservation property in the context of basis expansions. Here, we simply recall Parseval’s theorem, which states that
∑_{n=0}^{N−1} |x̃[n]|² = (1∕N) ∑_{k=0}^{N−1} |X̃[k]|²  (4.70)
The properties of the DFT are obviously the same as those of the DFS, given the formal equivalence of the transforms. The only detail is how to interpret shifts, index reversal and symmetries for finite, length-N vectors; this is easily solved by considering the fact that the DFT-DFS equivalence translates, in the time domain, to a one-to-one correspondence between a length-N signal and its associated N-periodic extension to an infinite sequence. A shift, for instance, can be applied to the periodized version of the signal, and the resulting shifted length-N signal is given by the values of the shifted sequence from 0 to N − 1, as previously explained in Section 2.2.2.
Mathematically, this means that all shifts and time reversals of a length-N signal are operated modulo N. Consider a length-N signal
x[n],  n = 0, …, N − 1  (4.71)
Its time-reversed version is
y[n] = x[(−n) mod N]  (4.72)
as we can easily see by considering the underlying periodic extension (note that x[0] remains in place!). A shift by k corresponds to a circular shift:
y[n] = x[(n − k) mod N]  (4.73)
The concept of symmetry can be reinterpreted as follows: a symmetric signal is equal to its time-reversed version; therefore, for a length-N signal to be symmetric, the first member of (4.71) must equal the first member of (4.72), that is
x[k] = x[N − k],  k = 1, …, ⌊(N − 1)∕2⌋  (4.74)
Note that, in the above definition, the index k runs from 1 to ⌊(N − 1)∕2⌋; this means that symmetry does not place any constraint on the value of x[0] and, similarly, the value of x[N∕2] is also unconstrained for even-length signals. Figure 4.14 shows some examples of symmetric length-N signals for different values of N. Of course, the same definition can be used for antisymmetric signals with just a change of sign.
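Modulo-N time reversal and the symmetry definition above can be illustrated with a short sketch (the symmetric test signal is an arbitrary example):

```python
import numpy as np

# Modulo-N time reversal: y[n] = x[(-n) mod N]; note that x[0] stays in place.
def creverse(x):
    N = len(x)
    return x[(-np.arange(N)) % N]

x = np.array([0., 1., 2., 3., 2., 1.])      # symmetric: x[k] = x[N - k]
assert np.array_equal(creverse(x), x)       # equal to its own reversal
assert np.allclose(np.fft.fft(x).imag, 0)   # hence its DFT is real
```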
Symmetries and Structure. The symmetries and structure derived for the DFS can be rewritten specifically for the DFT as
x[(−n) mod N] ⟷ X[(−k) mod N]  (4.75)
x^{*}[n] ⟷ X^{*}[(−k) mod N]  (4.76)
The following symmetries hold only for real signals:
X[k] = X^{*}[(−k) mod N]  (4.77)
|X[k]| = |X[(−k) mod N]|  (4.78)
∡X[k] = −∡X[(−k) mod N]  (4.79)
Re{X[k]} = Re{X[(−k) mod N]}  (4.80)
Im{X[k]} = −Im{X[(−k) mod N]}  (4.81)
Finally, if x[n] is real and symmetric in the sense of (4.74), then the DFT is real:
Im{X[k]} = 0  (4.82)
while, for real antisymmetric signals, we have that the DFT is purely imaginary.
Linearity and Shifts. The DFT is obviously a linear operator. A circular shift in the discrete-time domain leads to multiplication by a phase term in the frequency domain:
x[(n − n_{0}) mod N] ⟷ W_{N}^{n_{0}k} X[k]  (4.83)
while the finite-length equivalent of the modulation theorem states that
e^{j(2π∕N)k_{0}n} x[n] ⟷ X[(k − k_{0}) mod N]  (4.84)
Energy Conservation. Parseval’s theorem for the DFT is (obviously) identical to (4.70):
∑_{n=0}^{N−1} |x[n]|² = (1∕N) ∑_{k=0}^{N−1} |X[k]|²  (4.85)
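Parseval’s relation for the DFT is easy to confirm numerically (the random test vector is arbitrary):

```python
import numpy as np

# Energy conservation: sum |x[n]|^2 = (1/N) * sum |X[k]|^2
N = 32
rng = np.random.default_rng(3)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
X = np.fft.fft(x)
assert np.isclose(np.sum(np.abs(x) ** 2), np.sum(np.abs(X) ** 2) / N)
```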
In the previous Sections, we have developed three frequency representations for the three main types of discretetime signals; the derivation was eminently theoretical and concentrated mostly upon the mathematical properties of the transforms seen as a change of basis in Hilbert space. In the following Sections we will see how to put the Fourier machinery to practical use.
We have seen two fundamental ways to look at a signal: its timedomain representation, in which we consider the values of the signal as a function of discrete time, and its frequencydomain representation, in which we consider its energy and phase content as a function of digital frequency. The information contained in each of the two representations is exactly the same, as guaranteed by the invertibility of the Fourier transform; yet, from an analytical point of view, we can choose to concentrate on one domain or the other according to what we are specifically seeking. Consider for instance a piece of music; such a signal contains two coexisting perceptual features, meter and key. Meter can be determined by looking at the duration patterns of the played notes: its “natural” domain is therefore the time domain. The key, on the other hand, can be determined by looking at the pitch patterns of the played notes: since pitch is related to the frequency content of the sound, the natural domain of this feature is the frequency domain.
We can recall that the DTFT is mostly a theoretical analysis tool; the DTFTs which can be computed exactly (i.e. those for which the sum in (4.13) can be solved in closed form) represent only a small set of sequences; yet, these sequences are highly representative and they will be used over and over to illustrate prototypical behaviors. The DFT,^{(8)} on the other hand, is fundamentally a numerical tool, in that it defines a finite set of operations which can be computed in a finite amount of time; in fact, a very efficient algorithmic implementation of the DFT exists under the name of the Fast Fourier Transform (FFT), which requires a number of operations on the order of only N log N for an N-point data vector. The DFT, as we know, only applies to finite-length signals, but this is actually acceptable since, in practice, all measured signals have finite support; in principle, therefore, the DFT suffices for the spectral characterization of real-world sequences. Since the transform of a finite-length signal and its DTFT are related by (4.43) or by (4.44), according to the underlying model for the infinite-length extension, we can always use the DTFT to illustrate the fundamental concepts of spectral analysis for the general case and then particularize the results for finite-length sequences.
The first question that we ask ourselves is how to represent spectral data. Since the transform values are complex numbers, it is customary to separately plot their magnitude and their phase; more often than not, we will concentrate on the magnitude only, which is related to the energy distribution of the signal in the frequency domain.^{(9)} For infinite sequences whose DTFT can be computed exactly, the graphical representation of the transform is akin to a standard function graph – again, the interest here is mostly theoretical. Consider now a finitelength signal of length N; its DFT can be computed numerically, and it yields a lengthN vector of complex spectral values. These values can be displayed as such (and we obtain a plain DFT plot) or they can be used to obtain the DTFT of the periodic or finitesupport extension of the original signal.
Consider for example the length-16 triangular signal x[n] in Figure 4.15; note in passing that the signal is symmetric according to our definition in (4.74), so that its DFT is real. The DFT coefficients X[k] are plotted in Figure 4.16; consistently with the fact that x[n] is a real sequence, the set of DFT coefficients is symmetric as well (again in the sense of (4.74)). The k-th DFT coefficient corresponds to the frequency (2π∕N)k and, therefore, the plot’s abscissa extends implicitly from 0 to 2π; this is a little different from what we are used to in the case of the DTFT, where we usually consider the [−π, π] interval, but it is customary. Furthermore, the difference is easily eliminated if we consider the sequence of X[k] as being N-periodic (which it is, as we showed in Section 4.3) and plot the values from −N∕2 to N∕2 − 1 for N even, or from −(N − 1)∕2 to (N − 1)∕2 for N odd.

This can be made explicit by considering the N-periodic extension of x[n] and by using the DFS-DTFT relationship (4.43); the standard way to plot this is shown in Figure 4.17. Here the N pulse trains δ̃(ω − (2π∕N)k) are represented as lines (or arrows) scaled by the magnitude of the corresponding DFT coefficients. By plotting the representative [−π, π] interval, we can appreciate, in full, the symmetry of the transform’s magnitude.
By considering the finite-support extension of x[n] instead, and by plotting the magnitude of its DTFT, we obtain Figure 4.18. The points in the plot can be computed directly from the summation defining the DTFT (which, for finite-support signals, only contains a finite number of terms) by evaluating the sum over a sufficiently fine grid of values for ω in the [−π, π] interval; alternatively, the whole set of points can be obtained in one shot from an FFT with a sufficient amount of zero-padding (this method will be made precise shortly). Again, the DTFT of a finite-support extension is just a smooth interpolation of the original DFT points and no new information is added.


The Fast Fourier Transform, or FFT, is not another type of transform but simply the name of an efficient algorithm to compute the DFT. The algorithm, in its different flavors, is so ubiquitous and so important that the acronym FFT is often used liberally to indicate the DFT (or the DFS, which would be more appropriate since the underlying model is that of a periodic signal).
We have already seen in (4.6) that the DFT can be expressed in terms of a matrix-vector multiplication, X = Wx; as such, the computation of the DFT requires a number of operations on the order of N^{2}. The FFT algorithm exploits the highly structured nature of W to reduce the number of operations to the order of N log N. In matrix form, this is equivalent to decomposing W into the product of a series of matrices with mostly zero or unity elements. The algorithmic details of the FFT can be found in the bibliography; we can mention, however, that the FFT algorithm is particularly efficient for data lengths which are a power of 2 and that, in general, the more prime factors the data length can be decomposed into, the more efficient the FFT implementation.
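As a quick numerical sanity check (a Python/NumPy sketch, not part of the original text), we can build the DFT matrix W explicitly and verify that the O(N²) matrix-vector product and the O(N log N) FFT compute exactly the same transform:

```python
import numpy as np

# The DFT as an explicit N x N matrix-vector product, compared with the FFT.
# Both compute X[k] = sum_n x[n] * exp(-j 2*pi*n*k / N); the FFT simply
# evaluates it with O(N log N) operations instead of O(N^2).
def dft_matrix(N):
    n = np.arange(N)
    return np.exp(-2j * np.pi * np.outer(n, n) / N)

N = 16
x = np.random.default_rng(0).standard_normal(N)

X_matrix = dft_matrix(N) @ x   # direct O(N^2) computation
X_fft = np.fft.fft(x)          # FFT, same result

assert np.allclose(X_matrix, X_fft)
```
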
FFT algorithms are tailored to the specific length of the input signal. When the input signal's length is a large prime number, or when only a subset of FFT algorithms is available (when, for instance, all we have is the radix-2 algorithm, which processes input vectors whose length is a power of 2), it is customary to extend the length of the signal to match the algorithmic requirements. This is usually achieved by zero-padding, i.e. the length-N data vector is extended to a chosen length M by appending (M − N) zeros to it. Now, the maximum resolution of an N-point DFT, i.e. the separation between frequency components, is 2π∕N. By extending the signal to a longer length M, we are indeed reducing the separation between frequency components. One may think that this artificial increase in resolution allows the DFT to show finer details of the input signal's spectrum. It is not so.
The M-point DFT X^{(M)} of an N-point data vector x, obtained via zero-padding, can be computed directly from the "canonical" N-point DFT vector X^{(N)} via a simple matrix multiplication:
X^{(M)} = M_{M,N} X^{(N)}  (4.86)
where the M × N matrix M_{M,N} is given by M_{M,N} = (1∕N) W′_{M} W^{H}_{N}, where W_{N} is the standard DFT matrix and W′_{M} is the M × N matrix obtained by keeping just the first N columns of the standard DFT matrix W_{M}. The fundamental meaning of (4.86) is that, by zero-padding, we are adding no information to the spectral representation of a finite-length signal. Details of the spectrum which were not apparent in an N-point DFT are still not apparent in a zero-padded version of the same. It can be shown that (4.86) is a form of Lagrangian interpolation of the original DFT samples; the zero-padded DFT is therefore attractive only in a "cosmetic" fashion, since the new points, when plotted, show a smooth curve between the original DFT points (and this is how plots such as the one in Figure 4.18 are obtained).
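The interpolation relation can be checked numerically. The sketch below (Python/NumPy, with M_{M,N} built as (1∕N) W′_M W^H_N, our reading of the matrix described above) verifies that the M-point FFT of the zero-padded vector coincides with the matrix interpolation of the N original DFT samples:

```python
import numpy as np

def dft_matrix(P):
    # Standard DFT matrix: entry (k, n) is exp(-j 2*pi*k*n / P).
    n = np.arange(P)
    return np.exp(-2j * np.pi * np.outer(n, n) / P)

N, M = 16, 64
x = np.random.default_rng(1).standard_normal(N)

X_N = np.fft.fft(x)        # canonical N-point DFT
X_M = np.fft.fft(x, n=M)   # DFT of the zero-padded signal

# Interpolation matrix: (1/N) * W'_M * W_N^H, with W'_M the first N
# columns of the M-point DFT matrix (a reconstruction of (4.86)).
W_M_prime = dft_matrix(M)[:, :N]
M_mat = W_M_prime @ dft_matrix(N).conj().T / N

# Zero-padding adds no information: the M points are a linear
# (interpolating) function of the N original DFT values.
assert np.allclose(M_mat @ X_N, X_M)
```
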
The spectrum is a complete, alternative representation of a signal; by analyzing the spectrum, one can obtain, at a glance, the fundamental information required to characterize and classify a signal in the frequency domain.
Magnitude The magnitude of a signal's spectrum, obtained by the Fourier transform, represents the energy distribution in frequency for the signal. It is customary to broadly classify discrete-time signals into three classes:

For real-valued signals, the magnitude spectrum is a symmetric function and the above classifications take this symmetry into account. Remember also that all spectra of discrete-time signals are 2π-periodic functions, so that the above definitions are to be interpreted in a 2π-periodic fashion. For once, this is made explicit in Figures 4.19 to 4.21, where the plotting range, instead of the customary [−π, π] interval, is extended from −2π to 2π.
Phase As we have stated before, the Fourier representation allows us to think of any signal as the sum of the outputs of a (potentially infinite) number of sinusoidal generators. While the magnitude of the spectrum defines the inherent power produced by each of the generators, its phase defines the relative alignment of the generated sinusoids. This alignment determines the shape of the signal in the discrete-time domain. To illustrate this with an example, consider the following 64-periodic signal:^{(10)}
 (4.87) 
The magnitude of its DFS X̃[k] is independent of the values of φ_{i}, i = 0, 1, 2, 3, and it is plotted in Figure 4.22. If the phase terms are uniformly zero, i.e. φ_{i} = 0, i = 0, 1, 2, 3, x̃[n] is the discrete-time periodic signal plotted in Figure 4.23; the alignment of the constituent sinusoids is such that the "square wave" exhibits a rather sharp transition between half-periods and a rather flat behavior over the half-period intervals. In addition, it should be noted that, with zero phase terms, the periodic signal is symmetric and that, therefore, the DFS coefficients are real. Now consider modifying the individual phases so that φ_{i} = 2πi∕3; in other words, we introduce a linear phase term in the constituent sinusoids. While the DFS magnitude remains exactly the same, the resulting time-domain signal is the one depicted in Figure 4.24; the lack of alignment between the sinusoids creates a "smearing" of the signal, which no longer resembles a square wave.
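The effect is easy to reproduce numerically. The following sketch (Python/NumPy; the harmonic amplitudes are illustrative, not the book's exact coefficients in (4.87)) builds a 64-periodic sum of four odd harmonics and shows that changing the phases leaves the magnitude spectrum untouched while reshaping the waveform:

```python
import numpy as np

N = 64
n = np.arange(N)

def make_signal(phases):
    # Sum of odd harmonics, square-wave style: (1/(2k+1)) sin((2k+1) w0 n + phi_k).
    # Amplitudes 1/(2k+1) are illustrative placeholders.
    return sum((1.0 / (2 * k + 1)) * np.sin(2 * np.pi * (2 * k + 1) * n / N + p)
               for k, p in enumerate(phases))

x_zero = make_signal([0.0] * 4)                         # phi_i = 0
x_lin = make_signal([2 * np.pi * i / 3 for i in range(4)])  # linear phase

# Same DFS magnitude (harmonics occupy distinct bins, so each magnitude
# is phase-independent) ...
assert np.allclose(np.abs(np.fft.fft(x_zero)), np.abs(np.fft.fft(x_lin)))
# ... but a visibly different time-domain waveform.
assert not np.allclose(x_zero, x_lin)
```
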



Recall our example at the beginning of this Chapter, when we considered the time and frequency information contained in a piece of music. We stated that the melodic information is related to the frequency content of the signal; obviously this is only partially true, since the melody is determined not only by the pitch values but also by their duration and order. Now, if we take a global Fourier Transform of the entire musical piece we have a comprehensive representation of the frequency content of the piece: in the resulting spectrum there is information about the frequency of each played note.^{(11)} The time information, however, that is the information pertaining to the order in which the notes are played, is completely hidden by the spectral representation. This makes us wonder whether there exists a timefrequency representation of a signal, in which both time and frequency information are readily apparent.
The simplest time-frequency transformation is called the spectrogram. The recipe involves splitting the signal into small consecutive (and possibly overlapping) length-N pieces and computing the DFT of each. What we obtain is the following function of discrete time and of a discrete frequency index:
S[k, m] = ∑_{n=0}^{N−1} x[n + mM] e^{−j(2π∕N)nk}  (4.88)
where M, 1 ≤ M ≤ N, controls the overlap between segments. In matrix notation we have
 (4.89) 
The resulting spectrogram is therefore an N × ⌊L∕M⌋ matrix, where L is the total length of the signal x[n]. It is usually represented graphically as a plot in which the x-axis is the discrete-time index m, the y-axis is the discrete frequency index k, and a color encodes the magnitude of S[k, m], with darker colors for larger values.
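The recipe above can be sketched in a few lines (Python/NumPy; a bare-bones version without windowing functions, which the spectrogram recipe here does not yet require):

```python
import numpy as np

def spectrogram(x, N, M):
    # Slide a length-N window in steps of M samples and take the DFT of
    # each full segment; columns are the "local" spectra of (4.88).
    cols = [np.fft.fft(x[s:s + N]) for s in range(0, len(x) - N + 1, M)]
    return np.array(cols).T       # shape: N x (number of segments)

# Test signal: a sinusoid whose frequency jumps halfway through.
n = np.arange(1024)
x = np.concatenate([np.sin(2 * np.pi * 0.05 * n[:512]),
                    np.sin(2 * np.pi * 0.20 * n[512:])])
S = spectrogram(x, N=128, M=64)

# Early columns peak near bin 0.05*128 = 6.4; late columns near
# 0.20*128 = 25.6 -- the frequency jump is localized in time.
assert np.argmax(np.abs(S[:64, 0])) in (6, 7)
assert np.argmax(np.abs(S[:64, -1])) in (25, 26)
```
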
As an example of the insight we can gain from the spectrogram, consider analyzing the well-known Bolero by Ravel. Figure 4.25 shows the spectrogram of the initial 37 seconds of the piece. In the first 13 seconds, the only instrument playing is the snare drum, and the vertical lines in the spectrogram represent, at the same time, the wide frequency content of a percussive instrument and its rhythmic pattern: if we look at the spacing between the lines, we can identify the "trademark" drum pattern of Ravel's Bolero. After 13 seconds, the flute starts playing the theme; this is identifiable in the dark horizontal stripes, which denote a high energy content around the frequencies corresponding to the pitches of the melody; with further analysis, we could even try to identify the exact notes. The clarity of this plot is due to the simple nature of the signal; if we now plot the spectrogram of the last 20 seconds of the piece, we obtain Figure 4.26. Here the orchestra is playing full blast, as indicated by the high-energy activity across the whole spectrum; we can only detect the onset of the rhythmic shouts that precede the final chord.

Each of the columns of S represents the "local" spectrum of the signal over a time interval of length N. We can therefore say that the time resolution of the spectrogram is N samples, since the value of the signal at time n_{0} influences the DFT of the N-point window around n_{0}. Seen from another point of view, the time information is "smeared" over an N-point interval. At the same time, the frequency resolution of the spectrogram is 2π∕N (and we cannot increase it by zero-padding, as we have just shown). The conflict is therefore apparent: if we want to increase the frequency resolution, we need to take longer windows, but in so doing we lose the time localization of the spectrogram; likewise, if we want to achieve a fine resolution in time, the corresponding spectral information for each "time slice" will be very coarse. It is rather easy to show that the amount of overlap does not change the situation. In practice, we need to choose an optimal tradeoff taking the characteristics of the signal into consideration.
The above problem, described for the case of the spectrogram, is actually a particular instance of a general uncertainty principle for timefrequency analysis. The principle states that, independently of the analysis tools that we put in place, we can never hope to achieve arbitrarily good resolution in both time and frequency since there exists a lower bound greater than zero for the product of the localization measure in time and frequency.
The conceptual representation of discrete-time signals relies on the notion of a dimensionless "time", indicated by the integer index n. The absence of a physical dimension for time has the happy consequence that all discrete-time signal processing tools become indifferent to the underlying physical nature of the actual signals: stock exchange values or sampled orchestral music are just sequences of numbers. Similarly, we have just derived a frequency representation for signals which is based on the notion of a dimensionless frequency; because of the periodicity of the Fourier basis, all we know is that π is the highest digital frequency that we can represent in this model. Again, the power of generality is (or will soon be) apparent: a digital filter which is designed to remove the upper half of a signal's spectrum can be used with any type of input sequence, with the same results. This is in stark contrast with the practice of analog signal processing, in which a half-band filter (made of capacitors, resistors and other electronic components) must be redesigned for each new class of input signals.
This dimensionless abstraction, however, is not without its drawbacks from the point of view of hands-on intuition; after all, we are all very familiar with signals in the real world, for which time is expressed in seconds and frequency is expressed in hertz. We say, for instance, that speech has a bandwidth of up to 4 kHz, that the human ear is sensitive to frequencies up to 20 kHz, that a cell phone transmits in the GHz band, and so on. What does "π" mean in these cases? The precise, formal link between real-world signals and discrete-time signal processing is given by the Sampling Theorem, which we will study later. The fundamental idea, however, is that we can remove the abstract nature of a discrete-time signal (and, correspondingly, of a dimensionless frequency) by associating a time duration to the interval between successive discrete-time indices in the sequence.
Let us say that the "real-world" time between indices n and n + 1 in a discrete-time sequence is T_{s} seconds (where T_{s} is generally very small); this can correspond to sampling a signal every T_{s} seconds or to generating a synthetic sequence with a DSP chip whose clock cycle is T_{s} seconds. Now, recall that the phase increment between successive samples of a generic complex exponential e^{jω_{0}n} is ω_{0} radians. The oscillation, therefore, completes a full cycle in n_{0} = 2π∕ω_{0} samples. If T_{s} is the real-world time between samples, the full cycle is completed in n_{0}T_{s} seconds, and so its "real-world" frequency is f_{0} = 1∕(n_{0}T_{s}) hertz. The relationship between the digital frequency ω_{0} and the "real" frequency f_{0} in hertz, as determined by the "clock" period T_{s}, is therefore
ω_{0} = 2πf_{0}T_{s}  (4.90)
In particular, the highest real frequency which can be represented in the discrete-time system (which corresponds to ω = π) is f_{max} = 1∕(2T_{s}) = F_{s}∕2, where we have used F_{s} = 1∕T_{s}; F_{s} is just the operating frequency of the discrete-time system (also called the sampling frequency or clock frequency). With this notation, the digital frequency ω_{0} corresponding to a real frequency f_{0} is ω_{0} = 2πf_{0}∕F_{s}.
The compact disc system, for instance, operates at a frequency F_{s} = 44.1 kHz; the maximum representable frequency for the system is therefore 22.05 kHz (which constitutes the highest-pitched sound that can be encoded on, and reproduced by, a CD).
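The conversion in (4.90) is a one-liner in either direction; the small sketch below (Python, helper names ours) checks it against the CD figures just quoted:

```python
import math

# Relation (4.90): omega = 2*pi*f*Ts, equivalently f = omega * Fs / (2*pi)
# with Fs = 1/Ts. Helper names are illustrative, not from the text.
def hertz_to_digital(f, Fs):
    return 2 * math.pi * f / Fs

def digital_to_hertz(omega, Fs):
    return omega * Fs / (2 * math.pi)

Fs = 44100.0   # CD-system clock frequency, in hertz

# omega = pi corresponds to the highest representable frequency, Fs/2:
assert abs(digital_to_hertz(math.pi, Fs) - 22050.0) < 1e-9
assert abs(hertz_to_digital(22050.0, Fs) - math.pi) < 1e-12
```
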
Example 4.1: The structure of DFT formulas
The DFT and inverse DFT (IDFT) formulas have a high degree of symmetry. Indeed, we can use the DFT algorithm to compute the IDFT with just a little manipulation: this can be useful if we have a “black box” FFT routine and we want to compute an inverse transform.
In the space of length-N signals, indicate the DFT of a signal x as X = DFT{x}, so that we can also write x = IDFT{X}.
Now consider the time-reversed signal X_{r}[k] = X[(N − 1) − k], k = 0, 1, …, N − 1; we can show that x[n] = (1∕N) e^{−j(2π∕N)n} DFT{X_{r}}[n], so that the inverse DFT can be obtained as the DFT of a time-reversed and scaled version of the original DFT. Indeed, with the change of variable m = (N − 1) − k, we have
DFT{X_{r}}[n]  = ∑ _{k=0}^{N−1} X[(N − 1) − k] e^{−j(2π∕N)kn}  
= ∑ _{m=0}^{N−1} X[m] e^{−j(2π∕N)(N−1−m)n}  
= ∑ _{m=0}^{N−1} X[m] e^{j(2π∕N)mn} e^{j(2π∕N)n} e^{−j2πn}  
= e^{j(2π∕N)n} ∑ _{m=0}^{N−1} X[m] e^{j(2π∕N)mn} = e^{j(2π∕N)n} ⋅ N ⋅ x[n]
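NumPy's FFT follows the same DFT convention used here, so the identity can be checked directly (a sketch, not part of the original text): time-reverse the spectrum, apply a forward FFT, then undo the residual modulation and the factor N.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 32
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

X = np.fft.fft(x)
X_r = X[::-1]                 # X_r[k] = X[(N-1) - k]
n = np.arange(N)

# Per the derivation: DFT{X_r}[n] = N * x[n] * exp(j 2*pi*n / N), so the
# inverse DFT is a forward DFT plus a modulation and a 1/N scaling.
x_rec = np.fft.fft(X_r) * np.exp(-2j * np.pi * n / N) / N

assert np.allclose(x_rec, x)
```

This is exactly the "black box FFT routine" trick mentioned above: only the forward transform is ever invoked.
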
Example 4.2: The DTFT of the step function
In the delta-function formalism, the Fourier transform of the unit signal x[n] = 1 is the pulse train δ̃(ω). Intuitively, the reasoning goes as follows: the unit signal has the same value over its entire infinite, two-sided support; nothing ever changes, there is not even the minutest glimpse of movement in the signal, ergo its spectrum can only have a nonzero value at the zero frequency. Recall that a frequency of zero is the frequency of dead quiet; the spectral value at ω = 0 is also known as the DC component (for Direct Current), as opposed to a livelier AC (Alternating Current). At the same time, the unit signal has a very large energy, an infinite energy to be precise; imagine it as the voltage at the poles of a battery connected to a light bulb: to keep the light on for all eternity (i.e. over Z) the energy must indeed be infinite. Our delta function captures this duality very effectively, if not rigorously.
Now consider the unit step u[n]; this is a somewhat stranger entity, since it still possesses infinite energy and it still is a very quiet signal – except at n = 0. The transition in the origin is akin to flipping a switch in the battery/light-bulb circuit above, with the switch remaining on for the rest of (positive) eternity. As for the Fourier transform, intuitively we will still have a delta in zero (because of the infinite energy) but also some nonzero values over the entire frequency range, because of the "movement" at n = 0. We know that for |a| < 1 the DTFT of a^{n}u[n] is 1∕(1 − ae^{−jω}), so that it is tempting to let a → 1 and just say U(e^{jω}) = 1∕(1 − e^{−jω}).
This is not quite correct; even intuitively, the infinite-energy delta is missing. To see what is wrong, let us try to find the inverse Fourier transform of the above expression; by using the substitution z = e^{jω} and contour integration on the unit circle C, we obtain x[n] = (1∕2πj) ∮_{C} z^{n}∕(z − 1) dz.
Since there is a pole on the contour, we need to use Cauchy's principal value theorem for the indented integration contour shown in Figure 4.27. For n ≥ 0 there are no poles other than the one at z = 1, and the "half-residue" theorem shows that the principal value of the integral equals 1∕2 for all n ≥ 0.
For n < 0 there is also a (multiple) pole in the origin; with the change of variable v = z^{−1}, the integral becomes one over C′′, the same contour as C′ but oriented clockwise. Because of this inversion, the principal value now equals −1∕2 for all n < 0.
In conclusion, the inverse transform of 1∕(1 − e^{−jω}), taken in the principal-value sense, is u[n] − 1∕2.
But this is almost good! Indeed, the missing part is just the constant 1∕2, whose DTFT is (1∕2) δ̃(ω),
so that finally the DTFT of the unit step is
U(e^{jω}) = 1∕(1 − e^{−jω}) + (1∕2) δ̃(ω)  (4.91)
and its magnitude is sketched in Figure 4.28.
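Away from ω = 0, where the delta lives, the rational term dominates; the underlying geometric-series pair a^{n}u[n] ↔ 1∕(1 − ae^{−jω}) used above is easy to verify numerically (Python/NumPy sketch, not from the text), by comparing a long truncated DTFT sum against the closed form for a close to 1:

```python
import numpy as np

a = 0.95
n = np.arange(2000)                  # long enough that a^n has fully decayed
omega = np.linspace(0.1, np.pi, 50)  # stay away from omega = 0

# Truncated DTFT sum of a^n u[n] at each frequency...
dtft_sum = np.array([np.sum(a**n * np.exp(-1j * w * n)) for w in omega])
# ...against the closed-form geometric-series result.
closed_form = 1.0 / (1.0 - a * np.exp(-1j * omega))

assert np.allclose(dtft_sum, closed_form, atol=1e-6)
```
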
A nice engineering book on Fourier theory is The Fourier Transform and Its Applications, by R. Bracewell (McGraw-Hill, 1999). A more mathematically oriented textbook is Fourier Analysis, by T. W. Körner (Cambridge University Press, 1989), as is P. Brémaud's book, Mathematical Principles of Signal Processing (Springer, 2002).
Exercise 4.1: DFT of elementary functions. Derive the formula for the DFT of the length-N signal x[n] = cos((2π∕N)Ln + φ).
Exercise 4.2: Real DFT. Compute the DFT of the length-4 signal x[n] = {a, b, c, d}. For which values of a, b, c, d is the DFT real?
Exercise 4.3: Limits. What is the value of the limit
(in a signal processing sense)?
Exercise 4.4: Estimating the DFT graphically. Consider a length-64 signal x[n] which is the sum of the three sinusoidal signals plotted in the following Figure (the signals are plotted as continuous lines just for clarity). Compute the DFT coefficients X[k], k = 0, 1, …, 63.

Exercise 4.5: The structure of DFT formulas. Consider a length-N signal x[n], n = 0, …, N − 1; what is the length-N signal y[n] obtained as y = DFT{DFT{x}}
(i.e. by applying the DFT algorithm twice in a row)?
Exercise 4.6: Two DFTs for the price of one. When you compute the DFT of a length-N real signal, your data contains N real values while the DFT vector contains N complex values: there is clearly a redundancy of a factor of two in the DFT vector, which is apparent when you consider its Hermitian symmetry (i.e. X[k] = X^{*}[N − k]). You can exploit this fact to compute the DFTs of two real length-N signals for the price of one. This is useful if you have a prepackaged FFT algorithm which you want to apply to real data.
Assume x[n] and y[n] are two length-N real signals. Build the complex signal c[n] = x[n] + jy[n] and compute its DFT C[k], k = 0, 1, …, N − 1. Show that X[k] = (C[k] + C^{*}[N − k])∕2 and Y[k] = (C[k] − C^{*}[N − k])∕(2j) (with the indices taken modulo N), where X[k] and Y[k] are the DFTs of x[n] and y[n], respectively.
Exercise 4.7: The Plancherel-Parseval equality. Let x[n] and y[n] be two complex-valued sequences and X(e^{jω}) and Y(e^{jω}) their corresponding DTFTs. Show that ⟨x, y⟩ = (1∕2π)⟨X, Y⟩, where we use the inner products for ℓ_{2}(Z) and L_{2}[−π, π], respectively.
Exercise 4.8: Numerical computation of the DFT. Consider the signal x[n] = cos(2πf_{0}n). Compute and draw the DFT of the signal over N = 128 points, for
You can use any numerical package to do this. Explain the differences that we can see in these two spectra.
Exercise 4.9: DTFT vs. DFT. Consider the following infinite, non-periodic discrete-time signal:
x[n] = 
We want to visualize the magnitude of X(e^{jω}) using a numerical package (for instance, Matlab). Most numerical packages cannot handle continuous functions such as X(e^{jω}); therefore, we need to consider only a finite number of points of the spectrum.
The DTFT is mostly a theoretical analysis tool and, in many cases, we compute the DFT instead. Moreover, for obvious reasons, numerical computation programs such as Matlab only compute the DFT. Recall that Matlab uses the Fast Fourier Transform (FFT), an efficient algorithm to compute the DFT.
The previous Chapters gave us a thorough overview of both the nature of discrete-time signals and the tools used to analyze their properties. In the next few Chapters we will study the fundamental building block of any digital signal processing system, that is, the linear filter. In the discrete-time world, filters are nothing but procedures which store and mathematically manipulate the numerical samples appearing at their input and output; in other words, any discrete-time filter can be described procedurally in the form of an algorithm. In the special case of linear and time-invariant filters, such an algorithm can be concisely described by a constant-coefficient difference equation.
In its most general form, a discrete-time system can be described as a black box accepting a number of discrete-time sequences as inputs and producing a number of discrete-time sequences at its output.
In this book we are interested in studying the class of linear time-invariant (LTI) discrete-time systems with a single input and a single output; a system of this type is referred to as a filter. A linear time-invariant system can thus be viewed as an operator H which transforms an input sequence into an output sequence: y[n] = H{x[n]}.
Linearity is expressed by the equivalence
H{αx_{1}[n] + βx_{2}[n]} = αH{x_{1}[n]} + βH{x_{2}[n]}  (5.1)
for any two sequences x_{1}[n] and x_{2}[n] and any two scalars α, β ∈ C. Time-invariance is expressed by
y[n] = H{x[n]}  ⇒  H{x[n − n_{0}]} = y[n − n_{0}]  (5.2)
Linearity and time-invariance are very reasonable and "natural" requirements for a signal processing system. Imagine a recording system: linearity implies that the signal obtained by recording a violin and a piano playing together is the same as the sum of the signals obtained by recording the violin and the piano separately (but in the same recording room). Multitrack recordings in music production are an application of this concept. Time-invariance basically means that the system's behavior is independent of the time at which the system is turned on. Again, to use a musical example, this means that a given digital recording played back by a digital player will sound the same regardless of when it is played.
Yet, simple as these properties are, linearity and time-invariance taken together have an incredibly powerful consequence for a system's behavior. Indeed, a linear time-invariant system turns out to be completely characterized by its response to the input x[n] = δ[n]. The sequence h[n] = H{δ[n]} is called the impulse response of the system, and h[n] is all we need to know to determine the system's output for any other input sequence. To see this, recall that for any sequence we can always write the canonical orthonormal expansion (i.e. the reproducing formula in (2.18)) x[n] = ∑_{k=−∞}^{∞} x[k] δ[n − k], and therefore, since H{δ[n − k]} = h[n − k], we can apply (5.1) and (5.2) to obtain
y[n] = H{x[n]} = ∑_{k=−∞}^{∞} x[k] h[n − k]  (5.3)
The summation in (5.3) is called the convolution of the sequences x[n] and h[n] and is denoted by the operator "∗", so that (5.3) can be shorthanded to y[n] = x[n] ∗ h[n]. This is the general expression for a filtering operation in the discrete-time domain. To indicate a specific value of the convolution at a given time index n_{0}, we may use the notation y[n_{0}] = (x ∗ h)[n_{0}].
Clearly, for the convolution of two sequences to exist, the sum in (5.3) must be finite and this is always the case if both sequences are absolutely summable. As in the case of the DTFT, absolute summability is just a sufficient condition and the sum (5.3) can be well defined in certain other cases as well.
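For finite-support sequences the sum is trivially well defined, and (5.3) can be evaluated directly; the sketch below (Python/NumPy, not from the text) compares the definition with NumPy's built-in convolution:

```python
import numpy as np

# Convolution per (5.3): y[n] = sum_k x[k] h[n-k].
x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([0.5, 0.5])          # a simple 2-point averager

y = np.convolve(x, h)             # full convolution, length len(x)+len(h)-1

# One output value computed straight from the definition, for n = 2:
y2 = sum(x[k] * h[2 - k] for k in range(len(x)) if 0 <= 2 - k < len(h))

assert np.isclose(y2, y[2])
assert np.allclose(y, [0.5, 1.5, 2.5, 3.5, 2.0])
```
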
Basic Properties. The convolution operator is easily shown to be linear and time-invariant (which is rather intuitive, seeing as it describes the behavior of an LTI system):
(αx_{1}[n] + βx_{2}[n]) ∗ h[n] = α(x_{1}[n] ∗ h[n]) + β(x_{2}[n] ∗ h[n])  (5.4)
x[n − n_{0}] ∗ h[n] = (x ∗ h)[n − n_{0}]  (5.5)
The convolution is also commutative:
x[n] ∗ h[n] = h[n] ∗ x[n]  (5.6)
which is easily shown via a change of variable in (5.3). Finally, in the case of square summable sequences, it can be shown that the convolution is associative:
(x[n] ∗ h_{1}[n]) ∗ h_{2}[n] = x[n] ∗ (h_{1}[n] ∗ h_{2}[n])  (5.7)
This last property describes the effect of connecting two filters in cascade, and it states that the resulting effect is that of a single filter whose impulse response is the convolution of the two original impulse responses. As a corollary, because of the commutative property, the order of the two filters in the cascade is completely irrelevant. More generally, a sequence of filtering operations can be performed in any order.
Please note that associativity does not necessarily hold for sequences which are not square-summable. A classic counterexample is the following: consider the three sequences
x[n]  = u[n] (the unit step)  
y[n]  = δ[n] − δ[n − 1] (the first-difference operator)  
w[n]  = 1 (a constant signal)
for which
x[n] ∗ (y[n] ∗ w[n])  = 0  
(x[n] ∗ y[n]) ∗ w[n]  = 1
Convolution and Inner Product. It is immediately obvious that, for two sequences x[n] and h[n], we can write x[n] ∗ h[n] = ⟨h^{*}[n − k], x[k]⟩, the inner product being taken over the index k; that is, the value at index n of the convolution of two sequences is the inner product (in ℓ_{2}(Z)) of the first sequence – conjugated,^{(1)} time-reversed and re-centered at n – with the input sequence. The above expression describes the output of a filtering operation as a series of "localized" inner products; filtering, therefore, measures the time-localized similarity (in the inner product sense, i.e. in the sense of the correlation) between the input sequence and a prototype sequence (the time-reversed impulse response).
In general, the convolution operator for a signal is defined with respect to the inner product of its underlying Hilbert space. For the space of N-periodic sequences, for instance, the convolution is defined as
x̃[n] ∗ ỹ[n]  = ∑ _{k=0}^{N−1} x̃[k] ỹ[n − k]  (5.8)  
= ∑ _{k=0}^{N−1} x̃[n − k] ỹ[k]  (5.9)
Similarly, for 2π-periodic spectra in L_{2}[−π, π], the convolution is
X(e^{jω}) ∗ Y(e^{jω})  = ⟨X^{*}(e^{j(ω−σ)}), Y(e^{jσ})⟩  (5.10)  
= ∫ _{−π}^{π} X(e^{j(ω−σ)}) Y(e^{jσ}) dσ  (5.11)  
= ∫ _{−π}^{π} X(e^{jσ}) Y(e^{j(ω−σ)}) dσ  (5.12)
As we said, an LTI system is completely described by its impulse response, i.e. by h[n] = H{δ[n]}.
FIR vs IIR. Since the impulse response is defined as the transformation of the discrete-time delta, and since the delta is an infinite-length signal, the impulse response is always an infinite-length signal, i.e. a sequence. The nonzero values of the impulse response are usually called taps. Two distinct cases are possible: if only a finite number of taps are nonzero, the filter is called a Finite Impulse Response (FIR) filter; if the number of nonzero taps is infinite, the filter is called an Infinite Impulse Response (IIR) filter.
Note that, in the case of FIR filters, the convolution operator entails only a finite number of sums and products; if h[n] = 0 for n < N and n ≥ M, we can invoke commutativity and rewrite (5.3) as y[n] = ∑_{k=N}^{M−1} h[k] x[n − k].
Thus, convolution sums involving a finitesupport impulse response are always well defined.
Causality. A system is called causal if its output does not depend on future values of the input. In practice, a causal system is the only type of "real-time" system we can actually implement, since knowledge of the future is normally not an option in real life. Yet, noncausal filters maintain a practical interest, since in some applications (usually called "batch processing") we may have access to the entirety of a discrete-time signal, which has been previously stored on some form of memory support.^{(2)} A filter whose output depends exclusively on future values of the input is called anticausal.
For an LTI system, causality implies that the associated impulse response is zero for negative indices; this is the only way to remove all "future" terms in the convolution sum (5.3). Similarly, for anticausal systems, the impulse response must be zero for all positive indices. Clearly, between the strictly causal and anticausal extremes, we can have intermediate cases: consider for example a filter whose impulse response is zero for n < −M, with M a positive integer. This filter is technically noncausal, but only in a "finite" way: by cascading M pure delays (each with impulse response δ[n − 1]) in front of it, we can easily see that it is made strictly causal. Clearly, an FIR filter is always causal up to a delay.
Stability. A system is called bounded-input bounded-output stable (BIBO stable) if its output is bounded for all bounded input sequences. Again, stability is a very natural requirement for a filter, since it states that the output will not "blow up" when the input is reasonable. Linearity and time-invariance do not guarantee stability (as anyone who has ever used a hands-free phone has certainly experienced).
A bounded sequence x[n] is one for which it is possible to find a finite value L ∈ R^{+} such that |x[n]| < L for all n. A necessary and sufficient condition for an LTI system to be BIBO stable is that its impulse response h[n] be absolutely summable. The sufficiency of the condition is proved as follows: if |x[n]| < L for all n, then we have
|y[n]| = |∑_{k} h[k] x[n − k]| ≤ ∑_{k} |h[k]| |x[n − k]| ≤ L ∑_{k} |h[k]|
and the last term is finite if h[n] is absolutely summable. Conversely, assume that h[n] is not absolutely summable and consider the signal x[n] = h^{*}[−n]∕|h[−n]| (with x[n] = 0 wherever h[−n] = 0); this input is bounded, yet the corresponding output at n = 0 is y[0] = ∑_{k}|h[k]|, which is not finite.
Note that in the case of FIR filters, the convolution sum only involves a finite number of terms. As a consequence, FIR filters are always stable.
So far, we have described a filter from a very abstract point of view, and we have shown that a filtering operation corresponds to a convolution with a defining sequence called the impulse response. We now take a diametrically opposite standpoint: we introduce a very practical problem and arrive at a solution which defines an LTI system. Once we recognize that the solution is indeed a discrete-time filter, we will be able to make use of the theoretical results of the previous Sections in order to analyze its properties.
Consider a sequence like the one in Figure 5.2; we are clearly in the presence of a “smooth” signal corrupted by noise, which appears as little wiggles in the plot. Our goal is the removal of the noise, i.e. to smooth out the signal, in order to improve its readability.
An intuitive and basic approach to removing noise from data is to replace each point of the sequence x[n] by a local average, which can be obtained by taking the average of the sample at n and of its N − 1 predecessors. Each point of the "denoised" sequence can therefore be computed as
y[n] = (1∕N) ∑_{k=0}^{N−1} x[n − k]  (5.13)
This is easily recognized as a convolution sum, and we can obtain the impulse response of the associated filter by letting x[n] = δ[n]; it is easy to see that
h[n] = 1∕N for 0 ≤ n ≤ N − 1, and zero otherwise  (5.14)
The impulse response, as it turns out, is a finite-support sequence, so the filter that we have just built is an FIR filter; this particular filter goes under the name of Moving Average (MA) filter. The "smoothing power" of this filter depends on the number of samples we take into account in the average or, in other words, on the length N of its impulse response. The filtered version of the original sequence, for small and large values of N, is plotted in Figures 5.3 and 5.4, respectively. Intuitively, we can see that as N grows, more and more wiggles are removed. We will soon see how to describe the "smoothing power" of a filter in a precise, quantitative way. A general characteristic of FIR filters that should be immediately noticed is that the value of the output does not depend on values of the input which are more than N steps away; FIR filters are therefore finite-memory filters. Another aspect that we can mention at this point concerns the delay introduced by the filter: each output value is the average of a window of N input values whose representative sample is the one falling in the middle; thus, there is a delay of N∕2 samples between input and output, and this delay grows with N.
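The denoising behavior of (5.13)-(5.14) is easy to reproduce (Python/NumPy sketch; the test signal and noise level are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n = np.arange(500)
clean = np.sin(2 * np.pi * n / 200)                 # "smooth" signal
noisy = clean + 0.3 * rng.standard_normal(n.size)   # plus wiggles

def moving_average(x, N):
    h = np.ones(N) / N                 # MA impulse response (5.14)
    return np.convolve(x, h)[:len(x)]  # causal output, same length as input

smooth = moving_average(noisy, 32)

# The output tracks the clean signal delayed by about N/2 = 16 samples,
# with much less residual noise than the raw input.
mse_noisy = np.mean((noisy - clean) ** 2)
mse_smooth = np.mean((smooth[16:] - clean[:-16]) ** 2)
assert mse_smooth < mse_noisy
```

Increasing N removes more wiggles but also increases the N∕2-sample delay, exactly as described above.
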
The moving average filter that we built in the previous Section has an obvious drawback: the more we want to smooth the signal, the more points we need to consider and, therefore, the more computations we have to perform to obtain each filtered value. Consider now the formula for the output of a length-M moving average filter:
y_{M}[n] = (1∕M) ∑_{k=0}^{M−1} x[n − k]  (5.15)
We can easily see that
y_{M}[n] = (1∕M) x[n] + ((M − 1)∕M) y_{M−1}[n − 1] = λ y_{M−1}[n − 1] + (1 − λ) x[n]
where we have defined λ = (M − 1)∕M. Now, as M grows larger, we can safely assume that computing the average over M − 1 or over M points yields basically the same result: in other words, for M large, we can say that y_{M−1}[n] ≈ y_{M}[n]. This suggests a new way to compute the smoothed version of a sequence in a recursive fashion:
y[n] = λ y[n − 1] + (1 − λ) x[n]  (5.16)
This no longer looks like a convolution sum; it is, instead, an instance of a constant-coefficient difference equation. We might wonder whether the transformation realized by (5.16) is still linear and time-invariant and, if so, what its impulse response is. The first problem that we face in addressing this question stems from the recursive nature of (5.16): each new output value depends on the previous output value. We need to somehow define a starting value for y[n] or, in system-theory parlance, we need to set the initial conditions. The choice which guarantees that the system defined by (5.16) is linear and time-invariant corresponds to the requirement that the response of the system to an identically zero sequence be zero for all n; this requirement is also known as zero initial conditions, since it corresponds to setting y[n] = 0 for n < N_{0}, where N_{0} is some time in the past.
The linearity of (5.16) can now be proved in the following way : assume that the output sequence for the system defined by (5.16) is y[n] when the input is x[n]. It is immediately obvious that y_{1}[n] = αy[n] satisfies (5.16) for an input equal to αx[n]. All we need to prove is that this is the only solution. Assume this is not the case and call y_{2}[n] the other solution; we have
y_{1}[n]  = λy_{1}[n − 1] + (1 − λ)αx[n]
y_{2}[n]  = λy_{2}[n − 1] + (1 − λ)αx[n]
Subtracting the second equation from the first, we obtain y_{1}[n] − y_{2}[n] = λ(y_{1}[n − 1] − y_{2}[n − 1]); with zero initial conditions the difference is zero for all n, so the two solutions coincide. Time invariance can be proved along the same lines.
Now that we know that (5.16) defines an LTI system, we can try to compute its impulse response. Assuming zero initial conditions and x[n] = δ[n], we have
y[n] = λ y[n − 1] + (1 − λ) δ[n]  (5.17)
so that the impulse response (shown in Figure 5.7) is
h[n] = (1 − λ) λ^{n} u[n]  (5.18)
The impulse response clearly defines an IIR filter and therefore the immediate question is whether the filter is stable. A sufficient condition for stability is that the impulse response be absolutely summable; here we have
∑_{n=0}^{∞} |h[n]| = |1 − λ| ∑_{n=0}^{∞} |λ|^{n} = |1 − λ|∕(1 − |λ|)  (5.19)
We can see that the above limit is finite for |λ| < 1, so the system is BIBO stable for these values. The value of λ (which is, as we will see, the pole of the system) determines the smoothing power of the filter (Fig. 5.5). As λ → 1, the input is smoothed more and more, as can be seen in Figure 5.6, at a constant computational cost. The system implemented by (5.16) is often called a leaky integrator, in the sense that it approximates the behavior of an integrator with a leakage (or forgetting) factor λ. The delay introduced by the leaky integrator is more difficult to analyze than that of the moving average but, again, it grows with the smoothing power of the filter; we will soon see how to quantify the delay introduced by IIR filters.
As we can infer from this simple analysis, IIR filters are much more delicate entities than FIR filters; in the next Chapters we will also discover that their design is much less straightforward and offers less flexibility. This is why, in practice, FIR filters are the filters of choice. IIR filters, however, and especially the simplest ones such as the leaky integrator, are extremely attractive when computational power is a scarce resource.
The above examples have introduced the notion of filtering in an operational and intuitive way. In order to make more precise statements on the characteristics of a discretetime filter we need to move to the frequency domain. What does a filtering operation translate to in the frequency domain? The fundamental result of this Section is the convolution theorem for discretetime signals: a convolution in the discretetime domain is equivalent to a multiplication of Fourier transforms in the frequency domain. This result opens up a very fruitful perspective on filtering and filter design, together with alternative approaches to the implementation of filtering devices, as we will see momentarily.
Consider the case of a complex exponential sequence of frequency ω_{0} as the input to a linear time-invariant system; we have
y[n] = ∑_{k} h[k] e^{jω_{0}(n−k)} = H(e^{jω_{0}}) e^{jω_{0}n}  (5.20)
where H(e^{jω0}) (i.e. the DTFT of h[n] at ω = ω_{0}) is called the frequency response of the filter at frequency ω_{0}. The above result states the fundamental fact that complex exponentials are eigensequences^{(3)} of linear time-invariant systems. We notice the following two properties:
i.e. the output oscillation is scaled in amplitude by A_{0} = |H(e^{jω0})|, the magnitude of the DTFT, and it is shifted in phase by θ_{0} = ∡H(e^{jω0}), the phase of the DTFT.
Consider two sequences x[n] and h[n], both absolutely summable. The discrete-time Fourier transform of the convolution y[n] = x[n] ∗ h[n] is
Y(e^{jω}) = X(e^{jω}) H(e^{jω})  (5.21)
The proof is as follows: if we take the DTFT of the convolution sum, we have
and by interchanging the order of summation (which can be done because of the absolute summability of both sequences) and by splitting the complex exponential, we obtain
from which the result immediately follows after a change of variable. Before discussing the implications of the theorem, we now state and prove its dual, which is called the modulation theorem.
Consider now the discrete-time sequences x[n] and w[n], both absolutely summable, with discrete-time Fourier transforms X(e^{jω}) and W(e^{jω}). The discrete-time Fourier transform of the product y[n] = x[n]w[n] is
Y(e^{jω}) = (1∕2π) X(e^{jω}) ∗ W(e^{jω})  (5.22)
where the convolution of the two DTFTs is the convolution operator for 2π-periodic functions defined in (5.12). This is easily proven as follows: we apply the DTFT inversion formula to the DTFT convolution:

and we split the last integral to obtain

These fundamental results are summarized in Table 5.1.

Since an LTI system is completely characterized by its impulse response, it is also uniquely characterized by the DTFT of the impulse response, which is called the filter’s frequency response. The frequency response provides a different perspective on the properties of a given filter: its magnitude and its phase each control different features of the system’s behavior.
Magnitude. The most powerful intuition arising from the convolution theorem is obtained by considering the magnitude of the spectra involved in a filtering operation. Recall that a Fourier spectrum represents the energy distribution of a signal in frequency; by appropriately “shaping” the magnitude spectrum of a filter’s impulse response we can easily boost, attenuate, or even completely eliminate a given part of the frequency content of the filtered input sequence. According to the way the magnitude spectrum is affected by the filter, we can classify filters into three broad categories (here, as before, we assume that the impulse response is real, so that the associated magnitude spectrum is symmetric; in addition, the 2π-periodicity of the spectrum is implicitly understood):
The frequency interval (or intervals) for which the magnitude of the frequency response is zero (or practically negligible) is called the stopband. Conversely, the frequency interval (or intervals) for which the magnitude is nonnegligible is called the passband.
Phase. The phase response of a filter has an equally important effect on the output signal, even though its impact is less intuitive.
By and large, the phase response acts as a generalized delay. Consider Equation (5.20) once more; we can see that a single sinusoidal oscillation undergoes a phase shift equal to the phase of the impulse response’s Fourier transform. A phase offset for a sinusoid is equivalent to a delay in the time domain. This is immediately obvious in the case of a trigonometric function defined on the real line, since we can always write cos(ωt − φ) = cos(ω(t − φ∕ω)).
For discrete-time sinusoids, it is not always possible to express the phase offset in terms of an integer number of samples (for exactly the same reasons that a discrete-time sinusoid is not always periodic in its index n); yet the effect is the same, in that a phase offset corresponds to an implicit delay of the sinusoid. When the phase offset for a complex exponential is not an integer multiple of its frequency, we say that we are in the presence of a fractional delay. Now, since each sinusoidal component of the input signal may be delayed by an arbitrary amount, the output signal will be composed of sinusoids whose relative alignment may be very different from the original. Phase alignment determines the shape of the signal in the time domain, as we have seen in Section 4.7.4. A filter with unit magnitude across the spectrum, which does not affect the amplitude of the sinusoidal components, but whose phase response is not linear, can completely change the shape of a filtered signal.^{(5)}
Linear Phase. A very important type of phase response is linear phase:
∡H(e^{jω}) = −dω  (5.23)
Consider a simple system which just delays its input, i.e. y[n] = x[n − D] with D ∈ ℤ; this is obviously an LTI system with impulse response h[n] = δ[n − D] and frequency response H(e^{jω}) = e^{−jωD}. This means that, if the value d in (5.23) is an integer, (5.23) defines a pure delay system; since the magnitude is constant and equal to one, this is an example of an allpass filter. If d is not an integer, (5.23) still defines an allpass delay system for which the delay is fractional, and we should interpret its effect as explained in the previous Section. In particular, if we think of the original signal in terms of its Fourier reconstruction formula, the fractionally delayed output is obtained by stepping forward the initial phase of all oscillators by a non-integer multiple of the frequency. In the discrete-time domain, we have a signal which takes values “between” the original samples but, since the relative phase of any one oscillator with respect to the others has remained the same as in the original signal, the shape of the signal in the time domain is unchanged.
For a general filter with linear phase we can always write H(e^{jω}) = |H(e^{jω})| e^{−jdω}.
In other words, the net effect of the linear phase filter is that of a cascade of two systems: a zerophase filter which affects only the spectral magnitude of the input and therefore introduces no phase distortion, followed by a (possibly fractional) delay system (which, again, introduces just a delay but no phase distortion).
Group Delay. When a filter does not have linear phase, it is important to quantify the phase distortion, both in amount and in location. Nonlinear phase is not always a problem; if a filter’s phase is nonlinear just in the stopband, for instance, the actual phase distortion is negligible. The concept of group delay provides a measure of the nonlinearity of the phase; the idea is to express the phase response around any given frequency ω_{0} using a first-order Taylor approximation. Define φ(ω) = ∡H(e^{jω}) and approximate φ(ω) around ω_{0} as φ(ω_{0} + τ) ≈ φ(ω_{0}) + τφ′(ω_{0}); we can write
H(e^{j(ω_{0}+τ)}) ≈ |H(e^{j(ω_{0}+τ)})| e^{jφ(ω_{0})} e^{jτφ′(ω_{0})}  (5.24)
so that, approximately, the frequency response of the filter is linear phase for at least a group of frequencies around a given ω_{0}. The delay for this group of frequencies is the negative of the derivative of the phase, from which the definition of group delay is
grd(ω) = −(d∕dω) ∡H(e^{jω})  (5.25)
For truly linear phase systems, the group delay is a constant. Deviations from a constant value quantify the amount of phase distortion introduced by a filter in terms of the (possibly noninteger) number of samples by which a frequency component is delayed.
Now that we know what to look for in a filter, we can revisit the “empirical” denoising filters introduced in Section 5.3. Both filters are realizable, in the sense that they can be implemented with practical and efficient algorithms, as we will study in the next Chapters. Their frequency responses allow us to qualify and quantify precisely the smoothing properties which we previously described in an intuitive fashion.
Moving Average. The frequency response of the moving average filter (Sect. 5.3.1) can be shown to be
H(e^{jω}) = (1∕N) ∑_{n=0}^{N−1} e^{−jωn} = (1∕N) [sin(ωN∕2)∕sin(ω∕2)] e^{−jω(N−1)∕2}  (5.26)
In the above expression, it is easy to separate the magnitude and the phase, which are plotted in Figure 5.8. The group delay for the filter is the constant (N  1)∕2, which means that the filter delays its output by (N  1)∕2 samples (i.e. there is a fractional delay for N even). This formalizes the intuition that the “representative sample” for an averaging window of N samples is the sample in the middle. If N is even, this does not correspond to a real sample but to a “ghost” sample in the middle.
Leaky Integrator. The frequency response of the leaky integrator in Section 5.3.2 is
H(e^{jω}) = (1 − λ)∕(1 − λe^{−jω})  (5.27)
Magnitude and phase are, respectively,
|H(e^{jω})| = (1 − λ)∕√(1 + λ^{2} − 2λ cos ω)  (5.28)
∡H(e^{jω}) = −arctan[λ sin ω∕(1 − λ cos ω)]  (5.29)
and they are plotted in Figure 5.9. The group delay, also plotted in Figure 5.9, is obtained by differentiating the phase response:
grd(ω) = (λ cos ω − λ^{2})∕(1 + λ^{2} − 2λ cos ω)  (5.30)
The group delay indicates that, for the frequencies for which the magnitude is not very small, the delay increases with the smoothing power of the filter.
Note that, according to the classification in Section 5.4.3, both the moving average and the leaky integrator are lowpass filters.
The frequency characterization introduced in Section 5.4.3 immediately leads to questions such as “What is the best lowpass filter?” or “Can I have a highpass filter with zero delay?” It turns out that the answers to such questions are given by ideal filters. Ideal filters are what the (Platonic) name suggests: theoretical abstractions which capture the essence of the basic filtering operation but which are not realizable in practice. In a way, they are the “gold standard” of filter design.
Ideal Lowpass. The ideal lowpass filter is a filter which “kills” all frequency content above a cutoff frequency ω_{c} and leaves all frequency content below ω_{c} untouched; it is defined in the frequency domain as
H_{lp}(e^{jω}) = 1 for |ω| ≤ ω_{c}, and 0 for ω_{c} < |ω| ≤ π  (5.31)
and clearly, the filter has zero phase delay. The ideal lowpass can also be defined in terms of its bandwidth ω_{b} = 2ω_{c}. The DTFT inversion formula gives the corresponding impulse response:
h[n] = sin(ω_{c}n)∕(πn)  (with h[0] = ω_{c}∕π)  (5.32)
The impulse response is a symmetric infinite sequence and the filter is therefore IIR; unfortunately, however, it can be proved that no realizable system (i.e. no algorithm with a finite number of operations per output sample) can exactly implement the above impulse response. More bad news: the decay of the impulse response is slow, going to zero only as 1∕n, and it is not absolutely summable; this means that any FIR approximation of the ideal lowpass obtained by truncating h[n] needs a lot of samples to achieve some accuracy and that, in any case, convergence to the ideal frequency response is only in the mean-square sense. An immediate consequence of these facts is that, when designing realizable filters, we will take an entirely different approach.
Despite these practical difficulties, the ideal lowpass and its associated DTFT pair are so important as a theoretical paradigm, that two special function names are used to denote the above expressions. These are defined as follows:
rect(x)  = 1 for |x| ≤ 1∕2, and 0 otherwise  (5.33)
sinc(x)  = sin(πx)∕(πx) for x ≠ 0, and 1 for x = 0  (5.34)
With these definitions, the frequency response of the ideal lowpass filter can be written as
H_{lp}(e^{jω}) = rect(ω∕ω_{b})  (5.35)
(obviously 2π-periodized over all of ℝ). Its impulse response in terms of bandwidth becomes
h[n] = (ω_{b}∕2π) sinc(ω_{b}n∕2π)  (5.36)
or, in terms of cutoff frequency,
h[n] = (ω_{c}∕π) sinc(ω_{c}n∕π)  (5.37)
The DTFT pair:
(ω_{b}∕2π) sinc(ω_{b}n∕2π) ←→ rect(ω∕ω_{b})  (5.38)
constitutes one of the fundamental relationships of digital signal processing. Note that as ω_{b} → 2π, we reobtain the well-known DTFT pair δ[n] ↔ 1, while as ω_{b} → 0 we can renormalize by (2π∕ω_{b}) to obtain 1 ↔ δ̃(ω), the 2π-periodic Dirac delta.
Ideal Highpass. The ideal highpass filter with cutoff frequency ω_{c} is the complementary filter to the ideal lowpass, in the sense that it eliminates all frequency content below the cutoff frequency. Its frequency response is
H_{h}(e^{jω}) = 1 for ω_{c} ≤ |ω| ≤ π, and 0 for |ω| < ω_{c}  (5.39)
where the 2π-periodicity is, as usual, implicitly assumed. From the relation H_{h}(e^{jω}) = 1 − rect(ω∕2ω_{c}), the impulse response is easily obtained as h_{h}[n] = δ[n] − (ω_{c}∕π) sinc(ω_{c}n∕π).
Ideal Bandpass. The ideal bandpass filter with center frequency ω_{0} and bandwidth ω_{b}, with ω_{b}∕2 < ω_{0} < π − ω_{b}∕2, is defined in the frequency domain between −π and π as
H_{bp}(e^{jω}) = 1 for ω_{0} − ω_{b}∕2 ≤ |ω| ≤ ω_{0} + ω_{b}∕2, and 0 otherwise  (5.40)
where the 2πperiodicity is, as usual, implicitly assumed. It is left as an exercise to prove that the impulse response is
h_{bp}[n] = 2 cos(ω_{0}n) (ω_{b}∕2π) sinc(ω_{b}n∕2π)  (5.41)
Hilbert Filter. The Hilbert filter is defined in the frequency domain as
H(e^{jω}) = −j for 0 < ω < π, and j for −π < ω < 0  (5.42)
where the 2πperiodicity is, as usual, implicitly assumed. Its impulse response is easily computed as
h[n] = 2∕(πn) for n odd, and 0 for n even  (5.43)
Clearly |H(e^{jω})| = 1, so this filter is allpass. It introduces a phase shift of π∕2 in the input signal so that, for instance,
h[n] ∗ cos(ω_{0}n) = sin(ω_{0}n)  (5.44)
as one can verify from (4.39) and (4.40). More generally, the Hilbert filter is used in communication systems to build efficient demodulation schemes, as we will see later. The fundamental concept is the following: consider a real signal x[n] and its DTFT X(e^{jω}); consider also the signal processed by the Hilbert filter, y[n] = h[n] ∗ x[n]. Define the auxiliary spectrum A(e^{jω}) = X(e^{jω}) for 0 ≤ ω < π, and A(e^{jω}) = 0 for −π ≤ ω < 0;
i.e. A(e^{jω}) is the positive-frequency part of the spectrum of x[n]. Since x[n] is real, its DTFT has the symmetry X(e^{−jω}) = X^{∗}(e^{jω}) and therefore we can write X(e^{jω}) = A(e^{jω}) + A^{∗}(e^{−jω}).
By separating the real and imaginary parts we can always write A(e^{jω}) = A_{R}(e^{jω}) + jA_{I}(e^{jω}) and so
For the filtered signal, we know that Y (e^{jω}) = H(e^{jω})X(e^{jω}) and therefore
It is, thus, easy to see that
X(e^{jω}) + jY(e^{jω}) = 2A(e^{jω})  (5.45)
i.e. the spectrum of the signal a[n] = x[n] + jy[n] contains only the positivefrequency components of the original signal x[n]. The signal a[n] is called the analytic signal associated to x[n].
Contrary to ideal filters, realizable filters are LTI systems which can be implemented in practice; this means that there exists an algorithm which computes every output sample with a finite number of operations and using a finite amount of memory storage. Note that the impulse response of a realizable filter need not be finite-support; while FIR filters are clearly realizable, we have seen at least one example of a realizable IIR filter (i.e. the leaky integrator).
Let us consider (informally) the possible mathematical description of an LTI system, seen as a “machine” which takes one input sample at a time and produces a corresponding output sample. Linearity in the inputoutput relationship implies that the description can involve only linear operations, i.e. sums and multiplications by scalars. Time invariance implies that the scalars be constants. Finally, realizability implies that, inside the above mentioned “machine”, there can be only a finite number of adders and multipliers (and, correspondingly, a finite number of memory cells). Such a mathematical relationship goes under the name of constantcoefficient difference equation (CCDE).
In its most general form, a constantcoefficient difference equation defines a relationship between an input signal x[n] and an output signal y[n] as
∑_{k=0}^{N−1} a_{k} y[n − k] = ∑_{k=0}^{M−1} b_{k} x[n − k]  (5.46)
In the rest of this book we restrict ourselves to the case in which all the coefficients a_{k} and b_{k} are real. Usually, a_{0} = 1, so that the above equation can easily be rearranged as
y[n] = ∑_{k=0}^{M−1} b_{k} x[n − k] − ∑_{k=1}^{N−1} a_{k} y[n − k]  (5.47)
Clearly, the above relation defines each output sample y[n] as a linear combination of past and present input values and past output values. However, it is easy to see that, if a_{N−1} ≠ 0, we can for instance rearrange (5.46) as
where a′_{k} = a_{k}∕a_{N−1} and b′_{k} = b_{k}∕a_{N−1}. With the change of variable m = n − N + 1, this becomes
 (5.48) 
which shows that the difference equation can also be computed in another way, namely by expressing y[m] as a linear combination of future values of input and output. It is rather intuitive that the first approach defines a causal behavior, while the second approach is anticausal.
Contrary to the differential equations used in the characterization of continuoustime systems, difference equations can be used directly to translate the transformation operated by the system into an explicit algorithmic form. To see this, and to gain a lot of insight into the properties of difference equations, it may be useful to consider a possible implementation of the system in (5.47), shown as a C code sample in Figure 5.14.
extern double a[N];    // the a_k coefficients (a[0] = 1)
extern double b[M];    // the b_k coefficients
static double x[M];    // delay line for x
static double y[N];    // delay line for y

double GetOutput(double input)
{
    int k;
    double out = 0;
    // shift delay line for x and store the new input value x[n]:
    for (k = M - 1; k > 0; k--)
        x[k] = x[k - 1];
    x[0] = input;
    // shift delay line for y:
    for (k = N - 1; k > 0; k--)
        y[k] = y[k - 1];
    // compute the new output value per (5.47):
    for (k = 0; k < M; k++)
        out += b[k] * x[k];
    for (k = 1; k < N; k++)
        out -= a[k] * y[k];
    // store y[n] in the delay line and return it:
    return (y[0] = out);
}

It is easy to verify that
If we try to compile and execute the code, however, we immediately notice an initialization problem: the first few times (actually, the first max(N − 1, M − 1) times) we call the function, the delay lines which hold past values of x[n] and y[n] have not been explicitly filled with meaningful values. We are back to the problem of setting the initial conditions of the system. The choice which guarantees linearity and time invariance is called zero initial conditions and corresponds to setting the delay lines to zero before starting the algorithm (which, incidentally, is what the C language does by default for static arrays). This choice implies that the system response to the zero sequence is the zero sequence and, in this way, linearity and time invariance can be proven as in Section 5.3.2.
CCDEs provide a powerful operational view of filtering; in very simple cases, such as in Section 5.3.2 or in the case of FIR filters, the impulse response (and therefore the frequency response) can be obtained directly from the filter’s equation. This is not the general case, however, and to analyze a generic realizable filter from its CCDE we need to be able to easily derive the transfer function from the CCDE. Similarly, in order to design a realizable filter which meets a set of requirements, we need to devise a procedure which “tunes” the coefficients in the CCDE until the frequency response is satisfactory while preserving stability; to do this, again, we need a convenient tool linking the CCDE to the magnitude and phase response. This tool will be introduced in the next Chapter, and goes under the name of z-transform.
Example 5.1: Radio transmission
AM radio was one of the first forms of telecommunication and remains to this day a ubiquitous broadcast method due to the ease with which a robust receiver can be assembled. From the hardware point of view, an AM transmitter uses a transducer (i.e. a microphone) to convert sound to an electric signal, and then modulates this signal into a frequency band which corresponds to a region of the electromagnetic spectrum in which propagation is well-behaved (see also Section 12.1.1). An AM receiver simply performs the reverse steps. Here we can neglect the physics of transducers and of antennas and concentrate on an idealized digital AM transmitter.

Modulation. Suppose x[n] is a real, discrete-time signal representing voice or music. Acoustic signals are a type of lowpass (or baseband) signal; while good for our ears (which are baseband receivers), baseband signals are not suitable for direct electromagnetic transmission, since propagation in the baseband is poor and since occupancy of the same band would preclude the existence of multiple radio channels. We need to use modulation in order to shift a baseband signal in frequency and transform it into a bandpass signal prior to transmission. Modulation is accomplished by multiplying the baseband signal by an oscillatory carrier at a given center frequency; note that modulation is not a time-invariant operation. Consider the signal y[n] = x[n] cos(ω_{c}n),
where ω_{c} is the carrier frequency. This corresponds to a cosine modulation since cos(ω_{c}n) = (e^{jω_{c}n} + e^{−jω_{c}n})∕2 and (see (4.56)): Y(e^{jω}) = (1∕2)X(e^{j(ω−ω_{c})}) + (1∕2)X(e^{j(ω+ω_{c})}).
The complex signal c[n] = x[n]e^{jωcn} is called the complex bandpass signal and, while not transmissible in practice, it is a very useful intermediate representation of the modulated signal especially in the case of Hilbert demodulation.
Assume that the baseband signal has spectral support [−ω_{b}∕2, ω_{b}∕2] (i.e. assume that its energy is zero for |ω| > ω_{b}∕2); a common way to express this concept is to say that the bandwidth of the baseband signal is ω_{b}. What is the maximum carrier frequency ω_{c} that we can use to create a bandpass signal? If we look at the effect of modulation on the spectrum and take into account its 2π-periodicity, as in Figure 5.15, we can see that if we choose too large a modulation frequency, the positive passband overlaps with the negative passband of the first repetition. Intuitively, we are trying to modulate too fast and we fall back into the folding of frequencies larger than 2π which we have seen in Example 2.1. In our case, the maximum frequency of the modulated signal is ω_{c} + ω_{b}∕2. To avoid overlap with the first repetition of the spectrum, we must guarantee that:
ω_{c} + ω_{b}∕2 < π  (5.49)
which limits the maximum carrier frequency to ω_{c} < π − ω_{b}∕2.
Demodulation. An AM receiver must undo the modulation process; again, assume we’re entirely in the discretetime domain. The first step is to isolate the channel of interest by using a sharp bandpass filter centered on the modulation frequency ω_{c} (see also Exercise 5.7). Neglecting the impairments introduced by the transmission (noise and distortion) we can initially assume that after filtering the receiver possesses an exact copy of the modulated signal y[n]. The original signal x[n] can be retrieved in several ways; an elegant scheme, for instance, is Hilbert demodulation. The idea behind Hilbert demodulation is to reconstruct the complex bandpass signal c[n] from y[n] as c[n] = y[n] + jh[n] * y[n] where h[n] is the impulse response of the Hilbert filter as given in (5.43). Once this is done, we multiply c[n] by the complex exponential e^{jωcn} and take the real part. This demodulation scheme will be studied in more detail in Section 12.3.1.
A more classic scheme involves multiplying y[n] by a sinusoid at the carrier frequency and filtering the result with a lowpass filter with cutoff at ω_{b}∕2. After the multiplication, the signal is
u[n] = y[n] cos(ω_{c}n) = x[n] cos^{2}(ω_{c}n) = (1∕2)x[n] + (1∕2)x[n] cos(2ω_{c}n)
and the corresponding spectrum is therefore
U(e^{jω}) = (1∕2)X(e^{jω}) + (1∕4)X(e^{j(ω−2ω_{c})}) + (1∕4)X(e^{j(ω+2ω_{c})})
This spectrum is shown in Figure 5.16, with explicit repetitions; note that if the maximum frequency condition in (5.49) is satisfied, the components at twice the carrier frequency that may leak into the [π,π] interval from the neighboring spectral repetitions do not overlap with the baseband. From the figure, if we choose
a lowpass filter with gain 2 and cutoff frequency ω_{b}∕2, i.e. H(e^{jω}) = 2 rect(ω∕ω_{b}), then H(e^{jω})U(e^{jω}) = X(e^{jω}). The component at ω = 2ω_{c} is filtered out and thus the spectrum of the demodulated signal is equal to the spectrum of the original signal. Of course, the ideal lowpass is in practice replaced by a realizable IIR or FIR filter with adequate properties.
Finally, for the fun of it, we can look at a “digital galena” demodulator. Galena receivers (whose general structure is shown in Figure 5.17) are the simplest (and oldest) type of radio receiver: the antenna and the tuning coil form a variable LC bandpass filter to select the band, while a galena crystal, touched by a thin metal wire called the “cat’s whisker”, acts as a rectifying nonlinearity. A pair of high-impedance headphones is connected between the cat’s whisker and ground; the mechanical inertia of the headphones acts as a simple lowpass filter which completes the radio receiver. In a digital simulation of a galena receiver, the antenna and coil are replaced by our sharp digital bandpass filter, at the output of which we find y[n]. The rectified signal at the cat’s whisker can be modeled as y_{r}[n] = |y[n]|; since y_{r}[n] is nonnegative, the integration realized by the crude lowpass in the headphone can reveal the baseband envelope and eliminate most of the high frequency content. The process is best understood in the time domain and is illustrated in Figure 5.18. Note that, spectrally, the qualitative effect of the nonlinearity is indeed to bring the bandpass component back to baseband; as for most nonlinear processing, however, no simple analytical form for the baseband spectrum is available.
Example 5.2: Can IIRs see the future?
If we look at the bottom panel of Figure 5.9 we can notice that the group delay is negative for frequencies above approximately π∕7. Does that mean that we can look into the future? (Hint: no.)

To see what we mean, consider the effect of group delay on a narrowband signal x[n] centered at ω_{0}; a narrowband signal can easily be constructed by modulating a baseband signal s[n] (i.e. a signal such that S(e^{jω}) = 0 for |ω| > ω_{b}, with ω_{b} very small). Set x[n] = s[n] cos(ω_{0}n + θ)
and consider a real-valued filter H(e^{jω}) such that, for τ small, |H(e^{j(ω_{0}+τ)})| ≈ 1 and the antisymmetric phase response is

clearly the group delay of the filter around ω_{0} is g_{d}. If we filter the narrowband signal x[n] with H(e^{jω}), we can write the DTFT of the output for 0 ≤ ω < π as
since, even though the approximation for H(e^{jω}) holds only in a small neighborhood of ω_{0}, X(e^{jω}) is zero everywhere else so “we don’t care”. If we write out the expression for the full spectrum we have

where we have put φ = θ + g_{d}ω_{0}. We can recognize by inspection that the first term is simply s[n] modulated by a cosine with a phase offset of φ; the trailing linear phase term is just a global delay. If we assume g_{d} is an integer we can therefore write
 (5.50) 
so that the effect of the group delay is to delay the narrowband envelope by exactly g_{d} samples. The analysis still holds even if g_{d} is not an integer, as we will see in Chapter 9 when we deal with fractional delays.
Now, if g_{d} is negative, (5.50) seems to imply that the envelope s[n] is advanced in time, so that a filter with negative group delay would be able to produce a copy of the input before the input is even applied; we would have a time machine which can look into the future! Clearly there must be something wrong, but the problem cannot lie with the filter, since the leaky integrator is an example of a perfectly realizable filter with negative group delay in the stopband. In fact, the inconsistency lies with the hypothesis of a perfectly narrowband signal: just as the impulse response of an ideal filter is necessarily an infinite two-sided sequence, so a perfectly narrowband signal cannot have an identifiable “beginning”. When we think of “applying” the input to the filter, we are implicitly assuming a one-sided (or, more likely, a finite-support) signal, and such a signal has nonzero spectral components at all frequencies. The net effect of these is that the overall delay for the signal will always be nonnegative.
Discrete-time filters are covered in all signal processing books; a good review is given in Discrete-Time Signal Processing, by A. V. Oppenheim and R. W. Schafer (Prentice-Hall, last edition in 1999).
Exercise 5.1: Linearity and time-invariance – I. Consider the transformation ℋ{x[n]} = nx[n]. Does ℋ define an LTI system?
Exercise 5.2: Linearity and time-invariance – II. Consider a discrete-time system ℋ{⋅}. When the input is the signal x[n] = cos((2π∕5)n), the output is y[n] = sin((π∕2)n). Can the system be linear and time-invariant? Explain.
Exercise 5.3: Finitesupport convolution. Consider the finitesupport signal h[n] defined as
Exercise 5.4: Convolution – I. Let x[n] be a discretetime sequence defined as
for some odd integer M.
Exercise 5.5: Convolution – II. Consider the following discretetime signals:
x[n]  = cos(1.5n)  
y[n]  = sinc 
Exercise 5.6: System properties. Consider the following input–output relations and, for each of the underlying systems, determine whether the system is linear, time-invariant, BIBO stable, causal or anticausal. Characterize any LTI systems among them by their impulse response.
Exercise 5.7: Ideal filters. Derive the impulse response of a bandpass filter with center frequency ω_{0} and passband ω_{b}:
(Hint: consider the following ingredients: a cosine of frequency ω_{0}, a lowpass filter of bandwidth ω_{b} and the modulation theorem.)
Exercise 5.8: Zero-phase filtering. Consider an operator which turns a sequence into its time-reversed version:
Suppose you have an LTI filter with impulse response h[n] and you perform the following sequence of operations in the following order:
s[n]  = x[−n]
r[n]  = h[n] ∗ s[n]
w[n]  = r[−n]
y[n]  = h[n] ∗ w[n]
Exercise 5.9: Nonlinear signal processing. Consider the system implementing the input–output relation y[n] = x^{2}[n].
Now consider the following cascade system:

where G(e^{jω}) is the following ideal highpass filter:
(as per usual, G(e^{jω}) is 2π-periodic, i.e. prolonged by periodicity outside of [−π, π]). The output of the cascade is denoted by v[n].
Exercise 5.10: Analytic signals and modulation. In this exercise we explore a modulation–demodulation scheme commonly used in data transmission systems. Consider two real sequences x[n] and y[n], which represent two data streams that we want to transmit. Assume that their spectra are of lowpass type, i.e. X(e^{jω}) = Y(e^{jω}) = 0 for |ω| > ω_{c}. Consider further the following derived signal:
and the modulated signal:
The signal r[n] is called a complex bandpass signal. Of course it cannot be transmitted as such, since it is complex. The transmitted signal is, instead,
This modulated signal is an example of Quadrature Amplitude Modulation (QAM).
Now we want to recover x[n] and y[n] from s[n]. To do so, follow these steps:
Mathematically, the z-transform is a mapping between complex sequences and analytic functions on the complex plane. Given a discrete-time signal x[n], the z-transform of x[n] is formally defined as the complex function of a complex variable z ∈ ℂ

X(z) = ∑_{n=−∞}^{∞} x[n] z^{−n}  (6.1)
Contrary to the Fourier transform (as well as to other well-known transforms such as the Laplace transform or the wavelet transform), the z-transform is not an analysis tool per se, in that it does not offer a new physical insight into the nature of signals and systems. The z-transform, however, derives its status as a fundamental tool in digital signal processing from two key features:
Probably the best approach to the z-transform is to consider it as a clever mathematical transformation which facilitates the manipulation of complex sequences; for discrete-time filters, the z-transform bridges the algorithmic side (i.e. the CCDE) to the analytical side (i.e. the spectral properties) in an extremely elegant, convenient and ultimately beautiful way.
To see the usefulness of the z-transform in the context of the analysis and the design of realizable filters, it is sufficient to consider the following two formal properties of the z-transform operator: linearity, i.e. αx[n] + βy[n] ⟼ αX(z) + βY(z), and time-shifting, i.e. x[n − k] ⟼ z^{−k} X(z).
In the above, we have conveniently ignored all convergence issues for the z-transform; these will be addressed shortly but, for the time being, let us just make use of the formalism as it stands.
Consider the generic filter CCDE (constant-coefficient difference equation) in (5.46):

∑_{k=0}^{N−1} a_{k} y[n − k] = ∑_{k=0}^{M−1} b_{k} x[n − k]

If we apply the z-transform operator to both sides and exploit the linearity and time-shifting properties, we have

∑_{k=0}^{N−1} a_{k} z^{−k} Y(z) = ∑_{k=0}^{M−1} b_{k} z^{−k} X(z)  (6.2)

so that

H(z) = Y(z)∕X(z) = (∑_{k=0}^{M−1} b_{k} z^{−k}) ∕ (∑_{k=0}^{N−1} a_{k} z^{−k})  (6.3)

or, equivalently,

Y(z) = H(z) X(z)  (6.4)
H(z) is called the transfer function of the LTI filter described by the CCDE. The following convolution property holds:

y[n] = (h * x)[n]  ⟺  Y(z) = H(z) X(z)  (6.5)

which can easily be verified using an approach similar to the one used in Section 5.4.2.
As we saw in Section 5.7.1, a CCDE can be rearranged to express either a causal or a noncausal realization of a filter. This ambiguity is reflected in the z-transform and can be made explicit by the following example. Consider the sequences

x_{1}[n] = u[n]  (6.6)
x_{2}[n] = δ[n] − u[−n]  (6.7)

The z-transform of the first sequence is, formally,

X_{1}(z) = ∑_{n=0}^{∞} z^{−n} = 1∕(1 − z^{−1})  (6.8)

(again, let us neglect convergence issues for the moment). For the second sequence we have

X_{2}(z) = 1 − ∑_{n=0}^{∞} z^{n} = 1 − 1∕(1 − z) = 1∕(1 − z^{−1})  (6.9)
so that, at least formally, X_{1}(z) = X_{2}(z). In other words, the z-transform is not an invertible operator or, more precisely, it is invertible only up to a causality specification. If we look in more detail, the sum in (6.8) converges only for |z| > 1, while the sum in (6.9) converges only for |z| < 1. This is actually a general fact: the values for which a z-transform exists define the causality or anticausality of the underlying sequence.
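This behavior is easy to verify numerically; the following Python sketch (our own illustration, not part of the text) sums a large number of terms of both formal series and compares them with 1∕(1 − z^{−1}) inside the respective regions |z| > 1 and |z| < 1:

```python
import cmath

def zt_causal(z, terms=2000):
    # partial sum of the z-transform of x1[n] = u[n]
    return sum(z ** (-n) for n in range(terms))

def zt_anticausal(z, terms=2000):
    # partial sum of the z-transform of x2[n] = delta[n] - u[-n]
    return 1 - sum(z ** n for n in range(terms))

def closed_form(z):
    # 1 / (1 - z^{-1})
    return 1 / (1 - 1 / z)

z_out = 1.5 * cmath.exp(0.3j)    # |z| > 1: the causal sum converges here
z_in = 0.5 * cmath.exp(0.3j)     # |z| < 1: the anticausal sum converges here
assert abs(zt_causal(z_out) - closed_form(z_out)) < 1e-6
assert abs(zt_anticausal(z_in) - closed_form(z_in)) < 1e-6
```

Swapping the two evaluation points makes either partial sum blow up, mirroring the two disjoint regions of convergence.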
We are now ready to address the convergence issues that we have put aside so far. For any given sequence x[n], the set of points of the complex plane for which ∑_{n} x[n] z^{−n} exists and is finite is called the region of convergence (ROC) of the z-transform. In order to study the properties of this region, it is useful to split the sum in (6.1) as
X(z) = X_{a}(z) + X_{c}(z)  (6.10)

with

X_{a}(z) = ∑_{n=−M}^{−1} x[n] z^{−n}  (6.11)

X_{c}(z) = ∑_{n=0}^{N} x[n] z^{−n}  (6.12)

where N,M ≥ 0 and where both N and M can be infinity. Now, for X(z_{0}) to exist and be finite, both power series X_{a}(z) and X_{c}(z) must converge in z_{0}; since they are power series, when they do converge, they converge absolutely. As a consequence, for all practical purposes, we define the ROC in terms of absolute convergence:^{(1)}

ROC = { z ∈ ℂ : ∑_{n=−∞}^{∞} |x[n]| |z|^{−n} < ∞ }  (6.13)
Then the following properties are easily derived. If z_{0} belongs to the ROC, then any z_{1} with |z_{1}| > |z_{0}| also lies in the region of convergence of X_{c}(z): for n ≥ 0 we have |x[n]| |z_{1}|^{−n} ≤ |x[n]| |z_{0}|^{−n}, so that X_{c}(z) is absolutely convergent in z_{1} as well. Symmetrically, if z_{0} belongs to the ROC, then any z_{1} with |z_{1}| < |z_{0}| lies in the region of convergence of X_{a}(z): for n < 0 we have |x[n]| |z_{1}|^{−n} ≤ |x[n]| |z_{0}|^{−n}, so that X_{a}(z) is absolutely convergent in z_{1} as well.
The z-transform provides us with a quick and easy way to test the stability of a linear system. Recall from Section 5.2.2 that a necessary and sufficient condition for an LTI system to be BIBO stable is the absolute summability of its impulse response. This is equivalent to saying that a system is BIBO stable if and only if the z-transform of its impulse response is absolutely convergent on the unit circle |z| = 1. In other words, a system is BIBO stable if the ROC of its transfer function includes the unit circle.
For rational transfer functions, the analysis of the ROC is quite simple; indeed, the only "trouble spots" for convergence are the values for which the denominator of (6.3) is zero. These values are called the poles of the transfer function and clearly they must lie outside of the ROC. As a consequence, we have an extremely quick and practical rule to determine the stability of a realizable filter.
For an anticausal system the procedure is symmetrical; once the smallest-magnitude pole p_{0} is known, the ROC will be a disk of radius |p_{0}| and therefore, in order to have stability, all the poles will have to be outside of the unit circle.
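The stability rules for causal and anticausal realizations boil down to comparing pole magnitudes with the unit circle; the following Python sketch (our own illustration, with an arbitrarily chosen pole pair) encodes both cases:

```python
import cmath

def bibo_stable(poles, causal=True):
    # causal: the ROC extends outward from the largest-magnitude pole, so it
    # includes the unit circle iff all poles lie strictly inside it;
    # anticausal: the ROC is a disk, so all poles must lie strictly outside.
    if causal:
        return all(abs(p) < 1 for p in poles)
    return all(abs(p) > 1 for p in poles)

poles = [0.9 * cmath.exp(1j * cmath.pi / 4), 0.9 * cmath.exp(-1j * cmath.pi / 4)]
assert bibo_stable(poles, causal=True)        # stable as a causal filter
assert not bibo_stable(poles, causal=False)   # unstable as an anticausal one
assert bibo_stable([1.25], causal=False)      # stable only anticausally
```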
The rational transfer function derived in (6.3) can be written out explicitly in terms of the CCDE's coefficients, as follows:

H(z) = (∑_{n=0}^{M−1} b_{n} z^{−n}) ∕ (∑_{n=0}^{N−1} a_{n} z^{−n})  (6.14)

The transfer function is the ratio of two polynomials in z^{−1}, where the degree of the numerator polynomial is M − 1 and that of the denominator polynomial is N − 1. As a consequence, the transfer function can be rewritten in factored form as

H(z) = (b_{0}∕a_{0}) (∏_{n=1}^{M−1} (1 − z_{n} z^{−1})) ∕ (∏_{n=1}^{N−1} (1 − p_{n} z^{−1}))  (6.15)

where the z_{n} are the M − 1 complex roots of the numerator polynomial and are called the zeros of the system; the p_{n} are the N − 1 complex roots of the denominator polynomial and, as we have seen, they are called the poles of the system. Both poles and zeros can have arbitrary multiplicity. Clearly, if z_{i} = p_{k} for some i and k (i.e. if a pole and a zero coincide), the corresponding first-order factors cancel each other out and the degrees of the numerator and denominator are both decreased by one. In general, it is assumed that such factors have already been removed and that the numerator and denominator polynomials of a given rational transfer function are coprime.
The poles and the zeros of a filter are usually represented graphically on the complex plane as crosses and dots, respectively. This allows for a quick visual assessment of stability which, for a causal system, consists of checking whether all the crosses are inside the unit circle (or, for anticausal systems, outside).
The pole-zero plot can exhibit distinctive patterns according to the properties of the filter.
Real-Valued Filters. If the filter coefficients are real-valued (and this is the only case that we consider in this textbook), both the numerator and the denominator polynomials have real-valued coefficients. We can now recall a fundamental result from algebra: the roots of a polynomial with real-valued coefficients are either real or occur in complex-conjugate pairs. So, if z_{0} is a complex zero of the system, z_{0}^{*} is a zero as well; similarly, if p_{0} is a complex pole, so is p_{0}^{*}. The pole-zero plot will therefore show a symmetry around the real axis (Fig. 6.2a).

Linear-Phase FIR Filters. First of all, note that the pole-zero plot for an FIR filter is actually just a zero plot, since FIRs have no poles.^{(2)} A particularly important case is that of linear phase FIR filters; as we will see in detail in Section 7.2.2, linear phase imposes some symmetry constraints on the CCDE coefficients (which, of course, coincide with the filter taps). These constraints have a remarkable consequence: if z_{0} is a (complex) zero of the system, 1∕z_{0} is a zero as well. Since we consider real-valued FIR filters exclusively, the presence of a complex zero in z_{0} implies the existence of three other zeros, namely in 1∕z_{0}, z_{0}^{*} and 1∕z_{0}^{*} (Fig. 6.2b). See also the discussion in Section 7.2.2.
We have seen in Section 5.2.1 that the effect of a cascade of two or more filters is that of a single filter whose impulse response is the convolution of all of the filters' impulse responses. By the convolution theorem, this means that the overall transfer function of a cascade of K filters H_{i}, i = 1,…,K is simply the product of the single transfer functions H_{i}(z):

H(z) = ∏_{i=1}^{K} H_{i}(z)
If all filters are realizable, we can consider the factored form of each H_{i}(z) as in (6.15). In the product of transfer functions, it may happen that some of the poles of a given H_{i}(z) coincide with the zeros of another transfer function, which leads to a pole-zero cancellation in the overall transfer function. This is a method that can be used (at least theoretically) to stabilize an otherwise unstable filter. If one of the poles of the system (assuming causality) lies outside of the unit circle, this pole can be compensated for by cascading an appropriate first- or second-order FIR section to the original filter. In practical realizations, care must be taken to make sure that the cancellation is not jeopardized by numerical precision problems.
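Pole-zero cancellation by cascading can be illustrated with elementary polynomial arithmetic; in the following Python sketch (our own example, with a hypothetical unstable pole at z = 1.25), the numerator of the cascade acquires exactly the factor needed to cancel the unstable pole:

```python
def polymul(a, b):
    # product of two polynomials in z^{-1} (coefficients by increasing power)
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# original causal filter, unstable pole at z = 1.25: H1(z) = 1/(1 - 1.25 z^{-1})
b1, a1 = [1.0], [1.0, -1.25]
# compensating FIR section: H2(z) = 1 - 1.25 z^{-1}
b2, a2 = [1.0, -1.25], [1.0]
num, den = polymul(b1, b2), polymul(a1, a2)   # cascade = product of transfer functions
assert num == den                             # the unstable pole is cancelled: H(z) = 1
```

In finite-precision arithmetic the coefficients of the compensating section are only approximately equal to those of the pole, which is why the cancellation can fail in practice, as noted above.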
The pole-zero plot represents a convenient starting point to estimate the shape of the magnitude of a filter's transfer function. The basic idea is to consider the absolute value |H(z)|, which is a three-dimensional plot (|H(z)| being a real function of a complex variable). To see what happens to |H(z)|, it is useful to imagine a "rubber sheet" laid over the complex plane; the sheet is pushed up to infinity at each pole and pinned down to zero at each zero, so that the shape of |H(z)| is that of a very lopsided "circus tent". The magnitude of the transfer function is just the height of this circus tent measured around the unit circle.
In practice, to sketch a transfer function (in magnitude) given the pole-zero plot, we proceed as follows. Let us start by considering the upper half of the unit circle, which maps to the [0,π] interval for the ω axis in the DTFT plot; for real-valued filters, the magnitude response is an even function and, therefore, the [−π,0] interval need not be considered explicitly. Then:


As an example, the magnitude responses of the pole-zero plots in Figure 6.2 are displayed in Figures 6.3 and 6.4.
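The "circus tent" picture can also be probed numerically: the magnitude response at e^{jω} is the product of the distances from e^{jω} to the zeros, divided by the product of the distances to the poles. The Python sketch below (a made-up resonator example of ours) confirms that the response peaks where the unit circle passes close to a pole:

```python
import cmath

def mag_response(omega, zeros, poles, gain=1.0):
    # |H(e^{jw})| = gain * prod |e^{jw} - z_k| / prod |e^{jw} - p_k|
    z = cmath.exp(1j * omega)
    num = gain
    for z0 in zeros:
        num *= abs(z - z0)
    den = 1.0
    for p0 in poles:
        den *= abs(z - p0)
    return num / den

# hypothetical resonator: pole pair near e^{+-j pi/3}, zeros at z = +1 and z = -1
zeros = [1.0, -1.0]
poles = [0.95 * cmath.exp(1j * cmath.pi / 3), 0.95 * cmath.exp(-1j * cmath.pi / 3)]
# the "tent" is highest where the unit circle passes near a pole
assert mag_response(cmath.pi / 3, zeros, poles) > mag_response(2.8, zeros, poles)
```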
We will quickly revisit the examples of the previous chapter to show the versatility of the z-transform.

Moving Average. From the impulse response in (5.14), the transfer function of the moving average filter is

H(z) = (1∕N) ∑_{n=0}^{N−1} z^{−n} = (1∕N) (1 − z^{−N}) ∕ (1 − z^{−1})  (6.16)

from which the frequency response (5.26) is easily derived by setting z = e^{jω}. It is easy to see that the zeros of the filter are on all the N-th roots of unity except for z = 1, where the numerator and denominator in (6.16) cancel each other out. A factored representation of the transfer function for the moving average is therefore

H(z) = (1∕N) ∏_{k=1}^{N−1} (1 − e^{j(2π∕N)k} z^{−1})  (6.17)

and the pole-zero plot (for N = 8) is shown in Figure 6.5(a). There being no poles, the filter is unconditionally stable.
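A quick numerical check of ours, in Python, confirms that the moving average vanishes at the N-th roots of unity other than z = 1 while exhibiting unit gain at DC; the direct sum form is used to avoid the indeterminate 0∕0 ratio at z = 1:

```python
import cmath

N = 8

def H_ma(z):
    # moving average transfer function via the direct sum
    return sum(z ** (-n) for n in range(N)) / N

for k in range(1, N):
    zk = cmath.exp(2j * cmath.pi * k / N)    # N-th roots of unity, z != 1
    assert abs(H_ma(zk)) < 1e-12             # each is a zero of the filter
assert abs(H_ma(1.0) - 1.0) < 1e-12          # unit gain at z = 1 (DC)
```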
Leaky Integrator. From the CCDE for the leaky integrator (5.16) we immediately have

Y(z) = λ z^{−1} Y(z) + (1 − λ) X(z)  (6.18)

from which

H(z) = (1 − λ) ∕ (1 − λ z^{−1})  (6.19)

The transfer function has therefore a single real pole in z = λ; for a causal realization, this implies that the ROC is the region of the complex plane extending outward from the circle of radius |λ|. The causal filter is stable if λ lies inside the unit circle, i.e. if |λ| < 1. An example of a pole-zero plot together with the associated ROC is shown in Figure 6.5(b) for the (stable) case of λ = 0.65.
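Assuming the normalized leaky-integrator CCDE y[n] = λy[n − 1] + (1 − λ)x[n], the impulse response is h[n] = (1 − λ)λ^{n}u[n]; the following sketch (our own check) verifies both its absolute summability (hence BIBO stability for |λ| < 1) and the closed-form transfer function on the unit circle:

```python
import cmath

lam = 0.65
h = [(1 - lam) * lam ** n for n in range(400)]   # truncated impulse response
# absolute summability (BIBO stability): the geometric sum equals 1 - lam**400
assert abs(sum(abs(v) for v in h) - 1.0) < 1e-10
# partial z-transform on the unit circle vs the closed form (1-lam)/(1 - lam z^{-1})
z = cmath.exp(0.7j)
Hz = sum(hn * z ** (-n) for n, hn in enumerate(h))
assert abs(Hz - (1 - lam) / (1 - lam / z)) < 1e-10
```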
Example 6.1: Transform of periodic functions
The z-transform converges without fuss for infinite-energy sequences which the Fourier transform has some difficulties dealing with. For instance, the z-transform manages to "bring down" the unit step because of the vanishing power of z^{−n} for |z| > 1 and n large, and this is the case for all one-sided sequences which grow no more than exponentially. However, if z^{−n} → 0 for n → ∞, then necessarily z^{−n} → ∞ for n → −∞, and this may pose a problem for the convergence of the z-transform in the case of two-sided sequences. In particular, the z-transform does not converge in the case of periodic signals, since only one side of the repeating pattern is "brought down" while the other is amplified limitlessly. We can circumvent this impasse by "killing" half of the periodic signal with a unit step. Take, for instance, the one-sided cosine

x[n] = cos(ω_{0}n) u[n];

its z-transform can be derived as
X(z) = ∑_{n=−∞}^{∞} z^{−n} cos(ω_{0}n) u[n]
     = ∑_{n=0}^{∞} z^{−n} cos(ω_{0}n)
     = (1∕2) ∑_{n=0}^{∞} e^{jω_{0}n} z^{−n} + (1∕2) ∑_{n=0}^{∞} e^{−jω_{0}n} z^{−n}
     = (1∕2) [ 1∕(1 − e^{jω_{0}} z^{−1}) + 1∕(1 − e^{−jω_{0}} z^{−1}) ]
     = (1 − z^{−1} cos ω_{0}) ∕ (1 − 2 z^{−1} cos ω_{0} + z^{−2})
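The closed form (1 − z^{−1} cos ω_{0})∕(1 − 2z^{−1} cos ω_{0} + z^{−2}) can be checked numerically for any z inside the ROC |z| > 1; a short Python verification of ours:

```python
import math, cmath

w0 = 0.9
z = 1.3 * cmath.exp(0.4j)          # any |z| > 1 lies in the ROC
# partial sum of the z-transform of cos(w0 n) u[n]
partial = sum(math.cos(w0 * n) * z ** (-n) for n in range(3000))
# closed form (1 - z^{-1} cos w0) / (1 - 2 z^{-1} cos w0 + z^{-2})
closed = (1 - math.cos(w0) / z) / (1 - 2 * math.cos(w0) / z + z ** (-2))
assert abs(partial - closed) < 1e-9
```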
Example 6.2: The impossibility of ideal filters
The z-transform of an FIR impulse response can be expressed as a simple polynomial P(z) of degree L − 1, where L is the number of nonzero taps of the filter (we can neglect leading factors of the form z^{−N}). The fundamental theorem of algebra states that such a polynomial has at most L − 1 roots; as a consequence, the frequency response of an FIR filter can never be identically zero over a frequency interval since, if it were, its z-transform would have an infinite number of roots. Similarly, by considering the polynomial P(z) − C, we can prove that the frequency response can never be equal to a constant C over an interval, which proves the impossibility of achieving ideal (i.e. "brickwall") responses with an FIR filter. The argument can easily be extended to rational transfer functions, confirming the impossibility of a realizable filter whose characteristic is piecewise perfectly flat.
The z-transform is closely linked to the solution of linear, constant-coefficient difference equations. For a more complete treatment, see, for example, R. Vich, Z Transform Theory and Applications (Springer, 1987), or A. J. Jerri, Linear Difference Equations with Discrete Transforms Method (Kluwer, 1996).
Exercise 6.1: Interleaving. Consider two two-sided sequences h[n] and g[n] and consider a third sequence x[n] which is built by interleaving the values of h[n] and g[n]:

x[n] = …, h[−3], g[−3], h[−2], g[−2], h[−1], g[−1], h[0], g[0], h[1], g[1], h[2], g[2], h[3], g[3], …
Exercise 6.2: Properties of the z-transform. Let x[n] be a discrete-time sequence and X(z) its corresponding z-transform with appropriate ROC.
Exercise 6.3: Stability. Consider a causal discrete-time system represented by the following difference equation:
Exercise 6.4: Pole-zero plot and stability – I. Consider a causal LTI system with the following transfer function:
Sketch the pole-zero plot of the transfer function and specify its region of convergence. Is the system stable?
Exercise 6.5: Pole-zero plot and stability – II. Consider the transfer function of an anticausal LTI system
Sketch the pole-zero plot of the transfer function and specify its region of convergence. Is the system stable?
Exercise 6.6: Pole-zero plot and magnitude response. In the following pole-zero plots, multiple zeros at the same location are indicated by the multiplicity number shown to the upper right of the zero. Sketch the magnitude of each frequency response and determine the type of filter.

Exercise 6.7: z-transform and magnitude response. Consider a causal LTI system described by the following transfer function:
Now fire up your favorite numerical package (or write some C code) and consider the following length-128 input signal:
Exercise 6.8: DFT and z-transform. It is immediately obvious that the DTFT of a sequence x[n] is simply its z-transform evaluated on the unit circle, i.e. for z = e^{jω}. Equivalently, for a finite-length signal x, the DFT is simply the z-transform of the finite-support extension of the signal evaluated at z = W_{N}^{−k} for k = 0,…,N − 1:
By taking advantage of this fact, derive a simple expression for the DFT of the time-reversed signal
Exercise 6.9: A CCDE. Consider an LTI system described by the following constant-coefficient difference equation:
Exercise 6.10: Inverse transform. Write out the discrete-time signal x[n] whose z-transform is
Exercise 6.11: Signal transforms. Consider a discrete-time signal x[n] whose z-transform is
Consider now the length-4 signal y[n]:
In discrete-time signal processing, filter design is the art of turning a set of requirements into a well-defined numerical algorithm. The requirements, or specifications, are usually formulated in terms of the filter's frequency response; the design problem is solved by finding the appropriate coefficients for a suitable difference equation which implements the filter and by specifying the filter's architecture. Since realizable filters are described by rational transfer functions, filter design can usually be cast in terms of a polynomial optimization procedure for a given error measure. Additional design choices include the computational cost of the designed filters, i.e. the number of mathematical operations and the storage necessary to compute each output sample. Finally, the structure of the difference equation defines an explicit operational procedure for computing the filter's output values; by arranging the terms of the equation in different ways, we can arrive at different algorithmic structures for the implementation of digital filters.
As we have seen, a realizable filter is described by a rational transfer function; designing a filter corresponds to determining the coefficients of the polynomials in the transfer function with respect to the desired filter characteristics. For an FIR filter of length M, there are M coefficients that need to be determined, and they correspond directly to the filter's impulse response. Similarly, for an IIR filter with a numerator of degree M − 1 and a denominator of degree N − 1, there are M + N − 1 coefficients to determine (since we always assume a_{0} = 1). The main questions are the following:
As is apparent, realworld filters are designed with a variety of practical requirements in mind, most of which are conflicting. One such requirement, for instance, is to obtain a low “computational price” for the filtering operation; this cost is obviously proportional to the number of coefficients in the filter, but it also depends heavily on the underlying hardware architecture. The tradeoffs between disparate requirements such as cost, precision or numerical stability are very subtle and not altogether obvious; the art of the digital filter designer, although probably less dazzling than the art of the analog filter designer, is to determine the best design strategy for a given practical problem.
Filter design has a long and noble history in the analog domain: a linear electronic network can be described in terms of a differential equation linking, for instance, the voltage as a function of time at the input of the network to the voltage at the output. The arrangement of the capacitors, inductors and resistors in the network determines the form of the differential equation, while their values determine its coefficients. A fundamental difference between an analog filter and a digital filter is that in the former the transformation from input to output is almost always considered instantaneous (i.e. the propagation effects along the circuit are neglected). In digital filters, on the other hand, the delay is always explicit and is actually the fundamental building block in a processing system. Because of the physical properties of capacitors, which are ubiquitous in analog filters, the transfer function of an analog filter (expressed in terms of its Laplace transform) is "similar" to the transfer function of an IIR filter, in the sense that it contains both poles and zeros. In a sense, IIR filters can be considered as the discrete-time counterpart of classic analog filters. FIR filters, on the other hand, are the flagship of digital signal processing; while one could conceive of an analog equivalent to an FIR, its realization would require the use of analog delay lines, which are costly and impractical components to manufacture. In a digital signal processing scenario, on the other hand, the designer can freely choose between two lines of attack with respect to a filtering problem, IIR or FIR, and therefore it is important to highlight the advantages and disadvantages of each.
FIR Filters. The main advantages of FIR filters can be summarized as follows:
While their disadvantages are mainly:
IIR Filters. IIR filters are often an afterthought in the context of digital signal processing, in the sense that they are designed by mimicking established design procedures in the analog domain; their appeal lies mostly in their compact formulation: for a given computational cost, i.e. for a given number of operations per input sample, they can offer a much better magnitude response than an equivalent FIR filter. Furthermore, there are a few fundamental processing tasks (such as DC removal, as we will see later) which are the natural domain of IIR filters. The drawbacks of IIR filters, however, mirror in the negative the advantages of FIRs. The main advantages of IIR filters can be summarized as follows:
While their disadvantages are mainly:
For these reasons, in this book, we will concentrate mostly on the FIR design problem and we will consider IIR filters only in conjunction with some specific processing tasks which are often encountered in practice.
A set of filter specifications represents a set of guidelines for the design of a realizable filter. Generally, the specifications are formulated in the frequency domain and are cast in the form of boundaries for the magnitude of the frequency response; less frequently, the specifications will take the phase response into account as well.
A set of filter specifications is best illustrated by example: suppose our goal is to design a half-band lowpass filter, i.e. a lowpass filter with cutoff frequency π∕2. The filter will possess a passband, i.e. a frequency range over which the input signal is unaffected, and a stopband, i.e. a frequency range where the input signal is annihilated. In order to turn these requirements into specifications, the following practical issues must be taken into account:
These specifications can be represented graphically as in Figure 7.1; note that, since we are dealing with real-valued filter coefficients, it is sufficient to specify the desired frequency response only over the [0,π] interval, the magnitude response being symmetric. The filter design problem now consists of finding the minimum-size FIR or IIR filter which fulfills the required specifications. As an example, Figure 7.2 shows an IIR filter which does not fulfill the specifications, since the stopband error is above the maximum error at the beginning of the stopband. Similarly, Figure 7.3 shows an FIR filter which breaks the specifications in the passband. Finally, Figure 7.4 shows a monotonic IIR filter which matches and exceeds the specifications (i.e. the error is always smaller than the maximum error).
In this section we will explore two fundamental strategies for FIR filter design, the window method and the minimax (or Parks-McClellan) method. Both methods seek to minimize the error between a desired (and often ideal) filter transfer function and the transfer function of the designed filter; they differ in the error measure which is used in the minimization. The window method is completely straightforward and is often used for quick designs. The minimax method, on the other hand, is the procedure of choice for accurate, optimal filters. Both methods will be illustrated with respect to the design of a lowpass filter.
We have already seen in Section 5.6 that if there are no constraints (not even realizability), the best lowpass filter with cutoff frequency ω_{c} is the ideal lowpass filter. The impulse response is therefore the inverse Fourier transform of the desired transfer function:

h[n] = (1∕2π) ∫_{−ω_{c}}^{ω_{c}} e^{jωn} dω = sin(ω_{c}n) ∕ (πn)

The resulting filter, as we saw, is an ideal filter and it cannot be represented by a rational transfer function with a finite number of coefficients.
Impulse Response Truncation. A simple idea to obtain a realizable filter is to take a finite number of samples from the ideal impulse response and use them as the coefficients of a (possibly rather long) FIR filter:^{(2)}

ĥ[n] = h[n] for |n| ≤ N, ĥ[n] = 0 otherwise  (7.1)
This is a (2N + 1)-tap FIR obtained by truncating an ideal impulse response (Figs 5.10 and 5.11). Note that the filter is noncausal, but that it can be made causal by using an N-tap delay; it is usually easier to design FIRs by considering a noncausal version first, especially if the resulting impulse response is symmetric (or antisymmetric) around n = 0. Although this approximation was derived in a sort of "intuitive" way, it actually satisfies a very precise approximation criterion, namely the minimization of the mean square error (MSE) between the original and the approximated filters. Denote by E_{2} this error, that is

E_{2} = (1∕2π) ∫_{−π}^{π} |H(e^{jω}) − Ĥ(e^{jω})|^{2} dω
We can apply Parseval's theorem (see (4.59)) to obtain the equivalent expression in the discrete-time domain:

E_{2} = ∑_{n=−∞}^{∞} |h[n] − ĥ[n]|^{2}

If we now recall that ĥ[n] = 0 for |n| > N, we have

E_{2} = ∑_{n=−N}^{N} |h[n] − ĥ[n]|^{2} + ∑_{n<−N} |h[n]|^{2} + ∑_{n>N} |h[n]|^{2}

Obviously the last two terms are nonnegative and independent of ĥ[n]. Therefore, the minimization of E_{2} with respect to ĥ[n] is equivalent to the minimization of the first term only, and this is easily obtained by letting

ĥ[n] = h[n] for |n| ≤ N
In spite of the attractiveness of such a simple and intuitive solution, there is a major drawback. If we consider the frequency response of the approximated filter, we have

Ĥ(e^{jω}) = ∑_{n=−N}^{N} h[n] e^{−jωn}

which means that Ĥ(e^{jω}) is an approximation of H(e^{jω}) obtained by using only 2N + 1 Fourier coefficients. Since H(e^{jω}) has a jump discontinuity in ω_{c}, Ĥ(e^{jω}) incurs the well-known Gibbs phenomenon around ω_{c}: when a discontinuous function is approximated with a finite number of Fourier coefficients, the maximum error in an interval around the jump discontinuity is independent of the number of terms in the approximation and is always equal to roughly 9% of the jump. In other words, we have no control over the maximum error in the magnitude response. This is apparent in Figure 7.5, where Ĥ(e^{jω}) is plotted for increasing values of N; the maximum error does not decrease with increasing N and, therefore, there is no way to meet a set of specifications which require less than 9% error in either the stopband or the passband.
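The persistence of the Gibbs overshoot is easy to observe numerically. The following Python sketch (our own experiment) truncates the ideal half-band lowpass response and measures the peak of |Ĥ(e^{jω})| above the ideal passband value of 1; the overshoot stays around 9% whether N = 20 or N = 100:

```python
import math, cmath

def truncated_lowpass(N, wc=math.pi / 2):
    # ideal lowpass impulse response h[n] = sin(wc n)/(pi n), truncated to |n| <= N
    def h(n):
        return wc / math.pi if n == 0 else math.sin(wc * n) / (math.pi * n)
    return [h(n) for n in range(-N, N + 1)]

def overshoot(N, wc=math.pi / 2):
    # peak of |H_hat(e^{jw})| over a frequency grid, minus 1
    taps = truncated_lowpass(N, wc)
    peak = 0.0
    for k in range(2001):
        w = k * math.pi / 2000
        H = sum(hn * cmath.exp(-1j * w * n)
                for hn, n in zip(taps, range(-N, N + 1)))
        peak = max(peak, abs(H))
    return peak - 1.0

# the ~9% overshoot does not decrease as the filter gets longer
assert 0.05 < overshoot(20) < 0.15
assert 0.05 < overshoot(100) < 0.15
```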

The Rectangular Window. Another way to look at the resulting approximation is to express ĥ[n] as

ĥ[n] = h[n] w[n]  (7.2)

with

w[n] = 1 for |n| ≤ N, w[n] = 0 otherwise  (7.3)

Here w[n] is called a rectangular window of length 2N + 1 taps, which in this case is centered at n = 0.
We know from the modulation theorem in (5.22) that the Fourier transform of (7.2) is the convolution (in the space of 2π-periodic functions) of the Fourier transforms of h[n] and w[n]:

Ĥ(e^{jω}) = (1∕2π) ∫_{−π}^{π} H(e^{jθ}) W(e^{j(ω−θ)}) dθ
It is easy to compute W(e^{jω}) as

W(e^{jω}) = ∑_{n=−N}^{N} e^{−jωn} = sin(ω(2N + 1)∕2) ∕ sin(ω∕2)  (7.4)
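The closed form of the rectangular window's transform, the Dirichlet kernel sin(ω(2N + 1)∕2)∕sin(ω∕2), can be verified against the direct sum; a quick check of ours in Python:

```python
import math, cmath

N = 6
for k in range(1, 500):
    w = k * math.pi / 500            # skip w = 0, where the ratio is 0/0
    direct = sum(cmath.exp(-1j * w * n) for n in range(-N, N + 1))
    closed = math.sin((2 * N + 1) * w / 2) / math.sin(w / 2)
    assert abs(direct - closed) < 1e-9
```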
An example of W(e^{jω}) for N = 6 is shown in Figure 7.6. By analyzing the form of W(e^{jω}) for arbitrary N, we can determine that:
In order to understand the shape of the approximated filter, let us go back to the lowpass filter example and try to visualize the effect of the convolution in the Fourier transform domain. First of all, since all functions are 2π-periodic, everything happens circularly, i.e. what "goes out" on the right of the [−π,π] interval immediately "pops up" on the left. The value of Ĥ(e^{jω}) at ω_{0} is the integral of the product between H(e^{jω}) and a version of W(e^{jω}) circularly shifted by ω_{0}. Since H(e^{jω}) is zero except over [−ω_{c},ω_{c}], where it is one, this value is actually:

Ĥ(e^{jω_{0}}) = (1∕2π) ∫_{−ω_{c}}^{ω_{c}} W(e^{j(ω_{0}−θ)}) dθ
When ω_{0} is such that the first right sidelobe of W(e^{jω}) lies entirely outside of the [−ω_{c},ω_{c}] interval, the integral reaches its maximum value, since that sidelobe is negative and is the largest of the sidelobes. The maximum value depends on the shape of the window (a rectangle in this case) but not on its length: hence the Gibbs phenomenon.
To recap, the windowing operation on the ideal impulse response, i.e. the circular convolution of the ideal frequency response with W(e^{jω}), produces two main effects:
The sharpness of the transition band and the size of the ripples are dependent on the shape of the window’s Fourier transform; indeed, by carefully designing the shape of the windowing sequence we can trade mainlobe width for sidelobe amplitude and obtain a more controlled behavior in the frequency response of the approximation filter (although the maximum error can never be arbitrarily reduced).
Other Windows. In general, the recipe for filter design by windowing involves two steps: the analytical derivation of an ideal impulse response, followed by a suitable windowing to obtain an FIR filter. The ideal impulse response h[n] is obtained from the desired frequency response H(e^{jω}) by the usual DTFT inversion formula

h[n] = (1∕2π) ∫_{−π}^{π} H(e^{jω}) e^{jωn} dω
While the analytical evaluation of the above integral may be difficult or impossible in the general case, for frequency responses H(e^{jω}) which are piecewise linear, the computation of h[n] can be carried out in an exact (if nontrivial) way; the result will be a linear combination of modulated sinc and sinc-squared sequences.^{(3)} The FIR approximation is then obtained by applying a finite-length window w[n] to the ideal impulse response as in (7.2). The shape of the window can of course be more sophisticated than the simple rectangular window we have just encountered and, in fact, a hefty body of literature is devoted to the design of the "best" possible window. In general, a window should be designed with the following goals in mind:
It is clear that the first two requirements are openly in conflict; indeed, the width of the main lobe Δ is inversely proportional to the length of the window (we have seen, for instance, that for the rectangular window Δ = 4π∕M, with M the length of the filter). The second and third requirements are also in conflict, although the relationship between mainlobe width and sidelobe amplitude is not straightforward and can be considered a design parameter. In the frequency response, reduction of the sidelobe amplitude implies that the Gibbs phenomenon is attenuated, but at the "price" of an enlargement of the filter's transition band. While a rigorous proof of this fact is beyond the scope of this book, consider the simple example of a triangular window (with N odd):
w_{t}[n] = N − |n| for |n| ≤ N − 1, w_{t}[n] = 0 otherwise  (7.5)

It is easy to verify that w_{t}[n] = w[n] * w[n], with w[n] = rect(2n∕(N − 1)) (i.e. the triangle can be obtained as the convolution of a half-support rectangle with itself) so that, as a consequence of the convolution theorem, we have

W_{t}(e^{jω}) = [W(e^{jω})]^{2}  (7.6)
The net result is that the amplitude of the sidelobes is quadratically reduced, but the mainlobe width Δ is roughly doubled with respect to an equivalent-length rectangular window; this is displayed in Figure 7.7 for a 17-point window (values are normalized so that both frequency responses are equal in ω = 0). Filters designed with a triangular window therefore exhibit a much wider transition band.
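The convolution-theorem argument behind the triangular window is easily verified: convolving a rectangle with itself and transforming gives exactly the square of the rectangle's transform. A small Python check of ours, with an arbitrary length:

```python
import cmath

L = 8
w = [1.0] * L                      # half-support rectangle
# triangle = rect * rect (discrete convolution)
wt = [sum(w[k] * (w[n - k] if 0 <= n - k < L else 0.0) for k in range(L))
      for n in range(2 * L - 1)]
omega = 0.37
W = sum(cmath.exp(-1j * omega * n) for n in range(L))
Wt = sum(wt[n] * cmath.exp(-1j * omega * n) for n in range(2 * L - 1))
assert abs(Wt - W * W) < 1e-9      # convolution theorem: squared transform
```

Since the transform is squared, the (already small) sidelobes are squared too, which is the quadratic sidelobe reduction mentioned above.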

Other commonly used windows admit a simple parametric closed-form representation; the most important are the Hamming window (Fig. 7.8):

w[n] = 0.54 − 0.46 cos(2πn∕(N − 1)), 0 ≤ n ≤ N − 1

and the Blackman window (Fig. 7.9):

w[n] = 0.42 − 0.5 cos(2πn∕(N − 1)) + 0.08 cos(4πn∕(N − 1)), 0 ≤ n ≤ N − 1

Limitations of the Window Method. Lack of total control over the passband and stopband error is the main limitation inherent to the window method; this said, the method remains a fundamental staple of practical signal processing, as it yields perfectly usable filters via a quick, flexible and simple procedure. The error characteristic of a window-designed filter can be particularly aggravating in sensitive applications such as audio processing, where the peak in the stopband error can introduce unacceptable artifacts. In order to improve on the filter performance, we need to completely revise our design approach. A more suitable optimization criterion may, for instance, be the minimax criterion, where we aim to explicitly minimize the maximum approximation error over the entire frequency support; this is thoroughly analyzed in the next section. We can already say, however, that while the minimum square error is an integral criterion, the minimax is a pointwise criterion; or, mathematically, that the MSE and the minimax are respectively L_{2}[−π,π]- and L_{∞}[−π,π]-norm minimizations for the error function E(ω) = Ĥ(e^{jω}) − H(e^{jω}). Figure 7.11 illustrates the typical result of applying both criteria to the ideal lowpass problem. As can be seen, the minimum square and minimax solutions are very different.
As we saw in the opening example, FIR filter design by windowing minimizes the overall mean square error between the desired frequency response and the actual response of the filter. Since this might lead to a very large error at frequencies near the transition band, we now consider a different approach, namely design by minimax optimization. This technique minimizes the maximum allowable error in the filter's magnitude response, both in the passband and in the stopband. Optimality in the minimax sense therefore requires the explicit statement of a set of tolerances in the prototypical frequency response, in the form of design specifications as seen in Section 7.1.2. Before tackling the design procedure itself, we will need a series of intermediate results.
Generalized Linear Phase. In Section 5.4.3, we introduced the concept of linear phase; a filter with linear phase response is particularly desirable since the phase response translates to just a time delay (possibly fractional) and we can concentrate on the magnitude response only. We also introduced the notion of group delay and showed that linear phase corresponds to constant group delay. Clearly, the converse is not true: a frequency response of the type
H(e^{jω}) = A(ω) e^{j(α − ωd)},  with A(ω) a real function,
has constant group delay d but differs from a linear phase system by a constant phase factor e^{jα}. We will call this type of phase response generalized linear phase. Important cases are those for which α = 0 (strictly linear phase) and α = π∕2 (the generalized linear phase used in differentiators).
FIR Filter Types. Consider a causal, M-tap FIR filter with impulse response h[n], n = 0,1,…,M − 1; in the following, we are interested in filters whose impulse response is symmetric or antisymmetric around the “midpoint”. If the number of taps is odd, the midpoint of the impulse response coincides with the center tap h[(M − 1)∕2]; if the number of taps is even, on the other hand, the midpoint is still at (M − 1)∕2 but this value does not coincide with a tap since it is located “right in between” taps h[M∕2 − 1] and h[M∕2]. Symmetric and antisymmetric FIR filters are important since their frequency response has generalized linear phase. The delay introduced by these filters is equal to (M − 1)∕2 samples; clearly, this is an integer delay if M is odd, and it is fractional (half a sample more) if M is even. There are four different possibilities for linear phase FIR impulse responses, which are listed here with their corresponding generalized linear phase parameters:
Type I: symmetric impulse response (h[n] = h[M − 1 − n]), M odd; d = (M − 1)∕2, α = 0
Type II: symmetric impulse response, M even; d = (M − 1)∕2, α = 0
Type III: antisymmetric impulse response (h[n] = −h[M − 1 − n]), M odd; d = (M − 1)∕2, α = π∕2
Type IV: antisymmetric impulse response, M even; d = (M − 1)∕2, α = π∕2
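These symmetry conditions are easy to test numerically; the following Python sketch (the function name `fir_type` is ours) classifies an impulse response into one of the four types:

```python
def fir_type(h):
    """Classify a linear-phase FIR by symmetry and length (Types I-IV)."""
    M = len(h)
    if all(abs(h[n] - h[M - 1 - n]) < 1e-12 for n in range(M)):
        return 1 if M % 2 == 1 else 2      # symmetric impulse response
    if all(abs(h[n] + h[M - 1 - n]) < 1e-12 for n in range(M)):
        return 3 if M % 2 == 1 else 4      # antisymmetric impulse response
    return None                            # not (generalized) linear phase

# a symmetric odd-length response is Type I,
# an antisymmetric even-length one is Type IV
print(fir_type([1, 2, 3, 2, 1]))   # → 1
print(fir_type([1, 2, -2, -1]))    # → 4
```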
The generalized linear phase of (anti)symmetric FIRs is easily shown. Consider for instance a Type I filter, and define C = (M − 1)∕2, the location of the center tap; we can compute the transfer function of the shifted impulse response h_{d}[n] = h[n + C], which is now symmetric around zero, i.e. h_{d}[−n] = h_{d}[n]:
H_{d}(z) = ∑_{n=−C}^{C} h_{d}[n] z^{−n} = h_{d}[0] + ∑_{n=1}^{C} h_{d}[n](z^{n} + z^{−n})  (7.7)
By undoing the time shift we obtain the original Type I transfer function:
H(z) = z^{−C} H_{d}(z)  (7.8)
On the unit circle (7.7) becomes
H_{d}(e^{jω}) = h_{d}[0] + 2 ∑_{n=1}^{C} h_{d}[n] cos(nω)  (7.9)
which is a real function; the original Type I frequency response is obtained from (7.8):
H(e^{jω}) = e^{−jω(M−1)∕2} H_{d}(e^{jω})
which is clearly linear phase with delay d = (M − 1)∕2 and α = 0. The generalized linear phase of the other three FIR types can be shown in exactly the same way.
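This argument can be checked numerically: for a symmetric (Type I) impulse response, multiplying the frequency response by e^{jωC} must leave a purely real function. A Python sketch (helper names are ours):

```python
import cmath

def freq_resp(h, w):
    """H(e^{jw}) = sum_n h[n] e^{-jwn}."""
    return sum(h[n] * cmath.exp(-1j * w * n) for n in range(len(h)))

# Type I example: symmetric impulse response, M = 5, center tap at C = 2
h = [1.0, -2.0, 3.0, -2.0, 1.0]
C = (len(h) - 1) / 2
for w in [0.1, 0.7, 2.5]:
    Hd = cmath.exp(1j * w * C) * freq_resp(h, w)   # undo the linear phase
    assert abs(Hd.imag) < 1e-12                    # the remainder is real
```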
Zero Locations. The symmetric structures of the four types of FIR filters impose some constraints on the locations of the zeros of the transfer function. Consider again a Type I filter; from (7.7) it is easy to see that H_{d}(z^{−1}) = H_{d}(z); by using (7.8) we therefore have
which leads to
H(z) = z^{−(M−1)} H(z^{−1})  (7.10)
It is easy to show that the above relation is also valid for Type II filters, while for Type III and Type IV (antisymmetric filters) we have
H(z) = −z^{−(M−1)} H(z^{−1})  (7.11)
These relations mean that if z_{0} is a zero of a linear phase FIR, then so is z_{0}^{−1}. This result, coupled with the usual fact that the complex zeros of a real-coefficient filter come in conjugate pairs, implies that if z_{0} is a zero of H(z), then z_{0}^{*}, z_{0}^{−1} and (z_{0}^{*})^{−1} are zeros as well; zeros off the unit circle and off the real axis therefore appear in groups of four.
Consider now equation (7.10) again; if we set z = −1,
H(−1) = (−1)^{M−1} H(−1)  (7.12)
for Type II filters, M − 1 is an odd number, which leads to the conclusion that H(−1) = 0; in other words, Type II filters must have a zero at ω = π. Similar results can be demonstrated for the other filter types, and they are summarized below:
Type I: no structural constraints on the zeros
Type II: a zero at z = −1 (i.e. at ω = π)
Type III: zeros at both z = 1 and z = −1 (i.e. at ω = 0 and ω = π)
Type IV: a zero at z = 1 (i.e. at ω = 0)
These constraints are important in the choice of the filter type for a given set of specifications. Type II and Type III filters are not suitable in the design of highpass filters, for instance; similarly, Type III and Type IV filters are not suitable in the design of lowpass filters.
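The reciprocal-zero property can be verified without computing any roots, by checking the identity H(z) = z^{−(M−1)}H(z^{−1}) at arbitrary test points; a Python sketch on an ad-hoc symmetric (Type II) example:

```python
def H(h, z):
    """Evaluate the transfer function sum_n h[n] z^{-n}."""
    return sum(h[n] * z ** (-n) for n in range(len(h)))

h = [1, -2, 3, 3, -2, 1]      # symmetric, M = 6: a Type II filter
M = len(h)
for z in [0.7 + 0.2j, 1.5 - 0.3j]:
    # linear phase implies H(z) = z^{-(M-1)} H(1/z), hence zeros
    # come in reciprocal pairs
    assert abs(H(h, z) - z ** (-(M - 1)) * H(h, 1 / z)) < 1e-12

# Type II filters have a structural zero at z = -1 (i.e. at w = pi)
assert abs(H(h, -1)) < 1e-12
```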
Chebyshev Polynomials. Chebyshev polynomials are a family of orthogonal polynomials {T_{k}(x)}_{k∈ℕ} which have, amongst others, the following interesting property:
cos(kω) = T_{k}(cosω)  (7.13)
in other words, the cosine of an integer multiple of an angle ω can be expressed as a polynomial in the variable cosω. The first few Chebyshev polynomials are
T_{0}(x)  = 1  
T_{1}(x)  = x  
T_{2}(x)  = 2x^{2} − 1  
T_{3}(x)  = 4x^{3} − 3x  
T_{4}(x)  = 8x^{4} − 8x^{2} + 1 
 (7.14) 
From the above table it is easy to see that we can write, for instance, cos(3ω) = 4cos^{3}ω − 3cosω.
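The identity cos(kω) = T_{k}(cosω) can be checked numerically; the sketch below uses the standard recurrence T_{k+1}(x) = 2xT_{k}(x) − T_{k−1}(x) (a well-known property, not derived in the text):

```python
import math

def cheb(k, x):
    """T_k(x) via the recurrence T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x)."""
    t0, t1 = 1.0, x
    if k == 0:
        return t0
    for _ in range(k - 1):
        t0, t1 = t1, 2 * x * t1 - t0
    return t1

# verify cos(k w) == T_k(cos w) for several orders and angles
for k in range(8):
    for w in [0.3, 1.1, 2.7]:
        assert abs(cheb(k, math.cos(w)) - math.cos(k * w)) < 1e-9
```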
The interest in Chebyshev polynomials comes from the fact that the zero-centered frequency response of a linear phase FIR can be expressed as a linear combination of cosine functions, as we have seen in detail for Type I filters in (7.9). By using Chebyshev polynomials we can rewrite such a response as just one big polynomial in the variable cosω. Let us consider an explicit example for a length-7 Type I filter with nonzero coefficients h[n] = [d c b a b c d]; we can state that
H_{d}(e^{jω}) = a + 2bcosω + 2ccos(2ω) + 2dcos(3ω)
and by using the first four Chebyshev polynomials we can write
H_{d}(e^{jω})  = a + 2bcosω + 2c(2cos^{2}ω − 1) + 2d(4cos^{3}ω − 3cosω)  
= (a − 2c) + (2b − 6d)cosω + 4ccos^{2}ω + 8dcos^{3}ω  (7.15) 
In general, for a Type I filter,
H_{d}(e^{jω}) = ∑_{k=0}^{(M−1)∕2} c_{k}cos^{k}ω  (7.16) 
= P(cosω)  (7.17) 
where P(x) is a polynomial of degree (M − 1)∕2 whose coefficients c_{k} are derived as linear combinations of the original filter coefficients as illustrated in (7.15). For the other types of linear phase FIR, a similar representation can be obtained after a few trigonometric manipulations. The general expression is
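For the length-7 example above, the equality between the cosine-sum form and the polynomial form P(cosω) of (7.15) can be checked directly; a Python sketch with arbitrary coefficient values:

```python
import math

# length-7 Type I example with h = [d c b a b c d]
a, b, c, d = 1.0, 0.5, -0.3, 0.2

def direct(w):        # zero-centered response as a cosine sum
    return a + 2 * b * math.cos(w) + 2 * c * math.cos(2 * w) \
             + 2 * d * math.cos(3 * w)

def poly(w):          # same response rewritten as P(cos w), cf. (7.15)
    x = math.cos(w)
    return (a - 2 * c) + (2 * b - 6 * d) * x + 4 * c * x ** 2 \
           + 8 * d * x ** 3

for w in [0.0, 0.4, 1.3, 3.0]:
    assert abs(direct(w) - poly(w)) < 1e-12
```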
H_{d}(e^{jω}) = f(ω) ∑_{k=0}^{K} c_{k}cos^{k}ω  (7.18) 
= f(ω) P(cosω)  (7.19) 
where the c_{k} are still linear combinations of the original filter coefficients and where f(ω) is a weighting trigonometric function. Both f(ω) and the polynomial degree K vary as a function of the filter type.^{(4)} In the following sections, however, we will concentrate only on the design of Type I filters, so these details will be glossed over; in practice, since the design is always carried out using numerical packages, the appropriate formulation for the filter expression is taken care of automatically.
Polynomial Optimization. Going back to the filter design problem, we stipulate that the FIR filters are (generalized) linear phase, so we can concentrate on the real frequency response of the zero-centered filter, which is represented by the trigonometric polynomial (7.19). Moreover, since the impulse response is real and symmetric, the aforementioned real frequency response is also symmetric around ω = 0. The filter design procedure can thus be carried out only for values of ω over the interval [0,π], with the other half of the spectrum obtained by symmetry. For these values of ω, the variable x = cosω is mapped onto the interval [−1,1] and the mapping is invertible. Therefore, the filter design problem becomes a problem of polynomial approximation over intervals.
To illustrate the procedure by example, consider once more the set of filter specifications in Figure 7.1 and suppose we decide to use a Type I filter. Recall that we required the prototype filter to be lowpass, with a transition band from ω_{p} = 0.4π to ω_{s} = 0.6π; we further stated that the tolerances for the realized filter’s magnitude must not exceed 10% in the passband and 1% in the stopband. This implies that the maximum magnitude error between the prototype filter and the FIR filter response H(e^{jω}) must not exceed δ_{p} = 0.1 in the interval [0,ω_{p}] and must not exceed δ_{s} = 0.01 in the interval [ω_{s},π]. We can formulate this as follows: the frequency response of the desired filter is
H_{D}(e^{jω}) = 1 for 0 ≤ |ω| ≤ ω_{p};  H_{D}(e^{jω}) = 0 for ω_{s} ≤ |ω| ≤ π
(note that H_{D}(e^{jω}) is not specified in the transition band). Since the tolerances on passband and stopband are different, they can be expressed in terms of a weighting function H_{W}(ω) such that the tolerance on the error is constant over the two bands:
H_{W}(ω) = 1 for 0 ≤ |ω| ≤ ω_{p};  H_{W}(ω) = δ_{p}∕δ_{s} for ω_{s} ≤ |ω| ≤ π  (7.20) 
With this notation, the filter specifications amount to the following:
max_{ω ∈ [0,ω_{p}]∪[ω_{s},π]} H_{W}(ω) |H_{D}(e^{jω}) − H(e^{jω})| ≤ δ_{p}  (7.21) 
and the question now is to find the coefficients for h[n] (their number M and their values) which minimize the above error. Note that we leave the transition band unconstrained (i.e. it does not affect the minimization of the error).
The next step is to use (7.19) to reformulate the above expression as a polynomial optimization problem. To do so, we replace the frequency response H_{d}(e^{jω}) with its polynomial equivalent and set x = cosω; the passband interval [0,ω_{p}] and the stopband interval [ω_{s},π] are mapped into the intervals for x:
I_{p}  = [cosω_{p}, 1]  
I_{s}  = [−1, cosω_{s}]  (7.22) 
and the weighting function becomes:
W(x) = 1 for x ∈ I_{p};  W(x) = δ_{p}∕δ_{s} for x ∈ I_{s}  (7.23) 
The new set of specifications is shown in Figure 7.12. Within this polynomial formulation, the optimization problem becomes:
min_{P} { max_{x ∈ I_{p}∪I_{s}} W(x) |D(x) − P(x)| }  (7.24) 
where P(x) is the polynomial representation of the FIR frequency response as in (7.19) and D(x) is the desired response expressed in the variable x (equal to one over I_{p} and to zero over I_{s}).

Alternation Theorem. The optimization problem stated by (7.24) can be solved by using the following theorem:
Theorem 7.1 Consider a set {I_{k}} of closed, disjoint intervals on the real axis and their union I = ⋃ _{k}I_{k}. Consider further:
D(x), a function to be approximated over I;
W(x), a positive weighting function over I;
P(x), a polynomial of degree L.
Consider now the approximation error function
E(x) = W(x)[D(x) − P(x)]
and the associated maximum approximation error over the set of closed intervals
E_{max} = max_{x∈I} |E(x)|
Then P(x) is the unique order-L polynomial which minimizes E_{max} if and only if there exist at least L + 2 successive values x_{i} in I such that |E(x_{i})| = E_{max} and E(x_{i+1}) = −E(x_{i}).
In other words, the error function must have at least L + 2 alternations between its maximum and minimum values. Such a function is called equiripple.
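Counting alternations of a sampled error function is straightforward; the following Python sketch (the function name is ours) counts the sign-alternating samples at which the maximum error is attained, exercised here on cos(6x) over [0,π], which touches ±1 seven times with alternating signs:

```python
import math

def count_alternations(err, tol=1e-9):
    """Count sign-alternating samples where |err| attains its maximum."""
    emax = max(abs(e) for e in err)
    count, last_sign = 0, 0
    for e in err:
        if abs(abs(e) - emax) > tol * emax:
            continue                      # not an extremal touch
        s = 1 if e > 0 else -1
        if s != last_sign:                # new alternation
            count += 1
            last_sign = s
    return count

# cos(6x) on [0, pi] reaches +/-1 at x = k*pi/6, k = 0..6
grid = [k * math.pi / 600 for k in range(601)]
err = [math.cos(6 * x) for x in grid]
assert count_alternations(err) == 7
```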
Going back to our lowpass filter example, assume we are trying to design a 9-tap optimal filter. The theorem tells us that if we found a polynomial P(x) of degree 4 such that the error function in (7.24) alternates at least 6 times over I_{p} and I_{s}, as is shown in Figure 7.13 (6 alternations), then the polynomial would be the optimal and unique solution. Note that the extremal points (i.e. the values of the error function at the edges of the optimization intervals) do count in the number of alternations since the intervals I_{k} are closed.

The above theorem may seem a bit far-fetched since it does not tell us how to find the coefficients; it only gives us a test to verify their optimality. This test, however, is at the core of an iterative algorithm which refines the polynomial from an initial guess up to the point when the optimality condition is met. Before considering the optimization procedure in more detail, we state, without formal proof, three consequences of the alternation theorem as it applies to the design of Type I lowpass filters:
the maximum possible number of alternations of the error is L + 3;
alternations always occur at the band edges ω_{p} and ω_{s};
the filter is equiripple over passband and stopband, except possibly at ω = 0 and ω = π.
Optimization Procedure. Finally, by putting all the elements together, we are ready to state an algorithmic optimization procedure for the design of optimal minimax FIR filters; this procedure is usually called the Parks-McClellan algorithm. Remember, we are trying to determine a polynomial P(x) such that the approximation error in (7.24) is equiripple; for this, we need to determine both the degree of the polynomial and its coefficients. For a given degree L, for which the resulting filter has 2L + 1 taps, the L + 1 coefficients are found by an iterative procedure which successively refines an initial guess for the L + 2 alternation points x_{i} until the error is equiripple.^{(5)} After the iteration has converged, we need to check that the corresponding E_{max} satisfies the upper bound imposed by the specifications; when this is not the case, the degree of the polynomial (and therefore the length of the filter) must be increased and the procedure restarted. Once the conditions on the error are satisfied, the filter coefficients can be obtained by inverting the Chebyshev expansion.
As a final note, an initial guess for the number of taps can be obtained using the empirical formula by Kaiser; for an M-tap FIR h[n], n = 0,…,M − 1:
M ≈ (−10 log_{10}(δ_{p}δ_{s}) − 13) ∕ (2.324 Ω) + 1
where δ_{p} is the passband tolerance, δ_{s} is the stopband tolerance and Ω = ω_{s} − ω_{p} is the width of the transition band (in radians).
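The formula is easy to evaluate; a Python sketch (the function name is ours), reproducing the estimate used in the lowpass example with δ_{p} = 0.1, δ_{s} = 0.01 and Ω = 0.2π:

```python
import math

def kaiser_length(delta_p, delta_s, omega):
    """Kaiser's empirical estimate of the number of taps M.

    omega is the transition bandwidth in radians (omega_s - omega_p)."""
    return (-10 * math.log10(delta_p * delta_s) - 13) / (2.324 * omega) + 1

M = kaiser_length(0.1, 0.01, 0.2 * math.pi)
print(round(M, 1))   # → 12.6, rounded up to 13 taps
```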
The Final Design. We now summarize the design steps for the specifications in Figure 7.1. We use a Type I FIR. We start by using Kaiser’s formula to obtain an estimate of the number of taps: since δ_{p}δ_{s} = 10^{−3} and Ω = 0.2π, we obtain M = 12.6, which we round up to 13 taps. At this point we can use any numerical package for filter design to run the Parks-McClellan algorithm. In Matlab this would be a call to the firpm routine (called remez in older releases; the exact syntax may vary with the release).
The resulting frequency response is plotted in Figure 7.14; please note that we are plotting the frequency response of the zero-centered filter h_{d}[n], which is a real function of ω. We can verify that the filter indeed has at least (M − 1)∕2 + 2 = 8 alternations by looking at enlarged pictures of the passband and the stopband, as in Figure 7.15.

The maximum error as returned by Matlab is however 0.102, which is larger than what our specifications called for, i.e. 0.1. We are thus forced to increase the number of taps; since we are using a Type I filter, the next choice is M = 15. Again, the error turns out to be larger than 0.1, since in this case we have E_{max} = 0.1006. The next choice, M = 17, finally yields an error E_{max} = 0.05, which beats the specifications by a factor of two. It is the designer’s choice to decide whether the computational gains of a shorter filter (M = 15) outweigh the small excess error. The impulse response and the frequency response of the 17-tap filter are plotted in Figure 7.16 and Figure 7.17. Figure 7.18 shows the zero locations for the filter; note the typical linear-phase zero pattern as well as the zeros on the unit circle in the stopband.
Other Types of Filters. The Parks-McClellan optimal FIR design procedure can be made to work for arbitrary filter types as well, such as highpass and bandpass, but also for more sophisticated frequency responses. The constraints imposed by the zero locations, as discussed above, determine the type of filter to use; once the desired response H_{D}(e^{jω}) is expressed as a trigonometric function, the optimization algorithm can take its course. For arbitrary frequency responses, however, the fact that the transition bands are left unconstrained may lead to unacceptable peaks which render the filter useless. In these cases, visual inspection of the obtained response is mandatory, and experimentation with different filter lengths and tolerances may improve the final result.

As we mentioned earlier, no optimal procedure exists for the design of IIR filters. The fundamental reason is that the optimization of the coefficients of a rational transfer function is a highly nonlinear problem and no satisfactory algorithm has yet been developed for the task. This, coupled with the impossibility of obtaining an IIR with linear phase response,^{(6)} makes the design of IIR filters a much less formalized art. Many IIR design techniques are described in the literature and their origin usually lies in tried-and-true analog filter design methods. In the early days of digital signal processing, engineers would own voluminous books with exhaustive lists of capacitance and inductance values to be used for a given set of (analog) filter specifications. The idea behind most digital IIR filter design techniques was to make use of that body of knowledge and to devise formulas which would translate the analog design into a digital one. The most common such method is known as the bilinear transformation. Today, the formal step through an analog prototype has become unnecessary, and numerical tools such as Matlab provide a variety of routines to design an IIR filter.
Here we concentrate on some basic IIR filters which are very simple and commonly used in practice. There are a few applications in which these simple IIR structures are the design of choice; they are so simple and so well behaved that they constitute a fundamental tool in the arsenal of any signal processing engineer.
DC Removal and Mean Estimation. The DC component of a signal is its mean value; a signal with zero mean is also called an AC signal. This nomenclature comes from electrical circuit parlance: DC is shorthand for direct current, while AC stands for alternating current;^{(7)} you might be familiar with these terms in relation to the current provided by a battery (constant and hence DC) and the current available from a mains socket (alternating at 50 or 60 Hz and therefore AC).
For a given sequence x[n], one can always write
x[n] = x_{DC} + x_{AC}[n]
where x_{DC} is the mean of the sequence values and x_{AC}[n] = x[n] − x_{DC} is a zero-mean sequence.
In most signal processing applications, where the input signal comes from an acquisition device (such as a sampler, a soundcard and so on), it is important to remove the DC component; this is because the DC component is often a spurious offset caused by ground mismatches between the acquisition device and the associated hardware. In order to eliminate the DC component we need to first estimate it, i.e. we need to estimate the mean of the signal.
For finite-length signals, computation of the mean is straightforward since it involves a finite number of operations. In most cases, however, we do not want to wait until the end of the signal before we try to remove its mean; what we need is a way to perform DC removal online. The approach is therefore to obtain, at each instant, an estimate of the DC component from the past signal values, with the assumption that the estimate converges to the real mean of the signal. In order to obtain such an estimate, i.e. in order to obtain the average value of the past input samples, both approaches detailed in Section 5.3 are of course valid (i.e. the moving average and the leaky integrator filters). We have seen, however, that the leaky integrator provides a superior cost/benefit tradeoff, and therefore the output of a leaky integrator with λ very close to one (typically 1 − λ ≈ 10^{−3}) is the estimate of choice for the DC component of a signal. The closer λ is to one, the more accurate the estimation; the speed of convergence of the estimate, however, becomes slower and slower as λ → 1. This can easily be seen from the group delay at ω = 0, which is λ∕(1 − λ).
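The leaky-integrator DC estimator amounts to a one-line recursion; a Python sketch (names are ours) feeds it a sinusoid riding on a DC offset and checks that the estimate settles near the offset:

```python
import math

def leaky_dc(x, lam=0.999):
    """Online DC estimate: y[n] = lam*y[n-1] + (1-lam)*x[n]."""
    y = 0.0
    out = []
    for v in x:
        y = lam * y + (1 - lam) * v
        out.append(y)
    return out

# an oscillation riding on a DC offset of 0.5
x = [0.5 + math.sin(0.1 * n) for n in range(20000)]
est = leaky_dc(x)[-1]
assert abs(est - 0.5) < 0.02   # the AC component is strongly attenuated
```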
Resonator Filter. Let us look again at how the leaky integrator works. Consider its z-transform:
H(z) = (1 − λ) ∕ (1 − λz^{−1})
and notice that what we really want the filter to do is extract the zero-frequency component (i.e. the frequency component that does not oscillate, that is, the DC component). To do so, we placed a pole near z = 1, which of course corresponds to z = e^{jω} for ω = 0. Since the magnitude response of the filter exhibits a peak near a pole, and since the peak is higher the closer the pole is to the unit circle, we are in fact amplifying the zero-frequency component; this is apparent from the plot of the filter’s frequency response in Figure 5.9. The numerator, 1 − λ, is chosen such that the magnitude of the filter at ω = 0 is one; the net result is that the zero-frequency component passes unmodified while all the other frequencies are attenuated. The value of a filter’s magnitude at a given frequency is often called the gain.
The very same approach can now be used to extract a signal component at any frequency. We will use a pole whose magnitude is still close to one (i.e. a pole near the unit circle) but whose phase is that of the frequency we want to extract. We will then choose a numerator so that the magnitude is unity at the frequency of interest. The one extra detail is that, since we want a real-valued filter, we must place a complex conjugate pole as well. The resulting filter is called a resonator and a typical pole-zero plot is shown in Figure 7.19. The z-transform of a resonator at frequency ω_{0} is therefore determined by the pole p = λe^{jω_{0}} and by its conjugate:
H(z) = G_{0} ∕ ((1 − λe^{jω_{0}}z^{−1})(1 − λe^{−jω_{0}}z^{−1}))  (7.25) 
The numerator value G_{0} is computed so that the filter’s gain at ±ω_{0} is one; since in this case |H(e^{jω_{0}})| = |H(e^{−jω_{0}})|, we have
G_{0} = |1 − λ| · |1 − λe^{−j2ω_{0}}|
The magnitude and phase of a resonator with λ = 0.9 and ω_{0} = π∕3 are shown in Figure 7.20.
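The normalization of G_{0} is easy to check numerically; in the following Python sketch (function names are ours) the gain at ω_{0} is exactly one while it drops rapidly away from the resonance:

```python
import cmath

def resonator_gain(lam, w0):
    """G0 making |H(e^{jw0})| = 1 for the two-pole resonator (7.25)."""
    return abs((1 - lam) * (1 - lam * cmath.exp(-2j * w0)))

def resonator_freq_resp(lam, w0, w):
    """Frequency response of the resonator at frequency w."""
    G0 = resonator_gain(lam, w0)
    z = cmath.exp(1j * w)
    return G0 / ((1 - lam * cmath.exp(1j * w0) / z) *
                 (1 - lam * cmath.exp(-1j * w0) / z))

lam, w0 = 0.9, cmath.pi / 3
assert abs(abs(resonator_freq_resp(lam, w0, w0)) - 1) < 1e-12
# away from the resonance the gain drops quickly
assert abs(resonator_freq_resp(lam, w0, 2.5)) < 0.2
```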
A simple variant on the basic resonator can be obtained by considering the fact that the resonator is just a bandpass filter with a very narrow passband. As for all bandpass filters, we can therefore place a zero at z = ±1 and sharpen its midband frequency response. The corresponding z-transform is now
H(z) = G_{0}(1 − z^{−2}) ∕ ((1 − λe^{jω_{0}}z^{−1})(1 − λe^{−jω_{0}}z^{−1}))
with
G_{0} = |1 − λ| · |1 − λe^{−j2ω_{0}}| ∕ |1 − e^{−j2ω_{0}}|
The corresponding magnitude response is shown in Figure 7.21.
We have seen in Section 5.7.2 a practical implementation of a constant-coefficient difference equation (written in C). That was just one particular way of translating Equation (5.46) into a numerical procedure; in this section we explore other alternatives for both FIR and IIR filters and introduce the concept of computational efficiency for filters.
The cost of a numerical filter depends on the number of operations per output sample and on the storage (memory) required by the implementation. If we consider a generic CCDE, it is easy to see that the basic building blocks which make up the recipe for a realizable filter are the adder, the multiplier (scaling by a constant) and the unit delay.
By properly combining these elements and by exploiting the different possible decompositions of a filter’s rational transfer function, we can arrive at a variety of different working implementations of a filter. To study the possibilities at hand, instead of relying on a specific programming language, we will use self-explanatory block diagrams.
Cascade Forms. Recall that a rational transfer function H(z) can always be written out as follows:
H(z) = (b_{0}∕a_{0}) · ∏_{n=1}^{M−1}(1 − z_{n}z^{−1}) ∕ ∏_{n=1}^{N−1}(1 − p_{n}z^{−1})  (7.26) 
where the z_{n} are the M − 1 (complex) roots of the numerator polynomial and the p_{n} are the N − 1 (complex) roots of the denominator polynomial. Since the coefficients of the CCDE are assumed to be real, complex roots for both polynomials always appear in complex-conjugate pairs. A pair of first-order terms with complex-conjugate roots can be combined into a second-order term with real coefficients:
(1 − z_{n}z^{−1})(1 − z_{n}^{*}z^{−1}) = 1 − 2Re{z_{n}}z^{−1} + |z_{n}|^{2}z^{−2}  (7.27) 
As a consequence, the transfer function can be factored into the product of first and secondorder terms in which the coefficients are all strictly real; namely:
H(z) = (b_{0}∕a_{0}) · [∏_{n=1}^{M_{r}}(1 − z_{n}z^{−1}) ∏_{n=1}^{M_{c}}(1 − 2Re{z_{n}}z^{−1} + |z_{n}|^{2}z^{−2})] ∕ [∏_{n=1}^{N_{r}}(1 − p_{n}z^{−1}) ∏_{n=1}^{N_{c}}(1 − 2Re{p_{n}}z^{−1} + |p_{n}|^{2}z^{−2})]  (7.28) 
where M_{r} is the number of real zeros, M_{c} is the number of complex-conjugate zero pairs and M_{r} + 2M_{c} = M − 1 (and, equivalently, for the poles, N_{r} + 2N_{c} = N − 1). From this representation of the transfer function we can obtain an alternative structure for a filter; recall that if we apply a series of filters in sequence, the overall transfer function is the product of the single transfer functions. Working backwards, we can interpret (7.28) as the cascade of smaller sections. The resulting structure is called a cascade and it is particularly important for IIR filters, as we will see later.
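The combination of a complex-conjugate pair into a real second-order section, as in (7.27), can be checked at a test point; a Python sketch with an arbitrary zero:

```python
# combine a complex-conjugate zero pair into a real second-order section
z0 = complex(0.4, 0.693)                 # an arbitrary complex zero

# (1 - z0 z^{-1})(1 - z0* z^{-1}) = 1 - 2 Re{z0} z^{-1} + |z0|^2 z^{-2}
b = [1.0, -2 * z0.real, abs(z0) ** 2]    # strictly real coefficients

# check the identity at an arbitrary test point z
z = complex(1.2, -0.4)
factored = (1 - z0 / z) * (1 - z0.conjugate() / z)
combined = b[0] + b[1] / z + b[2] / z ** 2
assert abs(factored - combined) < 1e-12
```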
Parallel Forms. Another interesting rewrite of the transfer function is based on a partial fraction expansion of the type:
H(z) = ∑_{n} c_{n}z^{−n} + ∑_{n} A_{n} ∕ (1 − p_{n}z^{−1}) + ∑_{n} (B_{n} + C_{n}z^{−1}) ∕ (1 − 2Re{p_{n}}z^{−1} + |p_{n}|^{2}z^{−2})  (7.29) 
where the multiplicity of the three types of terms, as well as the relative coefficients, depends (in a nontrivial way) on the original filter coefficients. This generates a parallel structure of filters whose outputs are summed together. The first branch corresponds to the first sum and is an FIR filter; a further set of branches is associated with the terms in the second sum, each one of them a first-order IIR; the last set of branches is a collection of second-order sections, one for each term of the third sum.
In an FIR transfer function all the denominator coefficients a_{n} other than a_{0} are zero; we have therefore:
H(z) = ∑_{n=0}^{M−1} b_{n}z^{−n}
where, of course, the coefficients correspond to the nonzero values of the impulse response h[n], i.e. b_{n} = h[n]. Using the constitutive elements outlined above, we can immediately draw a block diagram of an FIR filter as in Figure 7.22. In practice, however, additions are distributed as shown in Figure 7.23; this kind of implementation is called a transversal filter. Further ad-hoc optimizations for FIR structures can be obtained in the case of symmetric and antisymmetric linear phase filters; these are considered in the exercises.
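A transversal FIR is just a delay line plus a sum of products; a minimal Python sketch (names are ours):

```python
def fir_filter(h, x):
    """Transversal (direct-form) FIR: y[n] = sum_k h[k] x[n-k]."""
    delay = [0.0] * len(h)            # delay line, newest sample first
    y = []
    for v in x:
        delay = [v] + delay[:-1]      # shift in the new sample
        y.append(sum(hk * dk for hk, dk in zip(h, delay)))
    return y

# the response to an impulse is the impulse response h itself
h = [0.25, 0.5, 0.25]
imp = [1.0, 0.0, 0.0, 0.0]
print(fir_filter(h, imp))   # → [0.25, 0.5, 0.25, 0.0]
```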
For an IIR filter, all the a_{n} and b_{n} in (5.46) are nonzero. One possible implementation based on the direct form of the transfer function is given in Figure 7.24. This implementation is called Direct Form I, and it can immediately be seen that the C-code implementation in Section 5.7.2 realizes a Direct Form I algorithm. Here, for simplicity, we have assumed N = M, but obviously we can set some a_{n} or b_{n} to zero if this is not the case.
By the commutative properties of the z-transform, we can invert the order of computation to turn the Direct Form I structure into the structure shown in Figure 7.25 (shown for a second-order section); we can then combine the parallel delays to obtain the structure in Figure 7.26. This implementation is called Direct Form II; its obvious advantage is the reduced number of required delay elements (and hence of memory storage).
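The equivalence of the two direct forms is easy to verify on a second-order section; a Python sketch (names are ours) runs both structures on the same input and compares the outputs:

```python
def df1(b, a, x):
    """Direct Form I: separate delay lines for input and output."""
    xd, yd = [0.0, 0.0], [0.0, 0.0]
    y = []
    for v in x:
        out = b[0] * v + b[1] * xd[0] + b[2] * xd[1] \
              - a[1] * yd[0] - a[2] * yd[1]
        xd = [v, xd[0]]
        yd = [out, yd[0]]
        y.append(out)
    return y

def df2(b, a, x):
    """Direct Form II: one shared delay line on the intermediate signal."""
    wd = [0.0, 0.0]
    y = []
    for v in x:
        w = v - a[1] * wd[0] - a[2] * wd[1]
        y.append(b[0] * w + b[1] * wd[0] + b[2] * wd[1])
        wd = [w, wd[0]]
    return y

b, a = [1.0, 0.4, 0.2], [1.0, -0.5, 0.25]   # a[0] normalized to 1
x = [1.0, -0.3, 0.7, 0.0, 0.2, -1.0]
assert all(abs(u - v) < 1e-12 for u, v in zip(df1(b, a, x), df2(b, a, x)))
```

Direct Form II uses half the delay elements while producing the same output, which is exactly the advantage discussed above.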