Sampling and the Mysterious Scaling Factor

An atom and the solar system

This post treats signals in continuous time, which differs from the approach I adopted in my book. The book deals exclusively in discrete time.

Some time ago, I came across an interesting problem. In the explanation of the sampling process, a representation of impulse sampling shown in Figure below is illustrated in almost every textbook on DSP and communications. The question is: how is it possible that during sampling, the spectrum gets scaled by 1/Ts, a very large number? For an ADC operating at 10 MHz, for example, the amplitude of the desired spectrum and its spectral replicas becomes 10^7! I thought that there must be something wrong somewhere.

A signal sampled through an impulse train

Spectral replicas scaled by the sampling rate

Figure 1: Sampling in time domain creates spectral replicas in frequency domain, each of which gets scaled by 1/T_s

I asked a few DSP experts this question. They did not know the answer either. Slowly I came to understand why it is true and, in fact, makes perfect sense. The answer is quite straightforward but I did not realize it immediately. Since this is a blog post, I can take the liberty of explaining the route I took to this simple revelation.

A Unit Impulse


My first impression was that whenever impulses are involved, everything that branches from there becomes fictitious as well. A continuous-time unit impulse is defined as shown in Figure 2.

    \begin{equation*} \textmd{Area under a rectangle} = \Delta \cdot \frac{1}{\Delta} = 1, \qquad \textmd{or} \qquad \int \limits_{-\infty} ^{+\infty} \delta (t) dt = 1 \end{equation*}

Definition of unit impulse

Figure 2: Definition of a unit impulse

Clearly, the sampling scale factor is independent of how we define a unit impulse and its infinite height. The next step might be to look into a sequence of unit impulses: an impulse train.

An Impulse Train


An impulse train is a sequence of impulses with period T_s defined as

    \[p(t) = \sum \limits _{n=-\infty} ^{+\infty} \delta (t-nT_s)\]

The standard method to derive its Fourier Transform is to express the above periodic signal as a Fourier series and then find the Fourier series coefficients as follows.

First, remember that a signal x(t) with period T_s can be expressed as

    \[x(t) = \sum \limits _{k=-\infty} ^{+\infty} x_k e^{jk \Omega_s t}\]

where \Omega_s = 2\pi/T_s is the fundamental frequency of the sequence of harmonics k\Omega_s with k ranging from -\infty to +\infty. Here, x_k are the Fourier series coefficients given as

    \[x_k = \frac{1}{T_s} \int _{-T_s/2} ^{T_s/2} x(t) e^{-jk \Omega_s t} dt\]

When p(t) is an impulse train, one period between -T_s/2 and +T_s/2 contains only a single impulse \delta (t). Thus, p_k can be found to be

    \[p_k = \frac{1}{T_s} \int _{-T_s/2} ^{T_s/2} \delta (t) e^{-jk \Omega_s t} dt = \frac{1}{T_s} \cdot e^{-jk \Omega_s \ \cdot \ 0}= \frac{1}{T_s}\]

Plugging this into the definition above and using the facts that \mathcal{F}\left\{e^{jk \Omega_s t}\right\} = 2\pi \delta \left(\Omega - k\Omega_s \right) and \Omega_s = 2\pi/T_s, we get

    \[P(\Omega) = \frac{1}{T_s} \sum \limits _{k=-\infty} ^{+\infty} \mathcal{F}\left\{ e^{jk \Omega_s t}\right\} = \frac{2\pi}{T_s} \sum \limits _{k=-\infty} ^{+\infty} \delta \left( \Omega - k \frac{2\pi}{T_s}\right)\]

Impulse train in time domain
Impulse train in frequency domain

Figure 3: Fourier Transform of an impulse train in time domain is another impulse train in frequency domain

The resulting time and frequency domain sequences are drawn in Figure 3 above, which is a standard figure in DSP literature. However, it comes as a surprise that the Fourier Transform of an impulse train is another impulse train, although the Fourier Transform of a single unit impulse is a constant.
What happened to the concept that a signal narrow in time domain has a wide spectral representation? Going through this method, the reason is not immediately clear.

So I thought of deriving it through some other technique. Note that I prefer to use the frequency variable F instead of \Omega.

    \[\int _{-\infty} ^{+\infty} \sum _{n=-\infty} ^{+\infty} \delta(t - nT_s) e^{-j2\pi F t} dt = \sum _{n=-\infty} ^{+\infty} \int _{-\infty} ^{+\infty} \delta(t - nT_s) e^{-j2\pi F t} dt\]

    \[= \sum _{n=-\infty} ^{+\infty} e^{-j2\pi F nT_s}\]

From here, there are many techniques to prove the final expression, but the mathematics becomes quite cumbersome. So I just take the route of visualizing it and seeing where it leads.

Where the Impulses Come From


First, the Fourier Transform of a time domain impulse train can be seen as the sum of frequency domain complex sinusoids with “frequencies” equal to nT_s, or periods equal to \frac{1}{nT_s} = \frac{F_s}{n}. Furthermore, the above signal is periodic in F with period 1/T_s=F_s, that of the first harmonic.

    \[\sum _{n=-\infty} ^{+\infty} e^{-j2\pi (F+F_s)   nT_s} = \sum _{n=-\infty} ^{+\infty} e^{-j2\pi F   nT_s}\cdot \underbrace{e^{-j2\pi n}}_{1} = \sum _{n=-\infty} ^{+\infty} e^{-j2\pi F   nT_s}\]

Therefore, for the sake of better visualization, we limit ourselves to the range [0,F_s) when drawing the figures in frequency domain. Whatever happens here gets repeated elsewhere. Figure 4 illustrates 3 frequency domain complex sinusoids e^{-j2\pi F nT_s} for n=1,2 and 3, with “frequencies” T_s, 2T_s and 3T_s, respectively. Note how within a period of F_s, the first completes one cycle while the other two complete two and three cycles, respectively. Here, we deliberately refrained from drawing the result for n=0, which is a straight line at 1 on the I axis.

3 complex sinusoids in IQ plane

Figure 4: 3 frequency domain complex sinusoids e^{-j2\pi F nT_s} for n=1,2 and 3, with frequencies T_s, 2T_s and 3T_s, respectively, in the range [0,F_s)

For a closer look, see the signal decomposed into its I and Q components. Clearly, the I axis consists of \cos (2\pi FnT_s) and the Q axis consists of -\sin(2\pi F   nT_s), again with n = 1,2 and 3. This is shown in Figure 5.

Complex sinusoids decomposed in IQ components

Figure 5: I and Q decomposition of Figure 4. The I axis consists of \cos (2\pi F nT_s) and the Q axis consists of -\sin(2\pi F   nT_s) again with n = 1,2 and 3, in the range [0,F_s)

Having understood the basic concept, we now divide the range [0,F_s) into N segments each with width \nu.

    \begin{equation*} \nu = \frac{F_s}{N} \end{equation*}

This implies that for integer k,

    \begin{equation*} F \approx k\nu = k \frac{F_s}{N}, \qquad FT_s = \frac{k}{N} \end{equation*}

The result of summing the sinusoids shown in Figure 5 for a large N is drawn in Figure 6.

Summing a large number of complex sinusoids

Figure 6: Summing the I and Q parts from Figure 5 but for a very large N in the range [0,F_s). The Q part is seen converging to 0 while the I part approaches an impulse, half of which is visible at 0 and half at F_s

The rectangle at k=0 has a height of N since it is a sum of 1 over all N values of n. It can now be concluded that

  • I arm — the expression \sum _{n=-\infty} ^{+\infty} \cos(2\pi nk/N) — approaches N at k=0 in each period [0,F_s). As \nu \rightarrow 0, N \rightarrow \infty and it becomes an impulse (strictly speaking, the limit does not exist as a function and is rather known as a “distribution”).
  • On the other hand, the Q arm — the expression \sum _{n=-\infty} ^{+\infty} -\sin(2\pi n k/N) — vanishes.
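Both claims are easy to check numerically. Below is a minimal Matlab sketch (my own illustration: the infinite sum is truncated to N terms and evaluated only at the integer grid frequencies k):

    % Sum N complex sinusoids exp(-j*2*pi*n*k/N) and split into I and Q arms
    N = 64;                          % number of terms kept from the infinite sum
    k = 0:N-1;                       % frequency grid covering one period [0, Fs)
    I = zeros(1,N); Q = zeros(1,N);
    for n = 0:N-1
        I = I + cos(2*pi*n*k/N);     % I arm of the sum
        Q = Q - sin(2*pi*n*k/N);     % Q arm of the sum
    end
    % I is ~N at k = 0 and ~0 elsewhere; Q is ~0 everywhere
    stem(k, I); hold on; stem(k, Q); legend('I arm','Q arm');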

The scaling factor here can be found again by taking the area under the first rectangle. Thus,

Area under the rectangle = N \cdot \nu = N \cdot \frac{F_s}{N} = F_s,

the same as found in Figure 3, except the factor of 2\pi since we are dealing in F now. Since it is a periodic signal, it consists of impulses repeating at integer multiples of F_s, just what we saw in Figure 3. The complete sequence is now drawn in Figure 7.

Approaching an impulse train

Figure 7: Displaying the I branch from Figure 6 in the range (-3F_s,+3F_s) for large N

It was then that I realized my problem. All the graphs and curves I have drawn, and DSP textbooks draw for that matter, use the wrong scaling on the axes! Although the time and frequency domain impulse sequences look quite similar to each other, they are not. Imagine a very large number, say L. Now if I draw the time and frequency domain representations of an impulse train within the range -L to +L, the signals look as illustrated in Figure 8 for a sample rate of 10 MHz. As is evident, all the confusion about the y-axis was due to incorrect scaling of the x-axis!

Impulse train in time domain drawn again
Impulse train in frequency domain drawn again

Figure 8: For a 10 MHz sample rate, there are (10 million)^2 impulses in time domain for just 1 in frequency domain

Here, it is easier to recognize that for an ADC operating at 10 MHz, there are (10 million)^2 impulses on the time axis, for just one such impulse on the frequency axis. Let us express the equivalence of energy in time and frequency domains (known as Parseval’s relation) in a different way.

    \begin{equation*} \lim_{L \to \infty} \int \limits _{-L}^{+L} |x(t)|^2 dt = \lim_{L \to \infty} \int \limits _{-L}^{+L} |X(F)|^2 dF \end{equation*}

Clearly, the scaling by F_s in the frequency domain seems perfectly logical. A signal narrow in time domain is wide in frequency domain, finally! I must say that this is the same figure as Figure 3, but until I went this route, I did not figure (pun intended) this out.
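In discrete time, the analogous relation can be verified in a few lines of Matlab (a minimal sketch; the 1/N factor matches the DFT convention used later in this post):

    % Parseval check: time and frequency domain energies agree
    x  = randn(1, 256);
    Et = sum(abs(x).^2);               % energy in time domain
    Ef = sum(abs(fft(x)).^2) / 256;    % energy in frequency domain (1/N factor)
    disp(Et - Ef);                     % ~0 up to numerical precision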

This reminded me of drawing the structure of an atom along with the solar system to indicate some similarity, without mentioning their respective scales, as in Figure 9.

An atom and the solar system

Figure 9: An atom and the solar system

This also reinforced my preference for using the frequency variable F instead of \Omega. While \Omega has its advantages in places, the variable F connects us with what is out there: the reality. The commonly used ISM band is at 2.4 GHz, not 4.8\pi G-radians/second.

Going into Discrete Domain


Let us turn our thoughts to the discrete domain now. There are two points to consider in this context.

  1. The main difference between a continuous-time impulse and a discrete-time impulse is that a discrete-time impulse actually has an amplitude of unity; there is no need to define the concept related to the area under the curve.
  2. There is a factor of N that gets injected into expressions due to the way we define the Fourier Transforms. It is the equivalent of the factor 2\pi for the continuous frequency variable \omega in the Discrete-Time Fourier Transform (DTFT).

    \begin{equation*} x[n] = \frac{1}{2\pi} \int \limits _{\omega=-\pi} ^{+\pi} X(e^{j\omega})\cdot e^{j\omega n} d \omega, \qquad x[n] = \frac{1}{N} \sum \limits_{k=-N/2}^{+N/2-1} X[k]\cdot e^{j2\pi \frac{k}{N}n} \end{equation*}

A discrete impulse train p[n] is a sequence of unit impulses repeating with a period M within our observation window (and owing to DFT input periodicity, outside the observation window as well). Figure 10 illustrates it in time domain for N=15 and M=3.

A sampling sequence with period 3 in time domain

Figure 10: Discrete impulse train in time domain for N=15 and M=3

To compute its N-point DFT P[k],

    \begin{align*} P[k] &= \sum \limits _{n=0} ^{N-1} p[n] e^{-j2\pi\frac{k}{N}n} = \sum \limits _{m=0} ^{N/M-1} 1 \cdot e^{-j2\pi\frac{k}{N/M}m} \\ &= \begin{cases} \frac{N}{M} & k = \text{integer} \cdot \frac{N}{M} \\ 0 & \text{elsewhere} \end{cases} \end{align*}

A few things to note in this derivation are as follows.

  • I write the DFT emphasizing the expression \frac{k}{N} because it corresponds to the continuous-time frequency ratio \frac{F}{F_s}, explicitly exposing n as the time variable similar to its counterpart t (think of 2\pi \frac{F}{F_s}n as the counterpart of 2\pi F t).
  • In going from the first equation to the second, I use the fact that p[n]=0 except where n=mM.
  • The third equation can be derived, again, in many ways, such as the approach used in the continuous-time case before.
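These results are easy to confirm numerically. A minimal Matlab sketch for N=15 and M=3:

    % N-point DFT of a discrete impulse train with period M
    N = 15; M = 3;
    p = zeros(1, N);
    p(1:M:end) = 1;               % unit impulses at n = 0, 3, 6, 9, 12
    P = fft(p);
    disp(round(abs(P)));          % N/M = 5 at k = 0, 5, 10 and zero elsewhere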

The final DFT output is shown in Figure 11 for N=15 and M=3.
 

DFT of a sampling sequence in frequency domain with period 3 in time domain

Figure 11: Discrete impulse train in frequency domain for N=15 and M=3

Note how an impulse train in time domain transforms to an impulse train in frequency domain. So for N=15 and a discrete-time impulse train with a period of 3, the DFT is also an impulse train with a period of N/3 =5. Moreover, the impulses in frequency domain have an amplitude of N/3=5 as well. Observe that a shorter period M in time domain results in a longer period N/M in frequency domain, and the amplitude of those frequency domain impulses is N/M: compare with 2\pi/T_s in Figure 3.

This is the same result as in the continuous frequency domain, with the impulse train having a period of F_s as well as a scaling factor of F_s. However, it is much easier to recognize in the discrete domain due to the finite DFT size N. That is why I talked about limiting the continuous frequency range within \pm L for a large L and wrote Parseval’s relation as above.

Sampling in Discrete Domain: Downsampling


With the above findings, we want to see what happens when a discrete-time signal is sampled with a discrete-time impulse train. In DSP terminology, this process is known as downsampling. Although it is common to discuss downsampling as a separate topic from sampling a continuous-time signal, both processes are intimately related and offer valuable insights into one another.

A signal x[n] can be downsampled by a factor of D by retaining every Dth sample and discarding the remaining samples. Such a process can be imagined as multiplying x[n] with an impulse train p[n] of period D and then throwing away the intermediate zero samples (in practice, no such multiplication occurs as the zero-valued samples are simply discarded). As a result, there is a convolution between X[k] and P[k], and D spectral copies of X[k] arise: one at frequency bin k=0 and the others at \pm \text{integer} \cdot N/D, each scaled down by a factor of D (the impulses of P[k] have amplitude N/D, and the DFT multiplication property contributes another factor of 1/N). As D increases, the spectral replicas come closer and their amplitudes decrease proportionally. This is illustrated in Figure 12 for downsampling a signal by D=3.

Downsampling in time and frequency domains

Figure 12: Downsampling a signal x[n] by D=3 in time and frequency domains

This is the same concept as sampling a continuous-time signal. Just as reducing the sampling rate of a continuous-time signal brings the spectral copies closer and decreases their amplitudes, throwing away more samples of a discrete-time signal, i.e., reducing its sampling rate, does the same. In the former, the amplitudes scale in proportion to 1/T_s; in the latter, in proportion to 1/D.
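To see the 1/D amplitude scaling in isolation, consider a single tone that stays alias-free after downsampling (a minimal Matlab sketch; the tone location and signal length are arbitrary choices):

    % Downsample a tone by D and compare DFT peak amplitudes
    N = 240; D = 3; n = 0:N-1;
    x  = cos(2*pi*3*n/N);             % tone at bin k = 3, DFT peak N/2 = 120
    xd = x(1:D:end);                  % keep every Dth sample
    peak_full = max(abs(fft(x)))      % 120
    peak_down = max(abs(fft(xd)))     % (N/D)/2 = 40, i.e., scaled by 1/D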

Put this way, 1/D amounts to choosing 1 out of every D samples. And 1/T_s amounts to choosing 1 out of every T_s samples, if there ever were any. In this context, 1/T_s is the probability density function (pdf) of a uniform random variable in the range [0,T_s).

An Alternative View


Carrying on, now we can look into an alternative view on sampling. First, consider the scaling property of the Fourier Transform.

    \begin{equation*} x(a\cdot t) \quad \xrightarrow{{\mathcal{F}}} \quad \frac{1}{|a|} X\left(\frac{F}{a} \right) \end{equation*}

Second, instead of impulse sampling, we can just see it as a scaling of the time axis. Taking the constant a as T_s,
   

    \begin{equation*}        x(T_s \cdot t) = x(tT_s) \quad \xrightarrow{{\mathcal{F}}} \quad \frac{1}{T_s} X\left(F = \frac{f}{T_s} \right)    \end{equation*}

Here, x(tT_s) is the counterpart of x(nT_s) from impulse sampling, with the difference being a real number t instead of an integer n. Also, instead of nano or micro seconds, T_s has no units here; we just deal with it in the limiting case where the interval [0,T_s) is divided into infinitesimally small segments and T_s is just a positive number. In this way, this case becomes similar to downsampling, where D is always greater than 1.

The spectrum x-axis also gets scaled by T_s leading to the fundamental relationship between continuous-time frequency F and discrete-time frequency f.

    \begin{equation*} f = F\cdot T_s \qquad \rightarrow \qquad \omega = \Omega \cdot T_s \end{equation*}

Within this framework, it is only after discarding the intermediate “samples” that the spectral replicas emerge. This is a result of defining the frequency axis in relation to periodic signals. There might be an alternative definition of frequency that would make processing the samples easier.



A Unit Impulse in Continuous-Time

Width of a sinc signal increases and approaches a constant, just like an eagle spreading its wings

This post treats signals in continuous time, which differs from the approach I adopted in my book. The book deals exclusively in discrete time.

A unit impulse is defined as

    \begin{equation*}     \delta (t) = \displaystyle{\lim_{\Delta \to 0}} \begin{cases}             \frac{1}{\Delta}   &  -\frac{\Delta}{2} < t < +\frac{\Delta}{2} \\             0  & \textmd{elsewhere}             \end{cases} \end{equation*}

The result is an impulse with zero width and infinite height, but a consequence of defining it in this way is that the area under the curve is unity.

    \begin{equation*}     \textmd{Area under a rectangle} = \Delta \cdot \frac{1}{\Delta} = 1 \end{equation*}

This is shown in Figure below.

A rectangle with unit area and how it transforms in a continuous-time unit impulse

Stated in another way,

    \begin{equation*}         \int \limits_{-\infty} ^{+\infty} \delta (t) dt = 1     \end{equation*}

A consequence of this property is that theoretically a particular value of a signal x(t) can be extracted in the following manner.

(1)   \begin{equation*}         \int \limits_{-\infty} ^{+\infty} x(t) \delta (t) dt = x(0), \qquad \int \limits_{-\infty} ^{+\infty} x(t) \delta (t-t_0) dt = x(t_0)     \end{equation*}

The Fourier Transform of a unit impulse can be derived through the help of Eq (1).

    \begin{equation*}         F\left\{\delta (t) \right\} = \int \limits_{-\infty} ^{+\infty} \delta (t) e^{-j 2\pi f t} dt = e^{-j 2\pi f\ \cdot \ 0} = 1     \end{equation*}

which is a constant for all frequencies from -\infty to +\infty. This is shown in Figure below.

Fourier transform of a unit impulse is a constant

Here, there is a chance of missing the underlying concept of a unit impulse, a rectangle with width approaching zero. An alternative and beautiful approach in this context is to transform a rectangular signal and take the limit \Delta \rightarrow 0. Since the Fourier Transform of a rectangle is a sinc function (\sin x / x),

    \begin{equation*}         F\left\{\delta (t) \right\} = \displaystyle{\lim_{\Delta \to 0}} \frac{\sin \pi f \Delta}{\pi f \Delta} \approx \frac{\pi f \Delta}{\pi f \Delta} = 1     \end{equation*}

where we used the approximation \sin \theta \approx \theta for small \theta. With \Delta becoming small and approaching zero, the mainlobe of the sinc becomes wider and approaches \infty. Imagine an eagle spreading its wings. This is illustrated in Figure below and perfectly demonstrates the concept that a signal narrow in time domain has a wide frequency domain representation, and vice versa.
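This limiting behavior can also be visualized numerically by transforming progressively narrower unit-area rectangles (a sketch; the time grid and the three widths are arbitrary choices):

    % Fourier Transform of a unit-area rectangle via a Riemann sum (fft*dt)
    Fs = 1000; t = -1 : 1/Fs : 1 - 1/Fs;      % time grid with spacing 1/Fs
    for Delta = [0.5 0.1 0.02]                % shrinking rectangle widths
        x = (abs(t) < Delta/2) / Delta;       % height 1/Delta, unit area
        X = fftshift(fft(x)) / Fs;            % approximate Fourier Transform
        plot(abs(X)); hold on;                % mainlobe widens as Delta -> 0
    end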

The Fundamental Problem of Synchronization

Phase jumps at every zero crossing from modulating data onto the carrier phase for a QPSK waveform

We have seen in the effect of phase rotation that the matched filter outputs do not map back perfectly onto the expected constellation, even in the absence of noise and any other distortion. Unless this rotation is small enough, it causes the symbol-spaced optimal samples to cross the decision boundary and fall in the wrong decision zone. And even for small rotations, relatively little noise can cause decision errors in this case, i.e., the noise margin is reduced. In fact, for higher-order modulations, the effect of rotation is even worse because the constellation points are closely spaced for the same Tx power. Similarly, sampling the incoming Rx signal anywhere other than the zero-ISI instants causes inter-symbol interference (ISI) among neighbouring symbols. A synchronization unit is vital to ensure the proper functionality of a wireless communication system. We explain the fundamental problem of synchronization with the help of the phase rotation impairment.

The task of a phase synchronization unit in a Rx is to estimate this phase offset and de-rotate the matched filter outputs by this estimate, either in a feedforward or a feedback manner (since the rotation of a complex number is defined in the anticlockwise direction, de-rotation implies rotating the input clockwise). In this way, the carrier at the Rx can be said to be synchronized with the carrier used at the Tx. We now analyze the Rx phase as follows.

Breakdown of the Rx phase


The modulated signal arriving at the Rx consists of two different phase components:

  1. Unknown phase shifts arising due to modulating data occurring at symbol rate R_M. For example, in BPSK modulation, phase changes by 0^\circ or 180^\circ at the boundary of each symbol interval T_M. For QPSK, there are four different phase shift possibilities, i.e., 45^\circ, 135^\circ, -135^\circ, or -45^\circ. A similar argument holds for Quadrature Amplitude Modulated (QAM) signals as well.
  2. Unknown phase difference between Tx and Rx local oscillator, \theta_\Delta.

Imagine a QPSK signal input directly into a black box designed to estimate and compensate for the phase and frequency of a sinusoid (a PLL for example). Depending on its design parameters, the mechanism implemented in the black box will try to lock onto the incoming phase within a specific time duration. However, that phase is jumping around by 0^\circ, \pm 90^\circ or 180^\circ at every symbol boundary due to modulating data, never allowing our mechanism to converge. This is the fundamental problem every synchronization subsystem needs to address and is illustrated in Figure below.

Phase jumps at every zero crossing from modulating data onto the carrier phase for a QPSK waveform

We are faced with two options in this situation. Either a synchronization signal is transmitted in parallel to the data signal, which costs increased power and/or bandwidth, or some procedure needs to be invoked to remove the modulation-induced phase shifts in the data signal, as a result of which

[for a phase offset] the output should become a constant complex number whose phase is our unknown parameter, and
[for a frequency offset] the output should become a simple complex sinusoid whose frequency is our unknown parameter.

Then, either a feedforward (one-shot) estimator or a feedback mechanism (a PLL) can detect this unknown phase. We can either utilize this observation directly to come up with some intuitive phase estimators, or apply our master algorithm, correlation, to this problem and see where it leads. Interestingly, it turns out that the derivation of phase estimators through the maximum correlation process actually leads to the same intuitively satisfying results. Maximum correlation is a result of a procedure termed maximum likelihood estimation. Almost all ad hoc synchronization algorithms can be derived through different approximations of maximum likelihood estimation. For Gaussian noise under some constraints, maximum likelihood and maximum correlation are one and the same thing.

Example


Suppose that the training symbols a_I[m] and a_Q[m] are known (a data-aided system). Then their angle can be subtracted from the angle of the matched filter output at each symbol time as

(1)   \begin{equation*}         \hat\theta_\Delta[m] = \measuredangle~ \frac{z_Q(mT_M)}{z_I(mT_M)} - \measuredangle ~\frac{a_Q[m]}{a_I[m]}     \end{equation*}

The intuition behind this approach is simple: the phase difference between the received and expected symbols is the remaining phase offset.
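A minimal noiseless Matlab sketch of this estimator follows; the QPSK symbol values and the final averaging over the training block are my own illustrative additions, not part of Eq (1):

    % Data-aided phase estimation: subtract known symbol angles, Eq (1)
    theta = 0.3;                                    % unknown phase offset (rad)
    M = 100;
    aI = sign(randn(1,M)); aQ = sign(randn(1,M));   % known QPSK training symbols
    z   = (aI + 1j*aQ) * exp(1j*theta);             % rotated matched filter outputs
    est = angle(z) - angle(aI + 1j*aQ);             % per-symbol angle difference
    est = angle(exp(1j*est));                       % wrap to (-pi, pi]
    theta_hat = mean(est)                           % ~0.3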

Phase difference between known symbols and downsampled matched filter outputs can be utilized to estimate the phase offset

Quadrature Amplitude Modulation (QAM)

A general QAM detector with respective waveforms at each block

We discussed earlier that Pulse Amplitude Modulation (PAM) transmits information through amplitude scaling of the pulse p(nT_S) according to the symbol value. To understand QAM, we have to differentiate between baseband and passband signals.

A baseband signal has a spectral magnitude that is nonzero only for frequencies around the origin (F=0) and negligible elsewhere. An example spectral plot for a PAM waveform is shown below for 500 2-PAM symbols shaped by a Square-Root Raised Cosine pulse with excess bandwidth \alpha = 0.5 and samples/symbol L = 8. The PAM signal has its spectral contents around zero and hence is a baseband signal.

Spectrum of a binary PAM modulated signal shaped by a Square-Root Raised Cosine pulse

For wireless transmission, this information must be transmitted through space as an electromagnetic wave. Also observe that once this spectrum is occupied, no other entity in the surrounding region can share the wireless channel for the purpose of communications. For these reasons (and a few others as well), a wireless system is allocated a certain bandwidth around a higher carrier frequency and the generated baseband signal is shifted to that specific portion of the spectrum. Such signals are called passband signals. That is why wireless signals in everyday use, such as FM radio, WiFi, cellular systems like 4G and 5G, and Bluetooth, are all passband signals which execute their transmissions within their respective frequency bands while sharing the same physical medium.

The easiest method to shift the spectrum to a designated carrier frequency is by multiplying, or mixing, it with a sinusoidal waveform. This is because time domain multiplication is frequency domain convolution between the spectra of the sinusoid and the desired signal. Now in the article on DFT examples, we saw that the spectrum of a pure sinusoid at any frequency is composed of two impulses (one at the positive and one at the negative of that frequency, each with half the amplitude). Also, convolution of a signal with a unit impulse results in the same signal shifted to the location of that impulse. Hence, a PAM waveform can be multiplied with a carrier sinusoid at frequency F_C to “park” the actual spectrum in its allocated slot. This is drawn in Figure below.

Upconversion implies multiplying the signal in time domain with a higher frequency sinusoid

After this spectral upconversion, both positive and negative portions of the baseband spectrum appear at +F_C and -F_C. The bandwidth, the positive portion of the spectrum, hence doubles. However, relief comes from the fact that both \cos(\cdot) and \sin (\cdot) can be used to carry independent waveforms due to their orthogonality (having phases 90 ^\circ apart) to each other over a complete period, i.e.,

    \begin{equation*}                     \sum \limits _{n=0} ^{N-1} \cos 2\pi \frac{k}{N}n \cdot \sin 2\pi \frac{k}{N}n = 0             \end{equation*}
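This orthogonality is easy to verify numerically (a quick sketch with arbitrary N and k):

    % Cosine and sine at the same frequency are orthogonal over a full period
    N = 64; k = 5; n = 0:N-1;
    ip = sum(cos(2*pi*k/N*n) .* sin(2*pi*k/N*n))    % ~0 up to numerical precision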

That is the birth of Quadrature Amplitude Modulation (QAM), in which two independent PAM waveforms are communicated by mixing one with a cosine and the other with a sine. Just like constellation, the term quadrature also comes from astronomy, where it describes the position of two celestial objects 90 ^\circ apart.

For a mathematical expression of QAM, first consider two independent PAM waveforms with symbol streams a_I and a_Q, respectively.

(1)   \begin{equation*}           \begin{aligned}             v_I(nT_S)\: &= \sum _{m} a_I[m] p(nT_S-mT_M) \\             v_Q(nT_S) &= \sum _{m} a_Q[m] p(nT_S-mT_M)           \end{aligned}         \end{equation*}

Next, we multiply the resulting complex signal with a complex sinusoid at the carrier frequency and collect its real part, i.e., multiply the v_I arm with \sqrt{2}\cos 2\pi (k_C/N) n and the v_Q arm with -\sqrt{2}\sin 2\pi (k_C/N) n. The reason for including these two factors, \sqrt{2} and the negative sign, will be discussed later during QAM detection. Consequently, a general QAM waveform can be written as

(2)   \begin{align*}         s(nT_S) &= v_I(nT_S) \sqrt{2} \cos 2\pi \frac{k_C}{N} n - v_Q(nT_S) \sqrt{2}\sin 2\pi \frac{k_C}{N} n  \\                 &= v_I(nT_S) \sqrt{2}\cos 2\pi \frac{F_C}{F_S}n - v_Q(nT_S) \sqrt{2}\sin 2\pi \frac{F_C}{F_S}n \nonumber                 \end{align*}

(3)   \begin{align*}                 &= \sum _{m} a_I[m] p(nT_S-mT_M) \sqrt{2}\cos 2\pi F_C nT_S - \nonumber \\                 &~ \hspace{1in} \sum _{m} a_Q[m] p(nT_S-mT_M) \sqrt{2}\sin 2\pi F_C nT_S \nonumber     \end{align*}

where we have used the fundamental relation between continuous and discrete frequencies k/N = F/F_S, and k_C corresponds to the carrier frequency F_C. Observe that s(nT_S) is a real signal with no Q component. After digital to analog conversion (DAC), the continuous-time signal s(t) can be expressed as

(4)   \begin{align*}         s(t) &= v_I(t) \sqrt{2} \cos 2\pi F_C t - v_Q(t) \sqrt{2}\sin 2\pi F_C t  \\                 &= \sum _{m} a_I[m] p(t-mT_M)\sqrt{2} \cos 2\pi F_C t ~- \nonumber \\                 &~ \hspace{1in} \sum _{m} a_Q[m] p(t-mT_M) \sqrt{2} \sin 2\pi F_C t \nonumber     \end{align*}

In Eq (4), symbols a_I[m] determine the I signal while a_Q[m] control the Q signal. Such representation of the QAM waveform as a sum of amplitude scaled and pulse shaped sinusoids is known as the rectangular form. Using trigonometric identity \cos (A+B) = \cos A \cos B - \sin A \sin B, Eq (4) can also be written as

    \begin{equation*}         s(t) = \sum _{m} \sqrt{a_I^2[m] + a_Q^2[m]} \cdot p(t-mT_M) \cdot \sqrt{2} \cos \left(2\pi F_C t+ \tan^{-1} \frac{a_Q[m]}{a_I[m]}\right)     \end{equation*}

Called the polar form, here we have, within each symbol interval, a single sinusoid whose amplitude and phase are determined by the symbols a_I[m] and a_Q[m].

Constellation Diagram


Some examples of QAM constellations are discussed below.

  • When M is an even power of 2, square QAM is a constellation whose points are spaced on a grid in the form of a square. It is formed by the product of two \sqrt{M}-PAM constellations, one on the I axis and the other on the Q axis. For example, a 16-QAM constellation is formed by two \sqrt{16}=4-PAM constellations as drawn in Figure below, while some other square QAM constellations are shown in the next Figure.

    A 16-QAM constellation is formed by two 4-PAM constellations

    QAM constellation diagrams for M  = 4, 16 and 64

    Notice how constellation points in higher-order QAM are closer to each other compared to lower-order QAM. A relatively lower noise power is then enough to cause a decision error by moving the received symbol over the decision boundary. This is the cost of increasing data rate by packing more bits in the same symbol. We will have more to say about it when we discuss Bit Error Rates (BER) for each constellation in another article.

    For M=4, the average symbol energy in square QAM is derived using Pythagoras' theorem as

        \begin{equation*}             E_{M-\text{QAM}} = \frac{1}{4}\left\{ 4 \left( A^2 + A^2\right) \right\} = 2A^2         \end{equation*}

    For general M, a similar derivation as in the case of PAM yields (a numerical check is sketched after this list)

    (5)   \begin{equation*}             E_{M-\text{QAM}} = \frac{2}{3} \left(M-1\right)A^2         \end{equation*}

  • A special case of M-QAM is M-PSK, which stands for Phase Shift Keying. As the name PSK implies, the amplitude remains constant in this configuration while the information is conveyed by different phases. Examples of 2-PSK, 4-PSK and 8-PSK are drawn in Figure below.

    PSK constellation diagrams for M = 2, 4 and 8

    Notice that

    • 2-PSK constellation (also known as BPSK) looks similar to 2-PAM. However, the difference is that there is no carrier upconversion in PAM systems.
    • 4-PSK constellation (also known as QPSK) is exactly the same as square 4-QAM.
    • For M>4, square QAM packs the constellation points more efficiently than PSK and is hence the modulation of choice in many wireless standards. Points come relatively closer in PSK, and the closer the points, the larger the probability of receiving a symbol in error due to a jump across the decision boundary. On the other hand, QAM heavily depends on overall system linearity since information is conveyed in the signal amplitude. A constant envelope of the modulated signal is much more suitable for transmission over nonlinear channels, where PSK is the preferred choice. In such situations, the performance of PSK is quite insensitive to nonlinear distortion and heavy filtering.
  • Since any constellation that uses both amplitude and phase modulation falls under the general category of QAM, there can be many other arrangements of QAM constellation points. However, we focus on square QAM and PSK in this text, which are widely used in practice. Non-regular constellations can be formed that yield a small performance advantage, though it is usually not very significant. Moreover, the more points in the constellation, the smaller the difference.
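As promised above, here is a quick numerical check of Eq (5) for 16-QAM with A=1 (a minimal Matlab sketch):

    % Average symbol energy of square 16-QAM with A = 1
    A = 1;
    pam = [-3 -1 1 3] * A;                % 4-PAM levels on each axis
    [I, Q] = meshgrid(pam, pam);          % all 16 constellation points
    E_avg = mean(I(:).^2 + Q(:).^2)       % 10, matching (2/3)*(16-1)*A^2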

QAM Modulator


To build a conceptual QAM modulator, we follow similar steps as in a PAM modulator. The block diagram for a 4-QAM modulator is drawn in Figure below.

A general QAM modulator with respective waveforms at each block

  • Every T_b seconds, a new bit arrives at the input forming a serial bit stream.
  • A serial-to-parallel (S/P) converter collects \log_2 M such bits every T_M = \log_2 M \times T_b seconds that are used as an address to access two Look-Up Tables (LUT). One LUT stores the \sqrt{M} symbol values a_I while the other stores the \sqrt{M} symbol values a_Q specified by the constellation.
  • To produce a QAM waveform, the symbol sequences a_I[m] and a_Q[m] are converted to discrete-time impulse trains in separate arms (one I and the other Q) through upsampling by L, where L is the number of samples/symbol defined as the ratio of symbol time to sample time, T_M/T_S, or equivalently sample rate to symbol rate, F_S/R_M.
  • As explained in sample rate conversion, upsampling inserts L-1 zeros between the symbols, after which the interpolated intermediate samples can be raised from the dead with the help of a pulse shaping filter in each arm that, in addition to shaping the spectrum, suppresses all the spectral replicas arising from upsampling except the primary one. These are the two I and Q PAM waveforms forming the signals v_I and v_Q.
  • Next, the I PAM waveform v_I is upconverted by mixing with the carrier \cos 2\pi F_C nT_S while the Q PAM waveform v_Q is mixed with -\sin 2\pi F_C nT_S, and the two are summed to form the QAM signal s(nT_S). The carriers are generated through an oscillator at the Tx.
  • This discrete-time signal s(nT_S) is converted to a continuous-time signal s(t) by a DAC.

The mathematical derivation for the QAM modulator was shown in Eq (2) and Eq (4).
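For readers who prefer code, a bare-bones 4-QAM modulator along these lines is sketched below. It is only an illustration: the parameter values are arbitrary, and it assumes the Signal Processing Toolbox functions rcosdesign() and upsample() are available.

    % A minimal 4-QAM modulator following the steps above
    L  = 8;                                 % samples/symbol
    Fc = 0.2;                               % carrier as a fraction of Fs
    bits = randi([0 1], 1, 200);            % 2 bits/symbol -> 100 4-QAM symbols
    aI = 2*bits(1:2:end) - 1;               % odd bits -> I symbols (+/-1)
    aQ = 2*bits(2:2:end) - 1;               % even bits -> Q symbols (+/-1)
    p  = rcosdesign(0.5, 10, L, 'sqrt');    % square-root raised cosine pulse
    vI = conv(upsample(aI, L), p);          % I PAM waveform
    vQ = conv(upsample(aQ, L), p);          % Q PAM waveform
    n  = 0:numel(vI)-1;
    s  = sqrt(2)*vI.*cos(2*pi*Fc*n) - sqrt(2)*vQ.*sin(2*pi*Fc*n);  % Eq (2)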



QAM Detector


The received signal r(t) is the same as the transmitted signal s(t) but with the addition of additive white Gaussian noise (AWGN) w(t). The symbols are detected through the following steps illustrated in Figure below.

A general QAM detector with respective waveforms at each block

  • Out of the infinite spectrum, the desired signal r(t) is selected with the help of a Bandpass Filter (BPF).
  • Through an analog to digital converter (ADC), this signal is sampled at a rate of F_S samples/s to produce a sequence of T_S-spaced samples r(nT_S).
  • Next, a complex signal x(nT_S) is produced when r(nT_S) is downconverted by mixing with two carriers, \cos 2\pi F_C nT_S and -\sin 2\pi F_C nT_S, which are generated through an oscillator at the Rx.
  • The resulting complex waveform x(nT_S) in the I and Q arms is processed through two matched filters, thus generating z_I(nT_S) and z_Q(nT_S). As discussed earlier, the outputs of the matched filters are continuous correlations of the symbol-scaled pulse shape with an unscaled and time-reversed pulse shape.
  • These I and Q outputs are downsampled by L at optimal sampling instants

        \begin{equation*}                 n = mL = m \frac{T_M}{T_S}             \end{equation*}

    to produce T_M-spaced numbers z_I(mT_M) and z_Q(mT_M) back from the signal.

  • The minimum distance decision rule is employed in IQ-plane to find the symbol estimates \hat a_I[m] and \hat a_Q[m] to decide on the final constellation point.

Let us discuss the mathematical details of this process for a noiseless received signal as in Eq (4),

(6)   \begin{equation*}         r(t) = s(t) = v_I(t) \sqrt{2} \cos 2\pi F_C t - v_Q(t) \sqrt{2}\sin 2\pi F_C t     \end{equation*}

where F_C is the carrier frequency and v_I(t) and v_Q(t) are defined in Eq (1). After bandlimiting the incoming signal through a bandpass filter, it is sampled by the ADC operating at F_S samples/second to produce

    \begin{align*}         r(nT_S) &= v_I(nT_S) \sqrt{2}\cos 2\pi F_C nT_S - v_Q(nT_S)\sqrt{2} \sin 2\pi F_C nT_S \\                 &= v_I(nT_S) \sqrt{2}\cos 2\pi \frac{k_C}{N}n - v_Q(nT_S) \sqrt{2}\sin 2\pi \frac{k_C}{N}n     \end{align*}

where the relation F/F_S = k/N is used. To produce a complex baseband signal from the received signal, the samples of this waveform are input to a mixer which multiplies them with discrete-time quadrature sinusoids \sqrt{2}\cos 2\pi (k_C/N)n and -\sqrt{2}\sin 2\pi (k_C/N)n yielding

    \begin{align*}         \begin{aligned}             x_I(nT_S) &= r(nT_S) \cdot \sqrt{2}\cos 2\pi \frac{k_C}{N}n \: \\                   &= 2v_I(nT_S) \cos^2 2\pi \frac{k_C}{N}n - 2v_Q(nT_S) \sin 2\pi \frac{k_C}{N}n \cos 2\pi \frac{k_C}{N}n         \end{aligned}     \end{align*}

    \begin{align*}         \begin{aligned}             x_Q(nT_S) &= r(nT_S) \cdot -\sqrt{2}\sin 2\pi \frac{k_C}{N} n \\                   &= 2v_Q(nT_S) \sin ^2 2\pi \frac{k_C}{N}n -2v_I(nT_S) \cos 2\pi \frac{k_C}{N}n \sin 2\pi \frac{k_C}{N}n         \end{aligned}     \end{align*}

Using the identities \cos^2 A = 1/2(1+\cos 2A), \sin^2 A = 1/2(1-\cos 2A) and 2 \sin A \cos A = \sin 2A,

    \begin{align*}       \begin{aligned}         x_I(nT_S) &= v_I(nT_S) + \underbrace{v_I(nT_S)\cos 2 \pi \frac{2k_C}{N}n - v_Q(nT_S) \sin 2\pi \frac{2k_C}{N}n}_{\text{Double frequency terms}} \\         x_Q(nT_S) &= v_Q(nT_S) - \underbrace{\left(v_Q(nT_S)\cos 2\pi \frac{2k_C}{N}n + v_I(nT_S) \sin 2\pi \frac{2k_C}{N}n\right)}_{\text{Double frequency terms}}       \end{aligned}     \end{align*}

Now we can observe why the two factors, a \sqrt{2} with both sinusoids and a negative sign with \sin 2\pi (k/N)n, were inserted.

  • A \sqrt{2} at the modulator and later another \sqrt{2} in the detector result in a gain of 2, which cancels the halving of sinusoid amplitudes in above trigonometric multiplications.
  • When a complex signal is multiplied with a complex sinusoid, a negative sign appears from the phase addition of I and Q in the I section of the cumulative waveform (see the multiplication rule for clarity). This I section of the cumulative waveform is the real part of the complex product, which is transmitted through the channel. Another negative sign with \sin 2\pi (k/N)n at the Rx delivers a positive v_Q(nT_S) at the output.

The matched filter output is written as

    \begin{align*}         \begin{aligned}             z_I(nT_S) &= x_I(nT_S) * p(-nT_S) \\                       &= \left(v_I(nT_S) + \text{Double freq terms}\right) * p(-nT_S) \\                       &= \Big( \sum \limits _i a_I[i] p(nT_S - iT_M) + \frac{2k_C}{N}~ \text{terms}\Big) * p(-nT_S) \\                       &= \sum \limits _i a_I[i] r_p(nT_S - iT_M)         \end{aligned}     \end{align*}

    \begin{align*}         \begin{aligned}             z_Q(nT_S) &= x_Q(nT_S) * p(-nT_S) \\                       &= \left(v_Q(nT_S) + \text{Double freq terms}\right) * p(-nT_S) \\                       &= \Big( \sum \limits _i a_Q[i] p(nT_S - iT_M) + \frac{2k_C}{N}~ \text{terms}\Big) * p(-nT_S) \\                       &= \sum \limits _i a_Q[i] r_p(nT_S- iT_M)         \end{aligned}     \end{align*}

where r_p(nT_S) comes into play from the definition of the auto-correlation function. The double frequency terms in the above equations are filtered out by the matched filter h(nT_S) = p(-nT_S), which also acts as a lowpass filter since its spectrum is limited to the range -0.5 R_M \le F \le +0.5R_M. To generate symbol decisions, T_M-spaced samples of the matched filter output are required at n = mL = mT_M/T_S. Downsampling the matched filter output generates

    \begin{align*}         \begin{aligned}             z_I(mT_M) &= z_I(nT_S) \bigg| _{n = mL = mT_M/T_S} \\                       &= \sum \limits _i a_I[i] r_p(mT_M - iT_M) = \sum \limits _i a_I[i] r_p\{(m-i)T_M\} \\                       &= \underbrace{a_I[m] r_p(0T_M)}_{\text{current symbol}} + \underbrace{\sum \nolimits _{i \neq m} a_I[i] r_p\{(m-i)T_M\}}_{\text{ISI}}         \end{aligned}     \end{align*}

and

    \begin{align*}         \begin{aligned}             z_Q(mT_M) &= z_Q(nT_S) \bigg| _{n = mL = mT_M/T_S} \\                       &= \sum \limits _i a_Q[i] r_p(mT_M - iT_M) = \sum \limits _i a_Q[i] r_p\{(m-i)T_M\} \\                       &= \underbrace{a_Q[m] r_p(0T_M)}_{\text{current symbol}} + \underbrace{\sum \nolimits _{i \neq m} a_Q[i] r_p\{(m-i)T_M\}}_{\text{ISI}}         \end{aligned}     \end{align*}

A square-root Nyquist pulse p(nT_S) has an auto-correlation r_p(nT_S) that satisfies no-ISI criterion,

    \begin{equation*}         r_p(mT_M) = \left\{ \begin{array}{l}         1, \quad m = 0 \\         0, \quad m \neq 0 \\         \end{array} \right.     \end{equation*}

Thus,

    \begin{align*}       \begin{aligned}         z_I(mT_M) &= a_I[m] \\         z_Q(mT_M) &= a_Q[m] \\       \end{aligned}     \end{align*}

In conclusion, the downsampled matched filter outputs map back to the Tx symbols in the absence of noise. If the world were simple, that would have been the end of it! But the world is complicated, and there are layers of issues that arise between the Tx information generation and the Rx decision making. Anything we do after this will be to combat a subset of signal distortions and to recover the original information.
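Continuing the modulator sketch from earlier, a matching noiseless detector can verify this conclusion (again only a sketch: it reuses p, s, Fc, L, n, aI and aQ from that snippet, and the delay bookkeeping assumes those exact filter lengths):

    % Downconvert, matched filter, downsample and decide (noiseless)
    xI = s .* ( sqrt(2)*cos(2*pi*Fc*n));           % I mixer output
    xQ = s .* (-sqrt(2)*sin(2*pi*Fc*n));           % Q mixer output
    zI = conv(xI, p); zQ = conv(xQ, p);            % matched (and lowpass) filters
    d  = numel(p);                                 % combined delay of the two SRRC filters
    mI = sign(zI(d : L : d + L*(numel(aI)-1)));    % downsample at symbol instants
    mQ = sign(zQ(d : L : d + L*(numel(aQ)-1)));
    isequal(mI, aI) && isequal(mQ, aQ)             % expected true: symbols recovered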

Observe that the system shown in Figure above is a multirate system. In the QAM detector, for example, the ADC and the matched filters operate at the sample rate F_S. After the outputs of the matched filters are downsampled by L, the symbol decisions are made at the symbol rate R_M. Furthermore, there are some hidden assumptions in the QAM detector:

[Resampling] The ADC in general does not produce an integer number of samples per symbol, i.e., T_M/T_S is not an integer. As we will see later, a resampling system is required in the Rx chain that changes the sample rate from the ADC rate to a rate that is an integer multiple of the symbol rate.
[Symbol Timing Synchronization] The peak samples at the end of symbol durations in both I and Q arms are not known in advance at the Rx and in fact do not necessarily coincide with generated samples as well. This is because ADC just samples the incoming continuous waveform without any information about the symbol boundaries. This is a symbol timing synchronization problem which we will discuss later.
[Carrier Frequency Synchronization] The carrier frequencies of the oscillators at the Tx and the Rx are not exactly the same. If the oscillator frequency at the Tx is denoted as F_C, then at the Rx we have F_C+\Delta F_C, where \Delta F_C is the difference between the two and can be either positive or negative. Any movement by the Tx, the Rx or within the environment between them also causes a shift in frequency (known as Doppler shift) that needs to be compensated. We will discuss carrier frequency synchronization in another article.
[Carrier Phase Synchronization] Furthermore, the carrier phase at the Rx oscillators is not known beforehand and needs to be estimated which will be explained in another article.

QAM Eye Diagram and Scatter Plot


As discussed above, a QAM signal consists of two PAM signals riding on orthogonal carriers. At baseband, these two PAM signals appear as I and Q components of a complex signal at the Tx and Rx. Therefore, there are two eye diagrams for a QAM modulated signal, one for I and the other for Q and both of them are exactly the same as PAM.

The case of the scatter plot is a little different. After downsampling the matched filter output to the symbol rate, the samples thus obtained are mapped back to the constellation, previously illustrated in the QAM detector block diagram and now drawn in Figure below for 4-QAM modulation. The cloud of samples around the constellation points is now 2-dimensional (for PAM, there was no Q component) and can also be understood as a plot of Q versus I for each mapped value.

A scatter plot for 4-QAM modulation

Looking at the scatter plot, one can readily deduce a lot of features for the particular transmission system. At this stage, however, it is enough to observe that the diameter of these clouds is a rough measure of the noise power corrupting the signal. For the ideal case of no noise, this diameter is zero and all the optimally timed samples coincide with their respective constellation points.

Finite Impulse Response (FIR) Filters

Time and frequency response of a lowpass FIR filter designed with Parks-McClellan algorithm for N=33

We learned in the concept of frequency that most signals of practical interest can be considered as a sum of complex sinusoids oscillating at different frequencies. The amplitudes and phases of these sinusoids shape the frequency contents of that signal and are drawn through magnitude response and phase response, respectively. In DSP, a regular goal is to modify these frequency contents of an input signal to obtain a desired representation at the output. This operation is called filtering and it is the most fundamental function in the whole field of DSP.

From this Eq, one can observe that it is possible to design a system, or filter, such that some desired frequency contents of the signal can be passed through by specifying H(k) \approx 1 for those values of k, while other frequency contents can be suppressed by specifying H(k) \approx 0. The terms lowpass, highpass and bandpass then make intuitive sense that low, high and band refer to desired spectral contents that need to be passed while blocking the rest.

Filter Frequency Response


Figure below shows the magnitude response |H(F)| (as a function of continuous frequency) of an ideal lowpass filter. A lowpass filter passes frequencies near 0 while blocking the remaining frequencies.

An ideal lowpass filter and aliases

As explained in the discussion about sampling, in a continuous frequency world, the middle filter is all that exists. However, in a sampled world, the frequency response of the filter — just like a sampled signal — repeats at intervals of F_S (sampling frequency) and there are infinite spectral replicas on both sides shown with dashed lines. The primary zone of the sampled frequency response is from -0.5F_S to +0.5F_S shown within dotted red lines with its spectral contents in solid line. From this Eq on sampling, we know that filter response near 0 is the same as that near F_S and the filter response around +0.5F_S is the same as around -0.5F_S.

On the other hand, a highpass filter passes both positive and negative frequencies near 0.5F_S while blocking everything else. Observe the symmetry around \pm 0.5F_S in Figure below. Similarly, bandpass filters only pass a specific portion of the spectrum.

An ideal highpass filter and aliases

Filter Impulse Response


Since a filter is a system that modifies the spectral contents of a signal, it naturally has an impulse response h[n] just like a system. When this impulse response is finite, the filter is called Finite Impulse Response (FIR). Impulse response h[n] of an FIR filter, i.e., the sequence of its coefficients, can be found by taking the iDFT of the frequency response H[k].

FIR Filter Design


Understanding FIR filter design starts with the following argument in the continuous frequency domain. Since ideal filters have a unity passband, a zero stopband and a direct transition between these two, this finite frequency domain support produces an impulse response h[n] in time domain which is infinite and hence non-realizable, as explained in the discussion about frequency. When this h[n] is truncated on the two sides, it is equivalent to multiplication of the ideal infinite h[n] by a rectangular window w_{\text{Rect}}[n] as

    \begin{equation*}         h_{\text{Truncated}}[n] = h[n] \cdot w_{\text{Rect}}[n]     \end{equation*}

As mentioned previously, windowing is a process of pruning an infinite sequence to a finite sequence. We know that multiplication in time domain is convolution in frequency domain, and that the spectrum of a rectangular window is a sinc signal. Thus, convolution between the spectrum of the ideal filter and this sinc signal generates the actual frequency response H(F) of the filter.

    \begin{equation*}         H(F) = \text{Ideal brickwall response} ~*~ \text{sinc}(F)     \end{equation*}

This sinc signal has decaying oscillations that extend from -\infty to \infty. Convolution with these decaying oscillations is the reason why a realizable FIR filter always exhibits ripple in the passband and in the stopband as well as a transition bandwidth between the two. Naturally, this transition bandwidth is equal to the width of the sinc mainlobe, which in turn is a function of the length of the rectangular window in time. Truncation with a rectangular window causes the FIR filters to have an impulse response h[n] of finite duration.

Sidelobes appearing in the stopband of the filter, transition bandwidth and ripple in the passband characterize the performance of a filter. These parameters are usually specified in terms of decibels (or dBs). The filter magnitude in dB is given by

    \begin{equation*}         |H(F)|_{dB} = 20 \log_{10} |H(F)|     \end{equation*}

It should be remembered that the multiplicative factor 20 above is replaced by 10 for power conversions (because power is a function of voltage or current squared). The advantage of converting magnitudes to dBs for plotting purpose is that very small and very large quantities can be displayed clearly on a single plot.

A filter is designed in accordance with the parameters in Table below and shown graphically in the next Figure.

Lowpass filter specifications table

A lowpass filter marked with specs frequencies and their amplitudes

FIR filter design basically requires finding the values of filter taps (or coefficients) that translate into a desired frequency response. Many software routines are available to accomplish this task. A standard method for FIR filter design is the Parks-McClellan algorithm. The Parks-McClellan algorithm is an iterative algorithm for finding the optimal FIR filter such that the maximum error between the desired frequency response and the actual frequency response is minimized. Filters designed this way exhibit an equiripple behavior in their frequency responses and are sometimes called equiripple filters (equiripple implies equal ripple within the passband and within the stopband, but not necessarily equal in both bands).

Example


We design a lowpass FIR filter through Parks-McClellan algorithm in Matlab that meets the following specifications.

    \begin{align*}             &\text{Sample rate}, \quad F_S = 12\: \text{kHz} \\             &\text{Passband edge} \quad F_{\text{pass}} = 2\: \text{kHz} \\             &\text{Stopband edge} \quad F_{\text{stop}} = 3\: \text{kHz} \\             &\text{Passband ripple} \quad \delta_{\text{pass,dB}} = 0.1\: \text{dB} \\             &\text{Stopband ripple} \quad \delta_{\text{stop,dB}} = -60\: \text{dB}           \end{align*}

First, noting from above Figure that the maximum amplitude in the passband of frequency domain is 1+\delta_{\text{pass}}, we can convert dB units into linear terms as

    \begin{align*}       20 \log_{10} \big(1+\delta_{\text{pass}}\big) &= \delta_{\text{pass,dB}} \\       1+\delta_{\text{pass}} &= 10^{\delta_{\text{pass,dB}}/20} \\         \delta_{\text{pass}} &= 0.012     \end{align*}

For the stopband,

    \begin{align*}       20 \log_{10} \delta_{\text{stop}} &= \delta_{\text{stop,dB}} \\       \delta_{\text{stop}} &= 10^{-60/20} = 0.001     \end{align*}

A Matlab function firpm() returns the coefficients of a length N+1 linear phase (which implies real and symmetric coefficients, see the concept of phase) FIR filter. For finding an approximate N, the Matlab command firpmord() — which stands for FIR Parks-McClellan order — can be used.

    \begin{equation*}         N = \text{firpmord} ([F_{\text{pass}}\quad F_{\text{stop}}],[1 \quad 0],[\delta_{\text{pass}} \quad \delta_{\text{stop}}],F_S)     \end{equation*}

where the vector [1\quad 0] specifies the desired amplitude pointing towards passband and stopband, respectively. For our example,

    \begin{equation*}         N = \text{firpmord} ([2000\quad 3000],[1 \quad 0],[0.012 \quad 0.001],12000) =29     \end{equation*}

A word of caution: confusion can arise because firpmord() specifies one desired amplitude per band while firpm() specifies amplitudes at the band edges. For example, firpmord() uses the desired amplitude vector [1 \quad 0] to denote the passband and the stopband. On the other hand, firpm() uses the amplitude vector [1\quad  1\quad 0 \quad 0] to denote the band edges.

For a trial value of N=29,

    \begin{align*}       h &= \text{firpm}(N-1,[0 \quad F_{\text{pass}} \quad F_{\text{stop}} \quad 0.5F_S]/(0.5F_S),[1\quad 1\quad 0\quad 0]) \\         &= \text{firpm}(28,[0 \quad 2000 \quad 3000 \quad 6000]/6000,[1\quad 1\quad 0\quad 0])     \end{align*}

where the vector [1\quad 1 \quad 0 \quad 0] specifies the desired amplitudes at the band edges of the passband and stopband, respectively. The normalization by 0.5 F_S is of no significance as it is just a requirement of this function. Furthermore, Matlab designs the filter with one more coefficient than the order argument, and that is why we used N-1 as its argument to obtain N taps.

If you try this little code, you will see that the result thus obtained has stopband levels of only around -45 dB, while the passband ripple of 0.05 dB stays well within the 0.1 dB requirement. The sidelobes thus do not meet the 60 dB attenuation, although there is slack in the passband. Therefore, N can be increased until 60 dB attenuation is achieved at N = 41. To avoid an excessive filter length, which increases the processor workload, different weighting factors can be employed for the passband and stopband ripples instead. For example, a weighting factor of 1:10 for passband to stopband ([1 \quad 10] in vector form) yields,

    \begin{equation*}       h = \text{firpm}(28,[0 \quad 2000 \quad 3000 \quad 6000]/6000,[1\quad 1\quad 0\quad 0],[1 \quad 10])     \end{equation*}

In this case, the sidelobe level reached -55 dB, a 10 dB improvement for the same N=29 compared to the design with no band weights. Next, the filter order is gradually increased until the filter meets the desired specifications at N = 33, a saving of 8 taps compared to N=41 with no weighting factors. Such a trial and error approach is often necessary to design a filter with given specifications.
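Putting the whole trial-and-error sequence together (a sketch; requires the Signal Processing Toolbox):

    % Parks-McClellan lowpass design iterations from the example
    Fs = 12000; Fpass = 2000; Fstop = 3000;
    dpass = 10^(0.1/20) - 1;                         % 0.1 dB ripple -> ~0.012
    dstop = 10^(-60/20);                             % -60 dB -> 0.001
    N = firpmord([Fpass Fstop], [1 0], [dpass dstop], Fs)             % suggests 29
    h1 = firpm(29-1, [0 Fpass Fstop Fs/2]/(Fs/2), [1 1 0 0]);         % misses the specs
    h2 = firpm(29-1, [0 Fpass Fstop Fs/2]/(Fs/2), [1 1 0 0], [1 10]); % ~ -55 dB
    h3 = firpm(33-1, [0 Fpass Fstop Fs/2]/(Fs/2), [1 1 0 0], [1 10]); % meets the specs
    freqz(h3, 1, 1024, Fs);                          % inspect the final response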

The impulse response h[n] and frequency response 20 \log_{10} H(F) of the final filter are drawn in Figure below. Note the stopband attenuation of 60 dB and passband ripple within 0.1 dB.

Time and frequency response of a lowpass FIR filter designed with Parks-McClellan algorithm for N=33

Filtering as Convolution


We also learned above that the higher the number of taps, the more closely the filter follows an expected response (e.g., lowpass, highpass, bandpass, etc.) at the cost of increased computational complexity. Therefore, the defining feature of an FIR filter is the number of coefficients, which determines the response length, the number of multipliers and the delay in samples required to compute each output. An FIR filter is built of multipliers and adders. A delay element, which is just a clocked register, is used between coefficients. A single delay element is represented with a D_1 symbol. Figure below illustrates a 5-tap FIR filter.

FIR filter implementation

Taking into account the structure in Figure above, this 5-tap filter output can be mathematically written as

    \begin{equation*}         r[n] = h[0]  s[n] + h[1]  s[n-1] + h[2]  s[n-2] + h[3]  s[n-3] + h[4]  s[n-4]     \end{equation*}

For a general FIR filter of length N,

(1)   \begin{equation*}         r[n] = \sum \limits _{m=0} ^{N-1} h[m] s[n-m]     \end{equation*}

If this equation sounds familiar to you, you guessed it right. Compare it with convolution Eq, and this is nothing but a convolution between the input signal and the filter coefficients. Just like convolution, at each clock cycle, samples from input data and filter taps are cross-multiplied and the outputs of all multipliers are added to form a single output for that clock cycle. On the next clock cycle, the input data samples are shifted right by 1 relative to filter taps, and the process continues.
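Eq (1) can be implemented directly as an explicit sum; a minimal Matlab sketch with made-up taps, cross-checked against the built-in convolution:

    % Direct-form FIR filtering, Eq (1)
    h = [0.1 0.2 0.4 0.2 0.1];            % 5 illustrative (symmetric) taps
    s = randn(1, 50);                     % input signal
    r = zeros(1, numel(s));
    for n = 1:numel(s)
        for m = 1:numel(h)
            if n - m + 1 >= 1             % skip samples before the input starts
                r(n) = r(n) + h(m) * s(n - m + 1);
            end
        end
    end
    r_check = conv(s, h);                 % same operation via conv()
    max(abs(r - r_check(1:numel(s))))     % ~0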

Filter Delay


Due to convolution, the length of the output signal r[n] is given as in the article on convolution, which is repeated below.

    \begin{equation*}         \textmd{Length}\{r[n]\} = \textmd{Length}\{s[n]\} + \textmd{Length}\{h[n]\} - 1     \end{equation*}

which is longer than the input by \textmd{length}\{h[n]\} - 1 samples. We can say that every FIR filter causes the output signal to be delayed by a number of samples given by

(2)   \begin{equation*}         D = \frac{\textmd{Length}\{h[n]\} - 1}{2}     \end{equation*}

As a result, D samples of the output at the start — when the filter response is moving into the input sequence — can be discarded. Similarly, the last D samples — when the filter is moving out of the input sequence — can be discarded as well.
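In code, compensating this delay amounts to trimming D samples from each end of the convolution output (a sketch reusing h and s from the example above; the formula assumes an odd-length, linear phase filter):

    % Trim the FIR filter delay so the output aligns with the input
    r_full = conv(s, h);                   % numel(s) + numel(h) - 1 samples
    D = (numel(h) - 1) / 2;                % delay in samples, Eq (2)
    r_aligned = r_full(D+1 : D+numel(s));  % same length as s, aligned with it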

Filter Phase Response


As discussed in the concept of phase, the phase response of filters is also important because the shape of the signal in time domain is determined by phase alignment of its constituent complex sinusoids. A filter with unity magnitude across the spectrum does not affect the amplitude of those sinusoids but if its phase response is not linear, it can completely change the shape of the filtered signal due to different sinusoids adding up with different phases at the output. A linear phase on the other hand just delays the output without distorting it. In the post on convolution, we discussed that the output response of an LTI system to a complex sinusoid at its input is nothing but the same complex sinusoid at the same frequency but with different amplitude and phase. If this phase of the filter is a linear function of frequency k, then rearranging Eq on time and phase shifts,

    \begin{equation*}         \underbrace{-\ \textmd{constant} \cdot k \cdot \textmd{delay}}_{\textmd{Phase shift}} \quad \xrightarrow{\hspace*{1cm}} \quad \underbrace{s[n-\textmd{delay}]}_{\textmd{Time shift}}     \end{equation*}

for all complex sinusoids in its spectrum. Fortunately, maintaining a linear phase response in FIR filters is straightforward: make the coefficients symmetric; see this Figure as an example. This is one of the main reasons for the widespread use of FIR filters in DSP applications.