The purpose of this article is to summarize some useful DFT properties in a table. If you feel that this particular content is not as descriptive as the other posts on this website, you are right. As opposed to the rest of the content on the website, we do not intend to derive all the properties here. Instead, based on what we have learned, some important properties of the DFT are summarized in the Table below, with the expectation that the reader can derive them by following a similar methodology of plugging the time domain expression into the DFT definition.

For example, in many of the figures encountered so far, we observed some kind of symmetry in DFT outputs. More specifically, the real parts of the DFT had even symmetry while the imaginary components were odd symmetric. Similarly, magnitude plots were even symmetric and phase plots had odd symmetry. This is true only for real input signals.

Such kind of symmetry is called conjugate symmetry, defined for an N-point DFT as

X[N - k] = X^*[k]    (2)

which, in terms of magnitude and phase, is

|X[N - k]| = |X[k]|,    ∠X[N - k] = -∠X[k]    (3)

To see why real signals have a conjugate symmetric DFT, refer to the DFT definition. For a real signal, the imaginary part of x[n] is zero and

X[k] = Σ_{n=0}^{N-1} x[n] [ cos(2π n k / N) - j sin(2π n k / N) ]    (4)

Now X[N - k] is defined as

X[N - k] = Σ_{n=0}^{N-1} x[n] [ cos(2π n (N - k) / N) - j sin(2π n (N - k) / N) ]

Using the identities cos(2π n - θ) = cos θ and sin(2π n - θ) = -sin θ, we get

X[N - k] = Σ_{n=0}^{N-1} x[n] [ cos(2π n k / N) + j sin(2π n k / N) ] = X^*[k]    (5)

Eq (5) and Eq (4) satisfy the definition of conjugate symmetry in Eq (2) and the proof is complete. Conjugate symmetry for magnitude and phase plots in Eq (3) can also be proved through their polar representation.

Observe an even symmetry in the real part as well as the magnitude of the DFT of a rectangular sequence in DFT examples. Similarly, an odd symmetry can be observed from the same figures for the imaginary part and phase, respectively.

Effect of symmetry on plots

For a real input signal, due to the even symmetry in DFT real part and magnitude plots and the odd symmetry in DFT imaginary part and phase plots, it is quite normal to discard the negative half of all these plots, with the understanding that the reader knows their symmetry properties.

Finally, observe that for real and even signals, the DFT is purely real as the imaginary part is zero. This can be verified from Eq (5) because the sin term is then a product of an even signal and an odd signal, resulting in an odd signal. Half the values in an odd signal are the same as the other half but of opposite sign, and their sum is zero. This can be observed from the zero imaginary part as well as the zero phase in this Figure and many other signals in the text.
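As a quick numerical sanity check, the symmetry properties above can be verified with NumPy's FFT. This is only a sketch; the signal values below are arbitrary:

```python
import numpy as np

# A random real signal: its DFT must satisfy X[N-k] = conj(X[k])
rng = np.random.default_rng(0)
N = 16
x = rng.standard_normal(N)          # real-valued input
X = np.fft.fft(x)

# Check conjugate symmetry for k = 1 .. N-1
for k in range(1, N):
    assert np.allclose(X[N - k], np.conj(X[k]))

# A real and even signal (x[n] = x[N-n]) has a purely real DFT
x_even = np.array([3.0, 1.0, 2.0, 5.0, 2.0, 1.0])
X_even = np.fft.fft(x_even)
assert np.allclose(X_even.imag, 0.0)
```

The same check fails for a complex-valued input, which is consistent with conjugate symmetry holding only for real signals.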

A system is time-invariant if shifting the input sequence along the time axis leads to an equivalent shift of the output sequence along the time axis, with no other changes. So if an input x[n] generates an output y[n], then the input x[n - m] generates the output y[n - m].

In other words, if an input signal is delayed by some amount m, the output also just gets delayed by m and remains exactly the same otherwise. Whether a system is time-invariant or not can be determined by the following steps.

Delay the input signal by m samples to make it x[n - m]. Find the output y_1[n] for this input.

Delay the output signal y[n] by m samples to make it y[n - m]. Call this new output y_2[n].

If y_1[n] = y_2[n], the system is time-invariant. Otherwise, it is time-variant.

Example

Consider a system

We will follow the steps above to determine whether this system is time-invariant or not.

Delaying the input signal by m samples, the output in response to x[n - m] is

Delaying the output signal by m samples, we get

Since the two outputs are identical, this system is time-invariant.

Now consider another system

and apply the same steps.

Delaying the input signal by m samples, the output in response to x[n - m] is

Delaying the output signal by m samples, we get

Since the two outputs are not equal, this system is time-variant.

In other words, a system is time-invariant if the order of delaying and applying the system does not matter, i.e.,

is the same as
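The delay-then-filter versus filter-then-delay test can be sketched numerically. The two example systems below are assumptions for illustration (the article's own example equations are not reproduced here): a first-difference system y[n] = x[n] - x[n-1], which is time-invariant, and a ramp-weighted system y[n] = n·x[n], which is not:

```python
import numpy as np

def is_time_invariant(system, n_len=32, shift=3, trials=5):
    """Heuristic numerical test: compare delay-then-filter against
    filter-then-delay on random inputs (a check, not a proof)."""
    rng = np.random.default_rng(1)
    for _ in range(trials):
        x = rng.standard_normal(n_len)
        x_delayed = np.concatenate([np.zeros(shift), x[:-shift]])
        y1 = system(x_delayed)                              # delay input first
        y = system(x)
        y2 = np.concatenate([np.zeros(shift), y[:-shift]])  # delay output instead
        if not np.allclose(y1[shift:], y2[shift:]):         # skip the edge transient
            return False
    return True

# Example systems (assumed for illustration, not from the text):
diff_sys = lambda x: x - np.concatenate([[0.0], x[:-1]])    # y[n] = x[n] - x[n-1]
ramp_sys = lambda x: np.arange(len(x)) * x                  # y[n] = n * x[n]

print(is_time_invariant(diff_sys))   # True: time-invariant
print(is_time_invariant(ramp_sys))   # False: time-variant
```

The ramp system fails because the weight n applies to the absolute time index, so a delayed input is scaled differently than a delayed output.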

Linear Time-Invariant (LTI) System

A system that is both linear and time-invariant is, not surprisingly, called a Linear Time-Invariant (LTI) system. It is a particularly useful class of systems that not only represents many real-world systems well but also comes with the invaluable benefit of a rich set of tools for its design and analysis.

From the previous articles, we have seen how a Tx generates a signal from the numbers and the Rx recovers those numbers from that signal. In time domain, everything looks nice and perfect.

Let us investigate the system characteristics in frequency domain. As is clear from the PAM block diagram, the main component that defines the spectral contents of the signal is the pulse shape at the Tx. We start by turning our attention to the rectangular pulse shape used so far.

Frequency domain view

It is straightforward to find the DFT of many signals from the DFT definition and the article on DFT examples, or a software routine. The discrete frequency plot as a function of k, however, can only show a limited amount of frequency content due to the distinct frequency domain samples. In the spirit of simplicity we have adopted in this text, we will interpolate many of the future graphs for a finer resolution. All we have to do is choose a large value of N when plotting.

As discussed in the article The Discrete Fourier Transform (DFT), for a reasonably band-limited signal, we can approximate the continuous frequency F by the discrete frequency k. Then, does a larger N imply that we can cover a larger frequency range in reality? No; recall from the frequency relation that f = k/N and

F = f · f_s = (k/N) f_s    (1)

A large N just makes the resolution finer and finer. The range of f stays within -0.5 and +0.5, and so does F between -f_s/2 and +f_s/2, until the sample rate itself changes. When we illustrate the frequency domain beyond this range, then we want to look into aliasing replicas of the spectrum as well.
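A minimal sketch of this zero-padding idea, with an assumed sample rate of 8 Hz: increasing N only shrinks the spacing between frequency samples, while the covered range stays at ±f_s/2:

```python
import numpy as np

fs = 8.0                                 # sample rate in Hz (assumed)
n = np.arange(16)
x = np.cos(2 * np.pi * 2.0 * n / fs)     # a 2 Hz tone, only 16 samples long

for N in (16, 64, 256):                  # zero-pad the DFT to larger N
    X = np.fft.fft(x, N)                 # finer interpolation of the same spectrum
    F = np.fft.fftfreq(N, d=1 / fs)      # frequency axis in Hz
    # The range is always -fs/2 <= F < fs/2, independent of N ...
    assert F.min() == -fs / 2 and F.max() < fs / 2
    # ... only the spacing between frequency samples shrinks:
    print(N, fs / N)                     # 16 0.5 / 64 0.125 / 256 0.03125
```

The tone still appears near 2 Hz for every N; zero-padding interpolates the plotted curve but adds no new information.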

Spectrum of a Rectangular Pulse

In time domain, a rectangular pulse has a clear benefit that Tx symbols do not naturally invade each other's territory. In case there are no reflections or dispersions in the channel, there are no adjacent symbols interfering with each other at the Rx either — a phenomenon called Inter-Symbol Interference (ISI). The absence of ISI simplifies the detector in Rx design because everything related to a particular m-th symbol remains within that symbol interval of duration T_M. Or in other words, what happens in a symbol duration stays within that symbol duration.

As for the frequency domain, we derived an analytical expression for the DFT of a rectangular sequence in the article on DFT examples as

In the graph of this sinc-like function, nulls occur wherever the numerator is zero, except at k = 0 where the peak exists. The numerator is zero when its argument is an integer multiple of π. Thus, equating the argument of the numerator to π returns the first null position.

Thus, the positions of the nulls are at integer multiples of the first null with respect to discrete frequency k. In Hz, with f_s = L/T_M, the null positions are at integer multiples of 1/T_M because

Finally, the range is

which for samples/symbol produces

The spectrum of a rectangular pulse shape shows these deductions in Figure below.

A great disadvantage of a rectangular pulse shape is now visible: the first sidelobe level is only about 13 dB below the mainlobe, a very high value. Furthermore, the subsequent sidelobes do not decay fast enough. If a communications link occupies the drawn spectrum and the neighboring spectrum is allocated to someone else, there will be a lot of interference between the two parties due to their sidelobe energies interfering with each other.
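This sidelobe level can be checked numerically by zero-padding the DFT of a length-L rectangular pulse (L = 8 is an assumed value):

```python
import numpy as np

L = 8                                     # samples per symbol (assumed)
N = 4096                                  # zero-padded DFT size for fine resolution
P = np.abs(np.fft.fft(np.ones(L), N))     # magnitude spectrum of the rectangular pulse
P_db = 20 * np.log10(P / P.max() + 1e-12) # normalize mainlobe peak to 0 dB

first_null = N // L                       # first spectral null
# The first sidelobe peak lies between the first and second nulls
sidelobe_peak = P_db[first_null:2 * first_null].max()
print(sidelobe_peak)                      # close to -13 dB relative to the mainlobe
```

The exact figure depends slightly on L (the periodic sinc differs a little from the ideal sinc), but it stays in the neighborhood of -13 dB.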

Just like real estate, radio spectrum is a very precious resource and must be utilized judiciously. This is why there are spectrum regulatory authorities in every country who impose strict restrictions to comply with a spectral mask. Even for wired channels, there is always a natural bandwidth of the medium (copper wire, coaxial cable, optical fiber) that imposes upper limits on its utilization.

Not only is a rectangular pulse shape a poor choice due to its large spectral occupancy, another side effect of having a wider bandwidth pulse shape is that the matched filter at the Rx also has a similar bandwidth. Viewed in frequency domain, the larger the bandwidth of the Rx filter, the more noise and interference it allows to enter the system.

Randomness of bit stream

It is important to remember that such a spectrum approximately appears at the output of the transmitter only if there is a sufficient randomness in the bit stream, i.e., bits 0 and 1 occur with equal probability and so there are enough transitions between them. Any pattern in the bit stream can alter the output spectrum significantly.

As an example, imagine a constant bit stream of 1's that is then modulated to a constant symbol level. After pulse scaling, the output is nothing but a scaled all-ones sequence that has a Fourier Transform of an impulse at bin k = 0 (DC) and nothing else (see this Figure). In real applications, there is enough randomness in the modulated sequence and hence the spectrum will closely resemble the spectrum of a rectangular pulse. We will see such an example a little later.
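A small simulation, with assumed parameters (binary PAM, 8 samples/symbol), illustrates the contrast between a constant and a random bit stream:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 8                                   # samples per symbol (assumed)
num_sym = 256
pulse = np.ones(L)                      # rectangular pulse shape

def tx_waveform(bits):
    symbols = 2.0 * bits - 1.0          # map 0/1 -> -1/+1 (binary PAM)
    return np.kron(symbols, pulse)      # hold each symbol for L samples

random_bits = rng.integers(0, 2, num_sym)
all_ones = np.ones(num_sym)

S_rand = np.abs(np.fft.fft(tx_waveform(random_bits))) ** 2
S_ones = np.abs(np.fft.fft(tx_waveform(all_ones))) ** 2

# Constant bit stream: all energy collapses into the DC bin
assert S_ones[0] > 0 and np.allclose(S_ones[1:], 0.0, atol=1e-6)
# Random bit stream: energy is spread across many frequency bins
assert np.count_nonzero(S_rand > 1e-6) > num_sym
```

With random bits, the averaged spectrum follows the sinc-shaped envelope of the rectangular pulse; with constant bits, only the DC bin survives.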

Reducing the bandwidth

Having seen the disadvantages of a rectangular pulse shape, we need to look for a superior pulse shaping filter that helps in placing multiple channels adjacent to each other while minimizing both the inter-channel interference between them and the noise bandwidth at the Rx. However, such advantages should not come at the price of introducing ISI in the system.

Nyquist No-ISI Criterion in Time Domain

To get that superior pulse shape, we trace the following sequence of steps with the help of the Figure below, which plots the auto-correlation of both a rectangular and a conceptual pulse shape. The auto-correlation of a pulse shape p(t) was defined earlier as

r_p(τ) = ∫ p(t) p(t - τ) dt

and plots are shown in continuous-time to make the related steps below more understandable.

Remember that we discussed that signals that are narrow in time domain are wide in frequency domain, and vice versa. Accordingly, a consequence of the narrow time span of the pulse shape is that it gets expanded in frequency. That is why a rectangular pulse has a large bandwidth. To decrease the bandwidth, the length of the pulse shape needs to be extended in time domain — much more than a single symbol duration T_M.

The effect of extending the length of the pulse shape is that every symbol now interferes with a number of symbols occurring both before and after it. The dreaded ISI appears! Assuming a usual even-shaped pulse, every symbol contributes towards ISI in the same number of symbols towards its left and towards its right on the time axis.

To strike peace with this ISI, we exploit a key property of digital communication waveforms explained before: the critical samples at the output of the matched filter occur at integer multiples of the symbol duration T_M. For symbol detection, only these T_M-spaced samples are needed and the rest can be discarded. This is what we saw at the output of the downsampler in the PAM block diagram.

If all but one sample during each symbol interval are discarded, the rest can be assigned any value without any effect on the detector output. These are the “don't care” samples we give up for ISI to play with, in the expectation that we get to shape our desired spectrum in return, and with the result that our desired T_M-spaced samples remain intact. Keep in mind that we are talking about the output of the matched filter — and hence the auto-correlation of the pulse shape — here. The reason is to elaborate the nature of the signal that is eventually downsampled to yield symbol estimates. The actual pulse shape will be extracted from this auto-correlation, as we will shortly see.

In the light of the above discussion, three pulse shape auto-correlations are illustrated in the Nyquist no-ISI Figure.

A rectangular pulse shape has a duration of T_M seconds, or L samples. Hence, its auto-correlation extending over 2T_M seconds is shown as the dotted red line as well. Notice that it has a maximum value at the desired current sampling instant and is zero for the adjacent two symbols. The time span, however, is too short, giving rise to an unreasonably large spectrum.

A pulse shape auto-correlation that is expanded over many symbols is shown as the blue curve, where it is scaled by a symbol value. Note that at all the other sampling instants (integer multiples of T_M), the effect of this particular symbol is zero. We intend to assign values to its “don't care” samples in the shaded regions to achieve a compact spectrum.

An adjacent symbol scales the same pulse shape at time T_M. Observe the same zero-ISI effect on the current symbol at time 0 and on all other symbols.

Not shown in the above figure are all other symbols shaped by pulses at 2T_M, 3T_M, and so on. But it is clear that the sum total of ISI from all adjacent symbols is zero at the sampling instant of every single symbol. For a concrete formulation of this zero-ISI criterion, we turn towards its mathematical foundation.

Although this equation was derived for a pulse with duration T_M, it is also true for longer pulses because

This is because convolution between any signal and a unit impulse results in the same signal. Therefore, even for longer pulses, scaling each individual symbol with its own pulse shape is equivalent to convolution of symbol stream with a pulse shaping filter.

When this signal is input to the matched filter, the output is written as

To generate symbol decisions, T_M-spaced samples of the matched filter output at t = mT_M are

z(mT_M) = a_m r_p(0) + Σ_{i ≠ m} a_i r_p((m - i) T_M)    (2)

In the above equation, the first term is the currently desired symbol and the second term is the inter-symbol interference (ISI). It is obvious that the ISI can be zero if the pulse auto-correlation satisfies the condition

r_p(mT_M) = 1 for m = 0, and 0 for m ≠ 0    (3)

This is the Nyquist no-ISI criterion in time domain — a mathematical expression of the same concept we explained above: to obtain zero ISI, the pulse auto-correlation should pass through zero at all integer multiples of T_M before and after the current symbol.

Nyquist filter

A filter with impulse response coefficients satisfying Nyquist no-ISI criterion is called a Nyquist filter. In most texts, this is usually called a Nyquist pulse but we use the term Nyquist filter to avoid confusion, as this is actually not the pulse shape but the auto-correlation of the underlying pulse shape. The coefficients of the pulse shape itself will be derived later.

When the Nyquist no-ISI criterion in time domain is satisfied, plugging Eq (3) into Eq (2) gives

z(mT_M) = a_m

i.e., the downsampled matched filter output maps back to the Tx symbol in the absence of noise. This is the crux of digital communications theory.
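For a sinc-shaped pulse auto-correlation (identified below as the ideal Nyquist filter), the T_M-spaced samples can be checked directly; L = 8 samples/symbol is an assumed value:

```python
import numpy as np

L = 8                                    # samples per symbol (assumed)
n = np.arange(-40 * L, 40 * L + 1)       # time axis in samples

# Sinc-shaped pulse auto-correlation with zero crossings every L samples
r_p = np.sinc(n / L)                     # np.sinc(x) = sin(pi*x)/(pi*x)

center = 40 * L                          # index of n = 0
symbol_samples = r_p[center::L]          # samples at n = 0, L, 2L, ...
assert np.isclose(symbol_samples[0], 1.0)                # r_p = 1 at n = 0
assert np.allclose(symbol_samples[1:], 0.0, atol=1e-12)  # r_p = 0 at n = mL, m != 0
```

Every symbol-spaced sample except the central one vanishes, which is exactly the condition of Eq (3).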

Nyquist no-ISI Criterion in Frequency Domain

Throughout the text, we have been avoiding complicated mathematical derivations. We will do the same here: instead of proving the Nyquist no-ISI theorem, which connects the time and frequency domain properties of the pulse auto-correlation, we again take the intuitive route.

As long as we operate at a sample rate f_s, the spectral replicas are spaced f_s apart. But since we downsample by L to get T_M-spaced samples, our sample rate changes from f_s = L/T_M at the output of the matched filter to 1/T_M after the downsampler. To see what happens in frequency domain, observe the following.

In discrete domain, r_p(mT_M) in Eq (3) is nothing but a single unit impulse at m = 0. This can be seen as the blue dots in the first part of the Figure below. Hence, its DFT is an all-ones sequence in frequency domain, which was derived in the article on DFT examples as

Moreover, we know from the post on sample rate conversion that a consequence of downsampling by L is the appearance of new spectral aliases 1/T_M apart from each other.

Combining the above two facts produces second part of Figure below illustrating this relationship between symbol-spaced pulse auto-correlation and its Fourier transform.

We can also see that the signal bandwidth in this particular case is 1/(2T_M) (never confuse a rectangular pulse shape in time domain with a pulse auto-correlation that is a rectangle in frequency domain).

This is the maximum bandwidth after which aliasing will occur from the spectral replicas. This also sets the fundamental limit between the bandwidth B and the symbol rate R_M = 1/T_M as

B = R_M / 2    (4)

Hence, with zero ISI, a maximum symbol rate equal to 2B symbols/second can be supported within a bandwidth B.

Remember that the point of this whole exercise was to limit the enormous bandwidth occupied by a time domain rectangular pulse. In other words, the quest is to design a pulse shape such that it complies with the spectral mask within a channel bandwidth B. The Nyquist no-ISI criterion in frequency domain helps here.

Nyquist showed that a necessary and sufficient condition for the zero-ISI condition of Eq (3) is that the Fourier transform R_p(F) of the pulse auto-correlation satisfies the condition

Σ_{i = -∞}^{+∞} R_p(F + i/T_M) = constant    (5)

Its proof is not much complicated but we skip it anyway and rely on intuition. A hint about the Nyquist no-ISI criterion in frequency domain comes from the time and frequency views of Nyquist no-ISI: the spectrum of the symbol-spaced pulse auto-correlation should be a constant. All it says is that for the DFT of the pulse auto-correlation,

Draw the primary spectrum whose range is -1/(2T_M) to +1/(2T_M).

Draw its shifted replicas at frequencies ±1/T_M. Repeat the same for all integer multiples of 1/T_M from -∞ to +∞.

Add all these replicas together.

If this sum results in a constant, only then will the time domain pulse auto-correlation satisfy the no-ISI condition of Eq (3).

The Figure above shows two spectra, one of which satisfies the Nyquist no-ISI criterion while the other does not. The spectrum on the left produces a constant sum because the bandwidth is as wide as allowed by the sampling theorem (which is 1/(2T_M) for sample rate 1/T_M). This not only avoids aliasing but, owing to a flat spectrum within -1/(2T_M) and +1/(2T_M), produces a flat spectrum in every range shifted by an integer multiple of 1/T_M as well. Consequently, the cumulative spectrum in Eq (5) is a constant function of frequency from -∞ to +∞.

Interestingly, this maximum bandwidth allowed by sampling theorem is also the minimum bandwidth allowed by Nyquist no-ISI criterion in frequency domain. Any smaller than this, and it becomes impossible to obtain an overall flat spectrum. That is shown on the right of Figure above where there are spectral “holes” that cannot be filled regardless of the spectral shape.
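The replica-summing recipe above can be sketched numerically. The two rectangular spectra below are assumptions mirroring the Figure: one of exactly the minimum Nyquist bandwidth, and one narrower, which leaves spectral holes:

```python
import numpy as np

T = 1.0                                  # symbol duration (assumed)
# Sample one replica period strictly inside the boundaries
F = np.linspace(-0.499 / T, 0.499 / T, 999)

def replica_sum(Rp, F, T, n_replicas=5):
    """Sum of a spectrum shifted by integer multiples of 1/T."""
    return sum(Rp(F + i / T) for i in range(-n_replicas, n_replicas + 1))

# Spectrum 1: rectangle of full width 1/T (the minimum Nyquist bandwidth)
rect = lambda F: np.where(np.abs(F) <= 0.5 / T, T, 0.0)
# Spectrum 2: rectangle narrower than 1/T -- leaves spectral "holes"
narrow = lambda F: np.where(np.abs(F) <= 0.35 / T, T, 0.0)

s1 = replica_sum(rect, F, T)
s2 = replica_sum(narrow, F, T)
print(np.ptp(s1))   # 0.0: flat sum, no-ISI criterion satisfied
print(np.ptp(s2))   # 1.0: holes in the sum, criterion violated
```

No reshaping of the narrow spectrum can fill the holes, which is exactly why the minimum bandwidth cannot be undercut.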

Coefficients of an Ideal Pulse Auto-correlation

Above, we said that a filter whose impulse response coefficients satisfy the Nyquist no-ISI criterion is a Nyquist filter, which is actually the auto-correlation of the underlying pulse shape. From the Figure above and the discussion in the last section, we find our ideal Nyquist filter: a pulse auto-correlation whose spectrum is a rectangle from -1/(2T_M) to +1/(2T_M). This filter is shown in the Figure on time and frequency views of Nyquist no-ISI and on the left of the Figure on minimum Nyquist bandwidth.

To derive the time domain representation of such a filter, remember that a rectangle in frequency domain is a sinc in time domain due to time-frequency duality. This rectangle covers the entire continuous frequency range from -1/(2T_M) to +1/(2T_M) (or the discrete frequency range -0.5 to +0.5), yielding N in the DFT of a rectangular signal (this N is not samples/symbol, just the sequence length in that equation). There is, furthermore, a scaling factor of 1/N from the definition of the iDFT. Combining these facts,

For our purpose with a sample rate of 1/T_M so far,

It satisfies Eq (3) because it is equal to 1 at m = 0 and zero for m ≠ 0, shown as blue dots in the Figure on time and frequency views of Nyquist no-ISI.

Using the identity sin θ ≈ θ for small θ, the term in the denominator is quite small compared to the numerator due to the division of its argument by N. Then, the above equation can be written as

This is the time domain waveform sampled at a rate of 1/T_M. As we are interested in the time domain coefficients of this ideal pulse auto-correlation, we sample the waveform at a rate of L/T_M instead of just 1/T_M, an increase by a factor of L. Correspondingly, a decrease in sample time yields

r_p[n] = sin(πn/L) / (πn/L)    (6)

These are the coefficients of our ideal pulse auto-correlation, which extends from -∞ to +∞ in time domain as shown in the Figure below.

There are two main problems with this ideal pulse auto-correlation, both arising due to a very slow rate of decay for the tail.

[Truncation Error] Any practical system cannot use a filter with an infinite number of coefficients like the one shown in the Figure of coefficients for an ideal pulse auto-correlation. Hence, the filter impulse response must be truncated at a few symbols to the left and a few symbols to the right of the current symbol. To ensure that this truncated filter closely approximates the ideal impulse response, only very small values can be ignored. Owing to the slowly decaying tail, that generates a large filter length.

[Timing Errors] We know that the optimal timing instant coincides with symbol boundaries. The timing synchronization block in a Rx is responsible for extracting this symbol-aligned clock. However, even a small error in the timing synchronization output causes a significant increase in ISI due to the large values of the tail samples interacting with neighbouring modulated pulses. The quicker the tail decays, the smaller the errors arising from timing jitter when sampling adjacent pulses.

To address these issues, we take the reverse route now. We started with the aim of reducing the bandwidth of a rectangular pulse shape and found an ideal pulse auto-correlation with a rectangular spectrum that satisfies the Nyquist no-ISI criterion. To overcome the time domain limitations associated with it, we now seek to trade off some bandwidth for desirable time domain behaviour.

Raised Cosine (RC) Filter

Remember that we observed regions of acceptable ISI in the auto-correlation Figure of a conceptual pulse and deduced that these “don't care” samples can be given up for ISI to play with, in the expectation that we get to shape our desired spectrum in return while our T_M-spaced samples remain intact. If there is a relaxation and a restriction in time domain, then intuitively there should be a corresponding relaxation and restriction in frequency domain as well. The restriction we have found is a flat cumulative spectrum across the whole frequency range. The relaxation is that how this spectrum becomes flat is of no concern. The actual spectral shape is a “don't care” case, as long as the sum of all shifted replicas is flat.

It is clear that we cannot reduce the bandwidth below the fundamental limit 1/(2T_M); see the Figure on minimum Nyquist bandwidth. The only way to alter the spectral shape is by expanding the bandwidth beyond 1/(2T_M) in such a way that the sum total of the replicas again becomes flat. For this purpose, an odd symmetry around the point 1/(2T_M) is required. The advantage of such odd symmetry is that the spectral magnitudes before and after 1/(2T_M) are rotated versions of each other. In other words, the spectral copies beyond 1/(2T_M) fold back into the original bandwidth. That supplies the additional amplitude required to bring the spectrum into a flat shape, as illustrated in the Figure below.

Higher frequency components of a signal arise from abrupt changes in time domain, which was the reason for the large spectral occupancy of a rectangular pulse shape. Similarly, long tails in a time domain signal arise from an abrupt transition in the flat spectrum, as in the Figure on time and frequency views of Nyquist no-ISI. Also recall from the post on FIR filters that the length of an FIR filter depends on its transition bandwidth.

Having located the abrupt spectral transition around 1/(2T_M) as the root cause, we can extend the bandwidth of the pulse auto-correlation in any shape as long as it has odd symmetry around the points ±1/(2T_M). The purpose is to smooth out the spectrum.

The smoothest spectral shape one can imagine is a sinusoid. If such a spectral taper is convolved with the ideal rectangular spectrum, the discontinuity in the spectrum can be removed. Since a half-cosine is an even symmetric shape, it is shown convolved with the rectangular spectrum in the Figure below. The width of the half-cosine forms the transition bandwidth of the resultant spectrum, and its even symmetric shape preserves the odd symmetry around 1/(2T_M). As a consequence of this odd symmetry, the result is also a Nyquist filter. The resultant spectrum out of this convolution is discussed soon.

It can be observed that the effect of spectral convolution is an increase in bandwidth from 1/(2T_M) to (1+α)/(2T_M). Since α specifies the bandwidth beyond the minimum bandwidth 1/(2T_M) (the Nyquist frequency or folding frequency), it is called the excess bandwidth or roll-off factor. For example, when α = 0.5, the total bandwidth is 50% more than the minimum, and when α = 1, the total bandwidth is 100% more than, or twice, the minimum. Typical values of α range from 0.2 to 0.4.

In time domain, this is equivalent to the product of a sinc signal with an even signal in time (since the spectrum is real and even, the time domain signal is also real and even). The sinc signal guarantees the zero crossings, as they cannot be moved by multiplication, and the even signal dampens the long tails in time. The higher the bandwidth, the narrower this damping signal is and hence the faster the decay.

First, consider α = 1, the widest bandwidth case. Here, a rectangle of spectral width 1/T_M gets convolved with a half-cosine of width 1/T_M. The convolution generates a raised cosine shape from -1/T_M to +1/T_M for a total width of 2/T_M. This is plotted in the Figure below, where a smooth frequency transition can be seen with an odd symmetry around 1/(2T_M). It is commonly known as a Raised Cosine (RC) filter (it has nothing to do with an RC circuit consisting of a resistor and a capacitor). The name raised cosine comes from the fact that the transition bandwidth of the spectrum consists of a cosine raised by a constant to make it non-negative; see Eq (7) and Eq (8) later.

Let us explore its mathematical expression. Due to the way it is usually written in literature, the formula looks more complicated than it actually is. Recall that a sinusoid in time domain is written as

A frequency domain sinusoid just interchanges these roles, with the inverse period and the independent variable defined in terms of frequency. A cosine from -1/T_M to +1/T_M implies that the period in frequency is 2/T_M (the inverse of which is T_M/2). Also, it is raised by a constant to make it non-negative and scaled to get a unity gain. Thus, the expression for this cosine spectrum on the positive half is given as

The overall spectral shape from -1/T_M to +1/T_M can be given as

R_p(F) = (T_M/2) [ 1 + cos(π T_M |F|) ],    |F| ≤ 1/T_M    (7)

where the negative half of the spectrum is the same as the positive half, and hence the term |F|.

We obtain its time domain expression through the following steps:

In frequency domain, the constant term and the cosine in the above expression do not have a frequency support from -∞ to +∞, but only from -1/T_M to +1/T_M.

In frequency domain, that is equivalent to multiplication with a rectangular window of the same width.

Multiplication in frequency domain is convolution in time domain.

In time domain, the constant term translates to an impulse at time 0, and the cosine results in two impulses with half that amplitude at time locations (given by the inverse period) ±T_M/2.

In time domain, the rectangular window of width 2/T_M in frequency is a sinc signal with zero crossings at integer multiples of T_M/2.

Convolution of a signal with an impulse is the signal itself. When convolved with the sinc arising from the window, this generates three sinc signals: one at time 0 while the other two at times ±T_M/2.

The above sequence of steps results in the time domain signal of an extended bandwidth pulse auto-correlation shown in the Figure below. The reason for choosing a frequency support of -1/T_M to +1/T_M is now clear. With this support in frequency, the sincs resulting from the cosine add out of phase at ±T_M/2 with the sinc from the constant term. The signs of their tails are opposite to each other, and that is what brings down the levels of the long tails of an ideal pulse auto-correlation. Furthermore, the sincs from both the constant term and the cosine have zero crossings at integer multiples of T_M/2, as marked through the ellipse in the figure. Nevertheless, a time shift of T_M/2 for the sincs from the cosine causes the cumulative signal to pass through zero at integer multiples of T_M. This is how this pulse auto-correlation fulfills the Nyquist no-ISI criterion in time domain. Finally, sampling the pulse auto-correlation at L samples/symbol produces the discrete-time coefficients that can be used for pulse shaping in digital domain.

Compare this discrete-time signal with the ideal Nyquist filter in the Figure of coefficients for an ideal pulse auto-correlation. Now we can truncate the filter to a small length without much loss in accuracy. In addition, a sampling time misalignment at the Rx will have only a minor influence on the current symbol. Remember that the cost of this relief is the wider bandwidth. Observing this trend, we can think of striking a trade-off between the filter length and the bandwidth expansion. The excess bandwidth α comes into play here.

For this purpose, the excess bandwidth can be reduced from α = 1 to a lower value. By reducing the bandwidth, the following sequence occurs.

When α < 1, the period in frequency of the half-cosine (on each side of zero) in the Raised Cosine spectrum with excess bandwidth equal to 1 decreases, while still maintaining odd symmetry around 1/(2T_M).

As shown in Figure below, a decrease in frequency period results in three distinct regions of the spectrum: a half-cosine in the positive part of spectrum

a half-cosine in the negative portion of the spectrum

and a constant term

Note that when α = 1, the bandwidth was 1/T_M, which can be written as (1+α)/(2T_M). A decrease in α results in a new bandwidth (1+α)/(2T_M), as drawn in the Figure above. Also observe that the α = 0 case coincides with the rectangular spectrum of the ideal Nyquist filter as in the Figure on time and frequency views of Nyquist no-ISI and the Figure on minimum Nyquist bandwidth. This was the minimum bandwidth allowed by the Nyquist no-ISI criterion.

A decrease in bandwidth results in multiple windowed cosines and a constant. To avoid complication, its detailed analysis is not necessary. We just note that as α gets smaller and smaller, the sincs in time domain resulting from the windowed cosine in frequency move away from each other, thus causing less tail suppression as compared to the α = 1 case. In conclusion, the excess bandwidth α gives the system designer a trade-off between reduced bandwidth and time domain tail suppression.

Taking a cue from Eq (7), the general expression for a Raised Cosine spectrum can be written as

R_p(F) = T_M                                                        for |F| ≤ (1-α)/(2T_M)
       = (T_M/2) { 1 + cos[ (πT_M/α) ( |F| - (1-α)/(2T_M) ) ] }     for (1-α)/(2T_M) < |F| ≤ (1+α)/(2T_M)
       = 0                                                          for |F| > (1+α)/(2T_M)    (8)

The above equation looks intimidating but it is not. There are only three minor differences from Eq (7).

The cosine in the middle term shows a reduction in bandwidth and a spectral shift of the cosine center from 0 to (1-α)/(2T_M). This is the edge of the passband now, and (1-α)/(2T_M) is called the passband frequency.

The first term is a constant occupying the bandwidth left behind by the half-cosine.

The final term shows no bandwidth occupied from (1+α)/(2T_M) onwards. This is the start of the stopband, and (1+α)/(2T_M) is called the stopband frequency.

Now it is clear why an oversampling factor of L = 2 samples/symbol – a sampling rate of 2/T_M – is commonly used in digital communication systems. Applied to the stopband frequency (1+α)/(2T_M), the sampling theorem sets the minimum sampling rate at (1+α)/T_M, which is at most 2/T_M for α ≤ 1.

To obtain its time domain expression, recall the Figure on spectral convolution where we found that the resultant waveform in time domain is equivalent to the product of a sinc signal with an even signal in time.

r_p(t) = [ sin(πt/T_M) / (πt/T_M) ] · [ cos(παt/T_M) / (1 - (2αt/T_M)²) ]    (9)

This is drawn in the Figure below for different values of α. Note the simultaneous zero crossings of all the waveforms at integer multiples of T_M. Also, plugging α = 0 above produces the coefficients of a sinc signal, the same ideal Nyquist filter in Eq (6).

The above equation becomes indeterminate for t = 0 and t = ±T_M/(2α). It can be shown (we skip the derivation and use a mathematical technique called L'Hôpital's rule) that

r_p(0) = 1,    r_p(±T_M/(2α)) = (α/2) sin(π/(2α))
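A hedged implementation sketch of this Raised Cosine auto-correlation, handling the two indeterminate points explicitly (T_M = 1 is assumed and the function name is mine):

```python
import numpy as np

def raised_cosine(t, alpha, T=1.0):
    """Raised Cosine pulse auto-correlation r_p(t) of Eq (9), with the
    indeterminate points t = 0 and t = +/- T/(2*alpha) treated separately."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    if alpha > 0:
        # Special value at t = +/- T/(2 alpha), from L'Hopital's rule
        sing = np.isclose(np.abs(t), T / (2 * alpha))
        out[sing] = (alpha / 2) * np.sin(np.pi / (2 * alpha))
    else:
        sing = np.zeros_like(t, dtype=bool)
    ok = ~sing
    # np.sinc(x) = sin(pi x)/(pi x) and already handles t = 0
    out[ok] = (np.sinc(t[ok] / T)
               * np.cos(np.pi * alpha * t[ok] / T)
               / (1 - (2 * alpha * t[ok] / T) ** 2))
    return out

t = np.arange(-5, 6, dtype=float)         # integer multiples of T
for alpha in (0.0, 0.5, 1.0):
    r = raised_cosine(t, alpha)
    assert np.isclose(r[5], 1.0)                          # r_p(0) = 1
    assert np.allclose(np.delete(r, 5), 0.0, atol=1e-12)  # zero at t = mT, m != 0
```

The loop confirms that, for every roll-off value, the symbol-spaced samples satisfy the Nyquist no-ISI criterion of Eq (3).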

Resulting Pulse Shape

Everything we have discussed in this section so far was about the pulse auto-correlation that satisfies the Nyquist no-ISI criterion, also called a Nyquist filter. However, how to derive the actual pulse shape is still not known. This is what we intend to find out next.

We start with a simple question: where should a Raised Cosine filter be placed in the system? There are only three possible choices.

[Receiver] The most straightforward way to incorporate it is to place it entirely at the Rx. A significant disadvantage, however, is that there would then be no mechanism to control the spectral sidelobes at the Tx: the spectral shaping required to minimize out-of-band energy cannot be performed there.

[Transmitter] If the spectrum is fully shaped at the Tx side, then any additional filtering at the Rx will not be “matched” to the incoming signal, causing ISI. In the imaginary case of no filtering at all at the Rx, the noise and energy from adjacent channels enter the Rx in addition to the unavoidable in-band noise. This significantly reduces the SNR as well as causing other issues such as increased interference sensitivity and dynamic range requirements (adjacent channel energy can be much higher than the desired band). So the Rx filter must be as compact as possible around the Tx spectrum so that the maximum amount of noise and adjacent channel interference can be eliminated.

[Both] The solution then is to split the Raised Cosine spectrum into two parts, one at the Tx to control the spectrum and the other at the Rx to reject noise and adjacent channels, while still maintaining a zero-ISI cumulative response. Remember that the pulse auto-correlation can be given by the convolution formula as

The pulse shape can be placed at the Tx and its matched filter at the Rx. Above, the response of two filters in cascade is the convolution of their impulse responses, which implies multiplication of their frequency responses.

From above equation, the frequency response of the actual pulse shape can be derived as

(10)

A significant advantage of this arrangement is that the matched filter simultaneously maximizes the Rx SNR.

In the case of Raised Cosine pulse auto-correlation, the underlying pulse shape is called Square-Root Raised Cosine (SR-RC) pulse or filter, where the square-root is in frequency domain. Referring to Eq (8), taking the square-root does not affect and values. Using the identity and taking the square-root on both sides adjusts the middle term as

(11)

For various values of excess bandwidth , this is shown in Figure below. For , there is no difference between a Raised Cosine and a Square-Root Raised Cosine filter due to a rectangular spectrum. Also notice from Eq (11) that the transition band of a Square-Root Raised Cosine pulse is a quarter cycle of a cosine as compared to a half cycle for a Raised Cosine filter. This has implications that we will see shortly.

Without mathematical derivation, the time domain waveform is given by

(12)

Figure below plots these time domain waveforms of Square-Root Raised Cosine for different values of . Observe again that plugging produces a sinc signal, the same ideal Nyquist filter in Eq (6). Through the red ellipse in the figure, the zero crossings are seen at integer multiples of only for . For and , Square-Root Raised Cosine does not satisfy Nyquist no-ISI criterion. This is not surprising because it has to fulfill that criterion only after matched filtering with another Square-Root Raised Cosine at the Rx to form the cumulative Raised Cosine shape.

In a software routine, the above equation can be used to find the coefficients of the Square-Root Raised Cosine pulse shape used for Tx filtering. Just like in the Raised Cosine case, the denominator in this equation becomes zero for and . Pulse coefficients at these values can be given as (again via L’Hôpital’s rule, skipping the exact derivation)

(13)

Being band-limited, the Square-Root Raised Cosine pulse is infinitely long in time and cannot be implemented in real systems. Therefore, it must be truncated to symbols to the left and symbols to the right. This is equivalent to in samples, thus resulting in a total filter length of . This time span of symbols or samples is called the group delay (group delay is a more general concept, but this definition is good enough here).
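The closed form and its two L'Hôpital limits in Eqs (12) and (13) translate directly into a software routine. A hedged NumPy sketch follows, assuming the standard Square-Root Raised Cosine expression with T = 1; the unit-energy normalization and the parameter values (`alpha`, `L`, `G`) are my own illustrative choices:

```python
import numpy as np

def srrc(alpha=0.25, L=8, G=6):
    """Square-Root Raised Cosine taps, truncated to G symbols on each side
    (the group delay), at L samples/symbol. Total length 2*G*L + 1; T = 1."""
    t = np.arange(-G * L, G * L + 1) / L        # time in symbol units
    h = np.empty_like(t)
    zero = np.isclose(t, 0.0)
    sing = np.isclose(np.abs(t), 1 / (4 * alpha))   # second singularity
    reg = ~(zero | sing)
    tr = t[reg]
    h[reg] = ((np.sin(np.pi * tr * (1 - alpha))
               + 4 * alpha * tr * np.cos(np.pi * tr * (1 + alpha)))
              / (np.pi * tr * (1 - (4 * alpha * tr) ** 2)))
    h[zero] = 1 - alpha + 4 * alpha / np.pi     # L'Hopital limit at t = 0
    h[sing] = (alpha / np.sqrt(2)) * (
        (1 + 2 / np.pi) * np.sin(np.pi / (4 * alpha))
        + (1 - 2 / np.pi) * np.cos(np.pi / (4 * alpha)))
    return h / np.sqrt(np.sum(h ** 2))          # normalize to unit energy

h = srrc(alpha=0.25, L=8, G=6)
```

The returned taps are symmetric about the center, which is where the pulse peaks.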

Figure below compares the spectra of a Square-Root Raised Cosine pulse with that of a rectangular pulse (rectangular in time, not frequency). The Square-Root Raised Cosine pulse is generated using , samples/symbol and for a total filter length of samples. The duration of the rectangular pulse is obviously samples. A huge improvement in sidelobe suppression is clearly visible.

Example

As an example, Wideband Code-Division Multiple-Access (WCDMA) – the main technology behind 3rd-generation (3G) cellular systems – implements a Square-Root Raised Cosine pulse shape with excess bandwidth , which translates the signaling rate of MHz to a bandwidth of MHz (the factor 2 arises for the RF bandwidth – we will discuss that in a later post). Accounting for the guard-bands to minimize interference between neighboring channels, the signal bandwidth in WCDMA systems is 5 MHz.

In the meantime, if you found this article useful, you might want to subscribe to my email list below to receive new articles.

PAM Revisited

Let us now place the Square-Root Raised Cosine filter into our basic PAM system of the PAM block diagram, replacing the rectangular pulse. For a -PAM modulation system with symbols , the process unfolds as follows. Keep comparing the signals thus generated with those from the rectangular pulse shape in the PAM block diagram.

A source generates Tx bits to be sent to a destination. These bits are input to a look-up table that maps bits to symbols according to a chosen modulation scheme. Next, the Tx symbols are upsampled by samples/symbol. An example created from a sequence of bits is illustrated in Figure below.

The pulse shaping block with samples/symbol, excess bandwidth and group delay symbols filters the waveform to output a smooth waveform which then is passed to the analog portion of the Tx. A DAC produces the continuous output illustrated as dashed red line in Figure below. The bit information behind the waveform is also shown.

In the best case scenario, i.e., a channel that only adds AWGN to the Tx signal, the Rx signal is drawn as in Figure below.

The Rx signal is sampled by an ADC to produce and then input to a matched filter which is the same pulse shaping filter as at the Tx. The matched filter output at samples/symbol is shown in Figure below. As long as the combination of Tx and Rx filters (i.e., the overall Raised Cosine filter) obeys the Nyquist criterion, there is no ISI and one out of every samples at can be preserved to form a symbol estimate while the remaining samples are thrown away. If there were zero noise, these downsampled values would directly map to the transmitted symbols without any error, from which the sequence of Rx bits can be found with the help of an LUT.

To decide which one sample to keep out of every samples is the job of the timing synchronization subsystem.

When noise is present, which is always the case, the minimum distance rule is employed to produce symbol estimates: each sample is mapped to the constellation point at the shortest distance from it, as illustrated in Figure below. Depending on numerous factors in system design, a proportion of bits will eventually end up in error, i.e., they will be different from the Tx bits. The ratio of the number of Rx bits in error to the total number of bits is called the Bit Error Rate (BER).
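The chain described above (bits to symbols, upsampling, pulse shaping, matched filtering, downsampling and detection) can be sketched end to end in a few lines of NumPy. This is a minimal illustration, not the article's exact routine: all parameter values, the compact Square-Root Raised Cosine helper, and the noiseless channel are assumptions:

```python
import numpy as np

def srrc(alpha, L, G):
    """Compact Square-Root Raised Cosine, unit energy; handles only the t = 0
    singularity (alpha is chosen so t = 1/(4*alpha) misses the sample grid)."""
    t = np.arange(-G * L, G * L + 1) / L
    h = np.empty_like(t)
    z = t == 0
    tr = t[~z]
    h[~z] = ((np.sin(np.pi * tr * (1 - alpha))
              + 4 * alpha * tr * np.cos(np.pi * tr * (1 + alpha)))
             / (np.pi * tr * (1 - (4 * alpha * tr) ** 2)))
    h[z] = 1 - alpha + 4 * alpha / np.pi
    return h / np.sqrt(np.sum(h ** 2))

rng = np.random.default_rng(0)
L, G, alpha = 8, 8, 0.3               # samples/symbol, group delay, excess BW
h = srrc(alpha, L, G)

bits = rng.integers(0, 2, 100)        # source bits
syms = 2 * bits - 1                   # 2-PAM mapping: 0 -> -1, 1 -> +1
up = np.zeros(len(syms) * L)
up[::L] = syms                        # upsample by L
tx = np.convolve(up, h)               # Tx pulse shaping
rx = tx                               # noiseless channel for clarity
mf = np.convolve(rx, h)               # matched filter (h is symmetric)
delay = 2 * G * L                     # combined group delay of the two filters
est = mf[delay : delay + len(syms) * L : L]   # one sample per symbol
bits_hat = (est > 0).astype(int)      # minimum-distance decision for 2-PAM
```

With no noise and the cumulative (approximately) Raised Cosine response, the downsampled values sit almost exactly on the transmitted amplitudes and every bit is recovered.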

Remember that in the discussion on FIR filters, we said that every FIR filter causes the output signal to be delayed by a number of samples given by

(14)

and as a result, samples of the output at the start — when the filter response is moving into the input sequence — can be discarded. Similarly, the last samples — when the filter is moving out of the input sequence — should be discarded as well.

In summary, when the source information bits are filtered by a square-root Nyquist filter, the sharp edges visible for a rectangular pulse are smoothed out by a considerable margin, as shown in Figure below for 20 2-PAM symbols and excess bandwidth . This smoothness in time domain actually limits the bandwidth. Also observe that in the absence of noise, the values at optimum sampling locations are not all the same and exhibit ISI in a square-root Nyquist case. This is because the zero crossings of a square-root Nyquist shape are not necessarily at integer multiples of symbol times. However, after complete Nyquist filtering and no noise, all values coincide with the same optimum symbol amplitudes shown as red asterisks in Figure below.

Drawbacks of Square-Root Raised Cosine Pulse

As discussed above, Square-Root Raised Cosine pulse is much better than a rectangular pulse in shaping the spectrum but it has two major drawbacks.

[Insufficient Sidelobe Attenuation] There is a limit to the sidelobe suppression that a Square-Root Raised Cosine pulse can achieve. The sidelobe levels are considerably higher than realistic spectral mask requirements imposed by regulatory authorities, which demand attenuating out-of-band energy by as much as 60 to 80 dB. Looking back, recall that the transition band of a Raised Cosine pulse is a half cycle of a cosine. Therefore, the transition band of a Square-Root Raised Cosine is a quarter cycle of a cosine, see Eq (11) and the spectrum of the Square-Root Raised Cosine. Its abrupt termination at the stopband results in a discontinuity causing a relatively poor sidelobe response.

[Increase in ISI] Looking at the spectral comparison of the Square-Root Raised Cosine with a rectangular pulse, an important question arises at this stage: the spectrum of the Square-Root Raised Cosine should be precisely zero beyond Hz, but why is it not? This is because, as a consequence of truncation, the pulse is no longer absolutely band-limited within and assumes infinite support in frequency in the form of sidelobes. Remember that truncation means multiplication by a rectangular window. This multiplication between the Square-Root Raised Cosine pulse and the rectangular window in time domain is a convolution between the Square-Root Raised Cosine spectrum and a sinc signal in frequency domain (the sidelobes and in-band ripple are inherited from that oscillating sinc signal and are a function of the excess bandwidth and the pulse extension in symbols).

As a result of this truncation in time domain and subsequent convolution in frequency domain, the half amplitude values are moved away from the odd symmetry points of half symbol rate, or as illustrated in Figure below (compare to the case in Figure on odd symmetry). As a result, the spectral replicas now do not add up to exactly a constant value. In other words, Nyquist no-ISI criterion is only approximately satisfied by the truncated pulse shape, thus giving rise to increased ISI. The maximum magnitude of for is called peak ISI, shown in Figure below to be equal to for excess bandwidth and group delay .

The reader is encouraged to plot the spectra for different values of and to appreciate their role in spectral shaping.
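The peak ISI of a truncated pulse can also be measured numerically: form the cumulative (Tx filter convolved with matched filter) response and read off its values at symbol-spaced instants away from the peak, which would all be exactly zero without truncation. A sketch with assumed parameters (excess bandwidth 0.3, 8 samples/symbol, group delay 4 symbols, chosen so the secondary singularity misses the sample grid):

```python
import numpy as np

def srrc(alpha, L, G):
    """Compact unit-energy Square-Root Raised Cosine; valid when
    t = 1/(4*alpha) does not fall on the sample grid (true here)."""
    t = np.arange(-G * L, G * L + 1) / L
    h = np.empty_like(t)
    z = t == 0
    tr = t[~z]
    h[~z] = ((np.sin(np.pi * tr * (1 - alpha))
              + 4 * alpha * tr * np.cos(np.pi * tr * (1 + alpha)))
             / (np.pi * tr * (1 - (4 * alpha * tr) ** 2)))
    h[z] = 1 - alpha + 4 * alpha / np.pi
    return h / np.sqrt(np.sum(h ** 2))

L, G, alpha = 8, 4, 0.3
h = srrc(alpha, L, G)
rc = np.convolve(h, h)                  # truncated cumulative Raised Cosine
center = 2 * G * L                      # index of the peak
k = np.arange(1, 2 * G)                 # symbol-spaced offsets within the support
isi_taps = np.concatenate([rc[center + L * k], rc[center - L * k]])
peak_isi = np.max(np.abs(isi_taps))     # exactly 0 only for the untruncated pulse
```

Increasing the group delay pushes `peak_isi` down, at the cost of a longer filter.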

Square-Root Raised Cosine and Raised Cosine are good starting points for pulse shape design and capture the involved details fairly well. Moreover, they have closed-form mathematical expressions that are convenient for analytical purposes. Nevertheless, the quarter cycle cosine of the Square-Root Raised Cosine results in high in-band ripples and insufficient out-of-band attenuation levels. There are other pulse shape design procedures that produce a Nyquist filter with high sidelobe attenuation while preserving the Nyquist no-ISI criterion.

A Frequency Domain Window Based Pulse

A pulse shape in discrete time is just a sequence of numbers. This sequence of numbers in time domain can be generated through the iDFT of a carefully designed discrete-time frequency response. Recall that a Raised Cosine filter was generated in frequency domain through convolution of an ideal rectangular spectrum with a half-cosine taper, see Figure on spectral convolution. Here, we replace the half-cosine with an alternative taper that is an improved spectral window. The span of this spectral window is determined by the excess bandwidth and the iDFT length.

The criteria for this spectral window design are

Narrow mainlobe width, to effect a smoother transition band in the resulting pulse spectrum; the width of the transition band itself remains unchanged

Small sidelobe levels, which get inherited by the pulse spectrum

One such candidate is a Kaiser window in frequency domain (this technique was devised by Fred Harris in his paper “An Alternate Design Technique for Square-Root Nyquist Shaping Filters”), which has the minimum mainlobe spectral width for specified sidelobe levels. For a Kaiser window of a particular length, the sidelobe height is controlled by a parameter . In Matlab, a Kaiser window of length with parameter can be generated through the command kaiser(N,beta). Figure below compares this taper with a cosine taper with the same transition bandwidth and highlights the difference in smoothness between the two.

In frequency domain, the convolution of this Kaiser window taper with an ideal rectangular spectrum generates the desired pulse shape. This frequency domain result is shown in Figure below for excess bandwidth , where it is compared with a Square-Root Raised Cosine pulse with the same excess bandwidth. The Figure depicts both the improved Nyquist filter and a Raised Cosine prior to the square-root operation. Notice that the narrower mainlobe width of the improved taper has produced a smoother transition band in the improved Nyquist filter as compared to a Raised Cosine. This smoothness gets inherited by the square-root Nyquist filter, which does not exhibit the kind of discontinuity seen in a Square-Root Raised Cosine; both are shown here. That discontinuity gives rise to high levels in the time-domain response even after many symbol intervals, and when such an impulse response is truncated, it generates high sidelobes as well as in-band ripple in the spectrum of a Square-Root Raised Cosine. The benefit is evident where an order of magnitude improvement in sidelobe suppression is visible. For example, an attenuation of dB is obtained through this procedure as compared to dB through a Square-Root Raised Cosine.

Not shown here is the fact that low levels of in-band ripple accompany the low levels of sidelobes because they are always equal in a window based design (arising from convolution of the same spectral taper sliding through both stopband and passband). For reasons that are beyond the scope of this text, the low in-band ripple generates less ISI after matched filtering of the square-root Nyquist filter. Therefore, the additional advantage of sidelobe suppression comes with a benefit, not at a cost.
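The frequency-domain construction above can be sketched with NumPy's own Kaiser window (`np.kaiser`, the counterpart of Matlab's `kaiser`). Everything numeric here is an illustrative assumption (iDFT size, samples/symbol, excess bandwidth, Kaiser shape parameter), not values from the paper; the idea is only to show the taper convolution and the square-root step:

```python
import numpy as np

N, L, alpha, beta = 1024, 8, 0.25, 8.0   # iDFT size, samples/symbol, excess BW, Kaiser beta

# Ideal brick-wall spectrum, centered: two-sided passband equals the symbol rate
k = np.arange(N) - N // 2
edge = N // (2 * L)                      # half symbol rate, in bins
H = (np.abs(k) < edge).astype(float)
H[np.abs(k) == edge] = 0.5               # half amplitude at the band edge

# Kaiser taper in frequency, span set by the excess bandwidth, normalized to sum 1
W = int(alpha * N / L) + 1
w = np.kaiser(W, beta)
w /= w.sum()
Hs = np.convolve(H, w, mode='same')      # smoothed Nyquist-filter spectrum

# iDFT gives the Nyquist pulse; the square root of the spectrum gives the Tx pulse
h_nyq = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(Hs)).real)
h_srn = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(np.sqrt(Hs))).real)
```

Because the taper is normalized, the spectral replicas of `Hs` spaced by the symbol rate still sum to a constant, so `h_nyq` keeps its zero crossings at symbol instants exactly, which is the Nyquist no-ISI criterion.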

Finally, there are other pulse shape design procedures that employ iterative techniques to convert an initial lowpass filter into a Nyquist filter with high sidelobe attenuation while preserving the Nyquist no-ISI criterion. For the purpose of this text, we continue using the Square-Root Raised Cosine pulse and the Raised Cosine pulse auto-correlation to keep things simple, and assume that a reader who goes on to implement the system will use the better alternatives discussed above.


Remember that in the article on correlation, we discussed that the correlation of a signal, with proper normalization, is maximum with itself and smaller with all other signals. Since the number of possible signals is limited in a digital communication system, we can compute in a digital receiver the correlation between the incoming signal and the possible choices and . Consequently, a decision can be made in favor of the one with the higher correlation. It turns out that the theory of maximum likelihood detection formalizes this conclusion: such a receiver is optimum in terms of minimizing the probability of error.

Before correlating the received signal with possible choices, it is helpful to see the correlation of a rectangular pulse shape with itself — called the auto-correlation as defined in correlation. Figure below illustrates the auto-correlation of the rectangular pulse shape whose time support is equal to a symbol duration, i.e., from to samples. As a consequence, its auto-correlation is a triangular pulse shape with duration clearly from to . From the definition of correlation in Correlation Eq,

(1)

Since correlation is similar to convolution with one signal flipped, and flipping does not alter the length of a signal, the length of the correlation output is the same as that of the convolution output, i.e., .
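This triangular result is easy to verify numerically. A small sketch, assuming a unit-energy rectangular pulse spanning one symbol of 8 samples (the value 8 is illustrative):

```python
import numpy as np

L = 8                                   # samples per symbol (T = L samples)
p = np.ones(L) / np.sqrt(L)             # unit-energy rectangular pulse
r = np.correlate(p, p, mode='full')     # auto-correlation, length 2L - 1
```

The output ramps up linearly to the energy of the pulse (here normalized to 1) at full overlap and back down, exactly the triangle in the Figure.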

Eq (1) is extremely important in the design of communication systems and this term, , will appear over and over again in any communications text. I highly recommend writing it down on a piece of paper and sticking it to your laptop or book.

Convenience of linearity

Notice that our focus is on linear modulation, i.e., the signal is generated by linear scaling of a basic pulse shape to vary its amplitude as , etc. as expressed in the article on modulation – from numbers to signals. Therefore, the effect of scaling the pulse at the Tx will appear as the same scaling at the Rx without affecting the pulse shape or its auto-correlation. This leads to the conclusion that instead of correlating the incoming signal with all possible transmitted signals ( and here) which are just the scaled versions of the same basic pulse shape, we can correlate it with that pulse shape.

Using this strategy, the peak of the auto-correlation will bear that amplitude scaling done at the Tx. If you have some confusion regarding this point, don’t worry. We will shortly see it graphically in a later article on PAM detection.

Now, let us compute the correlation between the received signal and the pulse shape as

(2)

for all possible lags . The question then is to find the optimum sampling instant: the value of that gives the best result through maximizing the correlation output. For both and with zero noise, Figure below draws the correlations for many different lags .

As the signal sliding occurs, the correlation outputs vary as

It is evident that at time instant , the output for is and the output for is . This time instant is the point of maximum overlap where the received signal is completely aligned with the pulse shape. For symbol detection, only this particular sample of correlation output is needed and the rest of the samples can be discarded. This is the process of demodulation that maps a signal back to a number.

Matched Filter in Time Domain

In the above discussion, since the maximum overlap is the only sample within a symbol time that is required out of samples, let us substitute in Eq (2) as

(3)

We conclude that the process of correlation can be implemented as convolution with a filter whose impulse response is a flipped version of the actual pulse shape. Most texts say that this is true only as far as the point of maximum overlap is concerned. We discuss this difference of opinion in the article on convoluted correlation between matched filter and correlator.

It is tempting to write , but observe from Figure on correlator outputs that the time base of our demodulator starts at samples or seconds, although a real receiver should begin processing the received signal at time . From , the peak or maximum overlap shifted by the same amount occurs at or seconds.

Also remember that flipping the pulse shape implies that the impulse response of this filter starts at seconds. Standing at seconds, our filter would need future samples due to arrive at . Since accessing future samples is not possible, the impulse response must be delayed by an amount seconds to yield . Remember from transforming a signal that for negative time axis as in the case here, a delay means shifting the signal to the right, which means not only that now starts at but also the filter output reaches its maximum at time seconds.

In summary, this flipped and delayed version of the template pulse shape is called the matched filter, given by

(4)

The above statement is true because the pulse has real coefficients. For complex signals, complex conjugation is also required, for which an intuitive explanation will be described shortly.

Figure above draws a matched filter for an example pulse shape. To find the matched filter output, consider that in the absence of noise, the received signal is

where we utilized the fact that convolution of a signal with its flipped version is its auto-correlation which is unity at the middle point ( sample). Thus, the output reflects the information embedded in the amplitude of the signal.
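A tiny NumPy sketch of this time-domain view, assuming a unit-energy rectangular pulse and a single PAM symbol (the amplitude −3 is an arbitrary choice): convolving the received signal with the flipped pulse peaks exactly at the maximum-overlap instant, and the peak value is the symbol amplitude.

```python
import numpy as np

L = 8
p = np.ones(L) / np.sqrt(L)       # unit-energy pulse shape spanning one symbol
a = -3                            # transmitted PAM symbol amplitude (illustrative)
r = a * p                         # noiseless received signal
h = p[::-1]                       # matched filter: flipped version of the pulse
y = np.convolve(r, h)             # filtering = correlation of r with the pulse
peak = y[L - 1]                   # maximum-overlap instant at n = L - 1
```

Since the pulse has unit energy, the auto-correlation peak is 1 and `peak` directly recovers the amplitude `a`.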

A question that immediately arises from the above derivation — but mostly not asked in communications texts — is the following: how do we know for sure that the remaining samples are of little use, as far as the process of demodulation is concerned? We answer this question in the next subsection during the matched filter viewpoint in frequency domain.

A natural approach

For an intuitive understanding of matched filtering approach, consider the following argument: In the post on correlation, where we learned the relation between convolution and correlation, we found that correlation can be implemented through convolution if one of the signals is flipped beforehand. During convolution with the received signal, this flipped version is folded again, hence bringing the original expected signal back.

By definition, a matched filter is a linear filter that maximizes the output signal-to-noise ratio (SNR) if the signal is buried in AWGN. Since the incoming signal structure is already known, the matched filter can be easily constructed through time-reversing and delaying this known signal for maximizing the output correlation.

As an example in everyday life, carefully watch out for the opinions you hold. In general, we are always defending the viewpoints we already have and rejecting (filtering out) either the unknown or what we decided to oppose in the past. That is matched filtering!

Expanding to a more general inference, implementing a matched filter for an incoming signal requires the following three steps:

Flip a true copy of the template signal.

Delay it by an amount that yields maximum correlation.

Take the complex conjugate, in case of complex signals.

Matched Filter in Frequency Domain

In frequency domain, matched filter has an interesting view as well. In some DFT properties, we learned that the DFT of a time-reversed and complex conjugated signal is given by the complex conjugate of its DFT, i.e.,

(5)

Furthermore, as explained in the article on the concept of phase, we saw that the time shift of an input signal results in a corresponding phase shift at each frequency of its DFT with no change in magnitude.

Since , and sample coincides with , the DFT of the matched filter can be deduced by combining the above two facts as

(6)

where is the DFT size and ranges from to . So the DFT of the matched filter has the same magnitude as that of the underlying pulse, while its phase is the negative of the phase of the pulse DFT (which arises from taking the complex conjugate), plus an additional factor proportional to the symbol time .
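Both facts behind Eq (6), namely the conjugated DFT of a time-reversed and conjugated signal from Eq (5), and the linear phase contributed by a delay, can be checked numerically on a random complex signal:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
S = np.fft.fft(s)
k = np.arange(N)

# Fact 1: time reversal (modulo N) plus conjugation conjugates the DFT
s_rev = np.conj(s[(-k) % N])
S_rev = np.fft.fft(s_rev)

# Fact 2: a circular delay of d samples multiplies the DFT by a linear phase
d = 3
S_del = np.fft.fft(np.roll(s, d))
phase = np.exp(-2j * np.pi * k * d / N)
```

`S_rev` equals the conjugate of `S` bin by bin, and `S_del` equals `S` times the linear phase term, which together give the matched filter spectrum in Eq (6).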

What happens when the received signal is passed through the matched filter? First, note that in presence of no noise, the received signal is expressed as

whose spectrum can be written as

where is if is positive and if is negative. At the output of the matched filter, the convolution in time domain is multiplication in frequency domain leading to the following results.

[Magnitude] Utilizing Eq (6), the magnitude of the matched filter output can be written as

Consider that for a number ,

Therefore, the matched filter has high gain at frequencies where is large and low gain at frequencies where is small. It can be deduced that the matched filter enhances the strong spectral components and reduces the weak spectral components in .

[Phase] On the other hand, the phase of takes the form

where we can notice the phase of the incoming signal being canceled by the matched filter. This phase adjustment enables all spectral components to align at the sampling time instant. Figure below illustrates an example signal template and a corresponding matched filter, where the blue lines represent the signal template while the red lines show the matched filter. Notice the same magnitude on each spectral line but exactly opposite phase. This is the reason complex conjugation is required when designing a matched filter for complex signals.

In terms of , incorporating in form, is

where the difference of and is due to the identities and .

Plugging and in iDFT above and using identities and , we get

(7)

Sampling this matched filter output at the instant yields

Figure below verifies the above finding for an all 1s sequence (no in the received sequence). Notice that there is no component and all frequency domain samples are perfectly aligned to contribute towards term.

Using Parseval’s relation in DFT examples that relates the signal energy in time domain to that in frequency domain,

the matched filter output in time domain can be written as

Owing to the unit energy pulse ,

and hence the modulated information is retrieved through the amplitude of the matched filter output.


Optimality of Peak Sample

From this derivation in frequency domain, although we obtained the same result as in time domain, we get an additional insight into the operation of a digital receiver. That insight arises from answering the question asked in time domain viewpoint of the matched filter: as far as the process of demodulation is concerned, why do we discard the remaining samples? Why can’t we process them in a manner that improves the symbol estimate?

For the case where an all 1s sequence is transmitted (no in the received sequence) and the signal in Eq (7) is sampled at a different time instant instead of , say at , we can trace the following sequence of steps.

which is nothing but a linear phase shift arising in all frequency bins due to sampling the output one unit of time earlier.

Now, the component of the output is not zero. We can say that a part of the actual signal energy — that should have appeared in branch — has leaked into the arm. The arm with reduced energy is not optimal anymore for symbol detection. This is drawn in Figure below. Notice that the spectral components have been rotated by symmetrical phase due to a time difference of sample.

This non-zero sample also reveals an interesting point. In time domain, all the samples of the correlation result are smaller in magnitude than the peak due to partial instead of maximum overlap. Going into frequency domain, this disappeared energy can be found in the arm associated with phase rotations. The magnitude response is the same for all time shifts; the only difference is the scattered phases for , resulting in a misaligned summation.

One can argue that, given the above information, we can still theoretically recover the modulated information from Eq (7). Since , the expression + would still have resulted in the same energy, and the correct sign can be estimated from the phase.

However, not only is this extra processing unnecessary when clean samples are available at , but remember also that in an actual receiver, noise is added to the transmitted signal and has to pass through the same filter (which is not matched to the noise). For noise , the filtered noise samples are given by

Therefore, this filtering on the noise generates correlation in noise samples at its output that is directly proportional to the auto-correlation of the pulse shape.

This correlation among noise samples is zero only when the underlying pulse auto-correlation is zero, which occurs at a spacing of samples on either side of the peak. Moreover, no kind of processing on the other samples can result in better performance. For example, summing two samples doubles their noise power as well. That is how the SNR is maximized by the matched filter output sampled at .
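The coloring of the noise by the Rx filter can be seen empirically. A sketch, assuming a unit-energy rectangular pulse, unit-variance white Gaussian noise, and a record long enough for the averages to settle: the estimated correlation of the filtered noise tracks the pulse auto-correlation and drops to (approximately) zero at a lag of one symbol.

```python
import numpy as np

rng = np.random.default_rng(2)
L = 8
p = np.ones(L) / np.sqrt(L)                 # unit-energy pulse, T = L samples
r_pp = np.correlate(p, p, mode='full')      # pulse auto-correlation (triangle)

n = rng.standard_normal(200_000)            # white Gaussian noise, variance 1
w = np.convolve(n, p[::-1], mode='valid')   # noise through the matched filter

# Empirical correlation of the filtered noise at lags 0..L
lags = np.arange(L + 1)
emp = np.array([np.mean(w[:len(w) - lag] * w[lag:]) for lag in lags])
theory = np.append(r_pp[L - 1:], 0.0)       # triangle from the peak, then zero
```

At lag `L` (one symbol), the pulse auto-correlation has its zero and the filtered noise samples decorrelate, which is exactly why symbol-spaced samples carry independent noise.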

General implementation

Here, we concentrated on the matched filter in relation to the underlying pulse shape due to the modulation being linear. In the field of signal processing, the matched filter has a much broader meaning that works well for non-linear modulations, distinct communication/radar waveforms and other specialized areas of digital signal processing, where the matched filter is designed in relation to the transmitted signal itself. The fundamental concept is still the same: utilize the knowledge of what is expected and suitably time-reverse it, delay it and take its complex conjugate.

So far, we have learned how to map a symbol to a signal and then a signal back to a symbol. All of this was focused within each symbol duration . Later, we discuss the practical case of a bit stream and corresponding processes of modulation, waveform generation and detection.

The signals of our interest — wireless communication waveforms — are continuous-time as they have to travel through a real wireless channel. To process such a signal using digital signal processing techniques, the signal must be converted into a sequence of numbers. This can be done through the process of periodic sampling.

Consider a band-limited continuous-time signal and its frequency domain representation with bandwidth , shown in the above Figure. A discrete-time signal can be obtained by taking samples of at equal intervals of seconds. This process is shown in Figure below, and mathematically represented as

The time interval seconds between two successive samples is called the sampling period or sample interval, and its reciprocal is called the sample rate or sampling frequency. Sample rate is the most fundamental parameter encountered in digital signal processing applications.

Let us find out what happens in frequency domain as a result of this process. Consider a continuous-time sinusoidal signal

and obtain its sampled version at a rate samples/second.

(1)

Note that above is the frequency of a discrete-time sinusoid . Let us sample at the same rate another sinusoid with continuous frequency , where .

which is exactly the same as the discrete-time sinusoid in Eq (1). It can be concluded that at the output of the sampling process, it is impossible to distinguish between the sampled versions of two sinusoids whose frequencies are Hz apart. So for an arbitrary frequency after sampling

and so on. It is just like saying that any two angles apart are all the same. For example,

Therefore, all the following frequency ranges are the same:

The range is called the primary zone. The spectrum of the continuous-time signal shown in this Figure is now drawn in Figure below. We adopt the convention of indicating this zone within dotted red lines, drawing its spectral contents with solid lines and the spectral replicas with dashed lines.

The fact that a continuous frequency higher than Hz appears similar to a frequency Hz apart from itself can be understood in time domain from Figure below. Observe that samples are taken at a rate such that both sinusoids pass through the same points. In fact, there are infinitely many sinusoids ( Hz apart) which pass through the same points, and hence become indistinguishable from each other after sampling.
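This indistinguishability is easy to demonstrate: sampling sinusoids whose continuous frequencies differ by any integer multiple of the sample rate yields identical sequences of numbers. The sample rate and frequencies below are arbitrary illustrative values:

```python
import numpy as np

fs = 1000.0                      # sample rate, Hz (assumed for illustration)
f0 = 120.0                       # a frequency inside the primary zone
n = np.arange(50)                # sample indices

x1 = np.cos(2 * np.pi * f0 * n / fs)
x2 = np.cos(2 * np.pi * (f0 + fs) * n / fs)       # fs Hz apart
x3 = np.cos(2 * np.pi * (f0 + 3 * fs) * n / fs)   # any integer multiple of fs
```

All three sequences are the same to within floating point error, so after sampling there is no way to tell which continuous-time sinusoid produced them.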

In light of the above discussion, it is evident from this Figure that if a continuous-time signal has a bandwidth greater than , it will appear as an alias of a lower frequency within the range and distort the signal. This is illustrated in Figure below for a signal whose bandwidth extends beyond the primary zone.

Therefore, for a signal with bandwidth B, the sampling frequency should be such that the following inequality is satisfied to prevent any distortion in the sampled signal:

or written in another form

(2)

Sampling theorem

As shown above, sampling in time domain at intervals of creates periodicity in frequency domain with a period of . Therefore, a band-limited continuous-time signal with highest frequency (or bandwidth) Hz can be uniquely recovered from its samples provided that the sample rate samples/second.

The frequency is called the Nyquist rate while is called the Nyquist frequency or folding frequency (see this Figure). Sampling theorem is one of the two most fundamental relations in digital signal processing, the other being the relationship between continuous and discrete frequencies.

A natural extension is to understand the notion of time in a discrete-time setting. As long as the sampling interval or sample rate is known, one can easily determine the period or frequency of a signal. For the sinusoid of Figure below for example, the period is clearly samples, and to find it in actual seconds, the sample interval or sample rate must be known.

For seconds,

and its frequency Hz. However, for a sample interval of seconds, the same discrete-time sinusoid has

with frequency Hz. Interestingly, the samples of both sinusoids will be stored in memory as a sequence of numbers with no difference in discrete domain.
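A short sketch of this point: the stored samples fix only the period in samples, while the period and frequency in physical units scale with the assumed sample interval (the period of 20 samples and the intervals of 1 ms and 1 us are illustrative values):

```python
import numpy as np

N = 20                                    # period of the discrete-time sinusoid, in samples
x = np.cos(2 * np.pi * np.arange(100) / N)

# Identical stored samples map to different continuous frequencies f0 = 1/(N*Ts)
freqs = {Ts: 1.0 / (N * Ts) for Ts in (1e-3, 1e-6)}
```

With a 1 ms sample interval the sequence represents a 50 Hz tone; with 1 us it represents 50 kHz, yet the numbers in memory are the same.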

Is aliasing always harmful?

Aliasing – the reemergence of frequencies higher than within the range of the primary zone – is a consequence of disobeying the sampling theorem. It may seem otherwise, but aliasing is not always bad. In fact, there are three types of aliasing:

Harmful aliasing that distorts the signal and must be avoided for proper representation of signal in discrete domain. This is when .

Useful aliasing that shifts the signal spectral bands up and down for free to our desired frequency through careful system design. This is employed in systems operating at multiple clock rates.

Harmless aliasing that is neither good nor bad for the system. This occurs, for example, during band-limited pulse shape design to avoid inter-symbol interference (ISI).

Don’t worry much if it sounds too confusing at this stage. We will cover everything in detail when the topic arises.

A final remark about sampling a continuous-time signal is that for a fixed time interval of data collection, the more samples we take, the higher the energy in the resulting discrete-time signal is. This is because there will be more samples in the discrete-time signal during a fixed interval for a higher sampling rate, see its definition in energy and power.