The Nyquist sampling theorem says that a band-limited signal can be recovered from evenly-spaced samples. If the highest frequency component of the signal is fc, then the function needs to be sampled at a frequency of at least the Nyquist frequency 2fc. Or to put it another way, the spacing between samples needs to be no more than Δ = 1/(2fc).
If the signal is given by a function h(t), then the Nyquist-Shannon sampling theorem says we can recover h(t) from its samples by

h(t) = Σ h(nΔ) sinc(t/Δ − n)

where the sum runs over all integers n and sinc(x) = sin(πx) / πx.
In practice, signals may not be entirely band-limited, but beyond some frequency fc higher frequencies can be ignored. This means that the cutoff frequency fc is somewhat fuzzy. As we demonstrate below, it’s much better to err on the side of making the cutoff frequency higher than necessary. Sampling at a little less than the necessary frequency can cause the reconstructed signal to be a poor approximation of the original. That is, the sampling theorem is robust to over-sampling but not to under-sampling. There’s no harm from sampling more frequently than necessary. (No harm as far as the accuracy of the equation above. There may be economic costs, for example, that come from using an unnecessarily high sampling rate.)
Let’s look at the function h(t) = cos(18πt) + cos(20πt). The bandwidth of this function is 10 Hz, and so the sampling theorem requires that we sample our function at at least 20 Hz. If we sample at 20.4 Hz, 2% higher than necessary, the reconstruction lines up with the original function so well that the plots of the two functions agree to the thickness of the plotting line.
But if we sample at 19.6 Hz, 2% less than necessary, the reconstruction is not at all accurate due to problems with aliasing.
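This behavior is easy to reproduce numerically. Below is a minimal sketch assuming NumPy; the theorem’s infinite sum is truncated to samples over a 100-second window, so the reconstruction is evaluated in the middle of the window where the truncation error is negligible:

```python
import numpy as np

def h(t):
    return np.cos(18 * np.pi * t) + np.cos(20 * np.pi * t)

def reconstruct(fs, t):
    """Shannon reconstruction of h from samples taken at rate fs.
    np.sinc(x) is sin(pi x)/(pi x), the same convention as the post."""
    delta = 1.0 / fs
    n = np.arange(int(100 * fs))          # samples over [0, 100) seconds
    samples = h(n * delta)
    return np.sinc(t[:, None] / delta - n) @ samples

t = np.linspace(45, 55, 200)              # interior points, away from edge effects

def rms_error(fs):
    return np.sqrt(np.mean((reconstruct(fs, t) - h(t)) ** 2))

print(rms_error(20.4))   # slight oversampling: small error
print(rms_error(19.6))   # slight undersampling: aliasing makes the error large
```

At 19.6 Hz the 10 Hz component aliases to a 9.6 Hz tone, so the reconstruction differs from the original by roughly a full unit-amplitude sinusoid.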
One rule of thumb is to use the Engineer’s Nyquist frequency of 2.5 fc, which is 25% more than the exact Nyquist frequency. An engineer’s Nyquist frequency is sorta like a baker’s dozen, a conventional safety margin added to a well-known quantity.
Update: Here’s a plot of the error, the RMS difference between the signal and its reconstruction, as a function of sampling frequency.
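For readers without the plot, the error curve can be regenerated along these lines (a self-contained sketch, again assuming NumPy; the printed table can be handed to any plotting library):

```python
import numpy as np

def h(t):
    return np.cos(18 * np.pi * t) + np.cos(20 * np.pi * t)

def rms_error(fs, t_max=100.0):
    """RMS difference between h and its sinc reconstruction at sample rate fs."""
    delta = 1.0 / fs
    n = np.arange(int(t_max * fs))
    samples = h(n * delta)
    t = np.linspace(45, 55, 200)          # interior points to limit truncation error
    recon = np.sinc(t[:, None] / delta - n) @ samples
    return np.sqrt(np.mean((recon - h(t)) ** 2))

rates = np.arange(18.0, 26.0, 0.5)
errors = np.array([rms_error(fs) for fs in rates])
for fs, e in zip(rates, errors):
    print(f"{fs:5.1f} Hz  RMS error {e:.4f}")
```

The error stays of order one below 20 Hz and drops sharply once the sampling rate clears the Nyquist rate.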
By the way, the function in the example demonstrates beats. The sum of a 9 Hz signal and a 10 Hz signal is a 9.5 Hz signal modulated at 0.5 Hz. More details on beats in this post on AM radio and musical instruments.
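The sum-to-product identity makes the beat structure explicit:

```latex
\cos(18\pi t) + \cos(20\pi t) = 2\cos(\pi t)\,\cos(19\pi t)
```

That is, a 9.5 Hz carrier, cos(19πt), multiplied by a slowly varying envelope 2cos(πt).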
Hi John: Is there a typo in there? It says that the bandwidth is 10 Hz, so we should sample at 10 Hz. Is that correct? Also, how did you get 10 Hz for the bandwidth? Thanks.
Hi! Usually sampling is combined with quantizing the signal, but the sampling theorem in itself doesn’t consider quantizing. The question is how these two operations interact, whether they can be treated separately, and to what extent a coarse quantizing can be compensated for by oversampling. Thanks.
If you look at the reconstruction error for non-trivial signals, it is surprisingly easy to get unexpected (and unwanted) artifacts, which usually entails filtering the reconstruction to remove them. Filtering itself can be problematic.
A useful approach is to double the “Engineer’s” sample rate to 5 fc. This can be a no-cost increase if the sample width is reduced by one bit, permitting use of a cheaper ADC (ADC costs rise faster in width than speed). Depending on the noise present, it can be advantageous to add sample pairs, partially reclaiming the “lost” bit.
The trade-off often boils down to “shoving” sampling artifacts to where they either don’t matter, or are cheap to cleanly remove. Nyquist is a start, but far from the whole solution.
Most digital oscilloscopes rate their equivalent analog bandwidth as one fifth the sample rate.
@mark: Thanks. I’ll fix that.
In the oil and gas exploration business (mid to late ’70s), when we started recording areal surveys of seismic data to process 3D reflection images of the subsurface, we used 2x oversampling in time and 1.5x oversampling in space.
The result was improved images of the stratigraphy of the reservoir rocks …
Mark,

Consider cos(20πt). Matching this against cos(2πft) gives 2πft = 20πt, so 2f = 20 and f = 10 Hz. That is the highest frequency present, fc, and since the signal extends down to 0 Hz it is also the bandwidth.

Hope this helps.
In some 40 years in the Aerospace Industry we sampled at 10 times the bandwidth of the highest useful frequency in the desired signal.
The minimum sampling rate is twice the bandwidth, which is less than or equal to twice the maximum frequency of the signal. If you are willing to deal with complex-valued time series, you can actually “baseband” a band-limited signal and sample at a rate which is less than twice the highest frequency in the band-limited signal. For instance, in the example you gave with a 9 Hz sinusoid and a 10 Hz sinusoid, the bandwidth is only 1 Hz, so conceptually you can envision shifting the signal downward in frequency until it has components at −0.5 Hz and +0.5 Hz, and then sample it at greater than or equal to 2 Hz as opposed to 20 Hz. This can help compress the data for storage and allow smaller, faster DFTs to be used to compute the spectrum.
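Here’s a sketch of that basebanding idea applied to the post’s signal, cos(18πt) + cos(20πt), whose components sit at 9 and 10 Hz. It assumes NumPy, and cheats slightly by writing the analytic signal in closed form rather than computing it with a Hilbert transform:

```python
import numpy as np

f1, f2 = 9.0, 10.0
f0 = (f1 + f2) / 2                  # 9.5 Hz band center
fs = 2.0                            # complex baseband rate, well above the 1 Hz bandwidth

def h(t):
    return np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)

def analytic(t):
    # Closed-form analytic signal of h; in practice this would come
    # from a Hilbert transform of the real samples.
    return np.exp(2j * np.pi * f1 * t) + np.exp(2j * np.pi * f2 * t)

delta = 1.0 / fs
n = np.arange(int(100 * fs))        # complex samples over [0, 100) seconds
t_n = n * delta
baseband = analytic(t_n) * np.exp(-2j * np.pi * f0 * t_n)   # components now at +-0.5 Hz

t = np.linspace(45, 55, 200)        # interior points, away from truncation effects
interp = np.sinc(t[:, None] / delta - n) @ baseband          # sinc-interpolate baseband
recovered = np.real(interp * np.exp(2j * np.pi * f0 * t))    # shift back up to 9-10 Hz

err = np.sqrt(np.mean((recovered - h(t)) ** 2))
print(err)   # small, despite sampling at only 2 Hz instead of 20 Hz
```

The baseband signal here is just 2cos(πt), band-limited to 0.5 Hz, so 2 complex samples per second are plenty to reconstruct it; shifting back up recovers the original two-tone signal.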
Sorry, I meant to thank you for the post before I commented!