
Sampling (signal processing)

In signal processing, sampling is the reduction of a continuous-time signal to a discrete-time signal. A common example is the conversion of a sound wave to a sequence of "samples". A sample is a value of the signal at a point in time and/or space; this definition differs from the term's usage in statistics, which refers to a set of such values.[A]


A sampler is a subsystem or operation that extracts samples from a continuous signal. A theoretical ideal sampler produces samples equivalent to the instantaneous value of the continuous signal at the desired points.


The original signal can be reconstructed from a sequence of samples, up to the Nyquist limit, by passing the sequence of samples through a reconstruction filter.
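In the ideal bandlimited case, that reconstruction is the Whittaker–Shannon interpolation formula: each sample is weighted by a shifted sinc function and the contributions are summed. A minimal numerical sketch (all values illustrative; the infinite sum is necessarily truncated, so a small residual error remains):

```python
import math

def sinc(x):
    # normalized sinc: sin(pi x) / (pi x), with sinc(0) = 1
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

fs = 10.0                                  # sample rate (Hz), well above Nyquist
f = 1.0                                    # tone frequency (Hz)
signal = lambda t: math.sin(2 * math.pi * f * t)

# samples taken at the instants t = n/fs
samples = {n: signal(n / fs) for n in range(-500, 501)}

# Whittaker-Shannon interpolation at a point between sampling instants
t0 = 0.05
reconstructed = sum(v * sinc(fs * t0 - n) for n, v in samples.items())
# `reconstructed` closely matches signal(t0), up to truncation error
```

The key point is that the reconstruction filter's impulse response is the sinc function; in practice an approximation with finite support is used.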

In practice, the continuous signal is sampled using an analog-to-digital converter (ADC), a device with various physical limitations. This results in deviations from the theoretically perfect reconstruction, collectively referred to as distortion. Various types of distortion can occur, including:

Aliasing. Some amount of aliasing is inevitable because only theoretical, infinitely long functions can have no frequency content above the Nyquist frequency. Aliasing can be made arbitrarily small by using a sufficiently high-order anti-aliasing filter.

Aperture error, which results from the fact that the sample is obtained as a time average within a sampling region, rather than being equal to the signal value at the sampling instant.[4] In a capacitor-based sample-and-hold circuit, aperture errors are introduced by multiple mechanisms: for example, the capacitor cannot instantly track the input signal, and it cannot instantly be isolated from the input signal.

Jitter, or deviation from the precise sample timing intervals.

Noise, including thermal sensor noise, analog circuit noise, etc.

Slew rate limit error, caused by the inability of the ADC input value to change sufficiently rapidly.

Quantization, as a consequence of the finite precision of the words that represent the converted values.

Non-linearity: error due to other effects of the mapping of input voltage to converted output value (in addition to the effects of quantization).

Although the use of oversampling can completely eliminate aperture error and aliasing by shifting them out of the passband, this technique cannot be practically used above a few GHz and may be prohibitively expensive at much lower frequencies. Furthermore, while oversampling can reduce quantization error and non-linearity, it cannot eliminate these entirely. Consequently, practical ADCs at audio frequencies typically do not exhibit aliasing or aperture error and are not limited by quantization error; instead, analog noise dominates. At RF and microwave frequencies, where oversampling is impractical and filters are expensive, aperture error, quantization error, and aliasing can be significant limitations.


Jitter, noise, and quantization are often analyzed by modeling them as random errors added to the sample values. Integration and zero-order hold effects can be analyzed as a form of low-pass filtering. The non-linearities of either ADC or DAC are analyzed by replacing the ideal linear function mapping with a proposed nonlinear function.
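The additive-noise model of quantization can be checked with a toy numerical experiment: quantizing a full-scale sine to N bits should yield a signal-to-noise ratio near the well-known 6.02·N + 1.76 dB estimate. A sketch with an illustrative 8-bit word size:

```python
import math

def quantization_snr_db(bits, n=10000):
    """Quantize a full-scale sine to `bits` bits; return the measured SNR in dB."""
    levels = 2 ** (bits - 1) - 1              # e.g. 127 positive steps for 8 bits
    sig_pow = noise_pow = 0.0
    for k in range(n):
        x = math.sin(2 * math.pi * 7 * k / n)  # 7 cycles of a unit-amplitude sine
        q = round(x * levels) / levels         # uniform mid-tread quantizer
        sig_pow += x * x
        noise_pow += (q - x) ** 2
    return 10 * math.log10(sig_pow / noise_pow)

snr8 = quantization_snr_db(8)    # the rule of thumb predicts roughly 50 dB
snr12 = quantization_snr_db(12)  # each extra bit adds about 6 dB
```

This treats the quantization error as if it were additive noise, which is exactly the modeling approach described above.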

Applications

Audio sampling

Digital audio uses pulse-code modulation (PCM) and digital signals for sound reproduction. This includes analog-to-digital conversion (ADC), digital-to-analog conversion (DAC), storage, and transmission. In effect, the system commonly referred to as digital is in fact a discrete-time, discrete-level analog of a previous electrical analog. While modern systems can be quite subtle in their methods, the primary usefulness of a digital system is the ability to store, retrieve and transmit signals without any loss of quality.


When it is necessary to capture audio covering the entire 20–20,000 Hz range of human hearing,[5] such as when recording music or many types of acoustic events, audio waveforms are typically sampled at 44.1 kHz (CD), 48 kHz, 88.2 kHz, or 96 kHz.[6] The approximately double-rate requirement is a consequence of the Nyquist theorem. Sampling rates higher than about 50–60 kHz cannot supply more usable information for human listeners. Early professional audio equipment manufacturers chose sampling rates in the region of 40 to 50 kHz for this reason.
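The double-rate requirement can be seen directly in the numbers: a tone above half the sampling rate produces exactly the same sample values as a lower-frequency alias, so the two are indistinguishable once sampled. A small sketch with an illustrative 1000 Hz sampling rate:

```python
import math

fs = 1000.0   # sampling rate (Hz), so the Nyquist frequency is 500 Hz

# Samples of a 1300 Hz tone are identical to samples of a 300 Hz tone,
# because 1300 Hz = 300 Hz + fs: the two frequencies are aliases of each other.
diffs = [abs(math.sin(2 * math.pi * 1300 * n / fs) -
             math.sin(2 * math.pi * 300 * n / fs)) for n in range(50)]
# max(diffs) is zero up to floating-point rounding
```

This is why any content above the Nyquist frequency must be removed by an anti-aliasing filter before sampling: after sampling, the alias cannot be separated from a genuine low-frequency tone.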


There has been an industry trend towards sampling rates well beyond the basic requirements, such as 96 kHz and even 192 kHz.[7] Even though ultrasonic frequencies are inaudible to humans, recording and mixing at higher sampling rates is effective in eliminating the distortion that can be caused by foldback aliasing. Conversely, ultrasonic sounds may interact with and modulate the audible part of the frequency spectrum (intermodulation distortion), degrading the fidelity.[8] One advantage of higher sampling rates is that they can relax the low-pass filter design requirements for ADCs and DACs, but with modern oversampling delta-sigma converters this advantage is less important.


The Audio Engineering Society recommends 48 kHz sampling rate for most applications but gives recognition to 44.1 kHz for CD and other consumer uses, 32 kHz for transmission-related applications, and 96 kHz for higher bandwidth or relaxed anti-aliasing filtering.[9] Both Lavry Engineering and J. Robert Stuart state that the ideal sampling rate would be about 60 kHz, but since this is not a standard frequency, recommend 88.2 or 96 kHz for recording purposes.[10][11][12][13]


Other common audio sample rates include 8 kHz (telephony), 22.05 kHz, 32 kHz (broadcast transmission), 176.4 kHz, and 192 kHz.

Complex sampling

Complex sampling (or I/Q sampling) is the simultaneous sampling of two different, but related, waveforms, resulting in pairs of samples that are subsequently treated as complex numbers.[C] When one waveform, ŝ(t), is the Hilbert transform of the other waveform, s(t), the complex-valued function s_a(t) = s(t) + i·ŝ(t) is called an analytic signal, whose Fourier transform is zero for all negative values of frequency. In that case, the Nyquist rate for a waveform with no frequencies ≥ B can be reduced to just B (complex samples/sec), instead of 2B (real samples/sec).[D] Equivalently, the baseband waveform s_a(t)·e^(−iπBt) also has a Nyquist rate of B, because all of its non-zero frequency content is shifted into the interval [−B/2, B/2).
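The one-sided spectrum of an analytic signal is easy to verify numerically. In this sketch (illustrative values; a pure tone is used because the Hilbert transform of a cosine is simply a sine), a plain DFT shows all the energy in a single positive-frequency bin:

```python
import cmath

N = 16

# For a pure tone the Hilbert transform of cos(wt) is sin(wt), so the
# analytic signal cos(wt) + i*sin(wt) is just the complex exponential e^{iwt}.
analytic = [cmath.exp(2j * cmath.pi * 3 * n / N) for n in range(N)]

def dft(x):
    """Plain O(N^2) discrete Fourier transform."""
    n_pts = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / n_pts)
                for n in range(n_pts)) for k in range(n_pts)]

mags = [abs(c) for c in dft(analytic)]
# All the energy sits in bin 3 (a positive frequency); the negative-frequency
# bins (9..15) are zero, the defining property of an analytic signal.
```

A real cosine would instead split its energy equally between bins 3 and 13 (+3 and −3 cycles), which is why real sampling needs twice the rate.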


Although complex-valued samples can be obtained as described above, they can also be created by manipulating samples of a real-valued waveform. For instance, the equivalent baseband waveform can be created without explicitly computing ŝ(t), by processing the product sequence s(nT)·e^(−iπBnT)[E] through a digital low-pass filter whose cutoff frequency is B/2.[F] Computing only every other sample of the output sequence reduces the sample rate commensurate with the reduced Nyquist rate. The result is half as many complex-valued samples as the original number of real samples. No information is lost, and the original s(t) waveform can be recovered, if necessary.
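The frequency shift performed by the product sequence can be sketched numerically. All values here are illustrative (B = 4 Hz, fs = 2B = 8 Hz, a 3 Hz tone); the low-pass filtering and decimation steps are only described in the comments:

```python
import math, cmath

N = 16
fs = 8.0                 # sample rate = 2B
B = fs / 2               # one-sided bandwidth, 4 Hz (illustrative values)
f0 = 3.0                 # a real tone inside the band
T = 1 / fs

# real samples s(nT) and the product sequence s(nT) * e^{-i*pi*B*n*T}
s = [math.cos(2 * math.pi * f0 * n * T) for n in range(N)]
product = [s[n] * cmath.exp(-1j * math.pi * B * n * T) for n in range(N)]

def dft(x):
    n_pts = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / n_pts)
                for n in range(n_pts)) for k in range(n_pts)]

mags = [abs(c) for c in dft(product)]
# The cosine's +/-3 Hz components (bins 6 and 10, bin spacing 0.5 Hz) are each
# shifted down by B/2 = 2 Hz (4 bins), landing in bins 2 and 6.  A low-pass
# filter with cutoff B/2 = 2 Hz would keep only bin 2 (1 Hz), leaving all
# content inside [-B/2, B/2); taking every other filtered sample then yields
# complex samples at rate B with no loss of information.
```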

See also

Crystal oscillator frequencies
Downsampling
Upsampling
Multidimensional sampling
In-phase and quadrature components and I/Q data
Sample rate conversion
Digitizing
Sample and hold
Beta encoder
Kell factor
Bit rate
Normalized frequency

Further reading

Matt Pharr, Wenzel Jakob and Greg Humphreys, Physically Based Rendering: From Theory to Implementation, 3rd ed., Morgan Kaufmann, November 2016. ISBN 978-0128006450. The chapter on sampling (available online) is nicely written with diagrams, core theory and code samples.

External links

Journal devoted to Sampling Theory
I/Q Data for Dummies – a page trying to answer the question "Why I/Q Data?"
Sampling of analog signals – an interactive presentation in a web-demo at the Institute of Telecommunications, University of Stuttgart