White noise

In signal processing, white noise is a random signal having equal intensity at different frequencies, giving it a constant power spectral density.[1] The term is used with this or similar meanings in many scientific and technical disciplines, including physics, acoustical engineering, telecommunications, and statistical forecasting. White noise refers to a statistical model for signals and signal sources, rather than to any specific signal. White noise draws its name from white light,[2] although light that appears white generally does not have a flat power spectral density over the visible band.

For other uses, see White Noise.

In discrete time, white noise is a discrete signal whose samples are regarded as a sequence of serially uncorrelated random variables with zero mean and finite variance; a single realization of white noise is a random shock. Depending on the context, one may also require that the samples be independent and have identical probability distribution (in other words independent and identically distributed random variables are the simplest representation of white noise).[3] In particular, if each sample has a normal distribution with zero mean, the signal is said to be additive white Gaussian noise.[4]
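As a minimal illustration (a sketch, not part of the original article), discrete-time Gaussian white noise can be simulated by drawing independent samples from a normal distribution; the sample mean is then close to zero and the sample variance close to the chosen σ²:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
sigma = 1.0          # standard deviation of each sample
n = 100_000          # number of samples

# Independent, identically distributed Gaussian samples:
# the simplest model of additive white Gaussian noise.
w = rng.normal(loc=0.0, scale=sigma, size=n)

print(w.mean())      # close to 0
print(w.var())       # close to sigma**2
```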


The samples of a white noise signal may be sequential in time, or arranged along one or more spatial dimensions. In digital image processing, the pixels of a white noise image are typically arranged in a rectangular grid, and are assumed to be independent random variables with uniform probability distribution over some interval. The concept can be defined also for signals spread over more complicated domains, such as a sphere or a torus.
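For instance (an illustrative sketch, with dimensions and value range chosen arbitrarily), a grayscale white-noise image can be produced by filling a grid with independent uniform samples:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Each pixel is an independent draw from a uniform distribution
# over [0, 255], matching the description above.
height, width = 256, 256
image = rng.integers(0, 256, size=(height, width), dtype=np.uint8)
```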


An infinite-bandwidth white noise signal is a purely theoretical construction. The bandwidth of white noise is limited in practice by the mechanism of noise generation, by the transmission medium and by finite observation capabilities. Thus, random signals are considered "white noise" if they are observed to have a flat spectrum over the range of frequencies that are relevant to the context. For an audio signal, the relevant range is the band of audible sound frequencies (between 20 and 20,000 Hz). Such a signal is heard by the human ear as a hissing sound, resembling the /h/ sound in a sustained aspiration. On the other hand, the "sh" sound /ʃ/ in "ash" is a colored noise because it has a formant structure. In music and acoustics, the term "white noise" may be used for any signal that has a similar hissing sound.


The term white noise is sometimes used in the context of phylogenetically based statistical methods to refer to a lack of phylogenetic pattern in comparative data.[5] It is sometimes used analogously in nontechnical contexts to mean "random talk without meaningful contents".[6][7]

Mathematical definitions

White noise vector

A random vector (that is, a random variable with values in Rn) is said to be a white noise vector or white random vector if its components each have a probability distribution with zero mean and finite variance, and are statistically independent: that is, their joint probability distribution must be the product of the distributions of the individual components.[19]


A necessary (but, in general, not sufficient) condition for statistical independence of two variables is that they be statistically uncorrelated; that is, their covariance is zero. Therefore, the covariance matrix R of the components of a white noise vector w with n elements must be an n by n diagonal matrix, where each diagonal element Rii is the variance of component wi; and the correlation matrix must be the n by n identity matrix.
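As a quick numerical check (a sketch assuming unit-variance components, not from the article), the sample covariance matrix of many realizations of a white noise vector is approximately the identity:

```python
import numpy as np

rng = np.random.default_rng(seed=2)
n = 4          # dimension of the white noise vector
m = 200_000    # number of independent realizations

# Each row is one realization of a white noise vector with
# independent, zero-mean, unit-variance components.
W = rng.normal(size=(m, n))

R = np.cov(W, rowvar=False)    # sample covariance matrix
print(np.round(R, 2))          # approximately the n-by-n identity
```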


If, in addition to being independent, every variable in w also has a normal distribution with zero mean and the same variance σ², w is said to be a Gaussian white noise vector. In that case, the joint distribution of w is a multivariate normal distribution; the independence between the variables then implies that the distribution has spherical symmetry in n-dimensional space. Therefore, any orthogonal transformation of the vector will result in a Gaussian white random vector. In particular, under most types of discrete Fourier transform, such as FFT and Hartley, the transform W of w will be a Gaussian white noise vector, too; that is, the n Fourier coefficients of w will be independent Gaussian variables with zero mean and the same variance σ².


The power spectrum P of a random vector w can be defined as the expected value of the squared modulus of each coefficient of its Fourier transform W, that is, Pi = E(|Wi|²). Under that definition, a Gaussian white noise vector will have a perfectly flat power spectrum, with Pi = σ² for all i.
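The flat spectrum can be checked numerically (an illustrative sketch, assuming unit variance and NumPy's default unnormalized DFT, under which E(|Wi|²) = n·σ²):

```python
import numpy as np

rng = np.random.default_rng(seed=3)
n = 64         # length of each white noise vector
m = 50_000     # number of realizations to average over

w = rng.normal(size=(m, n))        # Gaussian white noise, sigma = 1
W = np.fft.fft(w, axis=1)          # unnormalized DFT (NumPy default)

# With the unnormalized DFT, E(|Wi|^2) = n * sigma^2, so dividing
# by n recovers the flat spectrum Pi = sigma^2.
P = (np.abs(W) ** 2).mean(axis=0) / n
print(np.round(P, 2))              # approximately all ones
```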


If w is a white random vector, but not a Gaussian one, its Fourier coefficients Wi will not be completely independent of each other; however, for large n and common probability distributions the dependencies are very subtle, and their pairwise correlations can be assumed to be zero.


Often the weaker condition "statistically uncorrelated" is used in the definition of white noise, instead of "statistically independent". However, some of the commonly expected properties of white noise (such as flat power spectrum) may not hold for this weaker version. Under this assumption, the stricter version can be referred to explicitly as independent white noise vector.[20]: p.60  Other authors use strongly white and weakly white instead.[21]


An example of a random vector that is "Gaussian white noise" in the weak but not in the strong sense is x = (x1, x2), where x1 is a normal random variable with zero mean, and x2 is equal to +x1 or to −x1, with equal probability. These two variables are uncorrelated and individually normally distributed, but they are not jointly normally distributed and are not independent. If x is rotated by 45 degrees, its two components will still be uncorrelated, but their distribution will no longer be normal.
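This construction is easy to verify numerically (a sketch, not from the article): the sample correlation of x1 and x2 is near zero even though x2 is completely determined by x1 up to sign:

```python
import numpy as np

rng = np.random.default_rng(seed=4)
m = 200_000

x1 = rng.normal(size=m)
signs = rng.choice([-1.0, 1.0], size=m)   # +1 or -1 with equal probability
x2 = signs * x1                           # x2 = +x1 or -x1

print(np.corrcoef(x1, x2)[0, 1])          # approximately 0: uncorrelated

# Rotate (x1, x2) by 45 degrees: the components remain uncorrelated,
# but y1 is either 0 or sqrt(2)*x1, a mixture that is not normal.
y1 = (x1 + x2) / np.sqrt(2)
y2 = (x2 - x1) / np.sqrt(2)
print(np.corrcoef(y1, y2)[0, 1])          # still approximately 0
```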


In some situations, one may relax the definition by allowing each component of a white random vector w to have a non-zero expected value μ. In image processing especially, where samples are typically restricted to positive values, one often takes μ to be one half of the maximum sample value. In that case, the Fourier coefficient W0 corresponding to the zero-frequency component (essentially, the average of the wi) will also have a non-zero expected value μ√n; and the power spectrum P will be flat only over the non-zero frequencies.

Discrete-time white noise

A discrete-time stochastic process W(n) is a generalization of a random vector with a finite number of components to infinitely many components. A discrete-time stochastic process W(n) is called white noise if its mean is equal to zero for all n, i.e. E[W(n)] = 0, and if the autocorrelation function R(n) = E[W(k + n)W(k)] has a nonzero value only for n = 0, i.e. R(n) = σ²δ(n).
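A sample autocorrelation estimate makes the condition R(n) = σ²δ(n) concrete (an illustrative sketch, assuming unit variance):

```python
import numpy as np

rng = np.random.default_rng(seed=5)
N = 100_000
w = rng.normal(size=N)    # discrete-time Gaussian white noise, sigma = 1

# Biased sample autocorrelation: R(n) is the mean of W(k+n) * W(k).
def autocorr(x, max_lag):
    return np.array([np.dot(x[: len(x) - n], x[n:]) / len(x)
                     for n in range(max_lag + 1)])

R = autocorr(w, max_lag=5)
print(np.round(R, 3))     # approximately [1, 0, 0, 0, 0, 0]
```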

Continuous-time white noise

In order to define the notion of "white noise" in the theory of continuous-time signals, one must replace the concept of a "random vector" by a continuous-time random signal; that is, a random process that generates a function w of a real-valued parameter t.


Such a process is said to be white noise in the strongest sense if the value w(t) for any time t is a random variable that is statistically independent of its entire history before t. A weaker definition requires independence only between the values w(t1) and w(t2) at every pair of distinct times t1 and t2. An even weaker definition requires only that such pairs w(t1) and w(t2) be uncorrelated.[22] As in the discrete case, some authors adopt the weaker definition for "white noise", and use the qualifier independent to refer to either of the stronger definitions. Others use weakly white and strongly white to distinguish between them.


However, a precise definition of these concepts is not trivial, because some quantities that are finite sums in the finite discrete case must be replaced by integrals that may not converge. Indeed, the set of all possible instances of a signal w is no longer a finite-dimensional space ℝⁿ, but an infinite-dimensional function space. Moreover, by any definition a white noise signal w would have to be essentially discontinuous at every point; therefore even the simplest operations on w, like integration over a finite interval, require advanced mathematical machinery.


Some authors require each value w(t) to be a real-valued random variable with expectation μ and some finite variance σ². Then the covariance E(w(t1)·w(t2)) between the values at two times t1 and t2 is well-defined: it is zero if the times are distinct, and σ² if they are equal. However, by this definition, the integral

W[a, a+r] = ∫ₐ^(a+r) w(t) dt

over any interval [a, a+r] with positive width r would be simply the width times the expectation: rμ. This property renders the concept inadequate as a model of "white noise" signals either in a physical or mathematical sense.


Therefore, most authors define the signal w indirectly by specifying random values for the integrals of w(t) and |w(t)|² over each interval [a, a+r]. In this approach, however, the value of w(t) at an isolated time cannot be defined as a real-valued random variable. Also the covariance E(w(t1)·w(t2)) becomes infinite when t1 = t2; and the autocorrelation function R(t1, t2) must be defined as Nδ(t1 − t2), where N is some real constant and δ is Dirac's "function".


In this approach, one usually specifies that the integral W_I of w(t) over an interval I = [a, a+r] is a real random variable with normal distribution, zero mean, and variance rN; and also that the covariance E(W_I·W_J) of the integrals W_I, W_J is rN, where r is the width of the intersection I ∩ J of the two intervals I, J. This model is called a Gaussian white noise signal (or process).
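Numerically, this model is usually handled through its integrals: the integral over an interval of width r is a Gaussian with variance rN, and on a grid the integral over each cell of width dt is an independent N(0, N·dt) increment (this is also how Brownian motion increments are simulated). A minimal sketch, assuming intensity N = 1:

```python
import numpy as np

rng = np.random.default_rng(seed=6)

N = 1.0        # noise intensity: autocorrelation N * delta(t1 - t2)
r = 2.0        # interval width
m = 10_000     # number of independent realizations

# The integral W_I of Gaussian white noise over an interval I of width r
# is modeled directly as a normal random variable with variance r * N.
W_I = rng.normal(loc=0.0, scale=np.sqrt(r * N), size=m)
print(W_I.var())                  # approximately r * N = 2.0

# Equivalently, on a grid the integral over each cell of width dt is an
# independent N(0, N*dt) increment; summing the cells over the interval
# recovers a variable with the same distribution.
dt = 0.01
dW = rng.normal(scale=np.sqrt(N * dt), size=(m, int(r / dt)))
print(dW.sum(axis=1).var())       # also approximately r * N
```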


In the mathematical field known as white noise analysis, a Gaussian white noise w is defined as a stochastic tempered distribution, i.e. a random variable with values in the space S′(ℝ) of tempered distributions. Analogous to the case for finite-dimensional random vectors, a probability law on the infinite-dimensional space S′(ℝ) can be defined via its characteristic function (existence and uniqueness are guaranteed by an extension of the Bochner–Minlos theorem, which goes under the name Bochner–Minlos–Sazonov theorem); analogously to the case of the multivariate normal distribution X ~ N(μ, Σ), which has characteristic function

φ_X(u) = exp(i μᵀu − ½ uᵀΣu),

the white noise w must satisfy

E[exp(i⟨w, φ⟩)] = exp(−½ ‖φ‖²) for any Schwartz function φ,

where ⟨w, φ⟩ is the natural pairing of the tempered distribution w(ω) with the Schwartz function φ, taken scenariowise for ω ∈ Ω, and ‖φ‖² = ∫ φ(t)² dt is the squared L²(ℝ) norm of φ.

Mathematical applications

Time series analysis and regression

In statistics and econometrics one often assumes that an observed series of data values is the sum of the values generated by a deterministic linear process, depending on certain independent (explanatory) variables, and on a series of random noise values. Then regression analysis is used to infer the parameters of the model process from the observed data, e.g. by ordinary least squares, and to test the null hypothesis that each of the parameters is zero against the alternative hypothesis that it is non-zero. Hypothesis testing typically assumes that the noise values are mutually uncorrelated with zero mean and have the same Gaussian probability distribution – in other words, that the noise is Gaussian white (not just white). If there is non-zero correlation between the noise values underlying different observations then the estimated model parameters are still unbiased, but estimates of their uncertainties (such as confidence intervals) will be biased (not accurate on average). This is also true if the noise is heteroskedastic – that is, if it has different variances for different data points.
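As a small illustration (a sketch, not from the article; the model and coefficients are invented), fitting a linear model to data whose noise term is Gaussian white noise recovers the true parameters, with residuals that themselves look like white noise:

```python
import numpy as np

rng = np.random.default_rng(seed=7)
n = 1_000

x = np.linspace(0.0, 10.0, n)
noise = rng.normal(scale=0.5, size=n)   # Gaussian white noise term
y = 2.0 + 3.0 * x + noise               # deterministic part plus noise

# Ordinary least squares via the design matrix [1, x].
A = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
print(beta)                             # approximately [2.0, 3.0]

residuals = y - A @ beta
print(np.corrcoef(residuals[:-1], residuals[1:])[0, 1])  # lag-1 corr ~ 0
```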


Alternatively, in the subset of regression analysis known as time series analysis there are often no explanatory variables other than the past values of the variable being modeled (the dependent variable). In this case the noise process is often modeled as a moving average process, in which the current value of the dependent variable depends on current and past values of a sequential white noise process.
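For example (an illustrative sketch with arbitrary weights), a moving average process can be built by convolving a white noise sequence with a short set of weights, which colors the noise:

```python
import numpy as np

rng = np.random.default_rng(seed=8)
n = 100_000

eps = rng.normal(size=n)          # underlying white noise sequence
b = np.array([1.0, 0.6, 0.3])     # MA(2) weights (illustrative values)

# X(t) = eps(t) + 0.6*eps(t-1) + 0.3*eps(t-2): the current value depends
# on current and past values of the white noise process.
x = np.convolve(eps, b, mode="valid")

# The result is no longer white: neighboring values are correlated.
print(np.corrcoef(x[:-1], x[1:])[0, 1])   # clearly non-zero (about 0.54)
```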

Random vector transformations

By a suitable linear transformation (a coloring transformation), a white random vector can be used to produce a "non-white" random vector (that is, a list of random variables) whose elements have a prescribed covariance matrix. Conversely, a random vector with known covariance matrix can be transformed into a white random vector by a suitable whitening transformation.


These two ideas are crucial in applications such as channel estimation and channel equalization in communications and audio. These concepts are also used in data compression.
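A standard way to implement both transformations (a sketch, assuming a positive-definite target covariance) uses the Cholesky factorization Σ = LLᵀ: multiplying white noise by L colors it, and multiplying by L⁻¹ whitens it:

```python
import numpy as np

rng = np.random.default_rng(seed=9)
m = 200_000

# Prescribed covariance matrix (illustrative, positive definite).
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])
L = np.linalg.cholesky(Sigma)          # Sigma = L @ L.T

w = rng.normal(size=(2, m))            # white random vectors (columns)

x = L @ w                              # coloring: cov(x) ~ Sigma
print(np.round(np.cov(x), 2))

w2 = np.linalg.solve(L, x)             # whitening: cov(w2) ~ identity
print(np.round(np.cov(w2), 2))
```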

Generation

White noise may be generated digitally with a digital signal processor, microprocessor, or microcontroller. Generating white noise typically entails feeding an appropriate stream of random numbers to a digital-to-analog converter. The quality of the white noise will depend on the quality of the algorithm used.[23]
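As a concrete sketch (not from the article; the file name and parameters are illustrative), a digital approximation of white noise can be generated and written to a WAV file using Python's standard library plus NumPy:

```python
import wave
import numpy as np

rng = np.random.default_rng()
rate = 44_100                 # sample rate in Hz
seconds = 2.0

# Uniform random samples give band-limited white noise: the spectrum is
# flat up to the Nyquist frequency (rate / 2).
samples = rng.uniform(-1.0, 1.0, size=int(rate * seconds))
pcm = (samples * 32_767).astype(np.int16)   # 16-bit PCM

with wave.open("white_noise.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)         # 2 bytes = 16 bits per sample
    f.setframerate(rate)
    f.writeframes(pcm.tobytes())
```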

Informal use

The term is sometimes used as a colloquialism to describe a backdrop of ambient sound, creating an indistinct or seamless commotion. Following are some examples:

Chatter from multiple conversations within the acoustics of a confined space.

The pleonastic jargon used by politicians to mask a point that they don't want noticed.[24]

Music that is disagreeable, harsh, dissonant or discordant with no melody.


The term can also be used metaphorically, as in the novel White Noise (1985) by Don DeLillo, which explores the symptoms of modern culture that came together so as to make it difficult for an individual to actualize their ideas and personality.