Signal averaging is a signal processing technique applied in the time domain, intended to increase the strength of a signal relative to noise that is obscuring it. By averaging a set of replicate measurements, the signal-to-noise ratio (SNR) is increased, ideally in proportion to the square root of the number of measurements.
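
A minimal MATLAB sketch of this square-root behavior (illustrative only; the signal shape and parameters here are assumed for the demonstration, not taken from the article):

n = 100;                              % number of replicate measurements
t = linspace(0, 2*pi, 1000);
s = sin(t);                           % clean signal, known here for checking
trials = repmat(s, n, 1) + randn(n, numel(t)); % each row: signal + unit-variance noise
avg = mean(trials, 1);                % pointwise average of the replicates
snr_single = rms(s) / rms(trials(1,:) - s);    % amplitude SNR of one raw trial
snr_avg    = rms(s) / rms(avg - s);            % amplitude SNR after averaging
fprintf('gain = %.1f, expected about sqrt(%d) = %.1f\n', ...
        snr_avg/snr_single, n, sqrt(n));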


Deriving the SNR for averaged signals

Assume that:

* The signal s(t) is uncorrelated with the noise, and the noise z(t) is itself uncorrelated across time lags: E[z(t)z(t-\tau)] = 0 for \tau \neq 0, and E[z(t)s(t-\tau)] = 0 \quad \forall\, t, \tau.
* The signal power P_\text{signal} = E[s^2] is constant in the replicate measurements.
* The noise is random, with a mean of zero and constant variance in the replicate measurements: E[z] = 0 = \mu and 0 < E[(z-\mu)^2] = E[z^2] = P_\text{noise} = \sigma^2.
* We (canonically) define the signal-to-noise ratio as \text{SNR} = \frac{P_\text{signal}}{P_\text{noise}} = \frac{E[s^2]}{E[z^2]}.
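
As a brief worked example (the numbers are chosen purely for illustration): a signal of power P_\text{signal} = 1 observed in noise of variance \sigma^2 = 0.25 has

\text{SNR} = \frac{P_\text{signal}}{P_\text{noise}} = \frac{1}{0.25} = 4,

and, as derived below, averaging n = 16 replicates reduces the noise power to \sigma^2/16, raising the ratio to 4 \cdot 16 = 64.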


Noise power for sampled signals

Assuming we sample the noise, we get a per-sample variance of

\mathrm{Var}(z) = E[z^2] = \sigma^2.

Averaging n realizations of a random variable leads to the following variance:

\mathrm{Var}\left(\frac{1}{n}\sum_{i=1}^n z_i\right) = \frac{1}{n^2}\,\mathrm{Var}\left(\sum_{i=1}^n z_i\right) = \frac{1}{n^2}\sum_{i=1}^n \mathrm{Var}(z_i),

where the last equality holds because the noise realizations are uncorrelated. Since the noise variance is a constant \sigma^2:

\mathrm{Var}(N_\text{avg}) = \mathrm{Var}\left(\frac{1}{n}\sum_{i=1}^n z_i\right) = \frac{1}{n^2}\, n\, \sigma^2 = \frac{1}{n}\,\sigma^2,

demonstrating that averaging n realizations of the same, uncorrelated noise reduces the noise power by a factor of n, and reduces the noise level by a factor of \sqrt{n}.
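
This 1/n behavior is easy to check numerically; a minimal sketch (parameters assumed for illustration):

n = 64;                     % number of uncorrelated noise realizations
T = 100000;                 % samples per realization
z = randn(n, T);            % unit-variance noise, sigma^2 = 1
z_avg = mean(z, 1);         % pointwise average across realizations
fprintf('variance of average = %.4f, expected 1/%d = %.4f\n', ...
        var(z_avg), n, 1/n);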


Signal power for sampled signals

Considering n vectors V_i, \, i \in \{1, \ldots, n\}, of signal samples of length T:

V_i = \left[s_{i,1}, \ldots, s_{i,T}\right], \quad V_i \in \mathbb{K}^T,

the power P_i of such a vector simply is

P_i = \sum_{k=1}^T s_{i,k}^2 = \left|V_i\right|^2.

Again, averaging the n vectors V_i, \, i = 1, \ldots, n, yields the averaged vector

V_\text{avg} = \frac{1}{n}\sum_{i=1}^n V_i, \quad \text{with components } s_{\text{avg},k} = \frac{1}{n}\sum_{i=1}^n s_{i,k}.

In the case where V_m \equiv V_n\ \forall\, m, n \in \{1, \ldots, n\}, i.e. the same signal is observed in every replicate, the averaged power P_\text{avg} reaches its maximum, P_\text{avg} = P_i. In this case, the ratio of signal to noise also reaches a maximum,

\text{SNR}_\text{avg} = \frac{P_\text{avg}}{N_\text{avg}} = \frac{P_i}{\sigma^2/n} = n \cdot \text{SNR}.

This is the oversampling case, where the observed signal is correlated (because oversampling implies that the signal observations are strongly correlated).
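
A small sketch of this maximum case (illustrative; the signal shape and sizes are assumed): when the same signal appears in every replicate, its power survives averaging while the noise power falls as 1/n, so the SNR gain approaches n.

n = 32; T = 1000;
s = sin(linspace(0, 4*pi, T));           % identical signal in every replicate
trials = repmat(s, n, 1) + randn(n, T);  % replicates: signal + uncorrelated noise
avg = mean(trials, 1);
P_signal    = mean(s.^2);                % signal power, preserved by averaging
P_noise_avg = mean((avg - s).^2);        % residual noise power after averaging
snr_single  = P_signal / 1;              % per-trial SNR (unit noise variance)
snr_avg     = P_signal / P_noise_avg;    % SNR of the average
fprintf('SNR gain = %.1f, expected about n = %d\n', snr_avg/snr_single, n);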


Time-locked signals

Averaging is applied to enhance a time-locked signal component in noisy measurements; time-locking implies that the signal is observation-periodic, so we end up in the maximum case above.
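
In practice this is usually done by cutting the recording into epochs aligned to a trigger and averaging them; a minimal sketch (the response shape and trial counts are assumed for illustration):

T = 200;                                  % samples per epoch
n = 100;                                  % number of time-locked trials
response = exp(-((1:T) - 60).^2 / 200);   % assumed evoked-response shape
epochs = repmat(response, n, 1) + randn(n, T); % each trial: response + noise
erp = mean(epochs, 1);                    % time-locked average
plot(erp); hold on; plot(response);       % compare estimate with true response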


Averaging odd and even trials

A specific way of obtaining replicates is to average all the odd- and even-numbered trials in separate buffers. This has the advantage of allowing comparison of the even and odd results from interleaved trials. The average of the odd and even averages gives the complete averaged result, while the difference between the odd and even averages, divided by two, constitutes an estimate of the noise.


Algorithmic implementation

The following is a MATLAB simulation of the averaging process:

N = 1000;                         % signal length
even = zeros(N,1);                % even-trial buffer
odd = even;                       % odd-trial buffer
actual_noise = even;              % keep track of the actual noise
x = sin(linspace(0, 4*pi, N))';   % tracked signal
for ii = 1:256                    % number of replicates
    n = randn(N,1);               % random noise, zero mean, unit variance
    actual_noise = actual_noise + n;
    if mod(ii,2)                  % odd-numbered trial
        odd = odd + n + x;
    else                          % even-numbered trial
        even = even + n + x;
    end
end
even_avg = even/(ii/2);           % even buffer average
odd_avg  = odd/(ii/2);            % odd buffer average
act_avg  = actual_noise/ii;       % actual averaged noise

db(rms(act_avg))                  % true residual noise level
db(rms((even_avg - odd_avg)/2))   % noise estimate from the odd/even split
plot((odd_avg + even_avg)/2);     % averaged signal estimate
hold on;
plot((even_avg - odd_avg)/2)      % noise estimate

The averaging process above, and in general, results in an estimate of the signal. When compared with the raw trace, the averaged noise component is reduced with every averaged trial. When averaging real signals, the underlying component may not always be as clear; this leads to repeated averages in a search for components that are consistent across two or three replicates. It is unlikely that two or more consistent results will be produced by chance alone.
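
Since the simulation averages 256 replicates of unit-variance noise, both printed noise levels should come out near 20\log_{10}(1/\sqrt{256}) \approx -24 dB, and the odd/even noise estimate should track the actual residual noise closely.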


Correlated noise

Signal averaging typically relies heavily on the assumption that the noise component of a signal is random, has zero mean, and is unrelated to the signal. However, there are instances in which the noise does not satisfy these assumptions. A common example of correlated noise is quantization noise (e.g. the noise created when converting from an analog to a digital signal).
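
A small sketch of why averaging fails in this situation (illustrative, assuming a uniform quantizer and no independent noise between trials): the quantization error is a deterministic function of the signal, so it repeats identically in every replicate and averaging leaves it untouched.

q = 0.5;                              % assumed quantizer step size
x = sin(linspace(0, 4*pi, 1000));     % clean signal
xq = q * round(x / q);                % uniformly quantized replicate
n = 256;
avg = mean(repmat(xq, n, 1), 1);      % average n identical quantized trials
fprintf('rms error before: %.4f, after averaging: %.4f\n', ...
        rms(xq - x), rms(avg - x));   % unchanged: averaging gains nothing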

