Allan Deviation

The Allan variance (AVAR), also known as two-sample variance, is a measure of frequency stability in clocks, oscillators and amplifiers. It is named after David W. Allan and is expressed mathematically as \sigma_y^2(\tau). The Allan deviation (ADEV), also known as sigma-tau, is the square root of the Allan variance, \sigma_y(\tau).

The ''M''-sample variance is a measure of frequency stability using ''M'' samples, time ''T'' between measurements and observation time \tau. The ''M''-sample variance is expressed as

:\sigma_y^2(M, T, \tau).

The Allan variance is intended to estimate stability due to noise processes and not that of systematic errors or imperfections such as frequency drift or temperature effects. The Allan variance and Allan deviation describe frequency stability. See also the section Interpretation of value below.

There are also different adaptations or alterations of the Allan variance, notably the modified Allan variance (MAVAR or MVAR), the total variance, and the Hadamard variance. There also exist time-stability variants such as time deviation (TDEV) or time variance (TVAR). The Allan variance and its variants have proven useful outside the scope of timekeeping and are a set of improved statistical tools to use whenever the noise processes are not unconditionally stable, thus a derivative exists.

The general ''M''-sample variance remains important, since it allows dead time in measurements, and bias functions allow conversion into Allan variance values. Nevertheless, for most applications the special case of 2-sample, or "Allan variance" with T = \tau, is of greatest interest.


Background

When investigating the stability of crystal oscillators and atomic clocks, it was found that they did not have a phase noise consisting only of white noise, but also of flicker frequency noise. These noise forms become a challenge for traditional statistical tools such as standard deviation, as the estimator will not converge. The noise is thus said to be divergent. Early efforts in analysing the stability included both theoretical analysis and practical measurements.

An important side consequence of having these types of noise was that, since the various methods of measurement did not agree with each other, the key aspect of repeatability of a measurement could not be achieved. This limited the possibility to compare sources and make meaningful specifications to require from suppliers. Essentially all forms of scientific and commercial uses were then limited to dedicated measurements, which hopefully would capture the need for that application.

To address these problems, David Allan introduced the ''M''-sample variance and (indirectly) the two-sample variance. While the two-sample variance did not completely allow all types of noise to be distinguished, it provided a means to meaningfully separate many noise forms for time series of phase or frequency measurements between two or more oscillators. Allan provided a method to convert between any ''M''-sample variance and any ''N''-sample variance via the common 2-sample variance, thus making all ''M''-sample variances comparable. The conversion mechanism also proved that the ''M''-sample variance does not converge for large ''M'', making it less useful. IEEE later identified the 2-sample variance as the preferred measure.

An early concern was related to time- and frequency-measurement instruments that had a dead time between measurements. Such a series of measurements did not form a continuous observation of the signal and thus introduced a systematic bias into the measurement. Great care was spent in estimating these biases. The introduction of zero-dead-time counters removed the need, but the bias-analysis tools have proved useful.

Another early aspect of concern was related to how the bandwidth of the measurement instrument would influence the measurement, such that it needed to be noted. It was later found that by algorithmically changing the observation time \tau, only low \tau values would be affected, while higher values would be unaffected. The change of \tau is done by letting it be an integer multiple n of the measurement timebase \tau_0:

:\tau = n \tau_0.

The physics of crystal oscillators was analyzed by D. B. Leeson, and the result is now referred to as Leeson's equation. The feedback in the oscillator will make the white noise and flicker noise of the feedback amplifier and crystal become the power-law noises of f^0 white frequency noise and f^{-1} flicker frequency noise respectively. These noise forms have the effect that the standard variance estimator does not converge when processing time-error samples. This mechanism of the feedback oscillators was unknown when the work on oscillator stability started, but was presented by Leeson at the same time as the set of statistical tools was made available by David W. Allan. For a more thorough presentation on the Leeson effect, see modern phase-noise literature.


Interpretation of value

Allan variance is defined as one half of the time average of the squares of the differences between successive readings of the frequency deviation sampled over the sampling period. The Allan variance depends on the time period used between samples; therefore, it is a function of the sample period, commonly denoted as ''τ'', as well as of the distribution being measured, and is displayed as a graph rather than a single number. A low Allan variance is a characteristic of a clock with good stability over the measured period.

Allan deviation is widely used for plots (conventionally in log–log format) and presentation of numbers. It is preferred, as it gives the relative amplitude stability, allowing ease of comparison with other sources of errors. An Allan deviation of 1.3 × 10^−9 at observation time 1 s (i.e. ''τ'' = 1 s) should be interpreted as there being an instability in frequency between two observations 1 second apart with a relative root mean square (RMS) value of 1.3 × 10^−9. For a 10 MHz clock, this would be equivalent to 13 mHz RMS movement. If the phase stability of an oscillator is needed, then the time deviation variants should be consulted and used.

One may convert the Allan variance and other time-domain variances into frequency-domain measures of time (phase) and frequency stability.


Definitions


''M''-sample variance

The ''M''-sample variance is defined (Allan, D.: ''Statistics of Atomic Frequency Standards'', Proceedings of the IEEE, Vol. 54, No. 2, pages 221–230, February 1966), here in a modernized notation form, as

:\sigma_y^2(M, T, \tau) = \frac{1}{M - 1} \left\{ \sum_{i=0}^{M-1} \left[ \frac{x(iT + \tau) - x(iT)}{\tau} \right]^2 - \frac{1}{M} \left[ \sum_{i=0}^{M-1} \frac{x(iT + \tau) - x(iT)}{\tau} \right]^2 \right\},

where x(t) is the clock reading (in seconds) measured at time t, or with the average fractional frequency time series

:\sigma_y^2(M, T, \tau) = \frac{1}{M - 1} \left\{ \sum_{i=0}^{M-1} \bar{y}_i^2 - \frac{1}{M} \left[ \sum_{i=0}^{M-1} \bar{y}_i \right]^2 \right\},

where M is the number of frequency samples used in the variance, T is the time between each frequency sample, and \tau is the time length of each frequency estimate. An important aspect is that the ''M''-sample variance model can include dead time by letting the time T be different from \tau.

An alternative (and equivalent) way to view this formula, which makes the connection to the typical sample-variance formula more explicit, is obtained by multiplying \frac{1}{M - 1} by M and dividing the two terms inside the curly braces by M:

:\begin{align} \sigma_y^2(M, T, \tau) &= \frac{M}{M - 1} \left\{ \frac{1}{M} \sum_{i=0}^{M-1} \bar{y}_i^2 - \left[ \frac{1}{M} \sum_{i=0}^{M-1} \bar{y}_i \right]^2 \right\} \\ &= \frac{M}{M - 1} \left\{ \operatorname{mean}\!\left(\bar{y}^2\right) - \operatorname{mean}\!\left(\bar{y}\right)^2 \right\}. \end{align}

Now, the \frac{M}{M - 1} coefficient can be interpreted as Bessel's correction to the biased sample variance, which is what appears inside the curly braces in the form \operatorname{mean}\!\left(\bar{y}^2\right) - \operatorname{mean}\!\left(\bar{y}\right)^2.
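As an illustration, the definition translates almost directly into code. The following sketch is not part of the original presentation; the helper name, and the assumption that the clock readings are available as an evenly sampled array with ''T'' and ''τ'' being integer multiples of the sampling interval, are illustrative choices:

<syntaxhighlight lang="python">
import numpy as np

def m_sample_variance(x, tau0, m, t, tau):
    """Sketch of the M-sample variance sigma_y^2(M, T, tau).

    x    : clock readings x(t) in seconds, sampled every tau0 seconds
    m    : number of frequency samples M
    t    : time T between the starts of successive frequency samples (seconds)
    tau  : averaging time of each frequency sample (seconds)

    T and tau are assumed to be integer multiples of tau0, with T >= tau;
    the difference T - tau is the dead time between measurements.
    """
    nt = int(round(t / tau0))       # samples between measurement starts
    ntau = int(round(tau / tau0))   # samples spanned by each frequency estimate
    # average fractional frequency of each of the M measurement intervals
    ybar = np.array([(x[i * nt + ntau] - x[i * nt]) / tau for i in range(m)])
    # sample variance with Bessel's correction, matching the definition above
    return (np.sum(ybar**2) - np.sum(ybar)**2 / m) / (m - 1)
</syntaxhighlight>

The last line is simply the Bessel-corrected sample variance of the M frequency averages, i.e. it equals np.var(ybar, ddof=1).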


Allan variance

The Allan variance is defined as

:\sigma_y^2(\tau) = \left\langle\sigma_y^2(2, \tau, \tau)\right\rangle,

where \langle\dotsm\rangle denotes the expectation operator. This can be conveniently expressed as

:\sigma_y^2(\tau) = \frac{1}{2} \left\langle\left(\bar{y}_{n+1} - \bar{y}_n\right)^2\right\rangle = \frac{1}{2\tau^2} \left\langle\left(x_{n+2} - 2x_{n+1} + x_n\right)^2\right\rangle,

where \tau is the observation period and \bar{y}_n is the ''n''th fractional frequency average over the observation time \tau. The samples are taken with no dead time between them, which is achieved by letting

:T = \tau.


Allan deviation

Just as with standard deviation and variance, the Allan deviation is defined as the square root of the Allan variance:

:\sigma_y(\tau) = \sqrt{\sigma_y^2(\tau)}.


Supporting definitions


Oscillator model

The oscillator being analysed is assumed to follow the basic model of

:V(t) = V_0 \sin(\Phi(t)).

The oscillator is assumed to have a nominal frequency of \nu_\text{n}, given in cycles per second (SI unit: hertz). The nominal angular frequency \omega_\text{n} (in radians per second) is given by

:\omega_\text{n} = 2\pi \nu_\text{n}.

The total phase can be separated into a perfectly cyclic component \omega_\text{n} t, along with a fluctuating component \varphi(t):

:\Phi(t) = \omega_\text{n} t + \varphi(t) = 2\pi \nu_\text{n} t + \varphi(t).


Time error

The time-error function ''x''(''t'') is the difference between the time kept by the oscillator and the nominal (reference) time:

:x(t) = \frac{\varphi(t)}{2\pi \nu_\text{n}} = \frac{\Phi(t)}{2\pi \nu_\text{n}} - t = T(t) - t.

For measured values, a time-error series TE(''t'') is defined relative to the reference time function T_\text{ref}(t) as

:TE(t) = T(t) - T_\text{ref}(t).


Frequency function

The frequency function \nu(t) is the frequency over time, defined as

:\nu(t) = \frac{1}{2\pi} \frac{d\Phi(t)}{dt}.


Fractional frequency

The fractional frequency ''y''(''t'') is the normalized difference between the frequency \nu(t) and the nominal frequency \nu_\text{n}:

:y(t) = \frac{\nu(t) - \nu_\text{n}}{\nu_\text{n}} = \frac{\nu(t)}{\nu_\text{n}} - 1.


Average fractional frequency

The average fractional frequency is defined as

:\bar{y}(t, \tau) = \frac{1}{\tau} \int_0^\tau y(t + t_v)\, dt_v,

where the average is taken over the observation time ''τ'' and ''y''(''t'') is the fractional-frequency error at time ''t''. Since ''y''(''t'') is the derivative of ''x''(''t''), we can without loss of generality rewrite it as

:\bar{y}(t, \tau) = \frac{x(t + \tau) - x(t)}{\tau}.


Estimators

This definition is based on the statistical expected value, integrating over infinite time. The real-world situation does not allow for such time series, in which case a statistical estimator needs to be used in its place. A number of different estimators will be presented and discussed.


Conventions

In the estimators below, x_i = x(i\tau_0) denotes the ''i''th of the ''N'' phase samples, spaced by the sampling interval \tau_0, and \bar{y}_i = \frac{x_{i+1} - x_i}{\tau_0} denotes the ''i''th of the ''M'' = ''N'' − 1 fractional-frequency samples, each averaged over \tau_0.

Fixed ''τ'' estimators

A first simple estimator would be to directly translate the definition into

:\sigma_y^2(\tau, M) = \operatorname{AVAR}(\tau, M) = \frac{1}{2(M - 1)} \sum_{i=0}^{M-2}\left(\bar{y}_{i+1} - \bar{y}_i\right)^2,

or for the time series:

:\sigma_y^2(\tau, N) = \operatorname{AVAR}(\tau, N) = \frac{1}{2(N - 2)\tau^2} \sum_{i=0}^{N-3}\left(x_{i+2} - 2x_{i+1} + x_i\right)^2.

These formulas, however, only provide the calculation for the ''τ'' = ''τ''0 case. To calculate for a different value of ''τ'', a new time series needs to be provided.
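A minimal sketch of these simple estimators, assuming the phase samples x_i (in seconds, spaced by ''τ''0) or the fractional-frequency samples \bar{y}_i are held in NumPy arrays (the function names are illustrative):

<syntaxhighlight lang="python">
import numpy as np

def avar_fixed_tau_phase(x, tau0):
    """Simple Allan variance estimate at tau = tau0 from phase samples x (seconds)."""
    d2 = x[2:] - 2 * x[1:-1] + x[:-2]   # second differences x_{i+2} - 2 x_{i+1} + x_i
    return np.mean(d2**2) / (2 * tau0**2)

def avar_fixed_tau_freq(ybar):
    """Simple Allan variance estimate at the sampling interval from frequency samples."""
    dy = np.diff(ybar)                  # first differences ybar_{i+1} - ybar_i
    return 0.5 * np.mean(dy**2)
</syntaxhighlight>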


Non-overlapped variable τ estimators

Taking the time series and skipping past ''n'' − 1 samples, a new (shorter) time series would occur with ''n''''τ''0 as the time between adjacent samples, for which the Allan variance could be calculated with the simple estimators. These could be modified to introduce the new variable ''n'' such that no new time series would have to be generated; rather, the original time series can be reused for various values of ''n''. The estimators become

:\sigma_y^2(n\tau_0, M) = \operatorname{AVAR}(n\tau_0, M) = \frac{1}{2\left(\left\lfloor\frac{M}{n}\right\rfloor - 1\right)} \sum_{j=0}^{\left\lfloor\frac{M}{n}\right\rfloor - 2}\left(\bar{y}_{j+1}^{(n)} - \bar{y}_j^{(n)}\right)^2, \qquad \bar{y}_j^{(n)} = \frac{1}{n}\sum_{i=0}^{n-1}\bar{y}_{jn+i},

with n \le \frac{M}{2}, and for the time series:

:\sigma_y^2(n\tau_0, N) = \operatorname{AVAR}(n\tau_0, N) = \frac{1}{2(n\tau_0)^2\left(\left\lfloor\frac{N-1}{n}\right\rfloor - 1\right)} \sum_{i=0}^{\left\lfloor\frac{N-1}{n}\right\rfloor - 2}\left(x_{(i+2)n} - 2x_{(i+1)n} + x_{in}\right)^2

with n \le \frac{N - 1}{2}. These estimators have a significant drawback in that they drop a significant amount of sample data, as only 1/''n'' of the available samples is being used.
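A sketch of the same idea in code (an illustrative helper, not taken from the original text): decimating the phase array by ''n'' turns the fixed-''τ'' estimator into a variable-''τ'' one, at the cost of discarding all but every ''n''-th sample.

<syntaxhighlight lang="python">
import numpy as np

def avar_nonoverlapping(x, tau0, n):
    """Non-overlapping Allan variance estimate at tau = n * tau0 from phase data x."""
    xd = x[::n]                               # keep every n-th phase sample (spacing n * tau0)
    d2 = xd[2:] - 2 * xd[1:-1] + xd[:-2]      # second differences of the decimated series
    return np.mean(d2**2) / (2 * (n * tau0)**2)
</syntaxhighlight>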


Overlapped variable ''τ'' estimators

A technique presented by J. J. Snyder (Snyder, J. J.: ''An ultra-high resolution frequency meter'', pages 464–469, Frequency Control Symposium #35, 1981) provided an improved tool, as measurements were overlapped in ''n'' overlapped series out of the original series. The overlapping Allan variance estimator was introduced by Howe, Allan and Barnes. This can be shown to be equivalent to averaging the time or normalized frequency samples in blocks of ''n'' samples prior to processing. The resulting estimator becomes

:\sigma_y^2(n\tau_0, M) = \operatorname{AVAR}(n\tau_0, M) = \frac{1}{2n^2(M - 2n + 1)} \sum_{j=0}^{M-2n} \left( \sum_{i=j}^{j+n-1} \left(\bar{y}_{i+n} - \bar{y}_i\right) \right)^2,

or for the time series:

:\sigma_y^2(n\tau_0, N) = \operatorname{AVAR}(n\tau_0, N) = \frac{1}{2(n\tau_0)^2(N - 2n)} \sum_{i=0}^{N-2n-1} \left(x_{i+2n} - 2x_{i+n} + x_i\right)^2.

The overlapping estimators have far superior performance over the non-overlapping estimators, as ''n'' rises and the time series is of moderate length. The overlapped estimators have been accepted as the preferred Allan variance estimators in IEEE, ITU-T (ITU-T Rec. G.810: ''Definitions and terminology for synchronization and networks'', 08/96) and ETSI (ETSI EN 300 462-1-1: ''Definitions and terminology for synchronisation networks'', V1.1.1, 1998–05) standards for comparable measurements such as needed for telecommunication qualification.
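In code, the phase-data form of the overlapping estimator is compact; the following sketch (illustrative names, phase samples in seconds spaced by ''τ''0) uses every available second difference:

<syntaxhighlight lang="python">
import numpy as np

def avar_overlapping(x, tau0, n):
    """Overlapping Allan variance estimate at tau = n * tau0 from phase data x (seconds)."""
    d2 = x[2 * n:] - 2 * x[n:-n] + x[:-2 * n]   # x_{i+2n} - 2 x_{i+n} + x_i for all i
    return np.mean(d2**2) / (2 * (n * tau0)**2)

def adev_overlapping(x, tau0, n):
    """Overlapping Allan deviation sigma_y(n * tau0)."""
    return np.sqrt(avar_overlapping(x, tau0, n))
</syntaxhighlight>

Sweeping ''n'' over, say, powers of two and plotting adev_overlapping against ''n''''τ''0 on log–log axes produces the conventional sigma-tau plot.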


Modified Allan variance

In order to address the inability to separate white phase modulation from flicker phase modulation using traditional Allan variance estimators, an algorithmic filtering reduces the bandwidth by ''n''. This filtering provides a modification to the definition and estimators, and it is now identified as a separate class of variance called modified Allan variance. The modified Allan variance measure is a frequency stability measure, just as is the Allan variance.


Time stability estimators

A time stability (σ''x'') statistical measure, which is often called the time deviation (TDEV), can be calculated from the modified Allan deviation (MDEV). The TDEV is based on the MDEV instead of the original Allan deviation, because the MDEV can discriminate between white and flicker phase modulation (PM). The following is the time variance estimation based on the modified Allan variance:

:\sigma_x^2(\tau) = \frac{\tau^2}{3} \operatorname{mod}\sigma_y^2(\tau),

and similarly for modified Allan deviation to time deviation:

:\sigma_x(\tau) = \frac{\tau}{\sqrt{3}} \operatorname{mod}\sigma_y(\tau).

The TDEV is normalized so that it is equal to the classical deviation for white PM for time constant ''τ'' = ''τ''0. To understand the normalization scale factor between the statistical measures, the following is the relevant statistical rule: for independent random variables ''X'' and ''Y'', the variance of a sum or difference (''z'' = ''x'' − ''y'') is the sum of their variances (σ''z''2 = σ''x''2 + σ''y''2). Thus the variance of the difference (''y'' = ''x''2''τ'' − ''x''''τ'') of two independent samples of a random variable is twice the variance of the random variable (σ''y''2 = 2σ''x''2). The MDEV is built from the second difference of independent phase measurements (''x'') that have a variance (σ''x''2). Since the calculation is the double difference, which requires three independent phase measurements (''x''2''τ'' − 2''x''''τ'' + ''x''), the modified Allan variance (MVAR) picks up three times the variance of the phase measurements.
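As a check of this normalization (a short derivation following the rule above, not taken from the original text): for white PM at ''τ'' = ''τ''0 the three phase samples are independent, each with variance σ''x''2, so

:\operatorname{Var}\left(x_{2\tau_0} - 2x_{\tau_0} + x_0\right) = \sigma_x^2 + 4\sigma_x^2 + \sigma_x^2 = 6\sigma_x^2,
:\operatorname{mod}\sigma_y^2(\tau_0) = \frac{6\sigma_x^2}{2\tau_0^2} = \frac{3\sigma_x^2}{\tau_0^2},
:\sigma_x^2(\tau_0) = \frac{\tau_0^2}{3} \operatorname{mod}\sigma_y^2(\tau_0) = \sigma_x^2,

so the TDEV indeed reproduces the classical deviation of the phase samples.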


Other estimators

Further developments have produced improved estimation methods for the same stability measure, the variance/deviation of frequency, but these are known by separate names such as the Hadamard variance, modified Hadamard variance, the total variance, modified total variance and the Theo variance. These distinguish themselves by better use of statistics for improved confidence bounds or the ability to handle linear frequency drift.


Confidence intervals and equivalent degrees of freedom

Statistical estimators will calculate an estimated value on the sample series used. The estimate may deviate from the true value, and the range of values which for some probability will contain the true value is referred to as the confidence interval. The confidence interval depends on the number of observations in the sample series, the dominant noise type, and the estimator being used. Its width also depends on the statistical certainty demanded, i.e. the probability that the true value lies within that range of values. For variable-''τ'' estimators, the ''τ''0 multiple ''n'' is also a variable.


Confidence interval

The confidence interval can be established using the chi-squared distribution, by using the distribution of the sample variance (D. A. Howe, D. W. Allan, J. A. Barnes: ''Properties of signal sources and measurement methods'', pages 464–469, Frequency Control Symposium #35, 1981):

:\chi^2 = \frac{\text{df} \cdot s^2}{\sigma^2},

where ''s''2 is the sample variance of our estimate, ''σ''2 is the true variance value, df is the degrees of freedom for the estimator, and ''χ''2 is the chi-squared value corresponding to a certain probability. For a 90% probability, covering the range from the 5% to the 95% points on the probability curve, the upper and lower limits can be found using the inequality

:\chi^2(0.05) \le \frac{\text{df} \cdot s^2}{\sigma^2} \le \chi^2(0.95),

which after rearrangement for the true variance becomes

:\frac{\text{df} \cdot s^2}{\chi^2(0.95)} \le \sigma^2 \le \frac{\text{df} \cdot s^2}{\chi^2(0.05)}.
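A sketch of this calculation using SciPy's chi-squared quantile function (the helper name and arguments are illustrative; the equivalent degrees of freedom must be supplied from the noise-dependent formulas discussed below):

<syntaxhighlight lang="python">
from scipy.stats import chi2

def avar_confidence_interval(avar_estimate, edf, confidence=0.90):
    """Two-sided confidence interval for the true Allan variance.

    avar_estimate : sample Allan variance s^2
    edf           : equivalent degrees of freedom of the estimator
    confidence    : two-sided confidence level, e.g. 0.90
    """
    p_lo = (1 - confidence) / 2            # e.g. 0.05
    p_hi = 1 - p_lo                        # e.g. 0.95
    lower = avar_estimate * edf / chi2.ppf(p_hi, edf)   # df*s^2 / chi^2(0.95)
    upper = avar_estimate * edf / chi2.ppf(p_lo, edf)   # df*s^2 / chi^2(0.05)
    return lower, upper
</syntaxhighlight>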


Effective degrees of freedom

The degrees of freedom represent the number of free variables capable of contributing to the estimate. Depending on the estimator and noise type, the effective degrees of freedom varies. Estimator formulas depending on ''N'' and ''n'' have been found empirically for each of the dominant noise types.


Power-law noise

The Allan variance will treat various power-law noise types differently, conveniently allowing them to be identified and their strength estimated. As a convention, the measurement system width (high corner frequency) is denoted ''f''''H''. The resulting Allan variance for each power-law noise type is tabulated in the seminal reference (J. A. Barnes, A. R. Chi, L. S. Cutler, D. J. Healey, D. B. Leeson, T. E. McGunigal, J. A. Mullen, W. L. Smith, R. Sydnor, R. F. C. Vessot, G. M. R. Winkler: ''Characterization of Frequency Stability'', NBS Technical Note 394, 1970) and in modern forms (Bregni, Stefano: ''Synchronisation of Digital Telecommunication Networks'', Wiley, 2002; NIST SP 1065: ''Handbook of Frequency Stability Analysis'').

The Allan variance is unable to distinguish between WPM and FPM, but is able to resolve the other power-law noise types. In order to distinguish WPM and FPM, the modified Allan variance needs to be employed.

These formulas assume that

:\tau \gg \frac{1}{2\pi f_H},

and thus that the bandwidth of the observation time is much lower than the instrument's bandwidth. When this condition is not met, all noise forms depend on the instrument's bandwidth.


''α''–''μ'' mapping

The detailed mapping of a phase modulation of the form

:S_x(f) = \frac{h_\alpha f^\alpha}{(2\pi f)^2} = \frac{h_\alpha}{4\pi^2} f^\beta,

where

:\beta \equiv \alpha - 2,

or frequency modulation of the form

:S_y(f) = h_\alpha f^\alpha

into the Allan variance of the form

:\sigma_y^2(\tau) = K_\alpha h_\alpha \tau^\mu

can be significantly simplified by providing a mapping between ''α'' and ''μ''. A mapping between ''α'' and ''K''''α'' is also convenient.
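The tabulated mapping for the five common power-law noise types, as given in the standard references (NBS TN 394, NIST SP 1065), is approximately as follows; the values for the two phase-modulation types depend on the measurement bandwidth ''f''''H'':

*White PM (''α'' = 2): ''μ'' = −2, K_2 = \frac{3 f_H}{4\pi^2}
*Flicker PM (''α'' = 1): ''μ'' ≈ −2, K_1 = \frac{1.038 + 3\ln(2\pi f_H \tau)}{4\pi^2}
*White FM (''α'' = 0): ''μ'' = −1, K_0 = \frac{1}{2}
*Flicker FM (''α'' = −1): ''μ'' = 0, K_{-1} = 2\ln 2
*Random-walk FM (''α'' = −2): ''μ'' = 1, K_{-2} = \frac{2\pi^2}{3}

In general, ''μ'' = −''α'' − 1 for ''α'' < 1, while ''μ'' = −2 (up to a logarithmic factor) for ''α'' ≥ 1.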


General conversion from phase noise

A signal with spectral phase noise S_\varphi(f) with units rad2/Hz can be converted to Allan variance by

:\sigma_y^2(\tau) = \frac{2}{(\pi \nu_\text{n} \tau)^2} \int_0^{f_H} S_\varphi(f) \sin^4(\pi \tau f)\, df,

where \nu_\text{n} is the nominal (carrier) frequency and f_H is the upper cut-off frequency of the measurement system.
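A minimal numerical sketch of this conversion, assuming the one-sided phase-noise spectrum is tabulated on a frequency grid up to ''f''''H'' (function and argument names are illustrative):

<syntaxhighlight lang="python">
import numpy as np

def adev_from_phase_noise(freqs, s_phi, nu_nominal, tau):
    """Allan deviation at averaging time tau from a tabulated phase-noise spectrum.

    freqs      : Fourier frequencies in Hz (up to the system bandwidth f_H)
    s_phi      : one-sided phase noise S_phi(f) in rad^2/Hz on that grid
    nu_nominal : nominal carrier frequency in Hz
    tau        : averaging time in seconds
    """
    integrand = s_phi * np.sin(np.pi * tau * freqs) ** 4
    avar = 2.0 / (np.pi * nu_nominal * tau) ** 2 * np.trapz(integrand, freqs)
    return np.sqrt(avar)
</syntaxhighlight>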


Linear response

While the Allan variance is intended to be used to distinguish noise forms, it will depend on some but not all linear responses to time. A constant time (phase) offset and a constant frequency offset are cancelled by the second difference, whereas a linear frequency drift is not cancelled and will contribute to the output result. When measuring a real system, the linear drift or other drift mechanism may need to be estimated and removed from the time series prior to calculating the Allan variance.
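For the drift entry in particular, the contribution follows directly from the definition (a short derivation, not quoted from the original table): a pure linear frequency drift y(t) = D t makes adjacent frequency averages differ by exactly D\tau, so

:\sigma_y^2(\tau) = \frac{1}{2}\left\langle\left(\bar{y}_{n+1} - \bar{y}_n\right)^2\right\rangle = \frac{(D\tau)^2}{2}, \qquad \sigma_y(\tau) = \frac{D\tau}{\sqrt{2}},

which grows with ''τ'' rather than averaging down, and therefore masks the noise floor unless the drift is removed.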


Time and frequency filter properties

In analysing the properties of the Allan variance and friends, it has proven useful to consider the filter properties on the normalized frequency. Starting with the definition of the Allan variance,

:\sigma_y^2(\tau) = \frac{1}{2}\left\langle\left(\bar{y}_{i+1} - \bar{y}_i\right)^2\right\rangle,

where

:\bar{y}_i = \frac{1}{\tau} \int_0^\tau y(i\tau + t)\, dt,

and replacing the time series of \bar{y}_i with the Fourier-transformed variant S_y(f), the Allan variance can be expressed in the frequency domain as

:\sigma_y^2(\tau) = \int_0^\infty S_y(f) \frac{2 \sin^4(\pi f \tau)}{(\pi f \tau)^2}\, df.

Thus the transfer function for the Allan variance is

:\left\vert H_A(f)\right\vert^2 = \frac{2 \sin^4(\pi f \tau)}{(\pi f \tau)^2}.


Bias functions

The ''M''-sample variance, and the defined special case Allan variance, will experience systematic bias depending on the number of samples ''M'' and on the relationship between ''T'' and ''τ''. In order to address these biases, the bias functions ''B''1 and ''B''2 have been defined (Barnes, J. A.: ''Tables of Bias Functions, B1 and B2, for Variances Based On Finite Samples of Processes with Power Law Spectral Densities'', NBS Technical Note 375, 1969) and allow conversion between different ''M'' and ''T'' values.

These bias functions are not sufficient for handling the bias resulting from concatenating ''M'' samples to the ''Mτ''0 observation time over the ''MT''0, with the dead time distributed among the ''M'' measurement blocks rather than at the end of the measurement. This rendered the need for the ''B''3 bias function.

The bias functions are evaluated for a particular ''µ'' value, so the ''α''–''µ'' mapping needs to be done for the dominant noise form as found using noise identification. Alternatively, the ''µ'' value of the dominant noise form may be inferred from the measurements using the bias functions.


''B''1 bias function

The ''B''1 bias function relates the ''M''-sample variance with the 2-sample variance (Allan variance), keeping the time between measurements ''T'' and the time for each measurement ''τ'' constant. It is defined as

:B_1(N, r, \mu) = \frac{\left\langle\sigma_y^2(N, T, \tau)\right\rangle}{\left\langle\sigma_y^2(2, T, \tau)\right\rangle},

where

:r = \frac{T}{\tau}.

The bias function becomes after analysis

:B_1(N, r, \mu) = \frac{N\left(1 - N^{\mu}\right)}{2(N - 1)\left(1 - 2^{\mu}\right)}.


''B''2 bias function

The ''B''2 bias function relates the 2-sample variance for sample time ''T'' with the 2-sample variance (Allan variance), keeping the number of samples ''N'' = 2 and the observation time ''τ'' constant. It is defined as

:B_2(r, \mu) = \frac{\left\langle\sigma_y^2(2, T, \tau)\right\rangle}{\left\langle\sigma_y^2(2, \tau, \tau)\right\rangle},

where

:r = \frac{T}{\tau}.

A closed-form expression for ''B''2 in terms of ''r'' and ''µ'' for the power-law noise models is derived and tabulated in NBS Technical Note 375.


''B''3 bias function

The ''B''3 bias function relates the 2-sample variance for sample time ''MT''0 and observation time ''Mτ''0 with the 2-sample variance (Allan variance). It is defined (J. A. Barnes, D. W. Allan: ''Variances Based on Data with Dead Time Between the Measurements'', NIST Technical Note 1318, 1990) as

:B_3(N, M, r, \mu) = \frac{\left\langle\sigma_y^2(N, M, T, \tau)\right\rangle}{\left\langle\sigma_y^2(N, T, \tau)\right\rangle},

where

:T = M T_0,
:\tau = M \tau_0.

The ''B''3 bias function is useful to adjust non-overlapping and overlapping variable-''τ'' estimator values based on dead-time measurements of observation time ''τ''0 and time between observations ''T''0 to normal dead-time estimates.

For the ''N'' = 2 case, the analysis expresses ''B''3 in terms of the auxiliary function

:F(A) = 2A^{\mu + 2} - (A + 1)^{\mu + 2} - |A - 1|^{\mu + 2};

the full closed form is given in NIST Technical Note 1318.


''τ'' bias function

While formally not formulated, it has been indirectly inferred as a consequence of the ''α''–''µ'' mapping. When comparing two Allan variance measures for different ''τ'', assuming the same dominant noise in the form of the same ''µ'' coefficient, a bias can be defined as

:B_\tau(\tau_1, \tau_2, \mu) = \frac{\left\langle\sigma_y^2(\tau_2)\right\rangle}{\left\langle\sigma_y^2(\tau_1)\right\rangle}.

The bias function becomes after analysis

:B_\tau(\tau_1, \tau_2, \mu) = \left( \frac{\tau_2}{\tau_1} \right)^\mu.
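For example (an illustrative calculation): if white frequency noise dominates (''µ'' = −1) and the Allan variance has been measured at ''τ''1 = 1 s, then the expected value at ''τ''2 = 10 s is

:\sigma_y^2(10\text{ s}) = \left(\frac{10\text{ s}}{1\text{ s}}\right)^{-1} \sigma_y^2(1\text{ s}) = 0.1\, \sigma_y^2(1\text{ s}),

i.e. the Allan deviation falls by \sqrt{10} per decade of ''τ'', the familiar ''τ''−1/2 slope of white FM.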


Conversion between values

In order to convert from one set of measurements to another, the ''B''1, ''B''2 and ''τ'' bias functions can be assembled. First the ''B''1 function converts the (''N''1, ''T''1, ''τ''1) value into (2, ''T''1, ''τ''1), from which the ''B''2 function converts into a (2, ''τ''1, ''τ''1) value, thus the Allan variance at ''τ''1. The Allan variance measure can be converted using the ''τ'' bias function from ''τ''1 to ''τ''2, from which the (2, ''T''2, ''τ''2) value follows using ''B''2 and finally the (''N''2, ''T''2, ''τ''2) variance using ''B''1. The complete conversion becomes

:\left\langle \sigma_y^2(N_2, T_2, \tau_2) \right\rangle = \left( \frac{\tau_2}{\tau_1} \right)^\mu \frac{B_1(N_2, r_2, \mu)\, B_2(r_2, \mu)}{B_1(N_1, r_1, \mu)\, B_2(r_1, \mu)} \left\langle \sigma_y^2(N_1, T_1, \tau_1) \right\rangle,

where

:r_1 = \frac{T_1}{\tau_1},
:r_2 = \frac{T_2}{\tau_2}.

Similarly, for concatenated measurements using ''M'' sections, the logical extension becomes

:\left\langle \sigma_y^2(N_2, M_2, T_2, \tau_2) \right\rangle = \left( \frac{\tau_2}{\tau_1} \right)^\mu \frac{B_3(N_2, M_2, r_2, \mu)\, B_1(N_2, r_2, \mu)\, B_2(r_2, \mu)}{B_3(N_1, M_1, r_1, \mu)\, B_1(N_1, r_1, \mu)\, B_2(r_1, \mu)} \left\langle \sigma_y^2(N_1, M_1, T_1, \tau_1) \right\rangle.


Measurement issues

When making measurements to calculate Allan variance or Allan deviation, a number of issues may cause the measurements to degenerate. Covered here are the effects specific to Allan variance, where results would be biased.


Measurement bandwidth limits

A measurement system is expected to have a bandwidth at or below that of the Nyquist rate, as described within the Shannon–Hartley theorem. As can be seen in the power-law noise formulas, the white and flicker phase-modulation noise types both depend on the upper corner frequency f_H (these systems are assumed to be low-pass filtered only). Considering the frequency filter property, it can be clearly seen that low-frequency noise has greater impact on the result. For relatively flat phase-modulation noise types (e.g. WPM and FPM), the filtering has relevance, whereas for noise types with greater slope the upper frequency limit becomes of less importance, assuming that the measurement system bandwidth is wide relative to \tau, as given by

:\tau \gg \frac{1}{2\pi f_H}.

When this assumption is not met, the effective bandwidth f_H needs to be notated alongside the measurement. The interested reader should consult NBS TN 394.

If, however, one adjusts the bandwidth of the estimator by using integer multiples of the sample time n\tau_0, then the system bandwidth impact can be reduced to insignificant levels. For telecommunication needs, such methods have been required in order to ensure comparability of measurements and allow some freedom for vendors to do different implementations. One example is ITU-T Rec. G.813 (ITU-T Rec. G.813: ''Timing characteristics of SDH equipment slave clock (SEC)'', 03/2003) for the TDEV measurement. It can be recommended that the first \tau_0 multiples be ignored, such that the majority of the detected noise is well within the passband of the measurement system's bandwidth.

Further developments on the Allan variance were performed to let the hardware bandwidth be reduced by software means. This development of a software bandwidth allowed addressing the remaining noise, and the method is now referred to as the modified Allan variance. This bandwidth-reduction technique should not be confused with the enhanced variant of modified Allan variance, which also changes a smoothing filter bandwidth.


Dead time in measurements

Many measurement instruments of time and frequency have the stages of arming time, time-base time, processing time and may then re-trigger the arming. The arming time is from the time the arming is triggered to when the start event occurs on the start channel. The time-base then ensures that a minimal amount of time goes by prior to accepting an event on the stop channel as the stop event. The number of events and the time elapsed between the start event and stop event is recorded and presented during the processing time. When the processing occurs (also known as the dwell time), the instrument is usually unable to do another measurement. After the processing has occurred, an instrument in continuous mode triggers the arm circuit again. The time between the stop event and the following start event becomes dead time, during which the signal is not being observed. Such dead time introduces systematic measurement biases, which need to be compensated for in order to get proper results. For such measurement systems, the time ''T'' denotes the time between adjacent start events (and thus measurements), while \tau denotes the time-base length, i.e. the nominal length between the start and stop event of any measurement.

Dead-time effects on measurements have such an impact on the produced result that much study of the field has been done in order to quantify its properties properly. The introduction of zero-dead-time counters removed the need for this analysis. A zero-dead-time counter has the property that the stop event of one measurement is also used as the start event of the following measurement. Such counters create a series of event and time timestamp pairs, one for each channel, spaced by the time-base. Such measurements have also proved useful in other forms of time-series analysis.

Measurements performed with dead time can be corrected using the bias functions ''B''1, ''B''2 and ''B''3. Thus, dead time as such does not prohibit access to the Allan variance, but it makes it more problematic. The dead time must be known, such that the time between samples ''T'' can be established.


Measurement length and effective use of samples

The length ''N'' of the sample series and the variable-''τ'' parameter ''n'' both affect the confidence intervals, which may become very large, since the effective degrees of freedom may become small for some combinations of ''N'' and ''n'' for the dominant noise form (for that ''τ''). The effect may be that the estimated value is much smaller or much greater than the real value, which may lead to false conclusions from the result.

It is recommended that the confidence interval is plotted along with the data, such that the reader of the plot is able to be aware of the statistical uncertainty of the values. It is recommended that the length of the sample sequence, i.e. the number of samples ''N'', is kept high to ensure that the confidence interval is small over the ''τ'' range of interest. It is recommended that the ''τ'' range, as swept by the ''τ''0 multiplier ''n'', is limited in the upper end relative to ''N'', such that the reader of the plot is not confused by highly unstable estimator values. It is recommended that estimators providing better degrees-of-freedom values be used in replacement of the Allan variance estimators, or as a complement to them, where they outperform the Allan variance estimators. Among those, the total variance and Theo variance estimators should be considered.


Dominant noise type

A large number of conversion constants, bias corrections and confidence intervals depend on the dominant noise type. For proper interpretation, the dominant noise type for the particular ''τ'' of interest must be identified through noise identification. Failing to identify the dominant noise type will produce biased values. Some of these biases may be several orders of magnitude, so it can be highly significant.


Linear drift

Systematic effects on the signal are only partly cancelled. Phase and frequency offsets are cancelled, but linear drift or other higher-degree forms of polynomial phase curves will not be cancelled and thus form a measurement limitation. Curve fitting and removal of the systematic offset could be employed. Often removal of linear drift can be sufficient. Use of linear-drift estimators such as the Hadamard variance could also be employed. A linear drift removal could be employed using a moment-based estimator.


Measurement instrument estimator bias

Traditional instruments provided only the measurement of single events or event pairs. The introduction of the improved statistical tool of overlapping measurements by J. J. Snyder allowed much improved resolution in frequency readouts, breaking the traditional digits/time-base balance. While such methods are useful for their intended purpose, using such smoothed measurements for Allan variance calculations would give a false impression of high resolution (Rubiola, Enrico: ''On the measurement of frequency and of its sample variance with high-resolution counters'', Proc. Joint IEEE International Frequency Control Symposium and Precise Time and Time Interval Systems and Applications Meeting, pp. 46–49, Vancouver, Canada, 29–31 August 2005; Rubiola, Enrico: ''High-resolution frequency counters (extended version, 53 slides)'', seminar given at the FEMTO-ST Institute, at the Université Henri Poincaré, and at the Jet Propulsion Laboratory, NASA-Caltech), but for longer ''τ'' the effect is gradually removed, and the lower-''τ'' region of the measurement has biased values. This bias provides lower values than it should, so it is an overoptimistic bias (assuming that low numbers are what one wishes), reducing the usability of the measurement rather than improving it. Such smart algorithms can usually be disabled or otherwise circumvented by using time-stamp mode, which is much preferred if available.


Practical measurements

While several approaches to measurement of Allan variance can be devised, a simple example may illustrate how measurements can be performed.


Measurement

All measurements of Allan variance will in effect be the comparison of two different clocks. Consider a reference clock and a device under test (DUT), both having a common nominal frequency of 10 MHz. A time-interval counter is used to measure the time between the rising edge of the reference (channel A) and the rising edge of the device under test. In order to provide evenly spaced measurements, the reference clock will be divided down to form the measurement rate, triggering the time-interval counter (ARM input). This rate can be 1 Hz (using the 1 PPS output of a reference clock), but other rates like 10 Hz and 100 Hz can also be used. The speed at which the time-interval counter can complete the measurement, output the result and prepare itself for the next arm will limit the trigger frequency. A computer is then useful to record the series of time differences being observed.


Post-processing

The recorded time series requires post-processing to unwrap the wrapped phase, such that a continuous phase error is provided. If necessary, logging and measurement mistakes should also be fixed. Drift estimation and drift removal should be performed; the drift mechanism needs to be identified and understood for the sources. Drift limitations in measurements can be severe, so the oscillators must be allowed to stabilise by being powered on long enough before the measurement. The Allan variance can then be calculated using the estimators given, and for practical purposes the overlapping estimator should be used due to its superior use of data over the non-overlapping estimator. Other estimators, such as the total or Theo variance estimators, could also be used if bias corrections are applied such that they provide Allan-variance-compatible results. To form the classical plots, the Allan deviation (square root of Allan variance) is plotted in log–log format against the observation interval ''τ''.
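A sketch of such post-processing with NumPy (the variable names, the assumption that the counter readings wrap modulo one carrier period, and the choice of a quadratic fit for drift removal are illustrative, not prescriptions from the text):

<syntaxhighlight lang="python">
import numpy as np

def preprocess_time_errors(readings, tau0, period):
    """Turn raw time-interval readings into a continuous, drift-removed phase error.

    readings : time-interval counter readings in seconds, taken every tau0 seconds,
               assumed to wrap modulo `period` (e.g. 100 ns for a 10 MHz clock)
    """
    # unwrap the modulo-`period` wrapping into a continuous time-error series x(t)
    x = np.unwrap(2 * np.pi * readings / period) * period / (2 * np.pi)
    # remove offset, frequency offset (slope) and linear frequency drift (quadratic)
    t = np.arange(len(x)) * tau0
    coeffs = np.polyfit(t, x, 2)
    return x - np.polyval(coeffs, t)
</syntaxhighlight>

The detrended series can then be fed to the overlapping estimator sketched earlier to produce the log–log ADEV plot.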


Equipment and software

The time-interval counter is typically an off-the-shelf counter commercially available. Limiting factors involve single-shot resolution, trigger jitter, speed of measurements and stability of the reference clock. The computer collection and post-processing can be done using existing commercial or public-domain software. Highly advanced solutions exist, which provide measurement and computation in one box.


Research history

The field of frequency stability has been studied for a long time. However, during the 1960s it was found that coherent definitions were lacking. A NASA–IEEE Symposium on Short-Term Stability in November 1964 (''Short-Term Frequency Stability'', NASA–IEEE Symposium on Short-Term Frequency Stability, Goddard Space Flight Center, 23–24 November 1964, NASA Special Publication 80) resulted in the special February 1966 issue of the IEEE Proceedings on Frequency Stability.

The NASA–IEEE Symposium brought together many fields and uses of short- and long-term stability, with papers from many different contributors. The articles and panel discussions concur on the existence of the frequency flicker noise and the wish to achieve a common definition for both short-term and long-term stability. Important papers, including those of David Allan, James A. Barnes (Barnes, J. A.: ''Atomic Timekeeping and the Statistics of Precision Signal Generators'', IEEE Proceedings on Frequency Stability, Vol. 54, No. 2, pages 207–220, 1966), L. S. Cutler and C. L. Searle, and D. B. Leeson, appeared in the IEEE Proceedings on Frequency Stability and helped shape the field.

David Allan's article analyses the classical ''M''-sample variance of frequency, tackling the issue of dead time between measurements along with an initial bias function. Although Allan's initial bias function assumes no dead time, his formulas do include dead-time calculations. His article analyses the case of ''M'' frequency samples (called ''N'' in the article) and variance estimators. It provides the now standard ''α''–''µ'' mapping, clearly building on James Barnes' work in the same issue. The 2-sample variance case is a special case of the ''M''-sample variance, which produces an average of the frequency derivative. Allan implicitly uses the 2-sample variance as a base case, since for arbitrarily chosen ''M'', values may be transferred via the 2-sample variance to the ''M''-sample variance. No preference was clearly stated for the 2-sample variance, even if the tools were provided. However, this article laid the foundation for using the 2-sample variance as a way of comparing other ''M''-sample variances.

James Barnes significantly extended the work on bias functions, introducing the modern ''B''1 and ''B''2 bias functions. Curiously enough, his article refers to the ''M''-sample variance as "Allan variance", while referring to Allan's article "Statistics of Atomic Frequency Standards". With these modern bias functions, full conversion among ''M''-sample variance measures of various ''M'', ''T'' and ''τ'' values could be performed, by conversion through the 2-sample variance. James Barnes and David Allan further extended the bias functions with the ''B''3 function to handle the concatenated-samples estimator bias. This was necessary to handle the new use of concatenated sample observations with dead time in between.

In 1970, the IEEE Technical Committee on Frequency and Time, within the IEEE Group on Instrumentation & Measurements, provided a summary of the field, published as NBS Technical Note 394. This paper was the first in a line of more educational and practical papers helping fellow engineers grasp the field. It recommended the 2-sample variance with ''T'' = ''τ'', referring to it as the Allan variance (now without the quotes). The choice of such a parametrisation allows good handling of some noise forms and getting comparable measurements; it is essentially the least common denominator with the aid of the bias functions ''B''1 and ''B''2.

J. J. Snyder proposed an improved method for frequency or variance estimation, using sample statistics for frequency counters. To get more effective degrees of freedom out of the available dataset, the trick is to use overlapping observation periods. This provides an improvement, and was incorporated in the overlapping Allan variance estimator. Variable-''τ'' software processing was also incorporated. This development improved the classical Allan variance estimators, likewise providing a direct inspiration for the work on modified Allan variance.

Howe, Allan and Barnes presented the analysis of confidence intervals, degrees of freedom, and the established estimators.


Educational and practical resources

The field of time and frequency and its use of the Allan variance, Allan deviation and friends involves many aspects, for which both understanding of concepts and practical measurements and post-processing require care and understanding. Thus, there is a realm of educational material stretching over about 40 years. Since these reflect the developments in the research of their time, they focus on teaching different aspects over time, in which case a survey of available resources may be a suitable way of finding the right resource.

The first meaningful summary is the NBS Technical Note 394 "Characterization of Frequency Stability". This is the product of the Technical Committee on Frequency and Time of the IEEE Group on Instrumentation & Measurement. It gives the first overview of the field, stating the problems, defining the basic supporting definitions and getting into the Allan variance, the bias functions ''B''1 and ''B''2, and the conversion of time-domain measures. This is useful, as it is among the first references to tabulate the Allan variance for the five basic noise types.

A classical reference is the NBS Monograph 140 (Blair, B. E.: ''Time and Frequency: Theory and Fundamentals'', NBS Monograph 140, May 1974) from 1974, which in chapter 8 has "Statistics of Time and Frequency Data Analysis" (David W. Allan, John H. Shoaf and Donald Halford: ''Statistics of Time and Frequency Data Analysis'', NBS Monograph 140, pages 151–204, 1974). This is the extended variant of NBS Technical Note 394 and adds essentially measurement techniques and practical processing of values.

An important addition is the ''Properties of signal sources and measurement methods''. It covers the effective use of data, confidence intervals, effective degrees of freedom, likewise introducing the overlapping Allan variance estimator. It is highly recommended reading for those topics.

The IEEE standard 1139, ''Standard definitions of Physical Quantities for Fundamental Frequency and Time Metrology'', is beyond a standard: it is also a comprehensive reference and educational resource.

A modern book aimed towards telecommunication is Stefano Bregni's "Synchronisation of Digital Telecommunication Networks". This summarises not only the field, but also much of his research in the field up to that point. It aims to include both classical measures and telecommunication-specific measures such as MTIE. It is a handy companion when looking at measurements related to telecommunication standards.

The NIST Special Publication 1065 "Handbook of Frequency Stability Analysis" of W. J. Riley is recommended reading for anyone wanting to pursue the field. It is rich in references and also covers a wide range of measures, biases and related functions that a modern analyst should have available. Further, it describes the overall processing needed for a modern tool.


Uses

Allan variance is used as a measure of frequency stability in a variety of precision oscillators, such as crystal oscillators, atomic clocks and frequency-stabilized lasers, over a period of a second or more. Short-term stability (under a second) is typically expressed as phase noise. The Allan variance is also used to characterize the bias stability of gyroscopes, including fiber optic gyroscopes, hemispherical resonator gyroscopes and microelectromechanical systems (MEMS) gyroscopes, as well as accelerometers.


50th Anniversary

In 2016, IEEE-UFFC published a "Special Issue to celebrate the 50th anniversary of the Allan Variance (1966–2016)". The guest editor for that issue was David Allan's former colleague at the National Institute of Standards and Technology (NIST), Judah Levine, a recipient of the I. I. Rabi Award.


See also

*Variance
*Semivariance
*Variogram
*Metrology
*Network time protocol
*Precision Time Protocol
*Synchronization


References


External links


*UFFC Frequency Control Teaching Resources
*David W. Allan's Allan Variance Overview
*David W. Allan's official web site
*William Riley publications
*Stable32 – software for frequency stability analysis, by William Riley
*Enrico Rubiola publications
*[http://www.alamath.com/ Alavar windows software with reporting tools; freeware]
*AllanTools – open-source Python library for Allan variance
*MATLAB AVAR – open-source MATLAB application