Logarithmic Data Transformation
In statistics, data transformation is the application of a deterministic mathematical function to each point in a data set; that is, each data point ''zi'' is replaced with the transformed value ''yi'' = ''f''(''zi''), where ''f'' is a function. Transforms are usually applied so that the data appear to more closely meet the assumptions of a statistical inference procedure that is to be applied, or to improve the interpretability or appearance of graphs.

Nearly always, the function used to transform the data is invertible, and generally it is continuous. The transformation is usually applied to a collection of comparable measurements. For example, if we are working with data on people's incomes in some currency unit, it would be common to transform each person's income value by the logarithm function.


Motivation

Guidance for how data should be transformed, or whether a transformation should be applied at all, should come from the particular statistical analysis to be performed. For example, a simple way to construct an approximate 95% confidence interval for the population mean is to take the sample mean plus or minus two standard error units. However, the constant factor 2 used here is particular to the normal distribution, and is only applicable if the sample mean varies approximately normally. The central limit theorem states that in many situations the sample mean does vary normally if the sample size is reasonably large. However, if the population is substantially skewed and the sample size is at most moderate, the approximation provided by the central limit theorem can be poor, and the resulting confidence interval will likely have the wrong coverage probability. Thus, when there is evidence of substantial skew in the data, it is common to transform the data to a symmetric distribution before constructing a confidence interval. If desired, the confidence interval can then be transformed back to the original scale using the inverse of the transformation that was applied to the data.

Data can also be transformed to make them easier to visualize. For example, suppose we have a scatterplot in which the points are the countries of the world, and the data values being plotted are the land area and population of each country. If the plot is made using untransformed data (e.g., square kilometers for area and the number of people for population), most of the countries would be plotted in a tight cluster of points in the lower left corner of the graph. The few countries with very large areas and/or populations would be spread thinly around most of the graph's area. Simply rescaling units (e.g., to thousands of square kilometers, or to millions of people) will not change this. However, following logarithmic transformations of both area and population, the points will be spread more uniformly in the graph.

Another reason for applying data transformation is to improve interpretability, even if no formal statistical analysis or visualization is to be performed. For example, suppose we are comparing cars in terms of their fuel economy. These data are usually presented as "kilometers per liter" or "miles per gallon". However, if the goal is to assess how much additional fuel a person would use in one year when driving one car compared to another, it is more natural to work with the data transformed by applying the reciprocal function, yielding liters per kilometer, or gallons per mile.
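The back-transform step described above can be sketched in a few lines of Python. This is a hypothetical example: the simulated log-normal "incomes" and all parameter values are purely illustrative.

```python
import math
import random

rng = random.Random(0)

# Hypothetical skewed positive data: log-normal "incomes"
incomes = [math.exp(rng.gauss(10.0, 1.0)) for _ in range(400)]

# Work on the log scale, where the data are approximately symmetric
logs = [math.log(v) for v in incomes]
n = len(logs)
mean = sum(logs) / n
sd = math.sqrt(sum((v - mean) ** 2 for v in logs) / (n - 1))
se = sd / math.sqrt(n)

# Approximate 95% interval on the log scale: mean +/- 2 standard errors
lo, hi = mean - 2.0 * se, mean + 2.0 * se

# Back-transform the endpoints with the inverse transformation (exp)
ci = (math.exp(lo), math.exp(hi))
```

Note that exponentiating the endpoints yields an interval for the median of the skewed original distribution, not for its mean; that shift of target is a standard consequence of back-transforming.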


In regression

Data transformation may be used as a remedial measure to make data suitable for modeling with linear regression if the original data violate one or more assumptions of linear regression. For example, the simplest linear regression models assume a linear relationship between the expected value of ''Y'' (the response variable to be predicted) and each independent variable (when the other independent variables are held fixed). If linearity fails to hold, even approximately, it is sometimes possible to transform either the independent or the dependent variables in the regression model to improve the linearity. For example, adding quadratic functions of the original independent variables may lead to a linear relationship with the expected value of ''Y'', resulting in a polynomial regression model, a special case of linear regression.

Another assumption of linear regression is homoscedasticity: the variance of the errors must be the same regardless of the values of the predictors. If this assumption is violated (i.e., if the data are heteroscedastic), it may be possible to find a transformation of ''Y'' alone, or transformations of both ''X'' (the predictor variables) and ''Y'', such that the homoscedasticity assumption (in addition to the linearity assumption) holds true on the transformed variables, and linear regression may therefore be applied to these.

Yet another application of data transformation is to address the problem of lack of normality in error terms. Univariate normality is not needed for least squares estimates of the regression parameters to be meaningful (see the Gauss–Markov theorem). However, confidence intervals and hypothesis tests will have better statistical properties if the variables exhibit multivariate normality. Transformations that stabilize the variance of error terms (i.e., those that address heteroscedasticity) often also help make the error terms approximately normal.
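As an illustration of transforming ''Y'' to restore both linearity and homoscedasticity, the sketch below fits ordinary least squares to log(''Y'') for data generated with multiplicative noise. The coefficients and noise level are made up for the example.

```python
import math
import random

rng = random.Random(2)

# Hypothetical model with multiplicative noise: Y = exp(b0 + b1*x + eps).
# On the raw scale the error spread grows with the mean; after taking
# logs the model is linear with constant error variance.
b0_true, b1_true = 1.0, 0.5
xs = [i / 10.0 for i in range(100)]
ys = [math.exp(b0_true + b1_true * x + rng.gauss(0.0, 0.1)) for x in xs]

# Ordinary least squares on (x, log y)
ly = [math.log(y) for y in ys]
n = len(xs)
mx = sum(xs) / n
my = sum(ly) / n
sxx = sum((x - mx) ** 2 for x in xs)
b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ly)) / sxx
b0 = my - b1 * mx
```

With the transformation, the simple closed-form least squares estimates recover the generating coefficients closely.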


Examples

Equation: Y = a + bX
:Meaning: A unit increase in ''X'' is associated with an average of ''b'' units increase in ''Y''.

Equation: \log(Y) = a + bX
:(From exponentiating both sides of the equation: Y = e^a e^{bX})
:Meaning: A unit increase in ''X'' is associated with an average increase of ''b'' units in \log(Y), or equivalently, ''Y'' increases on average by a multiplicative factor of e^b. For illustrative purposes, if the base-10 logarithm were used instead of the natural logarithm in the above transformation and the same symbols (''a'' and ''b'') were used to denote the regression coefficients, then a unit increase in ''X'' would lead to a 10^b times increase in ''Y'' on average. If ''b'' were 1, this would imply a 10-fold increase in ''Y'' for a unit increase in ''X''.

Equation: Y = a + b \log(X)
:Meaning: A ''k''-fold increase in ''X'' is associated with an average of b \times \log(k) units increase in ''Y''. For illustrative purposes, if the base-10 logarithm were used instead of the natural logarithm in the above transformation and the same symbols (''a'' and ''b'') were used to denote the regression coefficients, then a tenfold increase in ''X'' would result in an average increase of b \times \log_{10}(10) = b units in ''Y''.

Equation: \log(Y) = a + b \log(X)
:(From exponentiating both sides of the equation: Y = e^a X^b)
:Meaning: A ''k''-fold increase in ''X'' is associated with a k^b multiplicative increase in ''Y'' on average. Thus if ''X'' doubles, ''Y'' changes by a multiplicative factor of 2^b.
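These interpretations can be checked numerically. The coefficients ''a'' and ''b'' below are arbitrary illustrative values, not estimates from any data.

```python
import math

a, b = 1.0, 0.3  # arbitrary illustrative coefficients

# Log-linear model: log(Y) = a + bX, i.e. Y = e^a * e^(bX).
# A unit increase in X multiplies Y by e^b.
y_loglin = lambda x: math.exp(a + b * x)
unit_factor = y_loglin(3.0) / y_loglin(2.0)

# Log-log model: log(Y) = a + b*log(X), i.e. Y = e^a * X^b.
# Doubling X multiplies Y by 2^b.
y_loglog = lambda x: math.exp(a + b * math.log(x))
doubling_factor = y_loglog(2.0) / y_loglog(1.0)
```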


Alternative

Generalized linear models (GLMs) provide a flexible generalization of ordinary linear regression that allows for response variables with error distribution models other than the normal distribution. GLMs allow the linear model to be related to the response variable via a link function, and allow the magnitude of the variance of each measurement to be a function of its predicted value.
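As a rough sketch of how a GLM handles such data without transforming the response, the following fits a Poisson regression with a log link by iteratively reweighted least squares. This is a hand-rolled two-parameter version on simulated data; the sampler, coefficients, and sample size are all illustrative.

```python
import math
import random

rng = random.Random(4)

def poisson(lam, rng):
    """Knuth's Poisson sampler (fine for small lam; illustrative helper)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

# Simulated counts: Y ~ Poisson(exp(b0 + b1*x)), with made-up coefficients
b0_true, b1_true = 0.5, 0.8
xs = [i / 250.0 for i in range(500)]
ys = [poisson(math.exp(b0_true + b1_true * x), rng) for x in xs]

def poisson_glm(xs, ys, iters=25):
    """Fit log(E[Y]) = b0 + b1*x by iteratively reweighted least squares."""
    b0 = b1 = 0.0
    for _ in range(iters):
        s00 = s01 = s11 = t0 = t1 = 0.0
        for x, y in zip(xs, ys):
            eta = b0 + b1 * x
            mu = math.exp(eta)       # working weight for the Poisson/log case
            z = eta + (y - mu) / mu  # working response
            s00 += mu; s01 += mu * x; s11 += mu * x * x
            t0 += mu * z; t1 += mu * x * z
        # Solve the 2x2 weighted least squares system
        det = s00 * s11 - s01 * s01
        b0, b1 = (s11 * t0 - s01 * t1) / det, (s00 * t1 - s01 * t0) / det
    return b0, b1

b0_hat, b1_hat = poisson_glm(xs, ys)
```

The link function (here, log) plays the role a response transformation would play, but the variance of each count is modeled directly through the weights rather than being stabilized by transforming the data.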


Common cases

The logarithm transformation and square root transformation are commonly used for positive data, and the multiplicative inverse transformation (reciprocal transformation) can be used for non-zero data. The ''power transformation'' is a family of transformations parameterized by a non-negative value λ that includes the logarithm, square root, and multiplicative inverse transformations as special cases. To approach data transformation systematically, it is possible to use statistical estimation techniques to estimate the parameter λ in the power transformation, thereby identifying the transformation that is approximately the most appropriate in a given setting. Since the power transformation family also includes the identity transformation, this approach can also indicate whether it would be best to analyze the data without a transformation. In regression analysis, this approach is known as the ''Box–Cox transformation''.

The reciprocal transformation, some power transformations such as the Yeo–Johnson transformation, and certain other transformations such as the inverse hyperbolic sine can be meaningfully applied to data that include both positive and negative values (the power transformation is invertible over all real numbers if λ is an odd integer). However, when both negative and positive values are observed, it is sometimes common to begin by adding a constant to all values, producing a set of non-negative data to which any power transformation can be applied.

A common situation where a data transformation is applied is when a value of interest ranges over several orders of magnitude. Many physical and social phenomena exhibit such behavior: incomes, species populations, galaxy sizes, and rainfall volumes, to name a few. Power transforms, and in particular the logarithm, can often be used to induce symmetry in such data. The logarithm is often favored because it is easy to interpret its result in terms of "fold changes".

The logarithm also has a useful effect on ratios. If we are comparing positive quantities ''X'' and ''Y'' using the ratio ''X''/''Y'', then if ''X'' < ''Y'', the ratio is in the interval (0,1), whereas if ''X'' > ''Y'', the ratio is in the half-line (1,∞), where a ratio of 1 corresponds to equality. In an analysis where ''X'' and ''Y'' are treated symmetrically, the log-ratio log(''X''/''Y'') is zero in the case of equality, and if ''X'' is ''K'' times greater than ''Y'', the log-ratio lies the same distance from zero as in the situation where ''Y'' is ''K'' times greater than ''X'' (the log-ratios are log(''K'') and −log(''K'') in these two situations).

If values are naturally restricted to be in the range 0 to 1, not including the end-points, then a logit transformation may be appropriate: this yields values in the range (−∞,∞).
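A minimal sketch of the one-parameter Box–Cox power transformation, showing that λ = 0 corresponds to the logarithm as a limiting case, and of the symmetry of the log-ratio:

```python
import math

def box_cox(x, lam):
    """Box-Cox power transform for positive x; lam = 0 is the log limit."""
    if lam == 0.0:
        return math.log(x)
    return (x ** lam - 1.0) / lam

# lam = 1 is a shifted identity, lam = 0.5 involves the square root,
# lam = -1 involves the reciprocal; small lam approaches log(x)
near_log = box_cox(5.0, 1e-7)

# The log-ratio treats "X is K times Y" and "Y is K times X" symmetrically
log_ratio = lambda x, y: math.log(x / y)
```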


Transforming to normality

It is not always necessary or desirable to transform a data set to resemble a normal distribution. However, if symmetry or normality is desired, it can often be induced through one of the power transformations.

A linguistic power function is distributed according to the Zipf–Mandelbrot law. The distribution is extremely spiky and leptokurtic; this is the reason why researchers had to turn their backs on statistics to solve, for example, authorship attribution problems. Nevertheless, the use of Gaussian statistics is perfectly possible by applying data transformation.

To assess whether normality has been achieved after transformation, any of the standard normality tests may be used. A graphical approach is usually more informative than a formal statistical test, and hence a normal quantile plot is commonly used to assess the fit of a data set to a normal population. Alternatively, rules of thumb based on the sample skewness and kurtosis have also been proposed.
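A small simulation makes the point concrete: a log transform removes the heavy right skew of log-normal data. The sample size and distribution parameters here are arbitrary.

```python
import math
import random

rng = random.Random(3)

def skewness(vals):
    """Sample skewness (third standardized moment)."""
    n = len(vals)
    m = sum(vals) / n
    s2 = sum((v - m) ** 2 for v in vals) / n
    s3 = sum((v - m) ** 3 for v in vals) / n
    return s3 / s2 ** 1.5

# Log-normal data are strongly right-skewed; their logs are normal
data = [math.exp(rng.gauss(0.0, 1.0)) for _ in range(5000)]
skew_before = skewness(data)
skew_after = skewness([math.log(v) for v in data])
```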


Transforming to a uniform distribution or an arbitrary distribution

If we observe a set of ''n'' values ''X''1, ..., ''X''''n'' with no ties (i.e., there are ''n'' distinct values), we can replace ''X''''i'' with the transformed value ''Y''''i'' = ''k'', where ''k'' is defined such that ''X''''i'' is the ''k''th largest among all the ''X'' values. This is called the ''rank transform'', and it creates data with a perfect fit to a uniform distribution. This approach has a population analogue.

Using the probability integral transform, if ''X'' is any random variable and ''F'' is the cumulative distribution function of ''X'', then as long as ''F'' is invertible, the random variable ''U'' = ''F''(''X'') follows a uniform distribution on the unit interval [0,1].

From a uniform distribution, we can transform to any distribution with an invertible cumulative distribution function. If ''G'' is an invertible cumulative distribution function and ''U'' is a uniformly distributed random variable, then the random variable ''G''−1(''U'') has ''G'' as its cumulative distribution function.

Putting the two together, if ''X'' is any random variable, ''F'' is the invertible cumulative distribution function of ''X'', and ''G'' is an invertible cumulative distribution function, then the random variable ''G''−1(''F''(''X'')) has ''G'' as its cumulative distribution function.
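The two steps can be combined in a short simulation: exponential draws are mapped through their own CDF to a uniform sample, and then through an inverse CDF to a logistic sample. The choice of distributions here is purely illustrative.

```python
import math
import random

rng = random.Random(0)

# X ~ Exponential(1), with CDF F(x) = 1 - exp(-x)
xs = [rng.expovariate(1.0) for _ in range(10000)]

# Probability integral transform: U = F(X) is Uniform(0, 1)
us = [1.0 - math.exp(-x) for x in xs]

# Inverse-CDF step: G^(-1)(u) = log(u / (1 - u)) turns each U into a
# standard logistic random variable G^(-1)(F(X))
ys = [math.log(u / (1.0 - u)) for u in us]

# Quick uniformity checks on U
mean_u = sum(us) / len(us)
frac_below_quarter = sum(1 for u in us if u < 0.25) / len(us)
```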


Variance stabilizing transformations

Many types of statistical data exhibit a "variance-on-mean relationship", meaning that the variability differs for data values with different expected values. As an example, in comparing different populations in the world, the variance of income tends to increase with mean income. If we consider a number of small area units (e.g., counties in the United States) and obtain the mean and variance of incomes within each county, it is common that the counties with higher mean income also have higher variances.

A variance-stabilizing transformation aims to remove a variance-on-mean relationship, so that the variance becomes constant relative to the mean. Examples of variance-stabilizing transformations are the Fisher transformation for the sample correlation coefficient, the square root transformation or Anscombe transform for Poisson data (count data), the Box–Cox transformation for regression analysis, and the arcsine square root transformation or angular transformation for proportions (binomial data). While commonly used for statistical analysis of proportional data, the arcsine square root transformation is not recommended, because logistic regression or a logit transformation is more appropriate for binomial or non-binomial proportions, respectively, especially due to decreased type-II error.
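For count data, the Anscombe transform 2√(x + 3/8) approximately stabilizes the Poisson variance at 1, which the following simulation illustrates. The Poisson sampler and the sample sizes are illustrative choices.

```python
import math
import random

rng = random.Random(1)

def poisson(lam, rng):
    """Knuth's Poisson sampler (fine for small lam; illustrative helper)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def anscombe(x):
    return 2.0 * math.sqrt(x + 3.0 / 8.0)

def sample_var(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / (len(vals) - 1)

# Raw Poisson variance equals the mean, so it is about 4x larger at lam=20
raw5 = [poisson(5.0, rng) for _ in range(20000)]
raw20 = [poisson(20.0, rng) for _ in range(20000)]

# After the transform, both variances are close to 1
v5 = sample_var([anscombe(x) for x in raw5])
v20 = sample_var([anscombe(x) for x in raw20])
```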


Transformations for multivariate data

Univariate functions can be applied point-wise to multivariate data to modify their marginal distributions. It is also possible to modify some attributes of a multivariate distribution using an appropriately constructed transformation. For example, when working with time series and other types of sequential data, it is common to difference the data to improve stationarity. If data generated by a random vector ''X'' are observed as vectors ''X''''i'' of observations with covariance matrix Σ, a linear transformation can be used to decorrelate the data. To do this, the Cholesky decomposition is used to express Σ = ''A'' ''A''′. Then the transformed vector ''Y''''i'' = ''A''−1''X''''i'' has the identity matrix as its covariance matrix.
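The decorrelation step can be verified on a small example; for a 2×2 covariance matrix, the Cholesky factor and its inverse can be written out by hand. The particular Σ below is made up.

```python
import math

# Illustrative 2x2 covariance matrix (symmetric positive definite)
sigma = [[4.0, 2.0], [2.0, 3.0]]

def chol2(m):
    """Cholesky factor A (lower triangular) with m = A A'."""
    l11 = math.sqrt(m[0][0])
    l21 = m[1][0] / l11
    l22 = math.sqrt(m[1][1] - l21 * l21)
    return [[l11, 0.0], [l21, l22]]

def inv_lower2(l):
    """Inverse of a 2x2 lower-triangular matrix."""
    return [[1.0 / l[0][0], 0.0],
            [-l[1][0] / (l[0][0] * l[1][1]), 1.0 / l[1][1]]]

def matmul2(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = chol2(sigma)
Ainv = inv_lower2(A)
AinvT = [[Ainv[0][0], Ainv[1][0]], [Ainv[0][1], Ainv[1][1]]]

# Covariance of Y = A^(-1) X is A^(-1) Sigma A^(-1)', the identity matrix
W = matmul2(matmul2(Ainv, sigma), AinvT)
```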


See also

* Arcsin
* Feature engineering
* Logit
* Nonlinear regression#Transformation
* Pearson correlation coefficient
* Power transform (Box–Cox)
* Wilson–Hilferty transformation
* Whitening transformation


References


External links


Log Transformations for Skewed and Wide Distributions
– discussing the log and the "signed logarithm" transformations (a chapter from ''Practical Data Science with R'').