Inverse probability weighting

Inverse probability weighting is a statistical technique for estimating quantities related to a population other than the one from which the data were collected. Study designs whose sampling population differs from the target population of inference are common in application, because factors such as cost, time, or ethical concerns may prohibit researchers from sampling directly from the target population. A solution to this problem is to use an alternate design strategy, e.g. stratified sampling. Weighting, when correctly applied, can potentially improve the efficiency and reduce the bias of unweighted estimators.

One very early weighted estimator is the Horvitz–Thompson estimator of the mean. When the probability with which the sampling population is drawn from the target population is known, the inverse of this probability is used to weight the observations. This approach has been generalized to many aspects of statistics under various frameworks. In particular, there are weighted likelihoods, weighted estimating equations, and weighted probability densities, from which a majority of statistics are derived. These applications codified the theory of other statistics and estimators such as marginal structural models, the standardized mortality ratio, and the EM algorithm for coarsened or aggregate data.

Inverse probability weighting is also used to account for missing data when subjects with missing data cannot be included in the primary analysis. With an estimate of the sampling probability, or the probability that the factor would be measured in another measurement, inverse probability weighting can be used to inflate the weight of subjects who are under-represented due to a large degree of missing data.


Inverse Probability Weighted Estimator (IPWE)

The inverse probability weighted estimator can be used to demonstrate causality when the researcher cannot conduct a controlled experiment but has observed data to model. Because the treatment is not assumed to be randomly assigned, the goal is to estimate the counterfactual or potential outcome if all subjects in the population were assigned either treatment.

We consider random variables (X, A, Y) \in \mathbb{R}^p \times \{0, 1\} \times \mathbb{R} jointly distributed according to a law P, where
* X \in \mathbb{R}^p are the covariates,
* A \in \{0, 1\} is one of the two possible treatments,
* Y \in \mathbb{R} is the response,
and no assumptions such as random assignment of treatment are made. Following Rubin's potential outcomes framework, we also stipulate the existence of random variables Y^\ast(a) \in \mathbb{R} for each a = 0, 1. Semantically, Y^\ast(a) denotes the potential outcome that would be observed if the subject were assigned treatment a. Technically speaking, we actually work with the full joint distribution P^\ast of (X, A, Y, Y^\ast(0), Y^\ast(1)); in that case P is the marginal distribution for only the observed components of P^\ast. Special assumptions are needed to infer properties of P^\ast using P; these are detailed below.

Now suppose we have observations \{(X_i, A_i, Y_i)\}_{i=1}^n distributed independently and identically according to P. The goal is to use the observed data to estimate properties of the potential outcome Y^\ast(a). For instance, we may wish to compare the mean outcome if all patients in the population were assigned either treatment: \mu_a = \mathbb{E}[Y^\ast(a)]. We want to estimate \mu_a using the observed data \{(X_i, A_i, Y_i)\}_{i=1}^n.
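To make this setup concrete, here is a minimal simulation sketch in Python; the data-generating process, the effect size of 2, and all variable names are illustrative assumptions rather than anything prescribed by the theory. It draws from the full joint law P^\ast, including both potential outcomes, and then keeps only the observed components (X, A, Y) \sim P:

```python
# Minimal sketch (illustrative assumptions throughout): draw the full joint law
# P* of (X, A, Y, Y*(0), Y*(1)), then keep only the observed components.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

X = rng.normal(size=n)                       # a single covariate (p = 1)
Y0 = X + rng.normal(size=n)                  # potential outcome Y*(0)
Y1 = X + 2.0 + rng.normal(size=n)            # potential outcome Y*(1); true effect is 2
propensity = 1 / (1 + np.exp(-1.5 * X))      # P(A = 1 | X): depends on X only
A = rng.binomial(1, propensity)              # non-random, confounded assignment
Y = np.where(A == 1, Y1, Y0)                 # consistency: we observe Y = Y*(A)

# The naive difference in group means is biased for E[Y*(1)] - E[Y*(0)]:
print(Y[A == 1].mean() - Y[A == 0].mean())   # noticeably larger than 2
```

Because treated subjects tend to have larger X, the naive difference in group means overstates the true effect of 2; this is exactly the confounding that inverse probability weighting is designed to correct.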


Estimator Formula

\hat{\mu}^{IPWE}_{a,n} = \frac{1}{n} \sum_{i=1}^{n} Y_i \frac{\mathbf{1}_{A_i = a}}{\hat{p}_n(A_i \mid X_i)}


Constructing the IPWE

1. \mu_a = \mathbb{E}\left[ \frac{Y \mathbf{1}_{A=a}}{p(A \mid X)} \right], where p(a \mid x) = \frac{P(A=a, X=x)}{P(X=x)}.
2. Construct an estimate \hat{p}_n(a \mid x) of p(a \mid x) using any propensity model (often a logistic regression model).
3. \hat{\mu}^{IPWE}_{a,n} = \frac{1}{n} \sum_{i=1}^{n} \frac{Y_i \mathbf{1}_{A_i=a}}{\hat{p}_n(A_i \mid X_i)}.

With the mean of each treatment group computed, a statistical t-test or ANOVA test can be used to judge the difference between group means and determine the statistical significance of the treatment effect; a sketch of these steps in code follows the list.
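The following sketch carries out steps 2 and 3 on simulated data; the data-generating process is an illustrative assumption, and the propensity model is the logistic regression suggested above (using scikit-learn's LogisticRegression for convenience):

```python
# Sketch of steps 2-3 (illustrative data-generating process; scikit-learn used
# for the propensity model).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
X = rng.normal(size=n)
A = rng.binomial(1, 1 / (1 + np.exp(-1.5 * X)))        # confounded treatment
Y = X + 2.0 * A + rng.normal(size=n)                   # true effect is 2

# Step 2: fit a propensity model p_hat(a | x) by logistic regression.
ps = LogisticRegression().fit(X.reshape(-1, 1), A).predict_proba(X.reshape(-1, 1))

# Step 3: IPWE per arm: mu_hat_a = (1/n) * sum_i 1{A_i = a} Y_i / p_hat(a | X_i).
mu = {a: np.mean((A == a) * Y / ps[:, a]) for a in (0, 1)}
print(mu[1] - mu[0])                                   # close to the true effect of 2
```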


Assumptions

Recall the full joint probability model (X, A, Y, Y^\ast(0), Y^\ast(1)) \sim P^\ast for the covariate X, action A, response Y, and potential outcomes Y^\ast(0), Y^\ast(1). Recall also that P is the marginal distribution of the observed data (X, A, Y). We make the following assumptions on P^\ast relating the potential outcomes to the observed data; these allow us to infer properties of P^\ast via P.
* (A1) Consistency: Y = \mathbf{1}_{A=0} Y^\ast(0) + \mathbf{1}_{A=1} Y^\ast(1). Hence \mathbf{1}_{A=a} Y = \mathbf{1}_{A=a} Y^\ast(a) for any a.
* (A2) No unmeasured confounders: \{Y^\ast(0), Y^\ast(1)\} \perp A \mid X. Formally, for any bounded, Borel-measurable functions f and g,
\mathbb{E}_{P^\ast}\left[ f(Y^\ast(a))\, g(A) \mid X \right] = \mathbb{E}_{P^\ast}\left[ f(Y^\ast(a)) \mid X \right] \mathbb{E}_{P^\ast}\left[ g(A) \mid X \right]
for any a. This means that treatment assignment is based solely on covariate data and is independent of the potential outcomes.
* (A3) Positivity: P(A = a \mid X = x) > 0 for all a and x.


Formal derivation

Under the assumptions (A1)-(A3), we will derive the following identities:

\mathbb{E}_{P^\ast}\left[ Y^\ast(a) \right] = \mathbb{E}_{P}\left[ \mathbb{E}_{P}\left[ Y \mid A = a, X \right] \right] = \mathbb{E}_{P}\left[ \frac{\mathbf{1}_{A=a} Y}{p(A \mid X)} \right]. \qquad \cdots \cdots (*)

The first equality is shown as follows:

\begin{align}
\mathbb{E}_{P^\ast}\left[ Y^\ast(a) \right]
&= \mathbb{E}_{P^\ast}\left[ \mathbb{E}_{P^\ast}\left[ Y^\ast(a) \mid X \right] \right] && \text{by the law of total expectation} \\
&= \mathbb{E}_{P^\ast}\left[ \mathbb{E}_{P^\ast}\left[ Y^\ast(a) \mid A = a, X \right] \right] && \text{by (A2)} \\
&= \mathbb{E}_{P^\ast}\left[ \mathbb{E}_{P^\ast}\left[ Y \mid A = a, X \right] \right] && \text{by (A1)} \\
&= \mathbb{E}_{P}\left[ \mathbb{E}_{P}\left[ Y \mid A = a, X \right] \right] && \text{only observed components appear}
\end{align}

For the second equality, first note from the proof above that

\mathbb{E}_{P}\left[ \mathbb{E}_{P}\left[ Y \mid A = a, X \right] \right] = \mathbb{E}_{P^\ast}\left[ \mathbb{E}_{P^\ast}\left[ Y^\ast(a) \mid X \right] \right].

Now by (A3), p(A \mid X) > 0 almost surely. Furthermore, note that

\mathbb{E}_{P^\ast}\left[ \frac{\mathbf{1}_{A=a}}{p(A \mid X)} \,\Big|\, X \right] = \sum_{a'} \frac{\mathbf{1}_{a'=a}}{\cancel{p(a' \mid X)}} \cancel{p(a' \mid X)} = 1.

Hence we can write

\begin{align}
\mathbb{E}_{P^\ast}\left[ Y^\ast(a) \right]
&= \mathbb{E}_{P^\ast}\left[ \mathbb{E}_{P^\ast}\left[ Y^\ast(a) \mid X \right] \right] \\
&= \mathbb{E}_{P^\ast}\left[ \mathbb{E}_{P^\ast}\left[ \frac{\mathbf{1}_{A=a}}{p(A \mid X)} \,\Big|\, X \right] \mathbb{E}_{P^\ast}\left[ Y^\ast(a) \mid X \right] \right] \\
&= \mathbb{E}_{P^\ast}\left[ \mathbb{E}_{P^\ast}\left[ \frac{\mathbf{1}_{A=a}}{p(A \mid X)} Y^\ast(a) \,\Big|\, X \right] \right] && \text{by (A2)} \\
&= \mathbb{E}_{P^\ast}\left[ \frac{\mathbf{1}_{A=a}}{p(A \mid X)} Y^\ast(a) \right] && \text{by the law of total expectation} \\
&= \mathbb{E}_{P^\ast}\left[ \frac{\mathbf{1}_{A=a}}{p(A \mid X)} Y \right] && \text{by (A1)} \\
&= \mathbb{E}_{P}\left[ \frac{\mathbf{1}_{A=a} Y}{p(A \mid X)} \right] && \text{only observed components appear}
\end{align}
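The identity (*) can also be checked numerically. The sketch below is a hedged Monte Carlo illustration under assumed simulation parameters: the true propensity p(a \mid x) is known by construction, so the sample average of \mathbf{1}_{A=a} Y / p(A \mid X) should approach \mathbb{E}[Y^\ast(a)]:

```python
# Hedged Monte Carlo check of identity (*) under assumed simulation parameters:
# the true propensity is known by construction, and E[Y*(1)] is available only
# because this is a simulation.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
X = rng.normal(size=n)
Y0 = X + rng.normal(size=n)
Y1 = X + 2.0 + rng.normal(size=n)
p1 = 1 / (1 + np.exp(-1.5 * X))              # true propensity p(1 | X)
A = rng.binomial(1, p1)
Y = np.where(A == 1, Y1, Y0)                 # consistency (A1)

lhs = Y1.mean()                              # E[Y*(1)], about 2.0
rhs = np.mean((A == 1) * Y / p1)             # E[1{A=1} Y / p(A | X)]
print(lhs, rhs)                              # agree up to Monte Carlo error
```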


Variance reduction

The inverse probability weighted estimator (IPWE) is known to be unstable if some estimated propensities are too close to 0 or 1. In such instances, the IPWE can be dominated by a small number of subjects with large weights. To address this issue, a smoothed IPW estimator using Rao-Blackwellization has been proposed; it reduces the variance of the IPWE by up to 7-fold and protects the estimator from model misspecification.
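The sketch below illustrates only the instability itself, not the Rao-Blackwellized estimator from the cited work. The steep propensity curve and the quantile-truncation remedy shown at the end are assumptions of this illustration; truncation is a common simple mitigation, distinct from the smoothing method described above:

```python
# Illustration of IPWE instability (not the Rao-Blackwellized estimator from the
# cited paper); the steep propensity curve is an assumption of this sketch.
import numpy as np

rng = np.random.default_rng(2)
n = 5_000
X = rng.normal(size=n)
p1 = 1 / (1 + np.exp(-4.0 * X))              # steep: propensities pile up near 0 and 1
A = rng.binomial(1, p1)
Y = X + 2.0 * A + rng.normal(size=n)

w = (A == 1) / p1                            # IPW weights for the treated arm
print("largest weight's share:", w.max() / w.sum())   # a few subjects can dominate

# A simple, common mitigation (also an assumption of this sketch, distinct from
# the smoothing method above): truncate weights at an upper quantile.
w_trunc = np.minimum(w, np.quantile(w[w > 0], 0.99))
print(np.mean(w * Y), np.mean(w_trunc * Y))  # truncation trades some bias for variance
```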


Augmented Inverse Probability Weighted Estimator (AIPWE)

An alternative estimator is the augmented inverse probability weighted estimator (AIPWE), which combines the properties of the regression-based estimator and the inverse probability weighted estimator. It is therefore a 'doubly robust' method in that it requires only the propensity or the outcome model to be correctly specified, not both. This method augments the IPWE to reduce variability and improve the efficiency of the estimate. The model relies on the same assumptions as the IPWE.


Estimator Formula

\begin{align}
\hat{\mu}^{AIPWE}_{a,n}
&= \frac{1}{n} \sum_{i=1}^{n} \left( \frac{Y_i \mathbf{1}_{A_i=a}}{\hat{p}_n(A_i \mid X_i)} - \frac{\mathbf{1}_{A_i=a} - \hat{p}_n(A_i \mid X_i)}{\hat{p}_n(A_i \mid X_i)} \hat{Q}_n(X_i, a) \right) \\
&= \frac{1}{n} \sum_{i=1}^{n} \left( \frac{\mathbf{1}_{A_i=a}}{\hat{p}_n(A_i \mid X_i)} Y_i + \left( 1 - \frac{\mathbf{1}_{A_i=a}}{\hat{p}_n(A_i \mid X_i)} \right) \hat{Q}_n(X_i, a) \right) \\
&= \frac{1}{n} \sum_{i=1}^{n} \hat{Q}_n(X_i, a) + \frac{1}{n} \sum_{i=1}^{n} \frac{\mathbf{1}_{A_i=a}}{\hat{p}_n(A_i \mid X_i)} \left( Y_i - \hat{Q}_n(X_i, a) \right)
\end{align}

With the following notations:
1. \mathbf{1}_{A_i=a} is an indicator function that equals 1 if subject i is part of treatment group a (and 0 if not).
2. Construct a regression estimator \hat{Q}_n(x, a) to predict the outcome Y based on covariates X and treatment A, for example using ordinary least squares regression.
3. Construct a propensity (probability) estimate \hat{p}_n(A_i \mid X_i), for example using logistic regression.
4. Combine these in the AIPWE to obtain \hat{\mu}^{AIPWE}_{a,n}; a code sketch of these steps follows the list.
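Here is a minimal sketch of those steps on simulated data; the data-generating process is an illustrative assumption, and the outcome and propensity models are the OLS and logistic regressions suggested above (via scikit-learn):

```python
# Minimal AIPWE sketch following steps 1-4 (illustrative data-generating process;
# scikit-learn used for the outcome and propensity models).
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(3)
n = 10_000
X = rng.normal(size=(n, 1))
A = rng.binomial(1, 1 / (1 + np.exp(-1.5 * X[:, 0])))
Y = X[:, 0] + 2.0 * A + rng.normal(size=n)             # true effect is 2

# Step 2: outcome regression Q_hat(x, a), here OLS on (X, A).
Q = LinearRegression().fit(np.column_stack([X, A]), Y)
Q_a = {a: Q.predict(np.column_stack([X, np.full(n, a)])) for a in (0, 1)}

# Step 3: propensity estimate p_hat(a | x) by logistic regression.
ps = LogisticRegression().fit(X, A).predict_proba(X)

# Step 4: combine into the AIPWE for each arm (third form of the formula).
mu = {a: np.mean(Q_a[a] + (A == a) / ps[:, a] * (Y - Q_a[a])) for a in (0, 1)}
print(mu[1] - mu[0])                                   # close to the true effect of 2
```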


Interpretation and "double robustness"

The last rearrangement of the formula helps reveal the underlying idea: our estimator is based on the average predicted outcome using the model (i.e., \frac{1}{n} \sum_{i=1}^{n} \hat{Q}_n(X_i, a)). However, if the model is biased, then the residuals of the model will not be centered around 0 in the full treatment group a. We can correct this potential bias by adding the extra term of the average residuals of the model (Q) from the true value of the outcome (Y) (i.e., \frac{1}{n} \sum_{i=1}^{n} \frac{\mathbf{1}_{A_i=a}}{\hat{p}_n(A_i \mid X_i)} \left( Y_i - \hat{Q}_n(X_i, a) \right)). Because we have missing values of Y, we give weights that inflate the relative importance of each residual; these weights are based on the inverse propensity, i.e. probability, of observing each subject (see page 10 in Kang, Joseph DY, and Joseph L. Schafer, "Demystifying double robustness: A comparison of alternative strategies for estimating a population mean from incomplete data," Statistical Science 22.4 (2007): 523-539).

The "doubly robust" benefit of such an estimator comes from the fact that it is sufficient for only one of the two models to be correctly specified for the estimator to be unbiased (either \hat{Q}_n(X_i, a) or \hat{p}_n(A_i \mid X_i), or both). This is because if the outcome model is well specified, its residuals will be centered around 0 (regardless of the weight each residual receives), while if the outcome model is biased but the weighting model is well specified, then the bias will be well estimated (and corrected for) by the weighted average residuals.

The bias of doubly robust estimators is called a second-order bias, and it depends on the product of the difference \frac{1}{\hat{p}_n(A_i \mid X_i)} - \frac{1}{p(A_i \mid X_i)} and the difference \hat{Q}_n(X_i, a) - Q(X_i, a). This property allows us, when the sample size is "large enough", to lower the overall bias of doubly robust estimators by using machine learning estimators instead of parametric models (Hernán, Miguel A., and James M. Robins, "Causal Inference," 2010, page 170).
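A hedged simulation sketch of this property follows: the outcome model is deliberately misspecified (it ignores the confounder entirely), while the propensity model is correctly specified, and the AIPWE still approximately recovers the truth. The data-generating process and effect size are assumptions of the illustration:

```python
# Hedged sketch of double robustness (all modeling choices are illustrative):
# the outcome model deliberately ignores the confounder X, yet the AIPWE stays
# approximately unbiased because the propensity model is correctly specified.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 50_000
X = rng.normal(size=(n, 1))
A = rng.binomial(1, 1 / (1 + np.exp(-1.5 * X[:, 0])))
Y = X[:, 0] + 2.0 * A + rng.normal(size=n)             # true effect is 2

# Misspecified outcome model Q_hat(x, a): plain group means, ignoring X.
Q_a = {a: np.full(n, Y[A == a].mean()) for a in (0, 1)}
plugin = Q_a[1].mean() - Q_a[0].mean()                 # the naive, confounded estimate

# Correctly specified propensity model.
ps = LogisticRegression().fit(X, A).predict_proba(X)
aipwe = np.mean(Q_a[1] + (A == 1) / ps[:, 1] * (Y - Q_a[1])) \
      - np.mean(Q_a[0] + (A == 0) / ps[:, 0] * (Y - Q_a[0]))

print(plugin, aipwe)                                   # plugin is biased; aipwe is near 2
```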


See also

* Propensity score matching


References

* Hernan MA, Robins JM (2006). "Estimating Causal Effects From Epidemiological Data". J Epidemiol Community Health. 60(7): 578-596. doi:10.1136/jech.2004.029496. PMC 2652882. PMID 16790829.
* Robins JM, Rotnitzky A, Zhao LP (1994). "Estimation of regression coefficients when some regressors are not always observed". Journal of the American Statistical Association. 89(427): 846-866. doi:10.1080/01621459.1994.10476818.
* Breslow NE, Lumley T, et al. (2009). "Using the Whole Cohort in the Analysis of Case-Cohort Data". Am J Epidemiol. 169(11): 1398-1405. doi:10.1093/aje/kwp055. PMC 2768499. PMID 19357328.
* Liao JG, Rohde C (2022). "Variance reduction in the inverse probability weighted estimators for the average treatment effect using the propensity score". Biometrics. 78(2): 660-667. doi:10.1111/biom.13454. PMID 33715153.