Generalized Functional Linear Model
The generalized functional linear model (GFLM) is an extension of the generalized linear model (GLM) that allows one to regress univariate responses of various types (continuous or discrete) on functional predictors, which are mostly random trajectories generated by square-integrable stochastic processes. Similarly to GLM, a link function relates the expected value of the response variable to a linear predictor, which in the case of GFLM is obtained by forming the scalar product of the random predictor function X with a smooth parameter function \beta . Functional linear regression, functional Poisson regression and functional binomial regression, with the important special case of functional logistic regression, are special cases of GFLM. Applications of GFLM include classification and discrimination of stochastic processes and functional data.


Overview

A key aspect of GFLM is estimation of and inference for the smooth parameter function \beta , which is usually obtained by dimension reduction of the infinite-dimensional functional predictor. A common method is to expand the predictor function X in an orthonormal basis of L^2 space, the Hilbert space of square-integrable functions, with a simultaneous expansion of the parameter function in the same basis. This representation is then combined with a truncation step to reduce the contribution of the parameter function \beta to the linear predictor to a finite number of regression coefficients. Functional principal component analysis (FPCA), which employs the Karhunen–Loève expansion, is a common and parsimonious approach to accomplish this. Other basis expansions, like Fourier expansions and B-spline expansions, may also be employed for the dimension-reduction step. The Akaike information criterion (AIC) can be used for selecting the number of included components; minimization of cross-validation prediction errors is another criterion often used in classification applications. Once the dimension of the predictor process has been reduced, the simplified linear predictor allows one to use GLM and quasi-likelihood estimation techniques to obtain estimates of the finite-dimensional regression coefficients, which in turn provide an estimate of the parameter function \beta in the GFLM.


Model components


Linear predictor

The predictor functions \textstyle X(t), t \in T , are typically square-integrable stochastic processes on a real interval T, and the unknown smooth parameter function \beta(t), t \in T , is assumed to be square integrable on T. Given a real measure dw on T, the linear predictor is given by \eta = \alpha + \int X^c(t)\beta(t)\,dw(t) , where X^c(t)=X(t)-\operatorname{E}(X(t)) is the centered predictor process and \alpha is a scalar that serves as an intercept.
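To make the construction concrete, the following sketch (with assumed names such as linear_predictor; not from the article) evaluates the linear predictor for trajectories sampled on a dense common grid, taking dw to be Lebesgue measure and approximating the integral by the trapezoidal rule.

```python
import numpy as np

def linear_predictor(X, t_grid, beta_vals, alpha=0.0):
    """Evaluate eta = alpha + int X^c(t) beta(t) dt for each trajectory.

    X         : (n, m) array, n trajectories sampled on the common grid t_grid
    t_grid    : (m,) array of time points (dw taken as Lebesgue measure)
    beta_vals : (m,) values of the parameter function beta on t_grid
    """
    Xc = X - X.mean(axis=0)                    # centre: X^c(t) = X(t) - E(X(t))
    return alpha + np.trapz(Xc * beta_vals, t_grid, axis=1)
```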


Response variable and variance function

The outcome Y is typically a real-valued random variable which may be either continuous or discrete. Often the conditional distribution of Y given the predictor process is specified within the exponential family. However, it is also sufficient to consider the functional quasi-likelihood set-up, where instead of the distribution of the response one specifies the conditional variance function, \operatorname{Var}(Y\mid X) = \sigma^2(\mu) , as a function of the conditional mean, \operatorname{E}(Y\mid X)=\mu .
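As an illustration, the variance functions of the three response types discussed later in this article can be written directly as functions of the mean; this is a hypothetical snippet for orientation, not part of the quasi-likelihood machinery itself.

```python
import numpy as np

# sigma^2(mu) for the special cases discussed below (dispersion phi omitted)
variance_functions = {
    "normal":   lambda mu: np.ones_like(mu),   # constant variance
    "binomial": lambda mu: mu * (1.0 - mu),    # Var(Y|X) = mu(1 - mu)
    "poisson":  lambda mu: mu,                 # Var(Y|X) = mu
}
```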


Link function

The link function g is a smooth invertible function that relates the conditional mean of the response \operatorname{E}(Y\mid X) = \mu with the linear predictor \eta= \alpha + \int X^c(t)\beta(t)\,dw(t) . The relationship is given by \mu= g(\eta) .
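Note that in this article's convention g maps the linear predictor to the mean, \mu = g(\eta). A brief sketch of the choices of g used in the special cases below (assumed names, for illustration only):

```python
import numpy as np

# mu = g(eta) for the special cases of the GFLM discussed below
mean_functions = {
    "identity": lambda eta: eta,                         # functional linear regression
    "expit":    lambda eta: 1.0 / (1.0 + np.exp(-eta)),  # functional logistic regression
    "exp":      lambda eta: np.exp(eta),                 # functional Poisson regression
}
```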


Formulation

In order to implement the necessary dimension reduction, the centered predictor process X^c(t) and the parameter function \beta(t) are expanded as
: X^c(t)= \sum_{j=1}^\infty \xi_j \rho_j(t) \quad \text{and} \quad \beta(t)= \sum_{j=1}^\infty \beta_j\rho_j(t),
where \rho_j, j=1,2,\ldots is an orthonormal basis of the function space L^2(dw), so that \int_T \rho_j(t)\rho_k(t) \, dw(t) = \delta_{jk}, where \delta_{jk} =1 if j=k and 0 otherwise. The random variables \xi_j are given by \xi_j= \int X^c(t) \rho_j(t) \, dw(t) and the coefficients \beta_j by \beta_j=\int \beta(t) \rho_j(t) \, dw(t) for j=1,2, \ldots. Here \operatorname{E}(\xi_j)=0 and \sum_{j=1}^\infty \beta_j^2 < \infty, and denoting \sigma_j^2= \operatorname{Var}(\xi_j)= \operatorname{E}(\xi_j^2), one has \sum_{j=1}^\infty \sigma_j^2 = \int_T \operatorname{E}\big((X^c(t))^2\big) \, dw(t) < \infty. From the orthonormality of the basis functions \rho_j it follows immediately that \int X^c (t)\beta(t) \, dw(t)= \sum_{j=1}^\infty \beta_j \xi_j.

The key step is then approximating \eta= \alpha+ \int X^c(t)\beta(t)\,dw(t)= \alpha + \sum_{j=1}^\infty \beta_j \xi_j by \eta \approx \alpha +\sum_{j=1}^p \beta_j \xi_j for a suitably chosen truncation point p. FPCA gives the most parsimonious approximation of the linear predictor for a given number of basis functions, as the eigenfunction basis explains more of the variation than any other set of basis functions. For a differentiable link function with bounded first derivative, the approximation error of the p-truncated model, i.e. the linear predictor truncated to the summation of the first p components, is a constant multiple of \operatorname{Var}\left(\sum_{j=p+1}^\infty \beta_j \xi_j\right)= \operatorname{E}\left(\left(\sum_{j=p+1}^\infty \beta_j \xi_j\right)^2\right)= \sum_{j=p+1}^\infty \beta_j^2\sigma_j^2. A heuristic motivation for the truncation strategy derives from the fact that \operatorname{E}\left(\left(\sum_{j=p+1}^\infty \beta_j \xi_j\right)^2\right) = \sum_{j=p+1}^\infty \beta_j^2\sigma_j^2 \leq \left(\sum_{j=p+1}^\infty \beta_j^2\right) \left(\sum_{j=p+1}^\infty \sigma_j^2\right), which is a consequence of the Cauchy–Schwarz inequality, and by noting that the right-hand side of the last inequality converges to 0 as p \rightarrow \infty, since both \sum_{j=1}^\infty\beta_j^2 and \sum_{j=1}^\infty\sigma_j^2 are finite. For the special case of the eigenfunction basis, the sequence \sigma_j^2, j=1,2,\ldots corresponds to the sequence of the eigenvalues of the covariance kernel G(s,t)= \operatorname{Cov}(X(s), X(t)), \ s,t \in T.

For data with n i.i.d. observations, setting \xi_0^i = 1, \beta_0= \alpha and \xi_j^i = \int X_i^c(t)\rho_j(t) \, dw(t), the approximated linear predictors can be represented as \eta_i= \sum_{j=0}^p \beta_j \xi_j^i, \ i=1,2,\ldots,n, which are related to the means through \mu_i=g(\eta_i).
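The dimension-reduction step can be sketched as follows for densely and regularly sampled trajectories, using the eigenfunction basis obtained from the sample covariance. Function and variable names (fpca_scores, t_grid) are assumptions for illustration, and dw is again taken as Lebesgue measure.

```python
import numpy as np

def fpca_scores(X, t_grid, p):
    """FPCA-based dimension reduction for the GFLM.

    X : (n, m) trajectories on the equispaced grid t_grid (m,).
    Returns the (n, p) score matrix with entries xi_j^i and the (m, p)
    matrix of discretised eigenfunctions rho_1, ..., rho_p.
    """
    dt = t_grid[1] - t_grid[0]             # equispaced grid assumed
    Xc = X - X.mean(axis=0)                # centred trajectories X^c
    G = (Xc.T @ Xc) / X.shape[0]           # covariance kernel G(s, t) on the grid
    evals, evecs = np.linalg.eigh(G * dt)  # discretised eigenproblem
    order = np.argsort(evals)[::-1][:p]    # leading p eigenfunctions
    rho = evecs[:, order] / np.sqrt(dt)    # normalise so int rho_j^2 dt = 1
    scores = Xc @ rho * dt                 # xi_j^i = int X_i^c(t) rho_j(t) dt
    return scores, rho
```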


Estimation

The main aim is to estimate the parameter function \beta. Once p has been fixed, standard GLM and quasi-likelihood methods can be used for the p-truncated model to estimate \boldsymbol \beta^T=(\beta_0, \beta_1,\ldots,\beta_p) by solving the estimating equation or score equation U(\boldsymbol\beta)=0. The vector-valued score function turns out to be U(\boldsymbol\beta)= \sum_{i=1}^n (Y_i-\mu_i) g'(\eta_i) \xi^i / \sigma^2(\mu_i), with \xi^i = (\xi_0^i, \xi_1^i, \ldots, \xi_p^i)^T, which depends on \boldsymbol \beta through \mu_i and \eta_i. Just as in GLM, the equation U(\boldsymbol\beta)=0 is solved using iterative methods like Newton–Raphson (NR), Fisher scoring (FS) or iteratively reweighted least squares (IWLS) to get the estimate of the regression coefficients \boldsymbol{\hat\beta}, leading to the estimate of the parameter function \hat\beta(t)= \sum_{j=1}^p \hat\beta_j\rho_j(t), with intercept estimate \hat\alpha = \hat\beta_0. When using the canonical link function, these methods are equivalent. Results are available in the literature for p-truncated models as p \rightarrow \infty, providing asymptotic inference for the deviation of the estimated parameter function from the true parameter function, as well as asymptotic tests for regression effects and asymptotic confidence regions.
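For the canonical logit set-up the score simplifies to U(\boldsymbol\beta)= \sum_i (Y_i-\mu_i)\xi^i, since g'(\eta_i)=\sigma^2(\mu_i) there, and Newton–Raphson and Fisher scoring coincide. A minimal sketch of the resulting iteration on the truncated design (assumed names; no safeguards against separable data):

```python
import numpy as np

def fisher_scoring_logistic(Z, y, n_iter=25):
    """Fisher scoring for the p-truncated functional logistic model.

    Z : (n, p+1) design matrix with rows (1, xi_1^i, ..., xi_p^i)
    y : (n,) binary responses
    """
    beta = np.zeros(Z.shape[1])
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-Z @ beta))   # mu_i = g(eta_i), expit link
        W = mu * (1.0 - mu)                    # weights = variance function
        # solve I(beta) step = U(beta), with U = Z^T (y - mu), I = Z^T W Z
        beta += np.linalg.solve(Z.T @ (W[:, None] * Z), Z.T @ (y - mu))
    return beta
```

With scores and basis rho from the FPCA sketch above, the parameter-function estimate on the grid is then rho @ beta[1:], and beta[0] estimates the intercept \alpha.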


Exponential family response

If the response variable Y_i, given X_i \in L^2(T), follows a one-parameter exponential family, then its probability density function or probability mass function (as the case may be) is
: f(y_i\mid X_i)= \exp\left(\frac{y_i\theta_i - b(\theta_i)}{\phi} + c(y_i,\phi)\right)
for some functions b and c, where \theta_i is the canonical parameter and \phi is a dispersion parameter, typically assumed to be positive. In the canonical set-up, \eta_i= \alpha + \int X_i^c(t) \beta(t) \, dw(t)= \theta_i, and from the properties of the exponential family,
: \mu_i= b'(\theta_i), \text{ so that } \mu_i= b'(\eta_i).
Hence b' serves as a link function and is called the canonical link function. Further, \operatorname{Var}(y_i)= \phi b''(\theta_i)= \phi b''(\eta_i)= \phi g'(\eta_i)= \phi g'(g^{-1}(\mu_i)) gives the corresponding variance function, with dispersion parameter \phi.
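For example, in the Poisson case one has b(\theta)= e^{\theta} and \phi=1, so that \mu_i = b'(\theta_i)= e^{\theta_i}; the canonical link is thus \mu = g(\eta) = e^{\eta} (equivalently, \log\mu=\eta), and \operatorname{Var}(y_i)= b''(\theta_i)= e^{\theta_i} = \mu_i, recovering the Poisson variance function.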


Special cases


Functional linear regression (FLR)

Functional linear regression, one of the most useful tools of functional data analysis, is an example of GFLM where the response variable is continuous and is often assumed to have a normal distribution. The variance function is a constant function and the link function is the identity. Under these assumptions the GFLM reduces to the FLR,
: \mu= \operatorname{E}(Y\mid X)= \eta= \alpha + \int X^c (t)\beta(t) \, dw(t).
Without the normality assumption, the constant variance function motivates the use of quasi-normal techniques.
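Since the link is the identity and the variance is constant, the truncated fit reduces to ordinary least squares on the scores. A sketch reusing the (assumed) fpca_scores helper from the earlier sketch:

```python
import numpy as np

def fit_flr(X, y, t_grid, p):
    """Functional linear regression via OLS on the first p FPCA scores."""
    scores, rho = fpca_scores(X, t_grid, p)      # helper from the sketch above
    Z = np.column_stack([np.ones(len(y)), scores])
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return coef[0], rho @ coef[1:]               # alpha_hat, beta_hat on t_grid
```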


Functional binary regression

When the response variable has binary outcomes, i.e., 0 or 1, the distribution is usually chosen as Bernoulli, and then \mu_i= P(Y_i=1 \mid X_i). Popular link functions are the expit function, which is the inverse of the logit function (functional logistic regression), and the probit function (functional probit regression). Any cumulative distribution function F has range [0,1], which is the range of the binomial mean, and so can be chosen as a link function. Another link function in this context is the complementary log–log function, which is an asymmetric link. The variance function for binary data is given by \operatorname{Var}(Y_i)= \phi \mu_i (1- \mu_i), where the dispersion parameter \phi is taken as 1, or alternatively the quasi-likelihood approach is used.
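Once the scores have been computed, any standard GLM routine can carry out the fit. A sketch of functional logistic regression with statsmodels, under assumed names (X for the sampled trajectories, y01 for the binary responses, fpca_scores from the earlier sketch):

```python
import statsmodels.api as sm

scores, rho = fpca_scores(X, t_grid, p=4)        # truncated FPCA scores
fit = sm.GLM(y01, sm.add_constant(scores),
             family=sm.families.Binomial()).fit()
beta_hat = rho @ fit.params[1:]                  # estimated parameter function
```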


Functional Poisson regression

Another special case of GFLM occurs when the outcomes are counts, so that the distribution of the responses is assumed to be Poisson. The mean \mu_i is typically linked to the linear predictor \eta_i via a log link, which is also the canonical link. The variance function is \operatorname{Var}(Y_i)= \phi \mu_i, where the dispersion parameter \phi is 1, except when the data are over-dispersed, in which case the quasi-Poisson approach is used.
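A sketch of the quasi-Poisson variant under the same assumed names as above; in statsmodels the log link is the Poisson default, and passing scale='X2' estimates the dispersion \phi from Pearson residuals rather than fixing it at 1:

```python
import statsmodels.api as sm

scores, rho = fpca_scores(X, t_grid, p=4)        # counts: (n,) count responses
fit = sm.GLM(counts, sm.add_constant(scores),
             family=sm.families.Poisson()).fit(scale='X2')
beta_hat = rho @ fit.params[1:]
```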


Extensions

Extensions of GFLM have been proposed for the cases where there are multiple predictor functions. Another generalization is called Semi-Parametric Quasi-likelihood Regression (SPQR), which considers the situation where the link and the variance functions are unknown and are estimated non-parametrically from the data. This situation can also be handled by single or multiple index models, using for example Sliced Inverse Regression (SIR). Another extension in this domain is the Functional Generalized Additive Model (FGAM), a generalization of the generalized additive model (GAM), where
: g^{-1}(\operatorname{E}(Y\mid X)) = \alpha + \sum_{j=1}^p f_j(\xi_j),
where \xi_j are the expansion coefficients of the random predictor function X and each f_j is an unknown smooth function that has to be estimated, with \operatorname{E}(f_j(\xi_j))=0. In general, estimation in FGAM requires combining IWLS with backfitting. However, if the expansion coefficients are obtained as functional principal components, then in some cases (e.g. a Gaussian predictor function X) they are independent, in which case backfitting is not needed and one can use popular smoothing methods for estimating the unknown functions f_j.
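As one possible sketch of an FGAM fit for a binary response, each FPCA score can enter through its own spline smooth using the pyGAM library (a swapped-in tool, not the article's method; names as in the earlier sketches):

```python
from pygam import LogisticGAM, s

scores, _ = fpca_scores(X, t_grid, p=3)       # truncated FPCA scores
# one smooth term f_j per score column, estimated by penalised splines
fgam = LogisticGAM(s(0) + s(1) + s(2)).fit(scores, y01)
```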


Application

A popular data set that has been used for a number of analyses in the domain of functional data analysis consists of the number of eggs laid daily until death for 1000 Mediterranean fruit flies (medflies for short); see http://anson.ucdavis.edu/~mueller/data/medfly1000.txt. Considering the egg-laying trajectories in the first 25 days of life of about 600 female medflies (those that have at least 20 remaining eggs in their lifetime), the flies divide into those that will lay fewer than the median number of remaining eggs after age 25 and those that will lay more. A related problem of classifying medflies as long-lived or short-lived, based on the initial egg-laying trajectories as predictors and the subsequent longevity of the flies as response, has been studied with the GFLM.
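A sketch of this classification under assumed names and data layout, with traj an (n, 25) array of daily egg counts over days 1–25 and long_lived a binary label, combining the helpers from the earlier sketches:

```python
import numpy as np

t_grid = np.arange(1.0, 26.0)                      # days 1..25
scores, rho = fpca_scores(traj, t_grid, p=4)       # dimension reduction
Z = np.column_stack([np.ones(len(long_lived)), scores])
coef = fisher_scoring_logistic(Z, long_lived)      # functional logistic fit
beta_hat = rho @ coef[1:]                          # discriminating weight function
```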


See also

* Functional additive models
* Functional data analysis
* Functional principal component analysis
* Generalized linear model
* Karhunen–Loève theorem
* Lp space
* Quasi-likelihood
* Stochastic processes

