Linear-nonlinear-Poisson Cascade Model

The linear-nonlinear-Poisson (LNP) cascade model is a simplified functional model of neural spike responses (Chichilnisky 2001; Simoncelli et al. 2004; Schwartz et al. 2006). It has been successfully used to describe the response characteristics of neurons in early sensory pathways, especially the visual system. The LNP model is generally implicit when using reverse correlation or the spike-triggered average to characterize neural responses with white-noise stimuli.

The LNP cascade has three stages. The first stage is a linear filter, or linear receptive field, which describes how the neuron integrates stimulus intensity over space and time. The output of this filter then passes through a nonlinear function, which gives the neuron's instantaneous spike rate. Finally, the spike rate is used to generate spikes according to an inhomogeneous Poisson process.

The linear filtering stage performs dimensionality reduction, reducing the high-dimensional spatio-temporal stimulus space to a low-dimensional feature space within which the neuron computes its response. The nonlinearity converts the filter output to a (non-negative) spike rate, and accounts for nonlinear phenomena such as spike threshold (or rectification) and response saturation. The Poisson spike generator converts the continuous spike rate to a series of spike times, under the assumption that the probability of a spike depends only on the instantaneous spike rate. The model offers a useful approximation of neural activity, allowing scientists to derive reliable estimates from a mathematically simple formula.
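
The three stages can be made concrete with a short simulation. The sketch below is illustrative only: the filter shape, the rectifying nonlinearity with a 40 Hz gain, and the 1 ms bin size are arbitrary assumptions rather than values from any particular study.

```python
import numpy as np

rng = np.random.default_rng(0)

# White-noise stimulus: one 20-sample spatio-temporal window per time bin
n_bins, dim, dt = 5000, 20, 0.001             # bins, filter length, bin size (s)
stimulus = rng.standard_normal((n_bins, dim))

# Stage 1 (Linear): an example temporal receptive field k
t = np.arange(dim)
k = np.exp(-t / 5.0) * np.sin(t / 2.0)

# Stage 2 (Nonlinear): half-wave rectification mapping filter output to a rate (Hz)
def f(u):
    return 40.0 * np.maximum(u, 0.0)

rate = f(stimulus @ k)

# Stage 3 (Poisson): spike counts drawn from an inhomogeneous Poisson process
spikes = rng.poisson(rate * dt)

print("mean firing rate (Hz):", spikes.sum() / (n_bins * dt))
```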


Mathematical formulation


Single-filter LNP

Let \mathbf{x} denote the spatio-temporal stimulus vector at a particular instant, and \mathbf{k} denote a linear filter (the neuron's linear receptive field), which is a vector with the same number of elements as \mathbf{x}. Let f denote the nonlinearity, a scalar function with non-negative output. Then the LNP model specifies that, in the limit of small time bins,
: P(\textrm{spike}) \propto f(\mathbf{k} \cdot \mathbf{x}).
For finite-sized time bins, this can be stated precisely as the probability of observing ''y'' spikes in a single bin:
: P(y \textrm{ spikes}) = \frac{(\lambda\Delta)^y}{y!}\, e^{-\lambda\Delta},
where \lambda = f(\mathbf{k}\cdot\mathbf{x}) and \Delta is the bin size.
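
As a concrete check of the finite-bin formula, the snippet below evaluates P(y spikes) for a single bin; the 20-element random stimulus and filter, the exponential nonlinearity, and the 10 ms bin size are assumptions made purely for illustration. The probabilities sum to one over y, as expected for a Poisson count.

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(1)
x = rng.standard_normal(20)          # stimulus vector for one bin
k = rng.standard_normal(20) / 5.0    # linear filter
f = np.exp                           # assumed nonlinearity (rate in Hz)
delta = 0.01                         # bin size in seconds

lam = f(k @ x)                       # instantaneous rate lambda = f(k . x)

def p_spikes(y, lam, delta):
    """P(y spikes in one bin) = (lam*delta)^y / y! * exp(-lam*delta)."""
    return (lam * delta) ** y / factorial(y) * np.exp(-lam * delta)

probs = [p_spikes(y, lam, delta) for y in range(20)]
print(probs[:3], sum(probs))         # probabilities sum to ~1 over all counts
```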


Multi-filter LNP

For neurons sensitive to multiple dimensions of the stimulus space, the linear stage of the LNP model can be generalized to a bank of linear filters, and the nonlinearity becomes a function of multiple inputs. Let \mathbf{k}_1, \mathbf{k}_2, \ldots, \mathbf{k}_n denote the set of linear filters that capture a neuron's stimulus dependence. Then the multi-filter LNP model is described by
: P(\textrm{spike}) \propto f(\mathbf{k}_1\!\cdot\!\mathbf{x},\; \mathbf{k}_2\!\cdot\!\mathbf{x},\; \ldots,\; \mathbf{k}_n\!\cdot\!\mathbf{x})
or
: P(\textrm{spike}) \propto f(K\mathbf{x}),
where K is a matrix whose rows are the filters \mathbf{k}_i.
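
A minimal sketch of the multi-filter case follows; the two sinusoidal filters and the sum-of-squares ("energy model"-style) nonlinearity are illustrative assumptions, chosen only to show how f(K\mathbf{x}) reproduces f applied to the individual filter outputs.

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 20
x = rng.standard_normal(dim)                 # stimulus at one instant

# K stacks the filters k_1 ... k_n as its rows, so K @ x gives all dot products
k1 = np.sin(np.linspace(0, np.pi, dim))
k2 = np.cos(np.linspace(0, np.pi, dim))
K = np.vstack([k1, k2])

def f(u):
    # Example multi-input nonlinearity: sum of squared filter outputs
    return np.sum(u ** 2)

rate = f(K @ x)                              # same as f applied to (k1.x, k2.x)
print("instantaneous rate:", rate)
```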


Estimation

The parameters of the LNP model consist of the linear filters \{\mathbf{k}_i\} and the nonlinearity f. The estimation problem (also known as the problem of ''neural characterization'') is the problem of determining these parameters from data consisting of a time-varying stimulus and the set of observed spike times. Techniques for estimating the LNP model parameters include:
* moment-based techniques, such as the spike-triggered average or the spike-triggered covariance (Brenner, Bialek & de Ruyter van Steveninck 2000), as sketched below;
* information-maximization or maximum likelihood techniques (Paninski 2004; Mirbagheri 2012).
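
As an example of the moment-based approach, the sketch below simulates an LNP neuron (with an assumed rectified-linear nonlinearity and an arbitrary filter) and estimates the filter with the spike-triggered average, which for Gaussian white-noise stimuli recovers the filter up to a scale factor.

```python
import numpy as np

rng = np.random.default_rng(3)
n_bins, dim, dt = 50000, 20, 0.001
stimulus = rng.standard_normal((n_bins, dim))    # Gaussian white-noise stimulus

t = np.arange(dim)
k_true = np.exp(-t / 5.0) * np.sin(t / 2.0)      # "true" filter (illustrative)
rate = 40.0 * np.maximum(stimulus @ k_true, 0)   # rectified-linear nonlinearity
spikes = rng.poisson(rate * dt)                  # Poisson spike counts per bin

# Spike-triggered average: spike-count-weighted mean of the stimulus
sta = spikes @ stimulus / spikes.sum()

# With white-noise input the STA is proportional to the true filter
corr = np.corrcoef(sta, k_true)[0, 1]
print("correlation between STA and true filter:", round(corr, 3))
```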


Related models

* The LNP model provides a simplified, mathematically tractable approximation to more biophysically detailed single-neuron models such as the integrate-and-fire or Hodgkin–Huxley models.
* If the nonlinearity f is a fixed invertible function, then the LNP model is a generalized linear model. In this case, f is the inverse link function (see the fitting sketch after this list).
* An alternative to the LNP model for neural characterization is the Volterra kernel or Wiener kernel series expansion, which arises in classical nonlinear systems-identification theory (Marmarelis & Marmarelis 1978). These models approximate a neuron's input-output characteristics using a polynomial expansion analogous to the Taylor series, but do not explicitly specify the spike-generation process.
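
To illustrate the GLM connection, the sketch below fits the filter by maximizing the Poisson log-likelihood under an assumed exponential nonlinearity (the canonical log link); the plain gradient-ascent loop and all parameter values are illustrative choices rather than a standard implementation.

```python
import numpy as np

rng = np.random.default_rng(4)
n_bins, dim, dt = 20000, 15, 0.01                # bins, filter length, bin size (s)
X = rng.standard_normal((n_bins, dim))           # white-noise stimulus
k_true = 0.5 * np.sin(np.linspace(0, 2 * np.pi, dim))
y = rng.poisson(np.exp(X @ k_true) * dt)         # LNP spike counts with f = exp

# Poisson GLM with log link: maximize sum(y*(X@k) - dt*exp(X@k)) by gradient ascent
k_hat = np.zeros(dim)
lr = 1e-4
for _ in range(2000):
    grad = X.T @ (y - dt * np.exp(X @ k_hat))    # gradient of the log-likelihood
    k_hat += lr * grad

print("correlation with true filter:", round(np.corrcoef(k_hat, k_true)[0, 1], 3))
```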


See also

* Random neural network
* Spike-triggered average
* Spike-triggered covariance


References

* Brenner, N., Bialek, W., & de Ruyter van Steveninck, R. R. (2000).
* Chichilnisky, E. J. (2001). A simple white noise analysis of neuronal light responses. ''Network: Computation in Neural Systems'' 12:199–213.
* Marmarelis, P. Z., & Marmarelis, V. Z. (1978). ''Analysis of Physiological Systems: The White Noise Approach.'' London: Plenum Press.
* Mirbagheri, M. (2012). Dimension reduction in regression using Gaussian Mixture Models. In ''Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP)''.
* Paninski, L. (2004). Maximum likelihood estimation of cascade point-process neural encoding models. ''Network: Computation in Neural Systems''.
* Schwartz, O., Pillow, J. W., Rust, N. C., & Simoncelli, E. P. (2006). Spike-triggered neural characterization. ''Journal of Vision'' 6:484–507.
* Simoncelli, E. P., Paninski, L., Pillow, J., & Schwartz, O. (2004). In M. Gazzaniga (Ed.), ''The Cognitive Neurosciences'' (3rd edn, pp. 327–338). MIT Press.