Spike-triggered Covariance
Spike-triggered covariance (STC) analysis is a tool for characterizing a neuron's response properties using the covariance of stimuli that elicit spikes from the neuron. STC is related to the spike-triggered average (STA), and provides a complementary tool for estimating linear filters in a linear-nonlinear-Poisson (LNP) cascade model. Unlike the STA, STC can be used to identify a multi-dimensional feature space in which a neuron computes its response. STC analysis identifies the stimulus features affecting a neuron's response via an eigenvector decomposition of the spike-triggered covariance matrix (Brenner, Bialek, & de Ruyter van Steveninck, 2000; Schwartz, Chichilnisky, & Simoncelli, 2002; Bialek & de Ruyter van Steveninck, 2005, arXiv preprint q-bio/0505003; Schwartz, Pillow, Rust, & Simoncelli, 2006, Spike-triggered neural characterization, ''Journal of Vision'' 6:484-507). Eigenvectors with eigenvalues significantly larger or smaller than those of the raw stimulus covariance correspond to stimulus directions along which the neuron's response is enhanced or suppressed.
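As a concrete illustration, here is a minimal sketch of STC analysis in Python/NumPy. The stimulus, spike counts, and all variable names are hypothetical, simulated stand-ins for recorded data; the sketch assumes the stimulus has already been cut into fixed-length segments aligned with spike-count bins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical white-noise stimulus: T time bins, each paired with the
# d-dimensional stimulus segment preceding that bin.
T, d = 50_000, 20
stimulus = rng.standard_normal((T, d))
spikes = rng.poisson(0.05, size=T)   # placeholder spike counts per bin

n_spikes = spikes.sum()

# Spike-triggered average: spike-weighted mean of the stimulus segments.
sta = (spikes @ stimulus) / n_spikes

# Spike-triggered covariance: covariance of the spike-eliciting stimuli,
# computed about the STA.
centered = stimulus - sta
stc = (centered.T * spikes) @ centered / (n_spikes - 1)

# Eigenvector decomposition of the STC matrix; eigenvalues are compared
# against the raw stimulus covariance to find enhanced/suppressed axes.
eigvals, eigvecs = np.linalg.eigh(stc)
raw_cov = np.cov(stimulus, rowvar=False)
```

In practice, the significance of eigenvalue shifts is usually assessed against a null distribution, for example by recomputing the STC with randomly shifted spike trains.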
Covariance
In probability theory and statistics, covariance is a measure of the joint variability of two random variables. The sign of the covariance shows the tendency in the linear relationship between the variables. If greater values of one variable mainly correspond with greater values of the other variable, and the same holds for lesser values (that is, the variables tend to show similar behavior), the covariance is positive. In the opposite case, when greater values of one variable mainly correspond to lesser values of the other (that is, the variables tend to show opposite behavior), the covariance is negative. The magnitude of the covariance is the geometric mean of the variances that are in common for the two random variables. The Pearson product-moment correlation coefficient normalizes the covariance by dividing by the geometric mean of the total variances for the two random variables. A distinction must be made between (1) the covariance of two random variables, which is a population parameter of their joint probability distribution, and (2) the sample covariance, which serves as an estimate of that parameter computed from observed data.
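A small numerical sketch (with made-up data) shows the sample covariance and its normalization into the Pearson correlation coefficient:

```python
import numpy as np

x = np.array([2.1, 2.5, 3.6, 4.0])
y = np.array([8.0, 10.0, 12.0, 14.0])

# Sample covariance: average product of deviations from the means,
# with the usual n-1 denominator.
cov_xy = ((x - x.mean()) * (y - y.mean())).sum() / (len(x) - 1)

# Pearson correlation: covariance divided by the geometric mean of the
# two variances (i.e., the product of the standard deviations).
corr = cov_xy / (x.std(ddof=1) * y.std(ddof=1))

print(cov_xy, corr)          # positive: x and y tend to increase together
print(np.cov(x, y)[0, 1])    # matches NumPy's built-in estimator
```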
Spike-triggered Average
The spike-triggered average (STA) is a tool for characterizing the response properties of a neuron using the action potentials (spikes) emitted in response to a time-varying stimulus. The STA provides an estimate of a neuron's linear receptive field, and is a useful technique for the analysis of electrophysiological data. Mathematically, the STA is the average stimulus preceding a spike (de Boer & Kuyper, 1968, Triggered correlation, ''IEEE Transact. Biomed. Eng.'' 15:169-179; Marmarelis & Naka, 1972, White-noise analysis of a neuron chain: an application of the Wiener theory, ''Science'' 175:1276-1278; Chichilnisky, 2001, A simple white noise analysis of neuronal light responses, ''Network: Computation in Neural Systems'' 12:199-213; Simoncelli, Paninski, Pillow, & Schwartz, 2004, Characterization of neural responses with stochastic stimuli, in M. Gazzaniga (Ed.), ''The Cognitive Neurosciences, III'', pp. 327-338, MIT Press).
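A minimal sketch of STA computation in Python/NumPy, assuming a one-dimensional white-noise stimulus sampled once per time bin and a simultaneously recorded spike train (both are simulated placeholders here):

```python
import numpy as np

rng = np.random.default_rng(1)

T, window = 100_000, 25            # total bins; STA window length
stimulus = rng.standard_normal(T)  # hypothetical white-noise stimulus
spikes = rng.random(T) < 0.02      # placeholder binary spike train

# Collect the stimulus segment preceding each spike, then average.
spike_times = np.flatnonzero(spikes)
spike_times = spike_times[spike_times >= window]  # need a full window
segments = np.stack([stimulus[t - window:t] for t in spike_times])
sta = segments.mean(axis=0)        # estimate of the linear receptive field
```

Under an LNP model driven by spherically symmetric (e.g., Gaussian white) noise, the STA is proportional to the model's linear filter.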
Linear-nonlinear-Poisson Cascade Model
The linear-nonlinear-Poisson (LNP) cascade model is a simplified functional model of neural spike responses (Chichilnisky, 2001, A simple white noise analysis of neuronal light responses, ''Network: Computation in Neural Systems'' 12:199-213; Simoncelli, Paninski, Pillow, & Schwartz, 2004, in M. Gazzaniga (Ed.), ''The Cognitive Neurosciences, III'', pp. 327-338, MIT Press; Schwartz, Pillow, Rust, & Simoncelli, 2006, Spike-triggered neural characterization, ''Journal of Vision'' 6:484-507). It has been successfully used to describe the response characteristics of neurons in early sensory pathways, especially the visual system. The LNP model is generally implicit when using reverse correlation or the spike-triggered average to characterize neural responses with white-noise stimuli. There are three stages of the LNP cascade model. The first stage consists of a linear filter, or linear receptive field, which describes how the neuron integrates the stimulus over space and time; the second is a pointwise nonlinearity that converts the filter output into an instantaneous firing rate; and the third generates spikes according to an inhomogeneous Poisson process.
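A minimal simulation of the three stages, assuming a single hypothetical temporal filter, an exponential nonlinearity, and conditionally Poisson spike counts per time bin (all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

T, d = 10_000, 20
stimulus = rng.standard_normal((T, d))   # stimulus segment per time bin

# Stage 1: linear filter (hypothetical exponentially decaying kernel).
k = np.exp(-np.arange(d) / 5.0)
k /= np.linalg.norm(k)
drive = stimulus @ k

# Stage 2: pointwise nonlinearity mapping filter output to a firing rate.
rate = 0.1 * np.exp(drive)

# Stage 3: spike counts drawn from a Poisson distribution in each bin,
# i.e., an inhomogeneous Poisson process at the bin level.
spike_counts = rng.poisson(rate)
```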
Covariance Matrix
In probability theory and statistics, a covariance matrix (also known as auto-covariance matrix, dispersion matrix, variance matrix, or variance-covariance matrix) is a square matrix giving the covariance between each pair of elements of a given random vector. Intuitively, the covariance matrix generalizes the notion of variance to multiple dimensions. As an example, the variation in a collection of random points in two-dimensional space cannot be characterized fully by a single number, nor would the variances in the x and y directions contain all of the necessary information; a 2 \times 2 matrix is necessary to fully characterize the two-dimensional variation. Any covariance matrix is symmetric and positive semi-definite, and its main diagonal contains variances (i.e., the covariance of each element with itself). The covariance matrix of a random vector \mathbf{X} is typically denoted by \operatorname{K}_{\mathbf{X}\mathbf{X}}, \Sigma, or S.
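The two-dimensional example can be made concrete with a short sketch (synthetic data):

```python
import numpy as np

rng = np.random.default_rng(3)

# 1000 correlated points in 2-D: per-axis variances alone would miss
# the dependence between x and y; the 2x2 covariance matrix captures it.
points = rng.multivariate_normal(mean=[0.0, 0.0],
                                 cov=[[2.0, 1.2], [1.2, 1.0]],
                                 size=1000)

K = np.cov(points, rowvar=False)           # 2x2 sample covariance matrix
print(K)                                   # diagonal: variances
print(np.allclose(K, K.T))                 # symmetric
print(np.all(np.linalg.eigvalsh(K) >= 0))  # positive semi-definite
```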
Principal Components Analysis
Principal component analysis (PCA) is a linear dimensionality reduction technique with applications in exploratory data analysis, visualization, and data preprocessing. The data is linearly transformed onto a new coordinate system such that the directions (principal components) capturing the largest variation in the data can be easily identified. The principal components of a collection of points in a real coordinate space are a sequence of p unit vectors, where the i-th vector is the direction of a line that best fits the data while being orthogonal to the first i-1 vectors. Here, a best-fitting line is defined as one that minimizes the average squared perpendicular distance from the points to the line. These directions (i.e., principal components) constitute an orthonormal basis in which the individual dimensions of the data are linearly uncorrelated.
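A minimal PCA sketch via eigen-decomposition of the sample covariance matrix (synthetic data; in practice the same result is often obtained through a singular value decomposition of the centered data):

```python
import numpy as np

rng = np.random.default_rng(4)

X = rng.multivariate_normal([0, 0, 0],
                            [[3.0, 1.0, 0.0],
                             [1.0, 2.0, 0.0],
                             [0.0, 0.0, 0.5]],
                            size=500)

Xc = X - X.mean(axis=0)                # center the data
C = np.cov(Xc, rowvar=False)           # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)   # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]      # sort by variance explained
components = eigvecs[:, order]         # orthonormal principal directions
scores = Xc @ components               # data in the new coordinate system
# Columns of `scores` are linearly uncorrelated; their variances are the
# sorted eigenvalues.
```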
Volterra Series
The Volterra series is a model for non-linear behavior similar to the Taylor series. It differs from the Taylor series in its ability to capture "memory" effects. The Taylor series can be used for approximating the response of a nonlinear system to a given input if the output of the system depends strictly on the input at that particular time. In the Volterra series, the output of the nonlinear system depends on the input to the system at ''all'' other times. This provides the ability to capture the "memory" effect of devices like capacitors and inductors. It has been applied in the fields of medicine (biomedical engineering) and biology, especially neuroscience. It is also used in electrical engineering to model intermodulation distortion in many devices, including power amplifiers and frequency mixers. Its main advantage lies in its generalizability: it can represent a wide range of systems. Thus, it is sometimes considered a non-parametric model.
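A sketch of a discrete-time Volterra series truncated at second order with finite memory M, so that y[n] = h0 + \sum_i h1[i] x[n-i] + \sum_{i,j} h2[i,j] x[n-i] x[n-j]; the kernels below are hypothetical:

```python
import numpy as np

M = 8                                  # memory length, in samples
h0 = 0.1                               # zeroth-order (constant) term
h1 = np.exp(-np.arange(M) / 3.0)       # hypothetical first-order kernel
h2 = 0.05 * np.outer(h1, h1)           # hypothetical second-order kernel

def volterra_output(x, h0, h1, h2):
    """Evaluate a second-order, finite-memory Volterra model."""
    M = len(h1)
    y = np.full(len(x), h0, dtype=float)
    for n in range(M - 1, len(x)):
        window = x[n - M + 1:n + 1][::-1]  # x[n], x[n-1], ..., x[n-M+1]
        y[n] += h1 @ window + window @ h2 @ window
    return y

x = np.random.default_rng(5).standard_normal(1000)
y = volterra_output(x, h0, h1, h2)
```

The dependence of y[n] on the whole window of past inputs, rather than on x[n] alone, is what distinguishes this from a memoryless Taylor-series model.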
Wiener Series
In mathematics, the Wiener series, or Wiener G-functional expansion, originates from the 1958 book of Norbert Wiener. It is an orthogonal expansion for nonlinear functionals, closely related to the Volterra series and having the same relation to it as an orthogonal Hermite polynomial expansion has to a power series. For this reason it is also known as the Wiener–Hermite expansion. The analogues of the coefficients are referred to as Wiener kernels. The terms of the series are orthogonal (uncorrelated) with respect to a statistical input of white noise. This property allows the terms to be identified in applications by the ''Lee–Schetzen method''. The Wiener series is important in nonlinear system identification. In this context, the series approximates the functional relation of the output to the entire history of system input at any time. The Wiener series has been applied mostly to the identification of biological systems, especially in neuroscience.
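A sketch of the Lee–Schetzen idea for the first-order kernel, assuming a unit-variance Gaussian white-noise input and a hypothetical linear-plus-noise system (for which the first-order Wiener kernel equals the impulse response):

```python
import numpy as np

rng = np.random.default_rng(6)

T, M = 200_000, 16
x = rng.standard_normal(T)               # white-noise input, variance 1
true_k = np.exp(-np.arange(M) / 4.0)     # hypothetical system kernel
y = np.convolve(x, true_k)[:T] + 0.1 * rng.standard_normal(T)

# Lee-Schetzen: with unit-variance white-noise input, the first-order
# Wiener kernel is the input-output cross-correlation at each lag tau.
k_est = np.array([np.mean(y[M:] * x[M - tau:T - tau]) for tau in range(M)])
# k_est approximates true_k; higher-order kernels are identified from
# higher-order cross-correlations of the input with the residual output.
```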
Computational Neuroscience
Computational neuroscience (also known as theoretical neuroscience or mathematical neuroscience) is a branch of neuroscience which employs mathematics, computer science, theoretical analysis, and abstractions of the brain to understand the principles that govern the development, structure, physiology, and cognitive abilities of the nervous system. Computational neuroscience employs computational simulations to validate and solve mathematical models, and so can be seen as a sub-field of theoretical neuroscience; however, the two terms are often used synonymously. The term mathematical neuroscience is also sometimes used to stress the quantitative nature of the field. Computational neuroscience focuses on the description of biologically plausible neurons (and neural systems) and their physiology and dynamics, and it is therefore not directly concerned with biologically unrealistic models used in connectionism, control theory, cybernetics, quantitative psychology, or machine learning.