In probability theory and mathematical physics, a random matrix is a matrix-valued random variable—that is, a matrix in which some or all elements are random variables. Many important properties of physical systems can be represented mathematically as matrix problems. For example, the thermal conductivity of a lattice can be computed from the dynamical matrix of the particle-particle interactions within the lattice.


Applications


Physics

In nuclear physics, random matrices were introduced by Eugene Wigner to model the nuclei of heavy atoms. Wigner postulated that the spacings between the lines in the spectrum of a heavy atom nucleus should resemble the spacings between the eigenvalues of a random matrix, and should depend only on the symmetry class of the underlying evolution. In solid-state physics, random matrices model the behaviour of large disordered Hamiltonians in the mean-field approximation. In quantum chaos, the Bohigas–Giannoni–Schmit (BGS) conjecture asserts that the spectral statistics of quantum systems whose classical counterparts exhibit chaotic behaviour are described by random matrix theory. In quantum optics, transformations described by random unitary matrices are crucial for demonstrating the advantage of quantum over classical computation (see, e.g., the boson sampling model). Moreover, such random unitary transformations can be directly implemented in an optical circuit, by mapping their parameters to optical circuit components (that is, beam splitters and phase shifters). Random matrix theory has also found applications to the chiral Dirac operator in quantum chromodynamics, quantum gravity in two dimensions, mesoscopic physics, spin-transfer torque, the fractional quantum Hall effect, Anderson localization, quantum dots, and superconductors.


Mathematical statistics and numerical analysis

In multivariate statistics, random matrices were introduced by John Wishart, who sought to estimate covariance matrices of large samples. Chernoff-, Bernstein-, and Hoeffding-type inequalities can typically be strengthened when applied to the maximal eigenvalue of a finite sum of random Hermitian matrices. In numerical analysis, random matrices have been used since the work of John von Neumann and Herman Goldstine to describe computation errors in operations such as matrix multiplication. Although random entries are traditional "generic" inputs to an algorithm, the concentration of measure associated with random matrix distributions implies that random matrices will not test large portions of an algorithm's input space.


Number theory

In number theory, the distribution of zeros of the Riemann zeta function (and other L-functions) is modeled by the distribution of eigenvalues of certain random matrices. The connection was first discovered by Hugh Montgomery and Freeman J. Dyson. It is connected to the Hilbert–Pólya conjecture.


Theoretical neuroscience

In the field of theoretical neuroscience, random matrices are increasingly used to model the network of synaptic connections between neurons in the brain. Dynamical models of neuronal networks with random connectivity matrices were shown to exhibit a phase transition to chaos when the variance of the synaptic weights crosses a critical value, in the limit of infinite system size. Results on random matrices have also shown that the dynamics of random-matrix models are insensitive to the mean connection strength. Instead, the stability of fluctuations depends on the variation in connection strength, and the time to synchrony depends on the network topology.
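The chaos transition above can be illustrated numerically. This is a minimal sketch, not taken from the text: under a common normalization (an assumption here) in which the ''n'' × ''n'' connectivity matrix has i.i.d. entries of variance ''g''²/''n'', the spectral radius approaches ''g'' as ''n'' grows, so the linearized dynamics lose stability near ''g'' = 1.

```python
# Sketch: spectral radius of a random connectivity matrix J with i.i.d.
# Gaussian entries of variance g^2/n. By the circular law the eigenvalues
# fill a disk of radius ~g, so the fixed point of x' = -x + J x is stable
# for g < 1 and unstable (chaotic in nonlinear networks) for g > 1.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

def spectral_radius(g):
    J = rng.normal(0.0, g / np.sqrt(n), size=(n, n))
    return np.abs(np.linalg.eigvals(J)).max()

print(spectral_radius(0.5))  # comfortably below 1: stable regime
print(spectral_radius(1.5))  # above 1: unstable regime
```

The critical value ''g'' = 1 here is the edge of the circular-law disk; the function name and normalization are mine, chosen for illustration.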


Optimal control

In optimal control theory, the evolution of ''n'' state variables through time depends at any time on their own values and on the values of ''k'' control variables. With linear evolution, matrices of coefficients appear in the state equation (equation of evolution). In some problems the values of the parameters in these matrices are not known with certainty, in which case there are random matrices in the state equation and the problem is known as one of stochastic control. A key result in the case of linear-quadratic control with stochastic matrices is that the certainty equivalence principle does not apply: while in the absence of multiplier uncertainty (that is, with only additive uncertainty) the optimal policy with a quadratic loss function coincides with what would be decided if the uncertainty were ignored, the optimal policy may differ if the state equation contains random coefficients.


Gaussian ensembles

The most commonly studied random matrix distributions are the Gaussian ensembles. The Gaussian unitary ensemble \text{GUE}(n) is described by the Gaussian measure with density
: \frac{1}{Z_{\text{GUE}(n)}} e^{-\frac{n}{2} \mathrm{tr}\, H^2}
on the space of n \times n Hermitian matrices H = (H_{ij})_{i,j=1}^n. Here Z_{\text{GUE}(n)} = 2^{n/2} \pi^{n^2/2} is a normalization constant, chosen so that the integral of the density is equal to one. The term ''unitary'' refers to the fact that the distribution is invariant under unitary conjugation. The Gaussian unitary ensemble models Hamiltonians lacking time-reversal symmetry.

The Gaussian orthogonal ensemble \text{GOE}(n) is described by the Gaussian measure with density
: \frac{1}{Z_{\text{GOE}(n)}} e^{-\frac{n}{4} \mathrm{tr}\, H^2}
on the space of ''n × n'' real symmetric matrices ''H'' = (''H''_{ij}). Its distribution is invariant under orthogonal conjugation, and it models Hamiltonians with time-reversal symmetry.

The Gaussian symplectic ensemble \text{GSE}(n) is described by the Gaussian measure with density
: \frac{1}{Z_{\text{GSE}(n)}} e^{-n \mathrm{tr}\, H^2}
on the space of ''n × n'' Hermitian quaternionic matrices (self-adjoint square matrices with quaternionic entries), ''H'' = (''H''_{ij}). Its distribution is invariant under conjugation by the symplectic group, and it models Hamiltonians with time-reversal symmetry but no rotational symmetry.

The Gaussian ensembles GOE, GUE and GSE are often denoted by their Dyson index: ''β'' = 1 for GOE, ''β'' = 2 for GUE, and ''β'' = 4 for GSE. This index counts the number of real components per matrix element. The ensembles as defined here have Gaussian distributed matrix elements with mean ⟨''H''_{ij}⟩ = 0, and two-point correlations given by
: \langle H_{ij} H_{mn}^* \rangle = \langle H_{ij} H_{nm} \rangle = \frac{1}{n} \delta_{im} \delta_{jn} + \frac{2 - \beta}{n\beta} \delta_{in} \delta_{jm},
from which all higher correlations follow by Isserlis' theorem.

The joint probability density for the eigenvalues \lambda_1, \lambda_2, \dots, \lambda_n of GUE/GOE/GSE is given by
: \frac{1}{Z_{\beta,n}} \prod_{i=1}^n e^{-\frac{\beta n}{4} \lambda_i^2} \prod_{i<j} \left| \lambda_j - \lambda_i \right|^\beta, \quad (1)
where ''Z''_{''β'',''n''} is a normalization constant which can be explicitly computed; see the Selberg integral. In the case of GUE (''β'' = 2), formula (1) describes a determinantal point process. Eigenvalues repel, as the joint probability density has a zero (of \betath order) at coinciding eigenvalues \lambda_j = \lambda_i. The distribution of the largest eigenvalue for GOE, GUE and Wishart matrices of finite dimensions has also been studied.
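The Gaussian ensembles can be sampled directly from the entry-wise description. A minimal sketch, assuming the normalization of this section (off-diagonal second moment 1/''n''); the function names are mine:

```python
# Samplers for GOE and GUE with densities exp(-n/4 tr H^2) and
# exp(-n/2 tr H^2) respectively: off-diagonal second moment 1/n for both,
# diagonal variance 2/n (GOE) and 1/n (GUE).
import numpy as np

rng = np.random.default_rng(1)

def sample_goe(n):
    # A has i.i.d. N(0, 1/(2n)) entries; H = A + A^T is real symmetric.
    A = rng.normal(0.0, 1.0 / np.sqrt(2 * n), size=(n, n))
    return A + A.T

def sample_gue(n):
    # M has i.i.d. complex entries, each component N(0, 1/(4n));
    # H = M + M^dagger is Hermitian with E|H_ij|^2 = 1/n off-diagonal.
    M = rng.normal(0.0, 1.0 / np.sqrt(4 * n), size=(n, n)) \
        + 1j * rng.normal(0.0, 1.0 / np.sqrt(4 * n), size=(n, n))
    return M + M.conj().T

H = sample_gue(600)
lam = np.linalg.eigvalsh(H)
print(lam.min(), lam.max())  # with this scaling the spectrum concentrates near [-2, 2]
```

With this normalization the empirical spectrum fills the interval [−2, 2], consistent with the semicircle law discussed below in the spectral theory sections.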


Distribution of level spacings

From the ordered sequence of eigenvalues \lambda_1 < \ldots < \lambda_n < \lambda_{n+1} < \ldots, one defines the normalized spacings s = (\lambda_{n+1} - \lambda_n)/\langle s \rangle, where \langle s \rangle = \langle \lambda_{n+1} - \lambda_n \rangle is the mean spacing. The probability distribution of spacings is approximately given by
: p_1(s) = \frac{\pi}{2} s\, \mathrm{e}^{-\frac{\pi}{4} s^2}
for the orthogonal ensemble GOE (\beta = 1),
: p_2(s) = \frac{32}{\pi^2} s^2\, \mathrm{e}^{-\frac{4}{\pi} s^2}
for the unitary ensemble GUE (\beta = 2), and
: p_4(s) = \frac{2^{18}}{3^6 \pi^3} s^4\, \mathrm{e}^{-\frac{64}{9\pi} s^2}
for the symplectic ensemble GSE (\beta = 4). The numerical constants are such that p_\beta(s) is normalized:
: \int_0^\infty ds\, p_\beta(s) = 1
and the mean spacing is
: \int_0^\infty ds\, s\, p_\beta(s) = 1,
for \beta = 1, 2, 4.
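The spacing statistics can be checked numerically. A rough sketch of mine, not from the text: sample GUE matrices, take spacings from the middle of the spectrum (where the density is roughly flat), normalize by the mean spacing, and compare moments against the GUE surmise p_2(s).

```python
# Empirical normalized bulk spacings of GUE matrices. The surmise
# p_2(s) = (32/pi^2) s^2 exp(-4 s^2/pi) predicts <s> = 1, <s^2> = 3 pi/8
# (about 1.18), and strong level repulsion (almost no spacings near 0).
import numpy as np

rng = np.random.default_rng(2)

def bulk_spacings(n=100, trials=200):
    out = []
    for _ in range(trials):
        A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        lam = np.linalg.eigvalsh((A + A.conj().T) / 2)   # GUE up to scaling
        mid = lam[n // 4 : 3 * n // 4]                   # bulk only, away from the edges
        s = np.diff(mid)
        out.append(s / s.mean())                         # normalized spacings, <s> = 1
    return np.concatenate(out)

s = bulk_spacings()
print(s.mean())        # 1 by construction
print((s ** 2).mean()) # roughly 3 pi/8 ~ 1.18, well below the Poisson value 2
```

The second moment distinguishes the repelling GUE spacings from independent (Poisson) levels, for which it would be 2; the residual density variation across the bulk window inflates the estimate slightly.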


Generalizations

''Wigner matrices'' are random Hermitian matrices \textstyle H_n = (H_n(i,j))_{i,j=1}^n such that the entries
: \left\{ H_n(i,j),\ 1 \le i \le j \le n \right\}
above the main diagonal are independent random variables with zero mean and identical second moments. ''Invariant matrix ensembles'' are random Hermitian matrices with density on the space of real symmetric/Hermitian/quaternionic Hermitian matrices of the form \textstyle \frac{1}{Z_n} e^{-n \mathrm{tr}\, V(H)}, where the function ''V'' is called the potential. The Gaussian ensembles are the only common special cases of these two classes of random matrices.


Spectral theory of random matrices

The spectral theory of random matrices studies the distribution of the eigenvalues as the size of the matrix goes to infinity.


Global regime

In the ''global regime'', one is interested in the distribution of linear statistics of the form N_f = n^{-1} \mathrm{tr}\, f(H).


Empirical spectral measure

The ''empirical spectral measure'' ''μ''_''H'' of ''H'' is defined by
: \mu_H(A) = \frac{1}{n} \, \# \left\{ \lambda_j \in A \right\} = N_{1_A}, \quad A \subset \mathbb{R}.
Usually, the limit of \mu_H is a deterministic measure; this is a particular case of self-averaging. The cumulative distribution function of the limiting measure is called the integrated density of states and is denoted ''N''(''λ''). If the integrated density of states is differentiable, its derivative is called the density of states and is denoted ''ρ''(''λ'').

The limit of the empirical spectral measure for Wigner matrices was described by Eugene Wigner; see the Wigner semicircle distribution and the Wigner surmise. As far as sample covariance matrices are concerned, a theory was developed by Marčenko and Pastur. The limit of the empirical spectral measure of invariant matrix ensembles is described by a certain integral equation which arises from potential theory.
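The convergence to the semicircle law is easy to observe numerically. A small sketch (normalization and helper names are mine): sample one large Wigner matrix scaled so that the limiting support is [−2, 2], and compare its empirical distribution function with the integrated density of states of the semicircle.

```python
# Empirical spectral measure of a Wigner (GOE-type) matrix vs. the
# integrated density of states of the semicircle rho(x) = sqrt(4 - x^2)/(2 pi).
import numpy as np

rng = np.random.default_rng(3)
n = 2000
A = rng.normal(0.0, 1.0 / np.sqrt(2 * n), size=(n, n))
H = A + A.T                       # off-diagonal variance 1/n => support [-2, 2]
lam = np.linalg.eigvalsh(H)

def semicircle_cdf(x):
    # N(lambda) = 1/2 + (x sqrt(4 - x^2) + 4 arcsin(x/2)) / (4 pi) on [-2, 2].
    x = np.clip(x, -2.0, 2.0)
    return 0.5 + (x * np.sqrt(4.0 - x**2) + 4.0 * np.arcsin(x / 2.0)) / (4.0 * np.pi)

for x in (-1.0, 0.0, 1.0):
    print(x, (lam <= x).mean(), semicircle_cdf(x))  # empirical vs. limiting CDF
```

Because linear eigenvalue statistics self-average, the agreement at ''n'' = 2000 is already at the percent level for a single sample.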


Fluctuations

For the linear statistics ''N''_{''f'',''H''} = ''n''^{−1} Σ ''f''(''λ''_''j''), one is also interested in the fluctuations about ∫ ''f''(''λ'') ''dN''(''λ''). For many classes of random matrices, a central limit theorem of the form
: \frac{N_{f,H} - \int f(\lambda)\, dN(\lambda)}{\sigma_{f,n}} \overset{D}{\to} N(0, 1)
is known.


Local regime

In the ''local regime'', one is interested in the spacings between eigenvalues, and, more generally, in the joint distribution of eigenvalues in an interval of length of order 1/''n''. One distinguishes between ''bulk statistics'', pertaining to intervals inside the support of the limiting spectral measure, and ''edge statistics'', pertaining to intervals near the boundary of the support.


Bulk statistics

Formally, fix \lambda_0 in the interior of the support of N(\lambda). Then consider the point process
: \Xi(\lambda_0) = \sum_j \delta\Big( \cdot - n \rho(\lambda_0) (\lambda_j - \lambda_0) \Big),
where \lambda_j are the eigenvalues of the random matrix. The point process \Xi(\lambda_0) captures the statistical properties of eigenvalues in the vicinity of \lambda_0. For the Gaussian ensembles, the limit of \Xi(\lambda_0) is known; thus, for GUE it is a determinantal point process with the kernel
: K(x, y) = \frac{\sin \pi(x - y)}{\pi(x - y)}
(the ''sine kernel''). The ''universality'' principle postulates that the limit of \Xi(\lambda_0) as n \to \infty should depend only on the symmetry class of the random matrix (and neither on the specific model of random matrices nor on \lambda_0). Rigorous proofs of universality are known for invariant matrix ensembles and Wigner matrices.
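For a determinantal process, the 2-point correlation function is the 2 × 2 determinant of the kernel. A sketch of mine evaluating this for the sine kernel, which makes the level repulsion explicit:

```python
# For the sine-kernel determinantal process,
# R2(x, y) = det [[K(x,x), K(x,y)], [K(y,x), K(y,y)]]
#          = 1 - (sin(pi (x - y)) / (pi (x - y)))^2,
# so R2 -> 0 as y -> x (repulsion) and R2 -> 1 at large separation.
import numpy as np

def sine_kernel(x, y):
    d = x - y
    return np.where(d == 0, 1.0, np.sin(np.pi * d) / (np.pi * d))

def r2(x, y):
    K = np.array([[sine_kernel(x, x), sine_kernel(x, y)],
                  [sine_kernel(y, x), sine_kernel(y, y)]])
    return np.linalg.det(K)

print(r2(0.0, 0.01))  # near 0: strong repulsion at short distance
print(r2(0.0, 10.5))  # near 1: essentially uncorrelated far apart
```

The quadratic vanishing of R2 at coinciding points is the local-regime counterpart of the β = 2 zero in the joint eigenvalue density (1).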


Edge statistics


Correlation functions

The joint probability density of the eigenvalues of n \times n random Hermitian matrices M \in \mathbf{H}^{n \times n}, with partition functions of the form
: Z_n = \int_{\mathbf{H}^{n \times n}} d\mu_0(M)\, e^{-\mathrm{tr}\, V(M)}
where
: V(x) := \sum_{j=1}^\infty v_j x^j
and d\mu_0(M) is the standard Lebesgue measure on the space \mathbf{H}^{n \times n} of Hermitian n \times n matrices, is given by
: p_n(x_1, \dots, x_n) = \frac{1}{Z_n} \prod_{i<j} (x_i - x_j)^2\, e^{-\sum_i V(x_i)}.
The k-point correlation functions (or ''marginal distributions'') are defined as
: R^{(k)}_n(x_1, \dots, x_k) = \frac{n!}{(n-k)!} \int_{\mathbf{R}} dx_{k+1} \cdots \int_{\mathbf{R}} dx_n \, p_n(x_1, x_2, \dots, x_n),
which are symmetric functions of their variables. In particular, the one-point correlation function, or ''density of states'', is
: R^{(1)}_n(x_1) = n \int_{\mathbf{R}} dx_2 \cdots \int_{\mathbf{R}} dx_n \, p_n(x_1, x_2, \dots, x_n).
Its integral over a Borel set B \subset \mathbf{R} gives the expected number of eigenvalues contained in B:
: \int_B R^{(1)}_n(x)\, dx = \mathbf{E}\left( \# \{ \text{eigenvalues in } B \} \right).
The following result expresses these correlation functions as determinants of the matrices formed from evaluating the appropriate integral kernel at the pairs (x_i, x_j) of points appearing within the correlator.

Theorem (Dyson–Mehta). For any k, 1 \leq k \leq n, the k-point correlation function R^{(k)}_n can be written as a determinant
: R^{(k)}_n(x_1, x_2, \dots, x_k) = \det_{1 \leq i,j \leq k} \left( K_n(x_i, x_j) \right),
where K_n(x, y) is the nth Christoffel–Darboux kernel
: K_n(x, y) := \sum_{k=0}^{n-1} \psi_k(x) \psi_k(y)
associated to V, written in terms of the quasipolynomials
: \psi_k(x) = \frac{1}{\sqrt{h_k}} \, p_k(x) \, e^{-V(x)/2},
where \{ p_k(x) \}_{k \in \mathbf{N}} is a complete sequence of monic polynomials of the degrees indicated, satisfying the orthogonality conditions
: \int_{\mathbf{R}} \psi_j(x) \psi_k(x)\, dx = \delta_{jk}.
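The Christoffel–Darboux construction can be made concrete in the quadratic case. A sketch under the assumption V(x) = x², for which the orthogonal polynomials are (rescaled) Hermite polynomials and the ψ_k are the orthonormal Hermite functions; it verifies numerically that the density of states R⁽¹⁾(x) = K_n(x, x) integrates to n:

```python
# Christoffel-Darboux kernel for V(x) = x^2: psi_k are the orthonormal
# Hermite functions H_k(x) e^{-x^2/2} / sqrt(2^k k! sqrt(pi)), and the
# one-point function R1(x) = K_n(x, x) must integrate to n.
import numpy as np
from math import factorial, pi, sqrt
from numpy.polynomial.hermite import hermval

def psi(k, x):
    c = np.zeros(k + 1)
    c[k] = 1.0                      # coefficient vector selecting H_k
    norm = sqrt(2.0**k * factorial(k) * sqrt(pi))
    return hermval(x, c) * np.exp(-x**2 / 2.0) / norm

def kernel_diag(n, x):
    return sum(psi(k, x) ** 2 for k in range(n))

n = 5
x = np.linspace(-12.0, 12.0, 4001)
density = kernel_diag(n, x)         # R^(1)(x) = K_n(x, x)
total = (density * (x[1] - x[0])).sum()   # Riemann sum; tails are negligible
print(total)                        # should be close to n = 5
```

The normalization constants h_k are absorbed here by using orthonormal Hermite functions directly; for a general potential V the monic polynomials p_k would have to be generated, e.g., by the Gram–Schmidt or the recurrence-coefficient method.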


Other classes of random matrices


Wishart matrices

''Wishart matrices'' are ''n × n'' random matrices of the form ''H'' = ''X'' ''X''*, where ''X'' is an ''n × m'' random matrix (''m'' ≥ ''n'') with independent entries, and ''X''* is its conjugate transpose. In the important special case considered by Wishart, the entries of ''X'' are identically distributed Gaussian random variables (either real or complex). The limit of the empirical spectral measure of Wishart matrices was found by Vladimir Marchenko and Leonid Pastur.
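The Marchenko–Pastur limit is simple to see in simulation. A sketch with a normalization chosen by me: for ''X'' with i.i.d. N(0, 1/''m'') entries and ''c'' = ''n''/''m'', the spectrum of ''H'' = ''X X''ᵀ fills the interval [(1 − √c)², (1 + √c)²].

```python
# Spectrum of a real Wishart matrix vs. the Marchenko-Pastur support.
import numpy as np

rng = np.random.default_rng(4)
n, m = 500, 2000                          # aspect ratio c = n/m = 0.25
X = rng.normal(0.0, 1.0 / np.sqrt(m), size=(n, m))
lam = np.linalg.eigvalsh(X @ X.T)         # eigenvalues of H = X X^T

c = n / m
lo, hi = (1.0 - np.sqrt(c)) ** 2, (1.0 + np.sqrt(c)) ** 2
print(lam.min(), lam.max())               # close to (lo, hi) = (0.25, 2.25)
print(lam.mean())                         # close to 1: E tr H / n = 1 here
```

With this scaling ''H'' is the sample covariance matrix of ''m'' observations of an ''n''-dimensional standard Gaussian vector, which is the setting Wishart originally considered.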


Random unitary matrices

:''See circular ensembles.''


Non-Hermitian random matrices

:''See circular law.''

