Matrix Chernoff Bound
For certain applications in linear algebra, it is useful to know properties of the probability distribution of the largest eigenvalue of a finite sum of random matrices. Suppose \{X_k\} is a finite sequence of random matrices. Analogous to the well-known Chernoff bound for sums of scalars, a bound on the following is sought for a given parameter ''t'':

: \Pr \left\{ \lambda_{\max} \left( \sum_k X_k \right) \geq t \right\}

The following theorems answer this general question under various assumptions; these assumptions are named below by analogy to their classical, scalar counterparts. All of these theorems can be found in (Tropp 2010), as the specific application of a general result which is derived below. A summary of related works is given.

Matrix Gaussian and Rademacher series

Self-adjoint matrices case

Consider a finite sequence \{A_k\} of fixed, self-adjoint matrices with dimension d, and let \{\xi_k\} be a finite sequence of independent standard normal or independent Rademacher random variables. Then, for all t \geq 0,

: \Pr \left\{ \lambda_{\max} \left( \sum_k \xi_k A_k \right) \geq t \right\} \leq d \cdot e^{-t^2/2\sigma^2}

where

: \sigma^2 = \left\| \sum_k A_k^2 \right\| ...
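As a quick numerical illustration of the Rademacher case, the following Python sketch compares the empirical tail of \lambda_{\max}(\sum_k \xi_k A_k) against the bound d \cdot e^{-t^2/2\sigma^2}. The matrices A_k, the dimension, and the level t are arbitrary choices for the demonstration, not values from the article.

    import numpy as np

    rng = np.random.default_rng(0)
    d, n = 4, 20                                 # dimension and number of summands (arbitrary)
    # Fixed self-adjoint matrices A_k: random symmetric, for illustration only
    A = [(B + B.T) / 2 for B in rng.normal(size=(n, d, d))]

    sigma2 = np.linalg.norm(sum(Ak @ Ak for Ak in A), ord=2)  # sigma^2 = ||sum_k A_k^2||
    t = 2.5 * np.sqrt(sigma2)                    # tail level (arbitrary)

    trials, hits = 10_000, 0
    for _ in range(trials):
        xi = rng.choice([-1.0, 1.0], size=n)     # independent Rademacher signs
        S = sum(x * Ak for x, Ak in zip(xi, A))
        hits += np.linalg.eigvalsh(S).max() >= t # largest eigenvalue exceeds t?

    print("empirical tail :", hits / trials)
    print("Chernoff bound :", d * np.exp(-t**2 / (2 * sigma2)))

The bound is typically loose but always valid; varying t shows both quantities shrinking at the same Gaussian rate.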



Linear Algebra
Linear algebra is the branch of mathematics concerning linear equations such as: :a_1x_1+\cdots +a_nx_n=b, linear maps such as: :(x_1, \ldots, x_n) \mapsto a_1x_1+\cdots +a_nx_n, and their representations in vector spaces and through matrices. Linear algebra is central to almost all areas of mathematics. For instance, linear algebra is fundamental in modern presentations of geometry, including for defining basic objects such as lines, planes and rotations. Also, functional analysis, a branch of mathematical analysis, may be viewed as the application of linear algebra to spaces of functions. Linear algebra is also used in most sciences and fields of engineering, because it allows modeling many natural phenomena, and computing efficiently with such models. For nonlinear systems, which cannot be modeled with linear algebra, it is often used for dealing with first-order approximations, using the fact that the differential of a multivariate function at a point is the linear ma ...
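As a minimal sketch (coefficients and inputs chosen arbitrarily), the linear map (x_1, \ldots, x_n) \mapsto a_1x_1+\cdots +a_nx_n above is just a dot product in Python:

    import numpy as np

    a = np.array([2.0, -1.0, 3.0])   # coefficients a_1, ..., a_n
    x = np.array([1.0, 4.0, 0.5])    # inputs x_1, ..., x_n
    print(a @ x)                     # a_1*x_1 + ... + a_n*x_n = -0.5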


Martingale (probability Theory)
In probability theory, a martingale is a sequence of random variables (i.e., a stochastic process) for which, at a particular time, the conditional expectation of the next value in the sequence is equal to the present value, regardless of all prior values. History Originally, '' martingale'' referred to a class of betting strategies that was popular in 18th-century France. The simplest of these strategies was designed for a game in which the gambler wins their stake if a coin comes up heads and loses it if the coin comes up tails. The strategy had the gambler double their bet after every loss so that the first win would recover all previous losses plus win a profit equal to the original stake. As the gambler's wealth and available time jointly approach infinity, their probability of eventually flipping heads approaches 1, which makes the martingale betting strategy seem like a sure thing. However, the exponential growth of the bets eventually bankrupts its users due to f ...
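A small simulation makes the text's point concrete: with a fair coin, the doubling strategy creates no expected profit, and a finite bankroll eventually fails to cover the doubled bet. The bankroll, stake, and round limit below are illustrative assumptions, not values from the article.

    import numpy as np

    rng = np.random.default_rng(1)

    def martingale_session(wealth=100.0, stake=1.0, rounds=1000):
        """Double the bet after every loss; stop when the bet can't be covered."""
        bet = stake
        for _ in range(rounds):
            if bet > wealth:          # exponential growth of bets outruns the bankroll
                break
            if rng.random() < 0.5:    # heads: win the bet, reset to the base stake
                wealth += bet
                bet = stake
            else:                     # tails: lose the bet and double it
                wealth -= bet
                bet *= 2
        return wealth

    final = [martingale_session() for _ in range(2000)]
    print(np.mean(final))             # ≈ 100: still a fair game on average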


Law Of Total Expectation
The proposition in probability theory known as the law of total expectation, the law of iterated expectations (LIE), Adam's law, the tower rule, and the smoothing theorem, among other names, states that if X is a random variable whose expected value \operatorname{E}(X) is defined, and Y is any random variable on the same probability space, then

:\operatorname{E}(X) = \operatorname{E}(\operatorname{E}(X \mid Y)),

i.e., the expected value of the conditional expected value of X given Y is the same as the expected value of X. One special case states that if \{A_i\} is a finite or countable partition of the sample space, then

:\operatorname{E}(X) = \sum_i \operatorname{E}(X \mid A_i) \operatorname{P}(A_i).

Note: The conditional expected value E(''X'' | ''Z'') is a random variable whose value depends on the value of ''Z''. Note that the conditional expected value of ''X'' given the ''event'' ''Z'' = ''z'' is a function of ''z''. If we write E(''X'' | ''Z'' = ''z'') = ''g''(''z''), then the random variable E(''X'' | ''Z'') is ''g''(''Z''). Sim ...
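A direct numerical check of \operatorname{E}(X) = \operatorname{E}(\operatorname{E}(X \mid Y)) on a small discrete joint distribution (the probabilities below are invented for the example):

    import numpy as np

    # Joint pmf p(x, y): rows are X in {0, 1, 2}, columns are Y in {0, 1}
    p = np.array([[0.10, 0.20],
                  [0.15, 0.25],
                  [0.05, 0.25]])
    xs = np.array([0.0, 1.0, 2.0])

    ex = (xs[:, None] * p).sum()                      # E(X) directly

    py = p.sum(axis=0)                                # marginal p(y)
    ex_given_y = (xs[:, None] * p).sum(axis=0) / py   # E(X | Y = y)
    tower = (ex_given_y * py).sum()                   # E(E(X | Y))

    print(ex, tower)                                  # both 1.0, as the law asserts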



Jensen's Inequality
In mathematics, Jensen's inequality, named after the Danish mathematician Johan Jensen, relates the value of a convex function of an integral to the integral of the convex function. It was proved by Jensen in 1906, building on an earlier proof of the same inequality for doubly-differentiable functions by Otto Hölder in 1889. Given its generality, the inequality appears in many forms depending on the context, some of which are presented below. In its simplest form the inequality states that the convex transformation of a mean is less than or equal to the mean applied after convex transformation; it is a simple corollary that the opposite is true of concave transformations. Jensen's inequality generalizes the statement that the secant line of a convex function lies ''above'' the graph of the function, which is Jensen's inequality for two points: the secant line consists of weighted means of the convex function (for ''t'' ∈ [0,1]),

:t f(x_1) + (1-t) f(x_2),

while t ...
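A quick empirical illustration with the convex function f(x) = x^2 and an exponential sample (both choices are ours, not the article's): f of the mean stays below the mean of f.

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.exponential(scale=1.0, size=100_000)   # E[X] = 1, E[X^2] = 2

    print(np.square(x.mean()))   # f(E[X]) ≈ 1
    print(np.square(x).mean())   # E[f(X)] ≈ 2 >= f(E[X]), as Jensen's inequality predicts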



Markov's Inequality
In probability theory, Markov's inequality gives an upper bound for the probability that a non-negative function of a random variable is greater than or equal to some positive constant. It is named after the Russian mathematician Andrey Markov, although it appeared earlier in the work of Pafnuty Chebyshev (Markov's teacher), and many sources, especially in analysis, refer to it as Chebyshev's inequality (sometimes calling it the first Chebyshev inequality, while referring to Chebyshev's inequality as the second Chebyshev inequality) or Bienaymé's inequality. Markov's inequality (and other similar inequalities) relate probabilities to expectations, and provide (frequently loose but still useful) bounds for the cumulative distribution function of a random variable.

Statement

If ''X'' is a nonnegative random variable and ''a'' > 0, then the probability that ''X'' is at least ''a'' is at most th ...
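Checking \Pr(X \geq a) \leq \operatorname{E}(X)/a empirically for a nonnegative sample (the distribution and threshold are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.exponential(scale=2.0, size=100_000)   # nonnegative, E[X] = 2
    a = 5.0

    print((x >= a).mean())   # P(X >= a) ≈ exp(-2.5) ≈ 0.082
    print(x.mean() / a)      # E[X]/a ≈ 0.4: loose here, but always an upper bound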


Matrix Exponential
In mathematics, the matrix exponential is a matrix function on square matrices analogous to the ordinary exponential function. It is used to solve systems of linear differential equations. In the theory of Lie groups, the matrix exponential gives the exponential map between a matrix Lie algebra and the corresponding Lie group. Let X be an n \times n real or complex matrix. The exponential of X, denoted by e^X or \exp(X), is the n \times n matrix given by the power series

:e^X = \sum_{k=0}^\infty \frac{1}{k!} X^k

where X^0 is defined to be the identity matrix I with the same dimensions as X. The above series always converges, so the exponential of X is well-defined. If X is a 1×1 matrix the matrix exponential of X is a 1×1 matrix whose single element is the ordinary exponential of the single element of X.

Properties

Elementary properties

Let X and Y be n \times n complex matrices and let a and b be arbitrary complex numbers. We denote the n \times n identity matrix by I and the zero matrix by 0. The matrix exponential satisfies the following ...
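A minimal sketch comparing a truncated power series against SciPy's expm (the truncation order is an arbitrary choice; expm itself uses a more robust algorithm than naive summation):

    import numpy as np
    from scipy.linalg import expm

    X = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])       # e^X is a rotation by 1 radian

    # Truncated series: sum_{k=0}^{19} X^k / k!
    S = term = np.eye(2)
    for k in range(1, 20):
        term = term @ X / k           # X^k / k! built incrementally
        S = S + term

    print(np.allclose(S, expm(X)))    # True: the series converges to the matrix exponential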




Golden–Thompson Inequality
In physics and mathematics, the Golden–Thompson inequality is a trace inequality between exponentials of symmetric and Hermitian matrices proved independently by Golden (1965) and Thompson (1965). It has been developed in the context of statistical mechanics, where it has come to have a particular significance.

Statement

The Golden–Thompson inequality states that for (real) symmetric or (complex) Hermitian matrices ''A'' and ''B'', the following trace inequality holds:

: \operatorname{tr}\, e^{A+B} \le \operatorname{tr} \left(e^A e^B\right).

This inequality is well defined, since the quantities on either side are real numbers. For the expression on the right-hand side of the inequality, this can be seen by rewriting it as \operatorname{tr}(e^{A/2} e^B e^{A/2}) using the cyclic property of the trace.

Motivation

The Golden–Thompson inequality can be viewed as a generalization of a stronger statement for real numbers. If ''a'' and ''b'' are two real numbers, then the exponential of ''a+b'' is the product of the exponential of ...
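A random numerical check of the inequality on complex Hermitian matrices (the size and random model are arbitrary):

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(4)
    d = 5
    def random_hermitian():
        M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
        return (M + M.conj().T) / 2

    A, B = random_hermitian(), random_hermitian()
    lhs = np.trace(expm(A + B)).real          # tr e^{A+B}
    rhs = np.trace(expm(A) @ expm(B)).real    # tr(e^A e^B), real as noted above
    print(lhs <= rhs, lhs, rhs)               # Golden–Thompson: True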


Derivation And Proof
Derivation may refer to: Language * Morphological derivation, a word-formation process * Parse tree or concrete syntax tree, representing a string's syntax in formal grammars Law * Derivative work, in copyright law * Derivation proceeding, a proceeding in United States patent law Music * The creation of a derived row, in the twelve-tone musical technique Science and mathematics * Derivation (differential algebra), a unary function satisfying the Leibniz product law * Formal proof or derivation, a sequence of sentences each of which is an axiom or follows from the preceding sentences in the sequence by a rule of inference * An after-the-fact justification for an action, in the work of sociologist Vilfredo Pareto See also *Derive (other), for meanings of "derive" and "derived" *Derivative, in calculus *Derivative (other) The derivative of a function is the rate of change of the function's output relative to its input value. Derivative may also refer to: In ...



Triangle Inequality
In mathematics, the triangle inequality states that for any triangle, the sum of the lengths of any two sides must be greater than or equal to the length of the remaining side. This statement permits the inclusion of degenerate triangles, but some authors, especially those writing about elementary geometry, will exclude this possibility, thus leaving out the possibility of equality. If ''x'', ''y'', and ''z'' are the lengths of the sides of the triangle, with no side being greater than ''z'', then the triangle inequality states that

:z \leq x + y,

with equality only in the degenerate case of a triangle with zero area. In Euclidean geometry and some other geometries, the triangle inequality is a theorem about distances, and it is written using vectors and vector lengths (norms):

:\|\mathbf{x} + \mathbf{y}\| \leq \|\mathbf{x}\| + \|\mathbf{y}\|,

where the length of the third side has been replaced by the vector sum \mathbf{x} + \mathbf{y}. When ''x'' and ''y'' are real numbers, they can be viewed as vectors in \R^1, and the trian ...
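A one-line vector check of \|\mathbf{x} + \mathbf{y}\| \leq \|\mathbf{x}\| + \|\mathbf{y}\| (the vectors are arbitrary):

    import numpy as np

    x = np.array([3.0, -1.0, 2.0])
    y = np.array([-2.0, 4.0, 1.0])
    print(np.linalg.norm(x + y) <= np.linalg.norm(x) + np.linalg.norm(y))   # True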


Matrix Norm
In mathematics, a matrix norm is a vector norm in a vector space whose elements (vectors) are matrices (of given dimensions).

Preliminaries

Given a field K of either real or complex numbers, let K^{m \times n} be the K-vector space of matrices with m rows and n columns and entries in the field K. A matrix norm is a norm on K^{m \times n}. This article will always write such norms with double vertical bars (like so: \|A\|). Thus, the matrix norm is a function \|\cdot\| : K^{m \times n} \to \R that must satisfy the following properties: For all scalars \alpha \in K and matrices A, B \in K^{m \times n},

*\|A\| \ge 0 (''positive-valued'')
*\|A\| = 0 \iff A = 0_{m,n} (''definite'')
*\|\alpha A\| = |\alpha| \, \|A\| (''absolutely homogeneous'')
*\|A+B\| \le \|A\| + \|B\| (''sub-additive'' or satisfying the ''triangle inequality'')

The only feature distinguishing matrices from rearranged vectors is multiplication. Matrix norms are particularly useful if they are also sub-multiplicative:

*\|AB\| \le ...
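A numerical spot-check of the axioms and of sub-multiplicativity for one concrete choice, the spectral norm (the matrices are random placeholders):

    import numpy as np

    rng = np.random.default_rng(5)
    A, B = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
    norm = lambda M: np.linalg.norm(M, ord=2)      # spectral norm

    print(norm(A + B) <= norm(A) + norm(B))        # sub-additive: True
    print(np.isclose(norm(-2 * A), 2 * norm(A)))   # absolutely homogeneous: True
    print(norm(A @ B) <= norm(A) * norm(B))        # sub-multiplicative: True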


Matrix Gaussian And Rademacher Series
Matrix most commonly refers to: * ''The Matrix'' (franchise), an American media franchise ** '' The Matrix'', a 1999 science-fiction action film ** "The Matrix", a fictional setting, a virtual reality environment, within ''The Matrix'' (franchise) * Matrix (mathematics), a rectangular array of numbers, symbols or expressions Matrix (or its plural form matrices) may also refer to: Science and mathematics * Matrix (mathematics), algebraic structure, extension of vector into 2 dimensions * Matrix (logic), part of a formula in prenex normal form * Matrix (biology), the material in between a eukaryotic organism's cells * Matrix (chemical analysis), the non-analyte components of a sample * Matrix (geology), the fine-grained material in which larger objects are embedded * Matrix (composite), the constituent of a composite material * Hair matrix, produces hair * Nail matrix, part of the nail in anatomy Arts and entertainment Fictional entities * Matrix (comics), two comic ...




Doob Martingale
In the mathematical theory of probability, a Doob martingale (named after Joseph L. Doob, also known as a Levy martingale) is a stochastic process that approximates a given random variable and has the martingale property with respect to the given filtration. It may be thought of as the evolving sequence of best approximations to the random variable based on information accumulated up to a certain time. When analyzing sums, random walks, or other additive functions of independent random variables, one can often apply the central limit theorem, law of large numbers, Chernoff's inequality, Chebyshev's inequality or similar tools. When analyzing similar objects where the differences are not independent, the main tools are martingales and Azuma's inequality.

Definition

Let Y be any random variable with \mathbb{E}[|Y|] < \infty. Suppose ...
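A small sketch of the Doob martingale Z_n = \mathbb{E}[Y \mid X_1, \ldots, X_n] where Y is the number of heads among N fair coin flips (this concrete setup is our illustrative choice):

    import numpy as np

    rng = np.random.default_rng(6)
    N = 10
    flips = rng.integers(0, 2, size=N)   # X_1, ..., X_N

    # Z_n = (heads seen so far) + (N - n)/2, the best estimate of Y after n flips
    Z = [flips[:n].sum() + (N - n) * 0.5 for n in range(N + 1)]
    print(Z)   # starts at E[Y] = 5, ends at Y; each step has conditional mean zero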