Strong Subadditivity Of Quantum Entropy
In quantum information theory, strong subadditivity of quantum entropy (SSA) is the relation among the von Neumann entropies of various quantum subsystems of a larger quantum system consisting of three subsystems (or of one quantum system with three degrees of freedom). It is a basic theorem in modern quantum information theory. It was conjectured by D. W. Robinson and D. Ruelle in 1966 and O. E. Lanford III and D. W. Robinson in 1968, and proved in 1973 by E. H. Lieb and M. B. Ruskai, building on results obtained by Lieb in his proof of the Wigner–Yanase–Dyson conjecture. The classical version of SSA was long known and appreciated in classical probability theory and information theory. The proof of this relation in the classical case is quite easy, but the quantum case is difficult because of the non-commutativity of the reduced density matrices describing the quantum subsystems. Some useful references here include:
* "Quantum Computation and Quantum Information"
* "Quantum Entr ...
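The inequality itself states that for a state \rho_{ABC} on three subsystems, S(\rho_{ABC}) + S(\rho_B) \le S(\rho_{AB}) + S(\rho_{BC}), where each reduced state is obtained by a partial trace. As an illustration (a numerical spot check, not the Lieb–Ruskai proof), the following Python sketch verifies the inequality for a random three-qubit density matrix; the sampling scheme and helper functions are choices made for this demonstration.

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy S(rho) = -tr(rho ln rho), in nats."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]        # convention: 0 ln 0 = 0
    return float(-np.sum(evals * np.log(evals)))

def partial_trace(rho, keep, dims):
    """Reduced density matrix on the subsystems listed in `keep`."""
    rho = rho.reshape(dims + dims)
    for i in sorted((j for j in range(len(dims)) if j not in keep), reverse=True):
        rho = np.trace(rho, axis1=i, axis2=i + rho.ndim // 2)
    d = int(np.prod([dims[i] for i in keep]))
    return rho.reshape(d, d)

# A random three-qubit state: normalizing G G^dagger is one easy way to
# produce a valid density matrix (an arbitrary sampling choice).
rng = np.random.default_rng(0)
G = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
rho_ABC = G @ G.conj().T
rho_ABC /= np.trace(rho_ABC)

dims = [2, 2, 2]                        # subsystems A, B, C
S_ABC = entropy(rho_ABC)
S_AB = entropy(partial_trace(rho_ABC, [0, 1], dims))
S_BC = entropy(partial_trace(rho_ABC, [1, 2], dims))
S_B = entropy(partial_trace(rho_ABC, [1], dims))

# Strong subadditivity: S(ABC) + S(B) <= S(AB) + S(BC)
assert S_ABC + S_B <= S_AB + S_BC + 1e-9
print(S_AB + S_BC - S_ABC - S_B)        # >= 0: the conditional mutual information I(A:C|B)
```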


Von Neumann Entropy
In physics, the von Neumann entropy, named after John von Neumann, is a measure of the statistical uncertainty within a description of a quantum system. It extends the concept of Gibbs entropy from classical statistical mechanics to quantum statistical mechanics, and it is the quantum counterpart of the Shannon entropy from classical information theory. For a quantum-mechanical system described by a density matrix \rho, the von Neumann entropy is S = -\operatorname{tr}(\rho \ln \rho), where \operatorname{tr} denotes the trace and \ln denotes the matrix version of the natural logarithm. If the density matrix is written in a basis of its eigenvectors |1\rangle, |2\rangle, |3\rangle, \dots as \rho = \sum_j \eta_j |j\rangle\langle j|, then the von Neumann entropy is merely S = -\sum_j \eta_j \ln \eta_j. In this form, ''S'' can be seen as the Shannon entropy of the eigenvalues, reinterpreted as probabilities. The von Neumann entropy and quantities based upon i ...
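As a small check of the two equivalent forms above, the following sketch (NumPy/SciPy; the diagonal density matrix is an arbitrary example) computes S = -\operatorname{tr}(\rho \ln \rho) via the matrix logarithm and compares it with the Shannon entropy of the eigenvalues.

```python
import numpy as np
from scipy.linalg import logm

rho = np.diag([0.75, 0.25])            # an example qubit density matrix

# Matrix form: S = -tr(rho ln rho), using the matrix logarithm
S_matrix = -np.trace(rho @ logm(rho)).real

# Eigenvalue form: S = -sum_j eta_j ln eta_j (Shannon entropy in nats)
eta = np.linalg.eigvalsh(rho)
S_eigen = -np.sum(eta * np.log(eta))

print(S_matrix, S_eigen)               # both ~= 0.5623
```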


Schrödinger–HJW Theorem
In quantum information theory, quantum state purification refers to the process of representing a mixed state as a pure quantum state of a higher-dimensional Hilbert space. The purification allows the original mixed state to be recovered by taking the partial trace over the additional degrees of freedom. The purification is not unique; the different purifications that lead to the same mixed state are related by the Schrödinger–HJW theorem. Purification is used in algorithms such as entanglement distillation, magic state distillation and algorithmic cooling.

Description

Let \mathcal H_S be a finite-dimensional complex Hilbert space, and consider a generic (possibly mixed) quantum state \rho defined on \mathcal H_S and admitting a decomposition of the form \rho = \sum_i p_i |\phi_i\rangle\langle\phi_i| for a collection of (not necessarily mutually orthogonal) states |\phi_i\rangle \in \mathcal H_S and coefficients p_i \ge 0 such that \sum_i p_i = 1. Note that any quan ...
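Concretely, one standard purification of \rho = \sum_i p_i |\phi_i\rangle\langle\phi_i| is the vector |\Psi\rangle = \sum_i \sqrt{p_i}\, |\phi_i\rangle \otimes |i\rangle on \mathcal H_S \otimes \mathcal H_A, with an orthonormal ancilla basis |i\rangle. The sketch below (the two-state ensemble is an arbitrary example) builds this vector and confirms that tracing out the ancilla recovers \rho.

```python
import numpy as np

p = [0.5, 0.5]
phi = [np.array([1.0, 0.0]),                   # |0>
       np.array([1.0, 1.0]) / np.sqrt(2)]      # |+>, not orthogonal to |0>

rho = sum(pi * np.outer(v, v.conj()) for pi, v in zip(p, phi))

# Purification |Psi> = sum_i sqrt(p_i) |phi_i> (x) |i> on H_S (x) H_A
psi = sum(np.sqrt(pi) * np.kron(v, np.eye(2)[i])
          for i, (pi, v) in enumerate(zip(p, phi)))

# Partial trace over the ancilla recovers the original mixed state
full = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)   # indices (s, a, s', a')
rho_recovered = np.trace(full, axis1=1, axis2=3)
assert np.allclose(rho, rho_recovered)
```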


Hermitian Adjoint
In mathematics, specifically in operator theory, each linear operator A on an inner product space defines a Hermitian adjoint (or adjoint) operator A^* on that space according to the rule \langle Ax, y \rangle = \langle x, A^*y \rangle, where \langle \cdot, \cdot \rangle is the inner product on the vector space. The adjoint may also be called the Hermitian conjugate or simply the Hermitian after Charles Hermite. It is often denoted by A^\dagger in fields like physics, especially when used in conjunction with bra–ket notation in quantum mechanics. In finite dimensions, where operators can be represented by matrices, the Hermitian adjoint is given by the conjugate transpose (also known as the Hermitian transpose). The above definition of an adjoint operator extends verbatim to bounded linear operators on Hilbert spaces H. The definition has been further extended to include unbounded densely def ...
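In finite dimensions the defining rule can be checked directly with the conjugate transpose. The sketch below uses random complex test data (an arbitrary choice) and the standard inner product \langle u, v\rangle = u^\dagger v.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
x = rng.normal(size=3) + 1j * rng.normal(size=3)
y = rng.normal(size=3) + 1j * rng.normal(size=3)

inner = lambda u, v: np.vdot(u, v)      # np.vdot conjugates its first argument

A_adj = A.conj().T                      # Hermitian adjoint = conjugate transpose
assert np.isclose(inner(A @ x, y), inner(x, A_adj @ y))
```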


Dénes Petz
Dénes Petz (1953–2018) was a Hungarian mathematical physicist and quantum information theorist. He is well known for his work on quantum entropy inequalities and equality conditions, quantum f-divergences, sufficiency in quantum statistical inference, quantum Fisher information, and the related concept of monotone metrics in quantum information geometry. He proposed the first quantum generalization of the Rényi relative entropy and established its data processing inequality. He wrote or coauthored several textbooks which have been widely read by experts in quantum information theory. He also coauthored a book in the area of mathematical physics.

Personal life

He was born in Budapest, Hungary, on April 8, 1953.

Education

He received the M.Sc. degree in mathematics from the Eötvös Loránd University, Budapest, Hungary, in 1977 and the Ph.D. degree in mathematics from the same university in 1979. In 1982, he received the qualificatio ...


Convex Set
In geometry, a set of points is convex if it contains every line segment between two points in the set. For example, a solid cube is a convex set, but anything that is hollow or has an indent, for example, a crescent shape, is not convex. The boundary of a convex set in the plane is always a convex curve. The intersection of all the convex sets that contain a given subset of Euclidean space is called the convex hull of that subset. It is the smallest convex set containing it. A convex function is a real-valued function defined on an interval with the property that its epigraph (the set of points on or above the graph of the function) is a convex set. Convex minimization is a subfield of optimization that studies the problem of minimizing convex functions over convex sets. The branch of mathematics devoted to the study of properties of convex sets and convex f ...
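The segment definition suggests a simple sampling test (a probabilistic illustration, not a proof): pick pairs of points in a set and check whether the segments between them stay inside. The disc and crescent below are arbitrary stand-ins for the solid-versus-indented shapes mentioned above.

```python
import numpy as np

def disc(p):        # solid unit disc: a convex set
    return np.linalg.norm(p) <= 1.0

def crescent(p):    # unit disc with a circular bite removed: not convex
    return disc(p) and np.linalg.norm(p - np.array([0.7, 0.0])) > 0.6

def looks_convex(inside, n_pairs=2000, seed=1):
    rng = np.random.default_rng(seed)
    pts = [p for p in rng.uniform(-1, 1, size=(20 * n_pairs, 2)) if inside(p)]
    for a, b in zip(pts[0:2 * n_pairs:2], pts[1:2 * n_pairs:2]):
        for t in np.linspace(0.0, 1.0, 11):
            if not inside(t * a + (1 - t) * b):   # segment leaves the set
                return False
    return True

print(looks_convex(disc))      # True: no violating segment found
print(looks_convex(crescent))  # False: some segment crosses the bite
```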


Block Matrix
In mathematics, a block matrix or a partitioned matrix is a matrix that is interpreted as having been broken into sections called blocks or submatrices. Intuitively, a matrix interpreted as a block matrix can be visualized as the original matrix with a collection of horizontal and vertical lines, which break it up, or partition it, into a collection of smaller matrices. For example, the 3x4 matrix presented below is divided by horizontal and vertical lines into four blocks: the top-left 2x3 block, the top-right 2x1 block, the bottom-left 1x3 block, and the bottom-right 1x1 block.
: \left[ \begin{array}{ccc|c} a_{11} & a_{12} & a_{13} & b_1 \\ a_{21} & a_{22} & a_{23} & b_2 \\ \hline c_{11} & c_{12} & c_{13} & d \end{array} \right]
Any matrix may be interpreted as a block matrix in one or more ways, with each interpretation defined by how its rows and columns are partitioned. This notion can be made more precise for an n by m matrix M by partitioning n into a collection of row groups, and then partitioning m into a collection of column groups. The original m ...
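The partition just described is easy to express with array slicing. The sketch below (using an arbitrary 3x4 example matrix) extracts the four blocks and reassembles them.

```python
import numpy as np

M = np.arange(12).reshape(3, 4)   # an arbitrary 3x4 matrix

A = M[:2, :3]   # top-left 2x3 block
B = M[:2, 3:]   # top-right 2x1 block
C = M[2:, :3]   # bottom-left 1x3 block
D = M[2:, 3:]   # bottom-right 1x1 block

# Reassembling the blocks recovers the original matrix
assert np.array_equal(np.block([[A, B], [C, D]]), M)
```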


Haar Measure
In mathematical analysis, the Haar measure assigns an "invariant volume" to subsets of locally compact topological groups, consequently defining an integral for functions on those groups. This measure was introduced by Alfréd Haar in 1933, though its special case for Lie groups had been introduced by Adolf Hurwitz in 1897 under the name "invariant integral". Haar measures are used in many parts of analysis, number theory, group theory, representation theory, statistics, probability theory, and ergodic theory.

Preliminaries

Let (G, \cdot) be a locally compact Hausdorff topological group. The \sigma-algebra generated by all open subsets of G is called the Borel algebra. An element of the Borel algebra is called a Borel set. If g is an element of G and S is a subset of G, then we define the left and right translates of S by ''g'' as follows:
* Left ...
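A numerical illustration of the invariance (a statistical spot check, not a construction of the measure): scipy.stats.unitary_group samples from the Haar measure on the unitary group U(n), and if U is Haar-distributed then so is AU for any fixed unitary A, so statistics of the two samples should agree. The test statistic |tr U| is an arbitrary choice.

```python
import numpy as np
from scipy.stats import unitary_group

n, samples = 4, 5000
A = unitary_group.rvs(n, random_state=0)                 # a fixed "translating" unitary
U = unitary_group.rvs(n, size=samples, random_state=1)   # Haar samples, shape (samples, n, n)

stat_U = np.abs(np.trace(U, axis1=1, axis2=2))
stat_AU = np.abs(np.trace(A @ U, axis1=1, axis2=2))      # left-translated samples

# Left-invariance: the two empirical distributions should match closely.
print(stat_U.mean(), stat_AU.mean())
```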


Stinespring Factorization Theorem
In mathematics, Stinespring's dilation theorem, also called Stinespring's factorization theorem, named after W. Forrest Stinespring, is a result from operator theory that represents any completely positive map on a C*-algebra ''A'' as a composition of two completely positive maps each of which has a special form:
# A *-representation of ''A'' on some auxiliary Hilbert space ''K'' followed by
# An operator map of the form ''T'' ↦ ''V*TV''.
Moreover, Stinespring's theorem is a structure theorem for completely positive maps from a C*-algebra into the algebra of bounded operators on a Hilbert space. Completely positive maps are shown to be simple modifications of *-representations (sometimes called *-homomorphisms).

Formulation

In the case of a unital C*-algebra, the result is as follows:
:Theorem. Let ''A'' be a unital C*-algebra, ''H'' be a Hilbert space, and ''B''(''H'') be the bounded operators on ''H''. For every completely positive map
::\Phi : A \to B(H),
:there exists a Hilbert space ''K'' and a un ...
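In the finite-dimensional (matrix-algebra) special case, the factorization can be written down explicitly from a Kraus decomposition: if \Phi(a) = \sum_i K_i^* a K_i, then V = \sum_i K_i \otimes |i\rangle satisfies \Phi(a) = V^* (a \otimes I) V. The sketch below checks this with arbitrary random Kraus operators; it illustrates this special case, not the general C*-algebraic statement.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 2, 3                                   # system dimension, number of Kraus operators
K = rng.normal(size=(r, d, d)) + 1j * rng.normal(size=(r, d, d))

def phi(a):                                   # completely positive map in Kraus form
    return sum(Ki.conj().T @ a @ Ki for Ki in K)

# Dilation V: H -> H (x) K built from the Kraus operators, V = sum_i K_i (x) |i>
V = sum(np.kron(K[i], np.eye(r)[:, [i]]) for i in range(r))

a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
assert np.allclose(phi(a), V.conj().T @ np.kron(a, np.eye(r)) @ V)
```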


Trace (linear Algebra)
In linear algebra, the trace of a square matrix ''A'', denoted \operatorname{tr}(''A''), is the sum of the elements on its main diagonal, a_{11} + a_{22} + \dots + a_{nn}. It is only defined for a square matrix (n \times n). The trace of a matrix is the sum of its eigenvalues (counted with multiplicities). Also, \operatorname{tr}(AB) = \operatorname{tr}(BA) for any matrices A and B of the same size. Thus, similar matrices have the same trace. As a consequence, one can define the trace of a linear operator mapping a finite-dimensional vector space into itself, since all matrices describing such an operator with respect to a basis are similar. The trace is related to the derivative of the determinant (see Jacobi's formula).

Definition

The trace of an n \times n square matrix A is defined as \operatorname{tr}(\mathbf{A}) = \sum_{i=1}^n a_{ii} = a_{11} + a_{22} + \dots + a_{nn}, where a_{ii} denotes the entry on the i-th row and i-th column of A. The entries of A can be real numbers, complex numbers, or more generally elements of a field. The trace is not defined for non-square matrices.

Example

Let A be a matrix, with \m ...
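A quick numerical check of the stated properties, on arbitrary random matrices: the diagonal-sum definition, \operatorname{tr}(AB) = \operatorname{tr}(BA), invariance under similarity, and agreement with the sum of eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3))
P = rng.normal(size=(3, 3))        # generically invertible

assert np.isclose(np.trace(A), A[0, 0] + A[1, 1] + A[2, 2])         # diagonal sum
assert np.isclose(np.trace(A @ B), np.trace(B @ A))                 # tr(AB) = tr(BA)
assert np.isclose(np.trace(np.linalg.inv(P) @ A @ P), np.trace(A))  # similarity invariance
assert np.isclose(np.trace(A), np.sum(np.linalg.eigvals(A)).real)   # sum of eigenvalues
```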


Completely Positive Map
In mathematics, a positive map is a map between C*-algebras that sends positive elements to positive elements. A completely positive map is one that satisfies a stronger, more robust condition.

Definition

Let A and B be C*-algebras. A linear map \phi: A\to B is called a positive map if \phi maps positive elements to positive elements: a\geq 0 \implies \phi(a)\geq 0. Any linear map \phi:A\to B induces another map
:\operatorname{id} \otimes \phi : \mathbb{C}^{k \times k} \otimes A \to \mathbb{C}^{k \times k} \otimes B
in a natural way. If \mathbb{C}^{k \times k} \otimes A is identified with the C*-algebra A^{k \times k} of k\times k-matrices with entries in A, then \operatorname{id}\otimes\phi acts as
: \begin{pmatrix} a_{11} & \cdots & a_{1k} \\ \vdots & \ddots & \vdots \\ a_{k1} & \cdots & a_{kk} \end{pmatrix} \mapsto \begin{pmatrix} \phi(a_{11}) & \cdots & \phi(a_{1k}) \\ \vdots & \ddots & \vdots \\ \phi(a_{k1}) & \cdots & \phi(a_{kk}) \end{pmatrix}.
We then say \phi is k-positive if \operatorname{id}_k \otimes \phi is a positive map, and completely positive if \phi is k-positive for all k.

Properties

* Positive maps are mo ...
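The standard example separating positivity from complete positivity is the transpose map T on 2x2 matrices: T is positive but not 2-positive. The sketch below applies \operatorname{id}_2 \otimes T to the (positive semidefinite) projector onto a Bell state and finds a negative eigenvalue.

```python
import numpy as np

bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)         # (|00> + |11>)/sqrt(2)
rho = np.outer(bell, bell)                 # positive semidefinite projector

# id (x) T: transpose the second tensor factor (indices are (i, k, j, l))
blocks = rho.reshape(2, 2, 2, 2)
partial_T = blocks.transpose(0, 3, 2, 1).reshape(4, 4)

print(np.linalg.eigvalsh(rho))             # all eigenvalues >= 0
print(np.linalg.eigvalsh(partial_T))       # contains -0.5, so id (x) T is not positive
```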


Freeman Dyson
Freeman John Dyson (15 December 1923 – 28 February 2020) was a British-American theoretical physicist and mathematician known for his works in quantum field theory, astrophysics, random matrices, mathematical formulation of quantum mechanics, condensed matter physics, nuclear physics, and nuclear engineering. He was professor emeritus in the Institute for Advanced Study in Princeton, New Jersey, and a member of the board of sponsors of the ''Bulletin of the Atomic Scientists''. Dyson originated several concepts that bear his name, such as Dyson's transform, a fundamental technique in additive number theory, which he developed as part of his proof of Mann's theorem; the Dyson tree, a hypothetical genetically engineered plant capable of growing in a comet; the Dyson series, a perturbative series where each term is represented by Feynman diagrams; the Dys ...


Quantum Mutual Information
In quantum information theory, quantum mutual information, or von Neumann mutual information, after John von Neumann, is a measure of correlation between subsystems of a quantum state. It is the quantum mechanical analog of Shannon mutual information.

Motivation

For simplicity, it will be assumed that all objects in the article are finite-dimensional. The definition of quantum mutual entropy is motivated by the classical case. For a probability distribution of two variables ''p''(''x'', ''y''), the two marginal distributions are
:p(x) = \sum_y p(x,y), \qquad p(y) = \sum_x p(x,y).
The classical mutual information ''I''(''X'':''Y'') is defined by
:I(X:Y) = S(p(x)) + S(p(y)) - S(p(x,y))
where ''S''(''q'') denotes the Shannon entropy of the probability distribution ''q''. One can calculate directly
:\begin{align} S(p(x)) + S(p(y)) &= -\left( \sum_x p(x) \log p(x) + \sum_y p(y) \log p(y) \right) \\ &= -\left( \sum_x \left( \sum_{y'} p(x,y') \log \sum_{y'} p(x,y') \right) + \sum_y \left( \ ...
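Mirroring the classical identity above, the quantum mutual information of a bipartite state \rho_{AB} is I(A:B) = S(\rho_A) + S(\rho_B) - S(\rho_{AB}). A minimal sketch for a two-qubit Bell state, where I attains its maximal value 2 \ln 2 (working in nats):

```python
import numpy as np

def entropy(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]                   # convention: 0 ln 0 = 0
    return float(-np.sum(lam * np.log(lam)))

bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)           # (|00> + |11>)/sqrt(2)
rho_AB = np.outer(bell, bell)

blocks = rho_AB.reshape(2, 2, 2, 2)          # indices (a, b, a', b')
rho_A = np.trace(blocks, axis1=1, axis2=3)   # trace out B
rho_B = np.trace(blocks, axis1=0, axis2=2)   # trace out A

I = entropy(rho_A) + entropy(rho_B) - entropy(rho_AB)
print(I, 2 * np.log(2))                      # both ~= 1.386
```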