Bhattacharyya Angle
In statistics, the Bhattacharyya angle, also called the statistical angle, is a measure of distance between two probability measures defined on a finite probability space. It is defined as

: \Delta(p,q) = \arccos \operatorname{BC}(p,q)

where p_i, q_i are the probabilities assigned to the point i, for i = 1, ..., n, and

: \operatorname{BC}(p,q) = \sum_{i=1}^n \sqrt{p_i q_i}

is the Bhattacharyya coefficient.

The Bhattacharyya angle is the geodesic distance in the orthant of the sphere S^{n-1} obtained by projecting the probability simplex onto the sphere via the transformation p_i \mapsto \sqrt{p_i}, i = 1, ..., n. This distance is compatible with the Fisher metric. It is also related to the Bures distance and the fidelity between quantum states; for two diagonal states one has

: \Delta(\rho,\sigma) = \arccos \sqrt{F(\rho,\sigma)}.

See also
* Bhattacharyya distance
* Hellinger distance
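As a concrete illustration (not part of the original entry), here is a minimal Python sketch of the coefficient and the angle for discrete distributions; the function names and example vectors are my own.

```python
# Minimal sketch (illustrative, not from the source) of the Bhattacharyya
# coefficient BC(p, q) = sum_i sqrt(p_i q_i) and the angle arccos BC(p, q).
import numpy as np

def bhattacharyya_coefficient(p, q):
    """BC(p, q) for probability vectors p and q over the same n points."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(np.sqrt(p * q)))

def bhattacharyya_angle(p, q):
    """Delta(p, q) = arccos BC(p, q); clip guards against rounding above 1."""
    return float(np.arccos(np.clip(bhattacharyya_coefficient(p, q), 0.0, 1.0)))

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(bhattacharyya_angle(p, q))  # small positive angle; exactly 0 when p == q
```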



Statistics
Statistics (from German Statistik, "description of a state, a country") is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data. In applying statistics to a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model to be studied. Populations can be diverse groups of people or objects such as "all people living in a country" or "every atom composing a crystal". Statistics deals with every aspect of data, including the planning of data collection in terms of the design of surveys and experiments (Dodge, Y. (2006), The Oxford Dictionary of Statistical Terms, Oxford University Press). When census data cannot be collected, statisticians collect data by developing specific experiment designs and survey samples. Representative sampling as ...



Probability
Probability is the branch of mathematics concerning numerical descriptions of how likely an event is to occur, or how likely it is that a proposition is true. The probability of an event is a number between 0 and 1, where, roughly speaking, 0 indicates impossibility of the event and 1 indicates certainty (Stuart, A. and Ord, K., Kendall's Advanced Theory of Statistics, Volume 1: Distribution Theory, 6th ed., 2009; Feller, W., An Introduction to Probability Theory and Its Applications, Vol. 1, 3rd ed., Wiley, 1968). The higher the probability of an event, the more likely it is that the event will occur. A simple example is the tossing of a fair (unbiased) coin. Since the coin is fair, the two outcomes ("heads" and "tails") are both equally probable; the probability of "heads" equals the probability of "tails"; and since no other outcomes are possible, the probability of either "heads" or "tails" is 1/2 (which could also be written ...


Geodesic Distance
In the mathematical field of graph theory, the distance between two vertices in a graph is the number of edges in a shortest path (also called a graph geodesic) connecting them. This is also known as the geodesic distance or shortest-path distance. Notice that there may be more than one shortest path between two vertices. If there is no path connecting the two vertices, i.e., if they belong to different connected components, then conventionally the distance is defined as infinite. In the case of a directed graph the distance d(u,v) between two vertices u and v is defined as the length of a shortest directed path from u to v consisting of arcs, provided at least one such path exists. Notice that, in contrast with the case of undirected graphs, d(u,v) does not necessarily coincide with d(v,u), so it is just a quasi-metric, and it might be the case that one is defined while the other is not.

Related concepts
A metric space defined over a set of points in terms of distances in a graph defined over th ...
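For the unweighted, undirected case described above, a breadth-first search computes the geodesic distance. The sketch below is my own illustration (the adjacency-list graph is a hypothetical example); it returns infinity when the vertices lie in different connected components.

```python
# Minimal sketch (illustrative): geodesic (shortest-path) distance in an
# unweighted, undirected graph via breadth-first search.
from collections import deque
from math import inf

def geodesic_distance(adj, source, target):
    """Number of edges on a shortest path from source to target, or inf."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        if u == target:
            return dist[u]
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return inf  # target unreachable: different connected component

adj = {0: [1], 1: [0, 2], 2: [1], 3: []}  # hypothetical example graph
print(geodesic_distance(adj, 0, 2))  # 2
print(geodesic_distance(adj, 0, 3))  # inf
```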



Orthant
In geometry, an orthant or hyperoctant is the analogue in n-dimensional Euclidean space of a quadrant in the plane or an octant in three dimensions. In general an orthant in n dimensions can be considered the intersection of n mutually orthogonal half-spaces. By independent selections of half-space signs, there are 2^n orthants in n-dimensional space. More specifically, a closed orthant in R^n is a subset defined by constraining each Cartesian coordinate to be nonnegative or nonpositive. Such a subset is defined by a system of inequalities:

: \varepsilon_1 x_1 \ge 0, \quad \varepsilon_2 x_2 \ge 0, \quad \ldots, \quad \varepsilon_n x_n \ge 0,

where each \varepsilon_i is +1 or −1. Similarly, an open orthant in R^n is a subset defined by a system of strict inequalities

: \varepsilon_1 x_1 > 0, \quad \varepsilon_2 x_2 > 0, \quad \ldots ...
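To make the sign-pattern description concrete, here is a small sketch of my own (not from the excerpt): the closed orthant containing a point is read off from the signs of its coordinates, and there are 2^n such patterns.

```python
# Minimal sketch (illustrative): identify the closed orthant of a point by
# its sign pattern (eps_1, ..., eps_n); there are 2**n patterns in total.
from itertools import product

def orthant_signs(x):
    """Return eps with eps_i in {+1, -1} and eps_i * x_i >= 0 (using +1 at 0)."""
    return tuple(1 if xi >= 0 else -1 for xi in x)

print(orthant_signs((3.0, -1.5, 0.0)))        # (1, -1, 1)
print(len(list(product((1, -1), repeat=3))))  # 2**3 = 8 orthants in R^3
```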


Fisher Metric
In information geometry, the Fisher information metric is a particular Riemannian metric which can be defined on a smooth statistical manifold, i.e., a smooth manifold whose points are probability measures defined on a common probability space. It can be used to calculate the informational difference between measurements. The metric is interesting in several respects. By Chentsov's theorem, the Fisher information metric on statistical models is the only Riemannian metric (up to rescaling) that is invariant under sufficient statistics. It can also be understood to be the infinitesimal form of the relative entropy (i.e., the Kullback–Leibler divergence); specifically, it is the Hessian of the divergence. Alternately, it can be understood as the metric induced by the flat-space Euclidean metric, after appropriate changes of variable. When extended to complex projective Hilbert space, it becomes the Fubini–Study metric; when written in terms of mixed states, it is the ...
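To tie this back to the main entry (a standard computation, stated here for convenience rather than quoted from the excerpt): on the probability simplex the substitution x_i = \sqrt{p_i} turns the Fisher metric into the round metric of a sphere, which is exactly why the Bhattacharyya angle, a great-circle distance, is compatible with the Fisher metric.

```latex
% Fisher metric on the simplex and the square-root embedding (standard fact):
\[
  ds^2_{\mathrm{Fisher}} \;=\; \sum_{i=1}^{n} \frac{(dp_i)^2}{p_i}
  \;\xrightarrow{\;x_i = \sqrt{p_i}\;}\;
  4 \sum_{i=1}^{n} (dx_i)^2,
  \qquad \sum_{i=1}^{n} x_i^2 = 1,
\]
% so the Fisher geodesic distance on the simplex is twice the Bhattacharyya
% angle: d_{\mathrm{Fisher}}(p, q) = 2 \arccos \sum_i \sqrt{p_i q_i}.
```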


Bures Metric
In mathematics, in the area of quantum information geometry, the Bures metric (named after Donald Bures) or Helstrom metric (named after Carl W. Helstrom) defines an infinitesimal distance between density matrix operators defining quantum states. It is a quantum generalization of the Fisher information metric, and is identical to the Fubini–Study metric when restricted to the pure states alone.

Definition
The Bures metric G may be defined as

: [d_B(\rho, \rho + d\rho)]^2 = \frac{1}{2} \operatorname{tr}(d\rho\, G),

where G is the Hermitian 1-form operator implicitly given by

: \rho G + G \rho = d\rho,

which is a special case of a continuous Lyapunov equation. Some of the applications of the Bures metric include that, given a target error, it allows the calculation of the minimum number of measurements needed to distinguish two different states, and the use of the volume element as a candidate for the Jeffreys prior probability density for mixed quantum states.

Bures distance
The Bures distance is the finite ve ...
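As a numerical illustration (my own sketch, not from the source), the Lyapunov equation above can be solved directly with SciPy for a small perturbation of a diagonal qubit state; the particular ρ and dρ are arbitrary choices.

```python
# Minimal sketch (illustrative): Bures line element ds^2 = (1/2) tr(d_rho G),
# with G solving the Lyapunov equation rho G + G rho = d_rho.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rho = np.diag([0.7, 0.3])                 # hypothetical mixed qubit state
drho = 1e-3 * np.array([[1.0, 0.5],
                        [0.5, -1.0]])     # small traceless perturbation

# solve_continuous_lyapunov solves A X + X A^H = Q; here A = rho, Q = drho.
G = solve_continuous_lyapunov(rho, drho)
ds2 = 0.5 * np.trace(drho @ G).real
print(ds2)  # infinitesimal squared Bures distance between rho and rho + drho
```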




Quantum Fidelity
In quantum mechanics, notably in quantum information theory, fidelity is a measure of the "closeness" of two quantum states. It expresses the probability that one state will pass a test to identify it as the other. The fidelity is not a metric on the space of density matrices, but it can be used to define the Bures metric on this space. Given two density operators \rho and \sigma, the fidelity is generally defined as the quantity

: F(\rho, \sigma) = \left(\operatorname{tr} \sqrt{\sqrt{\rho}\,\sigma\sqrt{\rho}}\right)^2.

In the special case where \rho and \sigma represent pure quantum states, namely \rho = |\psi_\rho\rangle\langle\psi_\rho| and \sigma = |\psi_\sigma\rangle\langle\psi_\sigma|, the definition reduces to the squared overlap between the states:

: F(\rho, \sigma) = |\langle\psi_\rho|\psi_\sigma\rangle|^2.

While not obvious from the general definition, the fidelity is symmetric: F(\rho,\sigma) = F(\sigma,\rho).

Motivation
Given two random variables X, Y with values (1, ..., n) (categorical random variabl ...
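A small numerical sketch of the general definition (my own, not from the source), using SciPy's matrix square root; for the diagonal states chosen here it also reproduces the squared Bhattacharyya coefficient, matching the diagonal-state identity in the main entry.

```python
# Minimal sketch (illustrative):
# F(rho, sigma) = (tr sqrt(sqrt(rho) sigma sqrt(rho)))^2.
import numpy as np
from scipy.linalg import sqrtm

def fidelity(rho, sigma):
    s = sqrtm(rho)
    return float(np.trace(sqrtm(s @ sigma @ s)).real ** 2)

rho = np.diag([0.7, 0.3])    # diagonal (commuting) density matrices:
sigma = np.diag([0.5, 0.5])  # F reduces to (sum_i sqrt(p_i q_i))^2
print(fidelity(rho, sigma))
print(float(np.sum(np.sqrt(np.diag(rho) * np.diag(sigma)))) ** 2)  # same value
```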



Quantum State
In quantum physics, a quantum state is a mathematical entity that provides a probability distribution for the outcomes of each possible measurement on a system. Knowledge of the quantum state together with the rules for the system's evolution in time exhausts all that can be predicted about the system's behavior. A mixture of quantum states is again a quantum state. Quantum states that cannot be written as a mixture of other states are called pure quantum states, while all other states are called mixed quantum states. A pure quantum state can be represented by a ray in a Hilbert space over the complex numbers, while mixed states are represented by density matrices, which are positive semidefinite operators that act on Hilbert spaces. Pure states are also known as state vectors or wave functions, the latter term applying particularly when they are represented as functions of position or momentum. For example, when dealing with the energy spectrum of the electron in a hydrogen at ...


Bhattacharyya Distance
In statistics, the Bhattacharyya distance measures the similarity of two probability distributions. It is closely related to the Bhattacharyya coefficient, which is a measure of the amount of overlap between two statistical samples or populations. It is not a metric, despite being named a "distance", since it does not obey the triangle inequality.

Definition
For probability distributions P and Q on the same domain \mathcal{X}, the Bhattacharyya distance is defined as

: D_B(P,Q) = -\ln \left( BC(P,Q) \right)

where

: BC(P,Q) = \sum_{x \in \mathcal{X}} \sqrt{P(x) Q(x)}

is the Bhattacharyya coefficient for discrete probability distributions. For continuous probability distributions, with P(dx) = p(x)\,dx and Q(dx) = q(x)\,dx where p(x) and q(x) are the probability density functions, the Bhattacharyya coefficient is defined as

: BC(P,Q) = \int_{\mathcal{X}} \sqrt{p(x) q(x)}\, dx.

More generally, given two probability measures P, Q on a measurable space (\mathcal{X}, \mathcal{B}), let \lambda be a (sigma-finite) measure such that P and Q are absolute ...
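For the discrete case, a minimal sketch of my own (not from the source); note that D_B grows without bound as the overlap BC shrinks toward zero.

```python
# Minimal sketch (illustrative): discrete Bhattacharyya coefficient and
# distance D_B(P, Q) = -ln BC(P, Q).
import numpy as np

def bhattacharyya_distance(p, q):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    bc = np.sum(np.sqrt(p * q))  # overlap coefficient in [0, 1]
    return float(-np.log(bc))    # diverges as bc -> 0

print(bhattacharyya_distance([0.5, 0.5], [0.9, 0.1]))  # > 0; 0 iff P == Q
```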


Hellinger Distance
In probability and statistics, the Hellinger distance (closely related to, although different from, the Bhattacharyya distance) is used to quantify the similarity between two probability distributions. It is a type of f-divergence. The Hellinger distance is defined in terms of the Hellinger integral, which was introduced by Ernst Hellinger in 1909. It is sometimes called the Jeffreys distance.

Definition
Measure theory
To define the Hellinger distance in terms of measure theory, let P and Q denote two probability measures on a measure space \mathcal{X} that are absolutely continuous with respect to an auxiliary measure \lambda. Such a measure always exists, e.g., \lambda = (P + Q). The square of the Hellinger distance between P and Q is defined as the quantity

: H^2(P,Q) = \frac{1}{2} \int_{\mathcal{X}} \left(\sqrt{p(x)} - \sqrt{q(x)}\right)^2 \lambda(dx).

Here, P(dx) = p(x)\,\lambda(dx) and Q(dx) = q(x)\,\lambda(dx), i.e., p and q are the Radon–Nikodym derivatives of P and Q respe ...
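For discrete distributions the integral reduces to a sum, giving the identity H^2 = 1 - BC with the Bhattacharyya coefficient; a minimal sketch (my own illustration, not from the source):

```python
# Minimal sketch (illustrative): discrete Hellinger distance
# H(p, q) = (1/sqrt(2)) * || sqrt(p) - sqrt(q) ||_2, and the identity
# H^2(p, q) = 1 - BC(p, q) with the Bhattacharyya coefficient.
import numpy as np

def hellinger(p, q):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2))

p, q = [0.5, 0.5], [0.9, 0.1]
print(hellinger(p, q))
print(float(np.sqrt(1 - np.sum(np.sqrt(np.multiply(p, q))))))  # same via 1 - BC
```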


Statistical Distance
In statistics, probability theory, and information theory, a statistical distance quantifies the distance between two statistical objects, which can be two random variables, two probability distributions or samples, or an individual sample point and a population or a wider sample of points. A distance between populations can be interpreted as measuring the distance between two probability distributions, and hence such distances are essentially measures of distances between probability measures. Where statistical distance measures relate to the differences between random variables, these may have statistical dependence (Dodge, Y. (2003), entry for distance), and hence these distances are not directly related to measures of distances between probability measures. Again, a measure of distance between random variables may relate to the extent of dependence between them, rather than to their individual values. Statistical distance measures are not typically m ...