Large Deviation Principle
In mathematics, specifically in large deviations theory, a rate function is a function used to quantify the probabilities of rare events. It is required to have several properties which assist in the formulation of the large deviation principle. In some sense, the large deviation principle is an analogue of weak convergence of probability measures, but one which takes account of how well the rare events behave. A rate function is also called a Cramér function, after the Swedish probabilist Harald Cramér.
Definitions
Rate function
An extended real-valued function ''I'' : ''X'' → [0, +∞] defined on a Hausdorff topological space ''X'' is said to be a rate function if it is not identically +∞ and is lower semi-continuous, i.e. all the sub-level sets
:\{ x \in X \mid I(x) \leq c \}, \qquad c \geq 0,
are closed in ''X''. If, furthermore, they are compact, then ''I'' is said to be a good rate function. A family of probability measures (''μ''<sub>''δ''</sub>)<sub>''δ'' > 0</sub> ...
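To make the role of a rate function concrete, here is a minimal numerical sketch (not taken from the article) based on Cramér's theorem for i.i.d. Bernoulli(''p'') variables: the rate function is I(a) = a log(a/p) + (1 - a) log((1 - a)/(1 - p)), and -(1/n) log P(S_n/n ≥ a) approaches I(a) as n grows. The parameter values p = 0.5 and a = 0.7 are illustrative assumptions.

```python
# Sketch: the Cramér rate function for i.i.d. Bernoulli(p) variables, compared
# with the exact tail probability of the sample mean. Parameter choices are
# illustrative, not taken from the article.
import numpy as np
from scipy.stats import binom

p, a = 0.5, 0.7  # success probability and rare-event threshold (a > p)

def rate_function(a, p):
    """Legendre transform of the Bernoulli log-moment generating function."""
    return a * np.log(a / p) + (1 - a) * np.log((1 - a) / (1 - p))

for n in (100, 1000, 10000):
    k = int(np.ceil(n * a))
    # log P(S_n >= k), computed stably via the survival function
    log_tail = binom.logsf(k - 1, n, p)
    print(n, -log_tail / n)          # should approach I(a) as n grows

print("I(a) =", rate_function(a, p))  # ~0.0823 for p = 0.5, a = 0.7
```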
Mathematics
Mathematics is an area of knowledge that includes the topics of numbers, formulas and related structures, shapes and the spaces in which they are contained, and quantities and their changes. These topics are represented in modern mathematics with the major subdisciplines of number theory, algebra, geometry, and analysis, respectively. There is no general consensus among mathematicians about a common definition for their academic discipline. Most mathematical activity involves the discovery of properties of abstract objects and the use of pure reason to prove them. These objects consist of either abstractions from nature or, in modern mathematics, entities that are stipulated to have certain properties, called axioms. A ''proof'' consists of a succession of applications of deductive rules to already established results. These results include previously proved theorems, axioms, and, in case of abstraction from nature, some basic properties that are considered true starting points of ...
Continuous Function
In mathematics, a continuous function is a function such that a continuous variation (that is, a change without jump) of the argument induces a continuous variation of the value of the function. This means that there are no abrupt changes in value, known as ''discontinuities''. More precisely, a function is continuous if arbitrarily small changes in its value can be assured by restricting to sufficiently small changes of its argument. A discontinuous function is a function that is not continuous. Up until the 19th century, mathematicians largely relied on intuitive notions of continuity, and considered only continuous functions. The epsilon–delta definition of a limit was introduced to formalize the definition of continuity. Continuity is one of the core concepts of calculus and mathematical analysis, where arguments and values of functions are real and complex numbers. The concept has been generalized to functions between metric spaces and between topological spaces. The latter are the most general continuous functions, and their definition is the basis of topology.
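As a toy illustration of the epsilon–delta idea (not from the article), the sketch below numerically searches for a δ that works for a given ε, for the illustrative choice f(x) = x² at the point c = 2. A finite sample can only give evidence of continuity, not a proof.

```python
# Sketch: a numerical illustration of the epsilon-delta definition of
# continuity. For f(x) = x**2 at c = 2, given epsilon we search for a delta
# such that |x - c| < delta implies |f(x) - f(c)| < epsilon on a sample grid.
# (A finite sample gives evidence only; the function and point are
# illustrative choices, not from the article.)
import numpy as np

def find_delta(f, c, eps, delta0=1.0, shrink=0.5, samples=10_001):
    delta = delta0
    while delta > 1e-12:
        xs = np.linspace(c - delta, c + delta, samples)
        if np.all(np.abs(f(xs) - f(c)) < eps):
            return delta
        delta *= shrink  # shrink the candidate neighbourhood and retry
    return None

f = lambda x: x**2
for eps in (1.0, 0.1, 0.01):
    print(eps, find_delta(f, 2.0, eps))
```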
Fields Institute
The Fields Institute for Research in Mathematical Sciences, commonly known simply as the Fields Institute, is an international centre for scientific research in the mathematical sciences. It is an independent non-profit with strong ties to 20 Ontario universities, including the University of Toronto, where it occupies a purpose-built building on the St. George campus. Fields was established in 1992, and was briefly based at the University of Waterloo before relocating to Toronto in 1995. The institute is named after the Canadian mathematician John Charles Fields, after whom the Fields Medal is also named. Fields' name was given to the institute in recognition of his contributions to mathematics and his work on behalf of high-level mathematical scholarship in Canada. As a centre for mathematical activity, the institute brings together mathematicians from Canada and abroad. It also supports collaboration between professional mathematicians and researchers in other domains, such as statistics ...
Extreme Value Theory
Extreme value theory or extreme value analysis (EVA) is a branch of statistics dealing with the extreme deviations from the median of probability distributions. It seeks to assess, from a given ordered sample of a given random variable, the probability of events that are more extreme than any previously observed. Extreme value analysis is widely used in many disciplines, such as structural engineering, finance, earth sciences, traffic prediction, and geological engineering. For example, EVA might be used in the field of hydrology to estimate the probability of an unusually large flooding event, such as the 100-year flood. Similarly, for the design of a breakwater, a coastal engineer would seek to estimate the 50-year wave and design the structure accordingly.
Data analysis
Two main approaches exist for practical extreme value analysis. The first method relies on deriving block maxima (minima) series as a preliminary step. In many situations it is customary and convenient to extract ...
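The block-maxima approach can be sketched in a few lines. The snippet below (an illustration, not from the article) simulates 50 "years" of daily observations, fits a generalized extreme value (GEV) distribution to the yearly maxima with SciPy, and reads off an approximate 100-year return level; the simulated data, block size, and seed are assumptions made for the example.

```python
# Sketch of the block-maxima approach: simulate daily observations, take
# yearly (block) maxima, fit a generalized extreme value (GEV) distribution,
# and read off an approximate 100-year return level. The simulated data and
# block size are illustrative assumptions, not from the article.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
daily = rng.gumbel(loc=10.0, scale=2.0, size=(50, 365))  # 50 "years" of data
block_maxima = daily.max(axis=1)                         # one maximum per year

shape, loc, scale = genextreme.fit(block_maxima)
return_level_100 = genextreme.ppf(1 - 1/100, shape, loc=loc, scale=scale)
print("fitted GEV (shape, loc, scale):", shape, loc, scale)
print("approx. 100-year return level:", return_level_100)
```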
State Space
A state space is the set of all possible configurations of a system. It is a useful abstraction for reasoning about the behavior of a given system and is widely used in the fields of artificial intelligence and game theory. For instance, the toy problem Vacuum World has a discrete finite state space in which there is a limited set of configurations that the vacuum and dirt can be in. A "counter" system, where states are the natural numbers starting at 1 and are incremented over time, has an infinite discrete state space. The angular position of an undamped pendulum is a continuous (and therefore infinite) state space.
Definition
In the theory of dynamical systems, the state space of a discrete system defined by a function ''ƒ'' can be modeled as a directed graph where each possible state of the dynamical system is represented by a vertex, with a directed edge from ''a'' to ''b'' if and only if ''ƒ''(''a'') = ''b''. This is known as a state diagram. For a continuous ...
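To make the state-diagram construction concrete, here is a small sketch (illustrative, not from the article) that builds the directed graph of a discrete system as an adjacency dictionary and follows its edges to trace an orbit; the particular map f(a) = a² mod 11 is an assumption chosen for the example.

```python
# Sketch: the state diagram of a discrete dynamical system f as a directed
# graph, stored as an adjacency dictionary with an edge a -> f(a) for every
# state a. The particular map f(a) = a**2 mod 11 is an illustrative choice.
def state_diagram(f, states):
    """Directed graph of the system: each state maps to its unique successor."""
    return {a: f(a) for a in states}

f = lambda a: (a * a) % 11
graph = state_diagram(f, range(11))
print(graph)   # e.g. 0 -> 0, 1 -> 1, 2 -> 4, 3 -> 9, 4 -> 5, ...

# Following edges repeatedly traces the orbit of a state.
def orbit(graph, start, steps=6):
    path, a = [start], start
    for _ in range(steps):
        a = graph[a]
        path.append(a)
    return path

print(orbit(graph, 2))  # [2, 4, 5, 3, 9, 4, 5]
```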
Probability Space
In probability theory, a probability space or a probability triple (\Omega, \mathcal{F}, P) is a mathematical construct that provides a formal model of a random process or "experiment". For example, one can define a probability space which models the throwing of a die. A probability space consists of three elements (Stroock, D. W. (1999). ''Probability Theory: An Analytic View''. Cambridge University Press):
# A sample space, \Omega, which is the set of all possible outcomes.
# An event space, which is a set of events \mathcal{F}, an event being a set of outcomes in the sample space.
# A probability function, which assigns each event in the event space a probability, which is a number between 0 and 1.
In order to provide a sensible model of probability, these elements must satisfy a number of axioms, detailed in this article. In the example of the throw of a standard die, we would take the sample space to be \{1, 2, 3, 4, 5, 6\}. For the event space, we could simply use the set of all subsets of the sample space ...
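The die example can be written out as explicit code. The sketch below (an illustration, not from the article) builds the sample space, the event space as the set of all subsets, and a uniform probability function, and spot-checks normalisation and finite additivity.

```python
# Sketch: the die example as an explicit finite probability space: sample
# space, event space (all subsets), and a probability function, with a quick
# check of the basic axioms. Purely illustrative code, not from the article.
from itertools import combinations
from fractions import Fraction

omega = frozenset({1, 2, 3, 4, 5, 6})                        # sample space

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

events = powerset(omega)                                     # event space
P = {A: Fraction(len(A), len(omega)) for A in events}        # uniform probability

assert P[frozenset()] == 0 and P[omega] == 1                 # normalisation
A, B = frozenset({1, 2}), frozenset({5, 6})                  # disjoint events
assert P[A | B] == P[A] + P[B]                               # finite additivity
print("P(even) =", P[frozenset({2, 4, 6})])                  # 1/2
```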
Cumulant
In probability theory and statistics, the cumulants of a probability distribution are a set of quantities that provide an alternative to the ''moments'' of the distribution. Any two probability distributions whose moments are identical will have identical cumulants as well, and vice versa. The first cumulant is the mean, the second cumulant is the variance, and the third cumulant is the same as the third central moment. But fourth and higher-order cumulants are not equal to central moments. In some cases theoretical treatments of problems in terms of cumulants are simpler than those using moments. In particular, when two or more random variables are statistically independent, the ''n''-th-order cumulant of their sum is equal to the sum of their ''n''-th-order cumulants. As well, the third and higher-order cumulants of a normal distribution are zero, and it is the only distribution with this property. Just as for moments, where ''joint moments'' are used for collections of random variables ...
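The additivity property is easy to check numerically. The sketch below (illustrative, not from the article) estimates the first three cumulants (mean, variance, third central moment) from large samples of two independent variables and compares the cumulants of their sum with the sums of their cumulants; the chosen distributions and sample size are assumptions for the example.

```python
# Sketch: the first three cumulants (mean, variance, third central moment)
# estimated from samples, and a numerical check that cumulants of a sum of
# independent variables add. Sample sizes and distributions are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def first_three_cumulants(x):
    m = x.mean()
    return m, x.var(), ((x - m) ** 3).mean()

x = rng.exponential(scale=2.0, size=1_000_000)   # independent samples
y = rng.gamma(shape=3.0, scale=1.0, size=1_000_000)

kx, ky, kxy = map(first_three_cumulants, (x, y, x + y))
for n, (a, b, c) in enumerate(zip(kx, ky, kxy), start=1):
    print(f"kappa_{n}: {a + b:.4f} (sum of parts) vs {c:.4f} (of the sum)")
```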
Convex Conjugate
In mathematics and mathematical optimization, the convex conjugate of a function is a generalization of the Legendre transformation which applies to non-convex functions. It is also known as the Legendre–Fenchel transformation, Fenchel transformation, or Fenchel conjugate (after Adrien-Marie Legendre and Werner Fenchel). It allows in particular for a far-reaching generalization of Lagrangian duality.
Definition
Let X be a real topological vector space and let X^* be the dual space to X. Denote by
:\langle \cdot , \cdot \rangle : X^* \times X \to \mathbb{R}
the canonical dual pairing, which is defined by \left( x^*, x \right) \mapsto x^*(x). For a function f : X \to \mathbb{R} \cup \{ -\infty, +\infty \} taking values on the extended real number line, its convex conjugate is the function
:f^* : X^* \to \mathbb{R} \cup \{ -\infty, +\infty \}
whose value at x^* \in X^* is defined to be the supremum:
:f^*\left( x^* \right) := \sup \left\{ \langle x^*, x \rangle - f(x) : x \in X \right\},
or, equivalently, in terms of the infimum:
:f^*\left( x^* \right) := - \inf \left\{ f(x) - \langle x^*, x \rangle : x \in X \right\}.
This definition can be ...
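As a quick numerical illustration (not from the article), the conjugate can be approximated by a brute-force supremum over a grid; for the self-conjugate function f(x) = x²/2 the computed values should match y²/2. The grid and test points are assumptions for the example.

```python
# Sketch: a brute-force convex conjugate on a grid,
#   f*(y) = sup_x ( y*x - f(x) ),
# illustrated with f(x) = x**2 / 2, which is its own conjugate. Grid ranges
# are illustrative assumptions; a real computation would need care at the
# boundary of the grid.
import numpy as np

def numeric_conjugate(f, xs):
    """Return a function y -> max over the grid xs of (y*x - f(x))."""
    fx = f(xs)
    return lambda y: np.max(y * xs - fx)

xs = np.linspace(-10, 10, 20_001)
f = lambda x: 0.5 * x**2
f_star = numeric_conjugate(f, xs)

for y in (-2.0, 0.0, 1.0, 3.0):
    print(y, f_star(y), 0.5 * y**2)   # the two columns should nearly agree
```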
Exponentially Equivalent Measures
In mathematics, exponential equivalence of measures is how two sequences or families of probability measures are "the same" from the point of view of large deviations theory.
Definition
Let (M, d) be a metric space and consider two one-parameter families of probability measures on M, say (\mu_\varepsilon)_{\varepsilon > 0} and (\nu_\varepsilon)_{\varepsilon > 0}. These two families are said to be exponentially equivalent if there exist
* a one-parameter family of probability spaces (\Omega, \Sigma_\varepsilon, P_\varepsilon)_{\varepsilon > 0},
* two families of M-valued random variables (Y_\varepsilon)_{\varepsilon > 0} and (Z_\varepsilon)_{\varepsilon > 0},
such that
* for each \varepsilon > 0, the P_\varepsilon-law (i.e. the push-forward measure) of Y_\varepsilon is \mu_\varepsilon, and the P_\varepsilon-law of Z_\varepsilon is \nu_\varepsilon,
* for each \delta > 0, "Y_\varepsilon and Z_\varepsilon are further than \delta apart" is a \Sigma_\varepsilon-measurable event, i.e.
::\big\{ d(Y_\varepsilon, Z_\varepsilon) > \delta \big\} \in \Sigma_\varepsilon,
* for each \delta > 0,
::\limsup_{\varepsilon \downarrow 0} \varepsilon \log P_\varepsilon \big( d(Y_\varepsilon, Z_\varepsilon) > \delta \big) = -\infty.
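To see the defining superexponential condition in action, here is a toy numerical sketch (an illustration, not from the article): for the families Y_ε = √ε·G and Z_ε = Y_ε + ε·H, with G and H independent standard normals, the distance is ε|H| and ε log P(ε|H| > δ) tends to -∞. The families, δ, and the ε values are assumptions made for the example.

```python
# Sketch: a numerical look at the defining condition of exponential
# equivalence for the toy families Y_eps = sqrt(eps)*G and
# Z_eps = Y_eps + eps*H, with G, H independent standard normals: here
# d(Y_eps, Z_eps) = eps*|H|, and eps * log P(eps*|H| > delta) -> -infinity.
# The families, delta, and the eps values are illustrative assumptions.
import numpy as np
from scipy.stats import norm

delta = 0.1
for eps in (0.5, 0.1, 0.05, 0.01, 0.001):
    # P(eps*|H| > delta) = 2 * P(H > delta/eps), computed on the log scale
    log_p = np.log(2.0) + norm.logsf(delta / eps)
    print(eps, eps * log_p)   # ~ -delta**2/(2*eps), diverging to -infinity
```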
Functional (mathematics)
In mathematics, a functional (as a noun) is a certain type of function. The exact definition of the term varies depending on the subfield (and sometimes even the author).
* In linear algebra, it is synonymous with linear forms, which are linear mappings from a vector space V into its field of scalars (that is, an element of the dual space V^*): "Let ''E'' be a free module over a commutative ring ''A''. We view ''A'' as a free module of rank 1 over itself. By the dual module ''E''∨ of ''E'' we shall mean the module Hom(''E'', ''A''). Its elements will be called functionals. Thus a functional on ''E'' is an ''A''-linear map ''f'' : ''E'' → ''A''."
* In functional analysis and related fields, it refers more generally to a mapping from a space X into the field of real or complex numbers: "A numerical function ''f''(''x'') defined on a normed linear space ''R'' will be called a ''functional''. A functional ''f''(''x'') is said to be ''linear'' ...
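As a small illustration (not from the article), the sketch below contrasts a linear functional on R³, given by pairing with a fixed vector (an element of the dual space), with a nonlinear functional, the Euclidean norm; the vectors and coefficients are assumptions chosen for the example.

```python
# Sketch: two functionals on R^3, a linear one (pairing with a fixed vector,
# i.e. an element of the dual space) and a nonlinear one (the Euclidean norm),
# together with a numerical check of linearity. Vectors and weights are
# illustrative choices.
import numpy as np

w = np.array([1.0, -2.0, 0.5])                         # fixed dual vector
linear_functional = lambda v: float(w @ v)             # v -> <w, v>
norm_functional = lambda v: float(np.linalg.norm(v))   # nonlinear functional

x, y = np.array([1.0, 0.0, 2.0]), np.array([0.0, 3.0, -1.0])
a, b = 2.0, -1.5

lhs = linear_functional(a * x + b * y)
rhs = a * linear_functional(x) + b * linear_functional(y)
print(np.isclose(lhs, rhs))                            # True: linearity holds
print(np.isclose(norm_functional(a * x + b * y),
                 a * norm_functional(x) + b * norm_functional(y)))  # False
```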
Tilted Large Deviation Principle
In mathematics, specifically in large deviations theory, the tilted large deviation principle is a result that allows one to generate a new large deviation principle from an old one by "tilting", i.e. integration against an exponential functional. It can be seen as an alternative formulation of Varadhan's lemma.
Statement of the theorem
Let ''X'' be a Polish space (i.e., a separable, completely metrizable topological space), and let (''μ''<sub>''ε''</sub>)<sub>''ε'' > 0</sub> be a family of probability measures on ''X'' that satisfies the large deviation principle with rate function ''I'' : ''X'' → [0, +∞]. Let ''F'' : ''X'' → R be a continuous function ...
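For orientation, here is a hedged sketch of how the statement usually continues in the literature (the precise hypotheses on ''F'', such as boundedness from above, are omitted here): the tilted measures and the resulting rate function are

```latex
% Sketch of the usual form of the tilted LDP (standard in the literature;
% hypotheses on F, e.g. boundedness/tail conditions, are omitted here).
\[
  \nu_\varepsilon(A) \;=\;
  \frac{\int_A e^{F(x)/\varepsilon}\,\mathrm{d}\mu_\varepsilon(x)}
       {\int_X e^{F(x)/\varepsilon}\,\mathrm{d}\mu_\varepsilon(x)},
  \qquad A \subseteq X \text{ measurable},
\]
\[
  \text{and } (\nu_\varepsilon)_{\varepsilon>0} \text{ satisfies the LDP with rate function}
  \quad
  I^F(x) \;=\; \sup_{y \in X}\bigl[F(y) - I(y)\bigr] \;-\; \bigl[F(x) - I(x)\bigr].
\]
```

In words, the tilted rate function is the old one shifted by F and renormalised by the supremum of F - I.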
Projective Limit
In mathematics, the inverse limit (also called the projective limit) is a construction that allows one to "glue together" several related objects, the precise gluing process being specified by morphisms between the objects. Thus, inverse limits can be defined in any category, although their existence depends on the category that is considered. They are a special case of the concept of limit in category theory. By working in the dual category, that is, by reversing the arrows, an inverse limit becomes a direct limit or ''inductive limit'', and a ''limit'' becomes a colimit.
Formal definition
Algebraic objects
We start with the definition of an inverse system (or projective system) of groups and homomorphisms. Let (I, \leq) be a directed poset (not all authors require ''I'' to be directed). Let (''A''<sub>''i''</sub>)<sub>''i''∈''I''</sub> be a family of groups and suppose we have a family of homomorphisms f_{ij} : A_j \to A_i for all i \leq j (note the order) with the following properties:
# f_{ii} is the identity on A_i, ...
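To ground the definition, here is a small computational sketch (an illustration, not from the article) using the inverse system A_n = Z/2^n Z with the reduction maps as transition homomorphisms: it spot-checks the compatibility of the maps and exhibits a compatible sequence, i.e. an element of the inverse limit (the 2-adic integer -1). The depth N is an assumption for the example.

```python
# Sketch: a finite stage of the inverse system A_n = Z / 2**n Z with the
# reduction maps f_{m,n}: A_n -> A_m (m <= n), a check of the compatibility
# conditions, and a compatible sequence, i.e. an element of the inverse limit
# (here the 2-adic expansion of -1). The depth N is an illustrative choice.
N = 8

def f(m, n, a):
    """Transition map f_{m,n}: Z/2^n -> Z/2^m, reduction mod 2**m (m <= n)."""
    assert m <= n
    return a % 2**m

# Compatibility: f_{m,n} o f_{n,k} = f_{m,k} for m <= n <= k (spot check)
for a in range(2**N):
    assert f(2, 5, f(5, N, a)) == f(2, N, a)

# An element of the inverse limit is a sequence (a_1, a_2, ...) with
# a_m = f_{m,n}(a_n) for all m <= n. The sequence a_n = 2**n - 1 works:
seq = [2**n - 1 for n in range(1, N + 1)]
assert all(f(m, n, seq[n - 1]) == seq[m - 1]
           for n in range(1, N + 1) for m in range(1, n + 1))
print("compatible sequence (the 2-adic integer -1):", seq)
```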