Sub-Gaussian Distribution
In probability theory, a sub-Gaussian distribution is a probability distribution with strong tail decay. Informally, the tails of a sub-Gaussian distribution are dominated by (i.e. decay at least as fast as) the tails of a Gaussian. This property gives sub-Gaussian distributions their name. Formally, the probability distribution of a random variable ''X'' is called sub-Gaussian if there is a positive constant ''C'' such that for every t \geq 0,
: \operatorname{P}(|X| \geq t) \leq 2 \exp\left(-\frac{t^2}{C^2}\right).
Sub-Gaussian properties
Let ''X'' be a random variable. The following conditions are equivalent:
# \operatorname{P}(|X| \geq t) \leq 2 \exp\left(-\frac{t^2}{K_1^2}\right) for all t \geq 0, where K_1 is a positive constant;
# \operatorname{E}\exp\left(\frac{X^2}{K_2^2}\right) \leq 2, where K_2 is a positive constant;
# \operatorname{E}|X|^p \leq 2K_3^p \Gamma\left(\frac{p}{2}+1\right) for all p \geq 1, where K_3 is a positive constant.
''Proof''. (1)\implies(3) By the layer cake representation,
: \operatorname{E}|X|^p = \int_0^\infty \operatorname{P}(|X|^p \geq t) \, \mathrm{d}t ...
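As a quick numerical check (a sketch added for illustration, not part of the original article), the following Python snippet verifies tail condition (1) by Monte Carlo for a standard normal variable. The choice C = √2 is an assumption of this example, coming from the standard Gaussian tail bound P(|X| ≥ t) ≤ 2e^(−t²/2).

```python
# Sketch: check P(|X| >= t) <= 2 exp(-t^2 / C^2) for X ~ N(0,1), with C = sqrt(2)
# (an assumed, textbook choice of constant); illustration only.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.standard_normal(1_000_000)  # a canonical sub-Gaussian variable

C = np.sqrt(2.0)
for t in [0.5, 1.0, 2.0, 3.0]:
    empirical = np.mean(np.abs(samples) >= t)  # Monte Carlo estimate of P(|X| >= t)
    bound = 2.0 * np.exp(-t**2 / C**2)         # sub-Gaussian tail bound
    print(f"t={t}: P(|X|>=t) ~= {empirical:.4f} <= {bound:.4f}")
```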



Probability Theory
Probability theory is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory treats the concept in a rigorous mathematical manner by expressing it through a set of axioms. Typically these axioms formalise probability in terms of a probability space, which assigns a measure taking values between 0 and 1, termed the probability measure, to a set of outcomes called the sample space. Any specified subset of the sample space is called an event. Central subjects in probability theory include discrete and continuous random variables, probability distributions, and stochastic processes (which provide mathematical abstractions of non-deterministic or uncertain processes or measured quantities that may either be single occurrences or evolve over time in a random fashion). Although it is not possible to perfectly predict random events, much can be said about their behavior. Two major results in probability ...



Probability Distribution
In probability theory and statistics, a probability distribution is the mathematical function that gives the probabilities of occurrence of different possible outcomes for an experiment. It is a mathematical description of a random phenomenon in terms of its sample space and the probabilities of events (subsets of the sample space). For instance, if ''X'' is used to denote the outcome of a coin toss ("the experiment"), then the probability distribution of ''X'' would take the value 0.5 (1 in 2 or 1/2) for ''X'' = heads, and 0.5 for ''X'' = tails (assuming that the coin is fair). Examples of random phenomena include the weather conditions at some future date, the height of a randomly selected person, the fraction of male students in a school, the results of a survey to be conducted, etc.
Introduction
A probability distribution is a mathematical description of the probabilities of events, subsets of the sample space. The sample space, often denoted by \Omega, is the set of all possible outcomes of a random phe ...
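To make the coin-toss example concrete, here is a minimal simulation sketch (added for illustration; the names are arbitrary):

```python
# Minimal sketch of the fair-coin distribution described above.
import random

random.seed(0)
tosses = [random.choice(["heads", "tails"]) for _ in range(100_000)]
for outcome in ("heads", "tails"):
    # Empirical frequencies should both be close to the assigned probability 0.5.
    print(outcome, tosses.count(outcome) / len(tosses))
```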



Constant (mathematics)
In mathematics, the word constant conveys multiple meanings. As an adjective, it refers to non-variance (i.e. unchanging with respect to some other value); as a noun, it has two different meanings:
* A fixed and well-defined number or other non-changing mathematical object. The terms ''mathematical constant'' or ''physical constant'' are sometimes used to distinguish this meaning.
* A function whose value remains unchanged (i.e., a constant function). Such a constant is commonly represented by a variable which does not depend on the main variable(s) in question.
For example, a general quadratic function is commonly written as:
: a x^2 + b x + c\, ,
where ''a'', ''b'' and ''c'' are constants (or parameters), and ''x'' a variable, a placeholder for the argument of the function being studied. A more explicit way to denote this function is
: x \mapsto a x^2 + b x + c\, ,
which makes the function-argument status of ''x'' (and by extension the constancy of ''a'', ''b'' and ''c'') clear. In this example ''a'', ''b'' and ''c'' are co ...
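The split between the constants ''a'', ''b'', ''c'' and the variable ''x'' can be mirrored directly in code; a small added sketch (function and variable names are arbitrary):

```python
# Illustrative sketch: a, b, c are fixed (constant) once the function is built;
# x remains the free variable, the argument of the function.
def make_quadratic(a, b, c):
    """Return the function x -> a*x**2 + b*x + c with a, b, c held constant."""
    return lambda x: a * x**2 + b * x + c

f = make_quadratic(2, -3, 1)  # a=2, b=-3, c=1 are this f's constants
print(f(0), f(1), f(2))       # -> 1 0 3
```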


Layer Cake Representation
In mathematics, the layer cake representation of a non-negative, real-valued measurable function f defined on a measure space (\Omega,\mathcal{A},\mu) is the formula
: f(x) = \int_0^\infty 1_{L(f,t)}(x) \, \mathrm{d}t, for all x \in \Omega,
where 1_E denotes the indicator function of a subset E \subseteq \Omega and L(f,t) denotes the super-level set
: L(f, t) = \{ y \in \Omega \mid f(y) \geq t \}.
The layer cake representation follows easily from observing that
: 1_{L(f,t)}(x) = 1_{[0, f(x)]}(t)
and then using the formula
: f(x) = \int_0^{f(x)} \mathrm{d}t.
The layer cake representation takes its name from the representation of the value f(x) as the sum of contributions from the "layers" L(f,t): "layers"/values t below f(x) contribute to the integral, while values t above f(x) do not. It is a generalization of Cavalieri's principle and is also known under this name. An important consequence of the layer cake representation is the identity
: \int_\Omega f(x) \, \mathrm{d}\mu(x) = \int_0^\infty \mu(\{ x \in \Omega \mid f(x) > t \}) \, \mathrm{d}t,
which follows from it by applying the Fubini–Ton ...
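As a numerical sanity check of the final identity, here is an added sketch assuming Ω = [0,1] with Lebesgue measure and the example function f(x) = x² (both choices are assumptions of this illustration):

```python
# Sketch: verify  integral of f over Omega  ==  integral over t of mu({f > t})
# for f(x) = x^2 on [0,1]; exact value of both sides is 1/3.
import numpy as np

x = np.linspace(0.0, 1.0, 10_001)  # Omega = [0, 1], Lebesgue measure
f = x**2

lhs = np.trapz(f, x)  # direct integral of f
t = np.linspace(0.0, 1.0, 10_001)
mu_superlevel = [np.trapz((f > s).astype(float), x) for s in t]  # mu({f > t}) = 1 - sqrt(t)
rhs = np.trapz(mu_superlevel, t)  # layer cake side

print(lhs, rhs)  # both ~0.3333
```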



Markov's Inequality
In probability theory, Markov's inequality gives an upper bound for the probability that a non-negative function of a random variable is greater than or equal to some positive constant. It is named after the Russian mathematician Andrey Markov, although it appeared earlier in the work of Pafnuty Chebyshev (Markov's teacher), and many sources, especially in analysis, refer to it as Chebyshev's inequality (sometimes calling it the first Chebyshev inequality, while referring to Chebyshev's inequality as the second Chebyshev inequality) or Bienaymé's inequality. Markov's inequality (and other similar inequalities) relate probabilities to expectations, and provide (frequently loose but still useful) bounds for the cumulative distribution function of a random variable.
Statement
If X is a nonnegative random variable and a > 0, then the probability that X is at least a is at most the expectation of X divided by a:
: \operatorname{P}(X \geq a) \leq \frac{\operatorname{E}(X)}{a}.
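A brief empirical illustration of the statement (an added sketch; the exponential distribution is an arbitrary choice of nonnegative variable):

```python
# Sketch: compare P(X >= a) with Markov's bound E[X]/a for X ~ Exponential(1).
import numpy as np

rng = np.random.default_rng(1)
X = rng.exponential(scale=1.0, size=1_000_000)  # nonnegative, E[X] = 1

for a in [1.0, 2.0, 5.0]:
    empirical = np.mean(X >= a)  # exactly exp(-a) in the limit
    bound = X.mean() / a         # Markov: P(X >= a) <= E[X]/a
    print(f"a={a}: P(X>=a) ~= {empirical:.4f} <= {bound:.4f}")
```

The bound is quite loose here (e^(−a) versus 1/a), matching the "frequently loose but still useful" caveat above.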


Orlicz Space
In mathematical analysis, and especially in real, harmonic analysis and functional analysis, an Orlicz space is a type of function space which generalizes the ''L''''p'' spaces. Like the ''L''''p'' spaces, they are Banach spaces. The spaces are named for Władysław Orlicz, who was the first to define them in 1932. Besides the ''L''''p'' spaces, a variety of function spaces arising naturally in analysis are Orlicz spaces. One such space ''L'' log⁺ ''L'', which arises in the study of Hardy–Littlewood maximal functions, consists of measurable functions ''f'' such that the integral
: \int_{\mathbb{R}^n} |f(x)| \log^+ |f(x)| \, dx < \infty.
Here log⁺ is the positive part of the logarithm. Also included in the class of Orlicz spaces are many of the most important ...
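For concreteness, a small added sketch of the log⁺ integrand; the example f(x) = 1/√x on (0,1] is an assumption of this illustration (it lies in L log⁺ L of the unit interval, with exact integral 2):

```python
# Sketch: log^+ u = max(log u, 0); check that f(x) = 1/sqrt(x) on (0,1]
# has a finite L log+ L integral (exact value 2).
import numpy as np

def log_plus(u):
    return np.maximum(np.log(u), 0.0)

x = np.geomspace(1e-12, 1.0, 200_000)  # log-spaced grid to resolve the singularity
f = 1.0 / np.sqrt(x)
print(np.trapz(np.abs(f) * log_plus(np.abs(f)), x))  # finite, ~2
```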


Laplace Transform
In mathematics, the Laplace transform, named after its discoverer Pierre-Simon Laplace, is an integral transform that converts a function of a real variable (usually t, in the ''time domain'') to a function of a complex variable s (in the complex frequency domain, also known as the ''s''-domain, or s-plane). The transform has many applications in science and engineering because it is a tool for solving differential equations. In particular, it transforms ordinary differential equations into algebraic equations and convolution into multiplication. For suitable functions ''f'', the Laplace transform is the integral
: \mathcal{L}\{f\}(s) = \int_0^\infty f(t)e^{-st} \, dt ...
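As an added numerical sketch (f(t) = e^(−t) is an assumed example; its transform is 1/(s+1)):

```python
# Sketch: truncated numeric Laplace transform of f(t) = exp(-t); expect 1/(s+1).
import numpy as np

def laplace_numeric(f, s, t_max=60.0, n=200_000):
    """Approximate the integral of f(t) e^{-st} over [0, inf) by truncating at t_max."""
    t = np.linspace(0.0, t_max, n)
    return np.trapz(f(t) * np.exp(-s * t), t)

f = lambda t: np.exp(-t)
for s in [0.5, 1.0, 2.0]:
    print(s, laplace_numeric(f, s), 1.0 / (s + 1.0))  # numeric vs closed form
```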



Moment (mathematics)
In mathematics, the moments of a function are certain quantitative measures related to the shape of the function's graph. If the function represents mass density, then the zeroth moment is the total mass, the first moment (normalized by total mass) is the center of mass, and the second moment is the moment of inertia. If the function is a probability distribution, then the first moment is the expected value, the second central moment is the variance, the third standardized moment is the skewness, and the fourth standardized moment is the kurtosis. The mathematical concept is closely related to the concept of moment in physics. For a distribution of mass or probability on a bounded interval, the collection of all the moments (of all orders, from 0 to \infty) uniquely determines the distribution (Hausdorff moment problem). The same is not true on unbounded intervals (Hamburger moment problem). In the mid-nineteenth century, Pafnuty Chebyshev became the first person to think systematic ...
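The four probability moments named above can be estimated directly from a sample; a short added sketch (the N(0,1) data is an assumption of this example):

```python
# Sketch: sample estimates of mean, variance, skewness, kurtosis for N(0,1) data.
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(1_000_000)

mean = x.mean()                               # first moment
var = np.mean((x - mean) ** 2)                # second central moment
skew = np.mean((x - mean) ** 3) / var ** 1.5  # third standardized moment
kurt = np.mean((x - mean) ** 4) / var ** 2    # fourth standardized moment
print(mean, var, skew, kurt)                  # ~0, ~1, ~0, ~3 for a Gaussian
```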


Moment-generating Function
In probability theory and statistics, the moment-generating function of a real-valued random variable is an alternative specification of its probability distribution. Thus, it provides the basis of an alternative route to analytical results compared with working directly with probability density functions or cumulative distribution functions. There are particularly simple results for the moment-generating functions of distributions defined by the weighted sums of random variables. However, not all random variables have moment-generating functions. As its name implies, the moment-generating function can be used to compute a distribution’s moments: the ''n''th moment about 0 is the ''n''th derivative of the moment-generating function, evaluated at 0. In addition to real-valued distributions (univariate distributions), moment-generating functions can be defined for vector- or matrix-valued random variables, and can even be extended to more general cases. The moment-generating func ...
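To illustrate the derivative property, here is a symbolic sketch using the standard normal MGF M(t) = e^(t²/2); the choice of distribution is an assumption of this added example:

```python
# Sketch: nth moment about 0 = nth derivative of the MGF evaluated at t = 0.
# Uses sympy and the N(0,1) moment-generating function M(t) = exp(t**2 / 2).
import sympy as sp

t = sp.symbols("t")
M = sp.exp(t**2 / 2)
for n in range(1, 5):
    moment = sp.diff(M, t, n).subs(t, 0)  # nth derivative at 0
    print(n, moment)                      # -> 0, 1, 0, 3 (moments of N(0,1))
```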




Independent And Identically Distributed Random Variables
In probability theory and statistics, a collection of random variables is independent and identically distributed if each random variable has the same probability distribution as the others and all are mutually independent. This property is usually abbreviated as ''i.i.d.'', ''iid'', or ''IID''. IID was first defined in statistics and finds application in different fields such as data mining and signal processing.
Introduction
In statistics, we commonly deal with random samples. A random sample can be thought of as a set of objects that are chosen randomly. Or, more formally, it is "a sequence of independent, identically distributed (IID) random variables". In other words, the terms ''random sample'' and ''IID'' are basically one and the same. In statistics, we usually say "random sample," but in probability it is more common to say "IID."
* Identically distributed means that there are no overall trends: the distribution doesn't fluctuate and all items in t ...
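A minimal sketch of drawing an i.i.d. sample (an added illustration; the normal distribution is an arbitrary choice):

```python
# Sketch: ten i.i.d. draws -- same distribution, mutually independent by construction.
import numpy as np

rng = np.random.default_rng(3)
sample = rng.normal(loc=0.0, scale=1.0, size=10)  # i.i.d. N(0,1)
print(sample)
```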



Kurtosis
In probability theory and statistics, kurtosis (from Greek κυρτός, ''kyrtos'' or ''kurtos'', meaning "curved, arching") is a measure of the "tailedness" of the probability distribution of a real-valued random variable. Like skewness, kurtosis describes a particular aspect of a probability distribution. There are different ways to quantify kurtosis for a theoretical distribution, and there are corresponding ways of estimating it using a sample from a population. Different measures of kurtosis may have different interpretations. The standard measure of a distribution's kurtosis, originating with Karl Pearson, is a scaled version of the fourth moment of the distribution. This number is related to the tails of the distribution, not its peak; hence, the sometimes-seen characterization of kurtosis as "peakedness" is incorrect. For this measure, higher kurtosis corresponds to greater extremity of deviations (or outliers), and not the configuration of data near the mean. It is ...
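A short added sketch contrasting the Gaussian's kurtosis of 3 with a heavier-tailed distribution (the Student ''t'' with 5 degrees of freedom is an assumed example, with theoretical kurtosis 9):

```python
# Sketch: Pearson kurtosis (scaled fourth moment) measures tailedness, not peakedness.
import numpy as np

rng = np.random.default_rng(4)

def kurtosis(x):
    z = x - x.mean()
    return np.mean(z**4) / np.mean(z**2) ** 2

print(kurtosis(rng.standard_normal(1_000_000)))        # ~3 (Gaussian)
print(kurtosis(rng.standard_t(df=5, size=1_000_000)))  # ~9: heavier tails, larger kurtosis
```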


Studia Mathematica
''Studia Mathematica'' is a triannual peer-reviewed scientific journal of mathematics published by the Polish Academy of Sciences. Papers are written in English, French, German, or Russian, primarily covering functional analysis, abstract methods of mathematical analysis, and probability theory. The editor-in-chief is Adam Skalski.
History
The journal was established in 1929 by Stefan Banach and Hugo Steinhaus and its first editors were Banach, Steinhaus and Herman Auerbach. Due to the Second World War, publication stopped after volume 9 (1940) and was not resumed until volume 10 in 1948.
Abstracting and indexing
The journal is abstracted and indexed in:
* Current Contents/Physical, Chemical & Earth Sciences
* MathSciNet
* Science Citation Index
* Scopus
* Zentralblatt MATH
According to the ''Journal Citation Reports'', the journal has a 2018 impact factor of ...