




Rectified Gaussian Distribution
In probability theory, the rectified Gaussian distribution is a modification of the Gaussian distribution in which all negative values are reset to 0 (analogous to an electronic rectifier). It is essentially a mixture of a discrete distribution (constant 0) and a continuous distribution (a truncated Gaussian distribution with interval (0,\infty)) as a result of censoring. Density function The probability density function of a rectified Gaussian distribution, for which random variables ''X'' having this distribution, derived from the normal distribution \mathcal{N}(\mu,\sigma^2), are displayed as X \sim \mathcal{N}^{\textrm{R}}(\mu,\sigma^2), is given by f(x;\mu,\sigma^2) = \Phi\left(-\frac{\mu}{\sigma}\right)\delta(x) + \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}\,\textrm{U}(x). Here, \Phi(x) is the cumulative distribution function (cdf) of the standard normal distribution: \Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x e^{-t^2/2} \, dt, \quad x\in\mathbb{R}, \delta(x) is the Dirac delta function \delta(x) = \begin{cases} +\infty, & x = 0 \\ 0, & x \ne 0 \end{cases} and \textrm{U}(x) ...
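A quick numerical sketch (not part of the article) of how the point mass at zero arises: rectifying draws from N(μ, σ²) should place probability Φ(−μ/σ) exactly at 0. The parameter values below are arbitrary.

```python
import math
import random

def phi_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def sample_rectified_gaussian(mu, sigma, n, seed=0):
    """Draw n samples of max(X, 0) with X ~ N(mu, sigma^2)."""
    rng = random.Random(seed)
    return [max(rng.gauss(mu, sigma), 0.0) for _ in range(n)]

mu, sigma, n = 0.5, 1.0, 200_000
samples = sample_rectified_gaussian(mu, sigma, n)

# The discrete mass at zero should match Phi(-mu/sigma).
empirical_mass = sum(1 for s in samples if s == 0.0) / n
theoretical_mass = phi_cdf(-mu / sigma)
print(abs(empirical_mass - theoretical_mass) < 0.01)  # True
```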



Probability Theory
Probability theory is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory treats the concept in a rigorous mathematical manner by expressing it through a set of axioms. Typically these axioms formalise probability in terms of a probability space, which assigns a measure taking values between 0 and 1, termed the probability measure, to a set of outcomes called the sample space. Any specified subset of the sample space is called an event. Central subjects in probability theory include discrete and continuous random variables, probability distributions, and stochastic processes (which provide mathematical abstractions of non-deterministic or uncertain processes or measured quantities that may either be single occurrences or evolve over time in a random fashion). Although it is not possible to perfectly predict random events, much can be said about their behavior. Two major results in probab ...


Mean-preserving Contraction
In probability and statistics, a mean-preserving spread (MPS) is a change from one probability distribution A to another probability distribution B, where B is formed by spreading out one or more portions of A's probability density function or probability mass function while leaving the mean (the expected value) unchanged. As such, the concept of mean-preserving spreads provides a stochastic ordering of equal-mean gambles (probability distributions) according to their degree of risk; this ordering is partial, meaning that of two equal-mean gambles, it is not necessarily true that either is a mean-preserving spread of the other. Distribution A is said to be a mean-preserving contraction of B if B is a mean-preserving spread of A. Ranking gambles by mean-preserving spreads is a special case of ranking gambles by second-order stochastic dominance – namely, the special case of equal means: If B is a mean-preserving spread of A, then A is second-order stochastically dom ...
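As a toy illustration of the ordering described above, take two equal-mean two-point gambles and check second-order stochastic dominance numerically. The gambles A and B below are hypothetical, invented purely for illustration:

```python
# Hypothetical two-point gambles: B spreads A's mass outward, mean unchanged.
A = {1.0: 0.5, 3.0: 0.5}   # tighter gamble
B = {0.0: 0.5, 4.0: 0.5}   # mean-preserving spread of A

def mean(dist):
    return sum(x * p for x, p in dist.items())

def cdf(dist, t):
    return sum(p for x, p in dist.items() if x <= t)

# Equal means: a prerequisite for a mean-preserving spread.
print(mean(A), mean(B))  # 2.0 2.0

# Second-order stochastic dominance of A over B: the running integral
# of A's CDF never exceeds that of B's.
grid = [i * 0.1 for i in range(51)]  # 0.0 .. 5.0
int_A = int_B = 0.0
dominates = True
for t in grid:
    int_A += cdf(A, t) * 0.1
    int_B += cdf(B, t) * 0.1
    if int_A > int_B + 1e-12:
        dominates = False
print(dominates)  # True
```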


Truncated Normal Distribution
In probability and statistics, the truncated normal distribution is the probability distribution derived from that of a normally distributed random variable by bounding the random variable from either below or above (or both). The truncated normal distribution has wide applications in statistics and econometrics. Definitions Suppose X has a normal distribution with mean \mu and variance \sigma^2 and lies within the interval (a,b), \text{with} \; -\infty \leq a < b \leq \infty. Then X conditional on a < X < b has a truncated normal distribution. Its probability density function, f, for a \leq x \leq b, is given by : f(x;\mu,\sigma,a,b) = \frac{1}{\sigma}\,\frac{\phi\left(\frac{x-\mu}{\sigma}\right)}{\Phi\left(\frac{b-\mu}{\sigma}\right)-\Phi\left(\frac{a-\mu}{\sigma}\right)} and by f=0 otherwise. Here, :\phi(\xi)=\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{1}{2}\xi^2\right) is t ...
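The density above can be coded directly from φ and Φ. The following illustrative sketch (parameter values chosen arbitrarily) checks that it integrates to 1 over (a, b):

```python
import math

def phi(x):
    """Standard normal pdf."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def truncnorm_pdf(x, mu, sigma, a, b):
    """Truncated normal density on (a, b); zero outside."""
    if x < a or x > b:
        return 0.0
    z = Phi((b - mu) / sigma) - Phi((a - mu) / sigma)
    return phi((x - mu) / sigma) / (sigma * z)

# Sanity check: the density integrates to 1 over (a, b) (midpoint rule).
mu, sigma, a, b = 1.0, 2.0, -1.0, 3.0
n = 10_000
h = (b - a) / n
total = sum(truncnorm_pdf(a + (i + 0.5) * h, mu, sigma, a, b) for i in range(n)) * h
print(abs(total - 1.0) < 1e-6)  # True
```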



Modified Half-normal Distribution
In probability theory and statistics, the half-normal distribution is a special case of the folded normal distribution. Let X follow an ordinary normal distribution, N(0,\sigma^2). Then, Y=|X| follows a half-normal distribution. Thus, the half-normal distribution is a fold at the mean of an ordinary normal distribution with mean zero. Properties Using the \sigma parametrization of the normal distribution, the probability density function (PDF) of the half-normal is given by : f_Y(y; \sigma) = \frac{\sqrt{2}}{\sigma\sqrt{\pi}}\exp \left( -\frac{y^2}{2\sigma^2} \right), \quad y \geq 0, where E[Y] = \mu = \frac{\sigma\sqrt{2}}{\sqrt{\pi}}. Alternatively using a scaled precision (inverse of the variance) parametrization (to avoid issues if \sigma is near zero), obtained by setting \theta=\frac{\sqrt{\pi}}{\sigma\sqrt{2}}, the probability density function is given by : f_Y(y; \theta) = \frac{2\theta}{\pi}\exp \left( -\frac{y^2\theta^2}{\pi} \right), \quad y \geq 0, where E[Y] = \mu = \frac{1}{\theta}. The cumulative distribution function (CDF) is given by : F_Y(y; \sigma) = \int_0^y \frac{1}{\sigma}\sqrt{\frac{2}{\pi}} \, \exp \left( -\frac{x^2}{2\sigma^2} \right) ...
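An illustrative check (not from the article; σ chosen arbitrarily) that the σ and θ parametrizations above describe the same density, and that the mean is σ√(2/π):

```python
import math

def halfnorm_pdf_sigma(y, sigma):
    """Half-normal density in the sigma parametrization."""
    if y < 0:
        return 0.0
    return math.sqrt(2.0 / math.pi) / sigma * math.exp(-y * y / (2.0 * sigma * sigma))

def halfnorm_pdf_theta(y, theta):
    """Same density in the scaled-precision theta parametrization."""
    if y < 0:
        return 0.0
    return 2.0 * theta / math.pi * math.exp(-y * y * theta * theta / math.pi)

sigma = 1.5
theta = math.sqrt(math.pi) / (sigma * math.sqrt(2.0))  # theta = sqrt(pi)/(sigma*sqrt(2))

# The two parametrizations agree pointwise.
agree = all(abs(halfnorm_pdf_sigma(y, sigma) - halfnorm_pdf_theta(y, theta)) < 1e-12
            for y in (0.0, 0.5, 1.0, 2.0, 5.0))
print(agree)  # True

# Numerical mean matches sigma * sqrt(2/pi).
n, upper = 200_000, 20.0
h = upper / n
mean = sum((i + 0.5) * h * halfnorm_pdf_sigma((i + 0.5) * h, sigma) for i in range(n)) * h
print(abs(mean - sigma * math.sqrt(2.0 / math.pi)) < 1e-4)  # True
```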


Half-t Distribution
In statistics, the folded-''t'' and half-''t'' distributions are derived from Student's ''t''-distribution by taking the absolute values of variates. This is analogous to the folded-normal and the half-normal statistical distributions being derived from the normal distribution. Definitions The folded non-standardized ''t'' distribution is the distribution of the absolute value of the non-standardized ''t'' distribution with \nu degrees of freedom; its probability density function is given by: :g\left(x\right)\;=\;\frac{\Gamma\left(\frac{\nu+1}{2}\right)}{\Gamma\left(\frac{\nu}{2}\right)\sqrt{\nu\pi\sigma^2}}\left\lbrace \left[1+\frac{1}{\nu}\frac{(x-\mu)^2}{\sigma^2}\right]^{-\frac{\nu+1}{2}}+\left[1+\frac{1}{\nu}\frac{(x+\mu)^2}{\sigma^2}\right]^{-\frac{\nu+1}{2}} \right\rbrace \qquad(\mbox{for}\quad x \geq 0). The half-''t'' distribution results as the special case of \mu=0, and the standardized version as the special case of \sigma=1. If \mu=0, the folded-''t'' distribution reduces to the half-''t'' distribution, and its probability density function simplifies to :g\left(x\right)\;=\;\frac{2\,\Gamma\left(\frac{\nu+1}{2}\right)}{\Gamma\left(\frac{\nu}{2}\right)\sqrt{\nu\pi\sigma^2}} \left(1+\frac{1}{\nu}\frac{x^2}{\sigma^2}\right)^{-\frac{\nu+1}{2}} \qquad(\mbox{for}\quad x \geq 0). T ...
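A numerical sanity check of the simplified half-''t'' density above (ν, the grid size, and the integration range are arbitrary choices; the heavy tail means integrating far out):

```python
import math

def half_t_pdf(x, nu, sigma=1.0):
    """Half-t density (the mu = 0 special case), x >= 0."""
    if x < 0:
        return 0.0
    c = 2.0 * math.gamma((nu + 1.0) / 2.0) / (
        math.gamma(nu / 2.0) * math.sqrt(nu * math.pi * sigma * sigma))
    return c * (1.0 + (x * x) / (nu * sigma * sigma)) ** (-(nu + 1.0) / 2.0)

# Sanity check: the density integrates to 1 (midpoint rule).
nu = 3.0
n, upper = 400_000, 2000.0
h = upper / n
total = sum(half_t_pdf((i + 0.5) * h, nu) for i in range(n)) * h
print(abs(total - 1.0) < 1e-3)  # True
```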



Half-normal Distribution
In probability theory and statistics, the half-normal distribution is a special case of the folded normal distribution. Let X follow an ordinary normal distribution, N(0,\sigma^2). Then, Y=|X| follows a half-normal distribution. Thus, the half-normal distribution is a fold at the mean of an ordinary normal distribution with mean zero. Properties Using the \sigma parametrization of the normal distribution, the probability density function (PDF) of the half-normal is given by : f_Y(y; \sigma) = \frac{\sqrt{2}}{\sigma\sqrt{\pi}}\exp \left( -\frac{y^2}{2\sigma^2} \right), \quad y \geq 0, where E[Y] = \mu = \frac{\sigma\sqrt{2}}{\sqrt{\pi}}. Alternatively using a scaled precision (inverse of the variance) parametrization (to avoid issues if \sigma is near zero), obtained by setting \theta=\frac{\sqrt{\pi}}{\sigma\sqrt{2}}, the probability density function is given by : f_Y(y; \theta) = \frac{2\theta}{\pi}\exp \left( -\frac{y^2\theta^2}{\pi} \right), \quad y \geq 0, where E[Y] = \mu = \frac{1}{\theta}. The cumulative distribution function (CDF) is given by : F_Y(y; \sigma) = \int_0^y \frac{1}{\sigma}\sqrt{\frac{2}{\pi}} \, \exp \left( -\frac{x^2}{2\sigma^2} \right) ...




Folded Normal Distribution
The folded normal distribution is a probability distribution related to the normal distribution. Given a normally distributed random variable ''X'' with mean ''μ'' and variance ''σ''², the random variable ''Y'' = |''X''| has a folded normal distribution. Such a case may be encountered if only the magnitude of some variable is recorded, but not its sign. The distribution is called "folded" because probability mass to the left of ''x'' = 0 is folded over by taking the absolute value. In the physics of heat conduction, the folded normal distribution is a fundamental solution of the heat equation on the half space; it corresponds to having a perfect insulator on a hyperplane through the origin. Definitions Density The probability density function (PDF) is given by :f_Y(x;\mu,\sigma^2)= \frac{1}{\sqrt{2\pi\sigma^2}} \, e^{-\frac{(x-\mu)^2}{2\sigma^2}} + \frac{1}{\sqrt{2\pi\sigma^2}} \, e^{-\frac{(x+\mu)^2}{2\sigma^2}} for ''x'' ≥ 0, and 0 everywhere else. An alternative formulation is given by : f\left(x\right)=\sqrt{\frac{2}{\pi\sigma^2}}\,e^{-\frac{x^2+\mu^2}{2\sigma^2}}\cosh\left(\frac{\mu x}{\sigma^2}\right), where cosh is the hyperbolic cosine function. It foll ...
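A small sketch (not from the article; μ and σ chosen arbitrarily) verifying that the two-term density and the cosh form above agree pointwise:

```python
import math

def folded_pdf_two_term(x, mu, sigma):
    """Sum of the two reflected Gaussian branches, x >= 0."""
    if x < 0:
        return 0.0
    c = 1.0 / math.sqrt(2.0 * math.pi * sigma * sigma)
    return (c * math.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))
            + c * math.exp(-(x + mu) ** 2 / (2.0 * sigma ** 2)))

def folded_pdf_cosh(x, mu, sigma):
    """Equivalent cosh form of the same density."""
    if x < 0:
        return 0.0
    return (math.sqrt(2.0 / (math.pi * sigma ** 2))
            * math.exp(-(x * x + mu * mu) / (2.0 * sigma ** 2))
            * math.cosh(mu * x / sigma ** 2))

mu, sigma = 1.3, 0.8
agree = all(abs(folded_pdf_two_term(x, mu, sigma) - folded_pdf_cosh(x, mu, sigma)) < 1e-12
            for x in (0.0, 0.5, 1.0, 2.0, 4.0))
print(agree)  # True
```

The equivalence follows by factoring e^{−(x²+μ²)/2σ²} out of both exponentials, which leaves e^{μx/σ²} + e^{−μx/σ²} = 2 cosh(μx/σ²).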



Error Function
In mathematics, the error function (also called the Gauss error function), often denoted by erf, is a complex function of a complex variable defined as: :\operatorname{erf} z = \frac{2}{\sqrt{\pi}}\int_0^z e^{-t^2}\,\mathrm dt. This integral is a special (non-elementary) sigmoid function that occurs often in probability, statistics, and partial differential equations. In many of these applications, the function argument is a real number. If the function argument is real, then the function value is also real. In statistics, for non-negative values of x, the error function has the following interpretation: for a random variable Y that is normally distributed with mean 0 and standard deviation \frac{1}{\sqrt{2}}, \operatorname{erf} x is the probability that Y falls in the range [-x, x]. Two closely related functions are the complementary error function (erfc) defined as :\operatorname{erfc} z = 1 - \operatorname{erf} z, and the imaginary error function (erfi) defined as :\operatorname{erfi} z = -i\operatorname{erf} iz, where ''i'' is the imaginary unit. Name The name "error functi ...
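The probabilistic interpretation above can be checked with Python's standard-library erf, using the standard identity Φ(x) = (1 + erf(x/√2))/2 for the normal CDF (the value of x below is arbitrary):

```python
import math

def Phi(x):
    """Standard normal CDF in terms of erf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# For Y ~ N(0, 1/2), i.e. standard deviation s = 1/sqrt(2),
# erf(x) should equal P(-x <= Y <= x) = 2*Phi(x/s) - 1.
s = 1.0 / math.sqrt(2.0)
x = 0.7
prob = 2.0 * Phi(x / s) - 1.0
print(abs(prob - math.erf(x)) < 1e-12)  # True

# Complementary error function: erfc(x) = 1 - erf(x).
print(abs(math.erfc(x) - (1.0 - math.erf(x))) < 1e-12)  # True
```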



Gene Regulatory Network
A gene (or genetic) regulatory network (GRN) is a collection of molecular regulators that interact with each other and with other substances in the cell to govern the gene expression levels of mRNA and proteins which, in turn, determine the function of the cell. GRNs also play a central role in morphogenesis, the creation of body structures, which in turn is central to evolutionary developmental biology (evo-devo). The regulator can be DNA, RNA, protein or any combination of two or more of these three that form a complex, such as a specific sequence of DNA and a transcription factor to activate that sequence. The interaction can be direct or indirect (through transcribed RNA or translated protein). In general, each mRNA molecule goes on to make a specific protein (or set of proteins). In some cases this protein will be structural, and will accumulate at the cell membrane or within the cell to give it particular structural properties. In other cases the protein will be an enzym ...



Computational Biology
Computational biology refers to the use of data analysis, mathematical modeling and computational simulations to understand biological systems and relationships. An intersection of computer science, biology, and big data, the field also has foundations in applied mathematics, chemistry, and genetics. It differs from biological computing, a subfield of computer engineering which uses bioengineering to build computers. History Bioinformatics, the analysis of informatics processes in biological systems, began in the early 1970s. At this time, research in artificial intelligence was using network models of the human brain in order to generate new algorithms. This use of biological data pushed biological researchers to use computers to evaluate and compare large data sets in their own field. By 1982, researchers shared information via punch cards. The amount of data grew exponentially by the end of the 1980s, requiring new computational method ...



Dirichlet Process
In probability theory, Dirichlet processes (after the distribution associated with Peter Gustav Lejeune Dirichlet) are a family of stochastic processes whose realizations are probability distributions. In other words, a Dirichlet process is a probability distribution whose range is itself a set of probability distributions. It is often used in Bayesian inference to describe the prior knowledge about the distribution of random variables—how likely it is that the random variables are distributed according to one or another particular distribution. As an example, a bag of 100 real-world dice is a ''random probability mass function (random pmf)'' - to sample this random pmf you put your hand in the bag and draw out a die, that is, you draw a pmf. A bag of dice manufactured using a crude process 100 years ago will likely have probabilities that deviate wildly from the uniform pmf, whereas a bag of state-of-the-art dice used by Las Vegas casinos may have barely perceptible i ...
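One standard constructive view of a Dirichlet process, the stick-breaking representation, can be sketched as follows. This is a truncated illustration rather than the article's definition; α and the truncation level below are arbitrary choices:

```python
import random

def stick_breaking_weights(alpha, n_atoms, seed=0):
    """Truncated stick-breaking construction of Dirichlet-process weights.

    Repeatedly break off a Beta(1, alpha)-distributed fraction of the
    remaining stick; each piece is the weight of one atom.
    """
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    for _ in range(n_atoms):
        b = rng.betavariate(1.0, alpha)
        weights.append(remaining * b)
        remaining *= 1.0 - b
    return weights

w = stick_breaking_weights(alpha=2.0, n_atoms=500)
# The truncated weights sum to (almost) 1; the tiny remainder is the
# untruncated tail of the infinite stick.
print(abs(sum(w) - 1.0) < 1e-6)  # True
```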



Variational Bayesian Methods
Variational Bayesian methods are a family of techniques for approximating intractable integrals arising in Bayesian inference and machine learning. They are typically used in complex statistical models consisting of observed variables (usually termed "data") as well as unknown parameters and latent variables, with various sorts of relationships among the three types of random variables, as might be described by a graphical model. As is typical in Bayesian inference, the parameters and latent variables are grouped together as "unobserved variables". Variational Bayesian methods are primarily used for two purposes: (1) to provide an analytical approximation to the posterior probability of the unobserved variables, in order to do statistical inference over these variables; and (2) to derive a lower bound for the marginal likelihood (sometimes called the ''evidence'') of the observed data (i.e. the marginal probability of the data given the model, with marginalization performed over unobserved v ...
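The lower-bound property mentioned in the second purpose can be illustrated on a hypothetical two-state toy model, using the standard identity log p(x) = ELBO(q) + KL(q ‖ p(z|x)); since KL is nonnegative, the ELBO never exceeds the log evidence. All numbers below are invented for illustration:

```python
import math

# Hypothetical toy model: latent z in {0, 1}, one observation x.
prior = {0: 0.6, 1: 0.4}
lik = {0: 0.2, 1: 0.7}   # p(x | z) for the observed x

evidence = sum(lik[z] * prior[z] for z in prior)               # p(x)
posterior = {z: lik[z] * prior[z] / evidence for z in prior}   # p(z | x)

def elbo(q):
    """Evidence lower bound E_q[log p(x, z) - log q(z)]."""
    return sum(q[z] * (math.log(lik[z] * prior[z]) - math.log(q[z])) for z in q)

def kl(q, p):
    """KL divergence KL(q || p) for discrete distributions."""
    return sum(q[z] * math.log(q[z] / p[z]) for z in q)

q = {0: 0.5, 1: 0.5}  # some variational guess
# Identity: log p(x) = ELBO(q) + KL(q || posterior), so ELBO <= log p(x).
print(abs(math.log(evidence) - (elbo(q) + kl(q, posterior))) < 1e-12)  # True
print(elbo(q) <= math.log(evidence))  # True
# The bound is tight exactly when q equals the true posterior.
print(abs(elbo(posterior) - math.log(evidence)) < 1e-12)  # True
```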