Hypoexponential Distribution
In probability theory, the hypoexponential distribution or the generalized Erlang distribution is a continuous distribution that has found use in the same fields as the Erlang distribution, such as queueing theory, teletraffic engineering and, more generally, stochastic processes. It is called the hypoexponential distribution because it has a coefficient of variation less than one, compared to the hyper-exponential distribution, which has a coefficient of variation greater than one, and the exponential distribution, which has a coefficient of variation of exactly one.
Overview
The Erlang distribution is a series of ''k'' exponential distributions all with rate \lambda. The hypoexponential is a series of ''k'' exponential distributions, each with its own rate \lambda_i, the rate of the i^{\text{th}} exponential distribution. If we have ''k'' independently distributed exponential random variables \boldsymbol{X}_i, then the random variable
: \boldsymbol{X}=\sum_{i=1}^{k}\boldsymbol{X}_i
is hypoexponentially distributed. The hy ...
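As a brief illustration (not part of the excerpt above), the following Python sketch draws hypoexponential samples as sums of independent exponentials, assuming NumPy is available; the function name and the rates used are illustrative choices.

import numpy as np

def sample_hypoexponential(rates, size=10000, rng=None):
    """Draw sums of independent Exponential(rate_i) variables (a hypoexponential sample)."""
    rng = np.random.default_rng() if rng is None else rng
    # NumPy's exponential generator is parameterized by the scale, i.e. 1/rate
    return sum(rng.exponential(scale=1.0 / r, size=size) for r in rates)

samples = sample_hypoexponential([2.0, 3.0, 5.0])
print(samples.mean())   # close to 1/2 + 1/3 + 1/5, the sum of the component means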



Real Number
In mathematics, a real number is a number that can be used to measure a ''continuous'' one-dimensional quantity such as a distance, duration or temperature. Here, ''continuous'' means that values can have arbitrarily small variations. Every real number can be almost uniquely represented by an infinite decimal expansion. The real numbers are fundamental in calculus (and more generally in all of mathematics), in particular through their role in the classical definitions of limits, continuity and derivatives. The set of real numbers is denoted or \mathbb and is sometimes called "the reals". The adjective ''real'' in this context was introduced in the 17th century by René Descartes to distinguish real numbers, associated with physical reality, from imaginary numbers (such as the square roots of negative numbers), which seemed like a theoretical contrivance unrelated to physical reality. The real numbers include the rational numbers, such as integers and fractions. The rest of the real number ...



Markov Process
A Markov chain or Markov process is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. Informally, this may be thought of as, "What happens next depends only on the state of affairs ''now''." A countably infinite sequence, in which the chain moves state at discrete time steps, gives a discrete-time Markov chain (DTMC). A continuous-time process is called a continuous-time Markov chain (CTMC). It is named after the Russian mathematician Andrey Markov. Markov chains have many applications as statistical models of real-world processes, such as studying cruise control systems in motor vehicles, queues or lines of customers arriving at an airport, currency exchange rates and animal population dynamics. Markov processes are the basis for general stochastic simulation methods known as Markov chain Monte Carlo, which are used for simulating sampling from complex probability distr ...
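To make the "what happens next depends only on now" idea concrete, here is a small Python sketch (not from the excerpt) simulating a two-state discrete-time Markov chain with NumPy; the transition matrix P is an invented example, with row i giving the next-state probabilities from state i.

import numpy as np

P = np.array([[0.9, 0.1],    # transitions out of state 0
              [0.5, 0.5]])   # transitions out of state 1

def simulate_chain(P, steps=10000, start=0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    state, path = start, [start]
    for _ in range(steps):
        # the next state depends only on the current state, not on the earlier path
        state = rng.choice(len(P), p=P[state])
        path.append(state)
    return path

path = simulate_chain(P)
print(sum(s == 0 for s in path) / len(path))   # long-run fraction of time in state 0, near 5/6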


Phase-type Distribution
A phase-type distribution is a probability distribution constructed by a convolution or mixture of exponential distributions. It results from a system of one or more inter-related Poisson processes occurring in sequence, or phases. The sequence in which each of the phases occurs may itself be a stochastic process. The distribution can be represented by a random variable describing the time until absorption of a Markov process with one absorbing state. Each of the states of the Markov process represents one of the phases. It has a discrete-time equivalent, the discrete phase-type distribution. The set of phase-type distributions is dense in the field of all positive-valued distributions, that is, it can be used to approximate any positive-valued distribution.
Definition
Consider a continuous-time Markov process with ''m'' + 1 states, where ''m'' ≥ 1, such that the states 1,...,''m'' are transient states and state 0 is an absorbing state. Further, let the ...
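As a hedged sketch of the absorption-time view (not taken from the excerpt): if S is the sub-generator over the transient states and alpha the initial distribution over them, the CDF can be evaluated as F(t) = 1 - alpha · exp(St) · 1. The particular alpha and S below are invented example values, and SciPy is assumed for the matrix exponential.

import numpy as np
from scipy.linalg import expm

alpha = np.array([1.0, 0.0])            # start in phase 1
S = np.array([[-2.0,  2.0],
              [ 0.0, -3.0]])            # phase 1 -> phase 2 -> absorption

def phase_type_cdf(t, alpha, S):
    ones = np.ones(len(alpha))
    # probability that absorption has occurred by time t
    return 1.0 - alpha @ expm(S * t) @ ones

print(phase_type_cdf(1.0, alpha, S))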



Laplace Transform
In mathematics, the Laplace transform, named after its discoverer Pierre-Simon Laplace, is an integral transform that converts a function of a real variable (usually t, in the ''time domain'') to a function of a complex variable s (in the complex frequency domain, also known as the ''s''-domain, or s-plane). The transform has many applications in science and engineering because it is a tool for solving differential equations. In particular, it transforms ordinary differential equations into algebraic equations and convolution into multiplication. For suitable functions ''f'', the Laplace transform is the integral \mathcal{L}\{f\}(s) = \int_0^\infty f(t)e^{-st} \, dt. H ...
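A minimal numerical sketch of the defining integral (not part of the excerpt), assuming SciPy: for f(t) = e^{-at} the transform at s should equal 1/(s + a), which the quadrature below reproduces approximately.

import numpy as np
from scipy.integrate import quad

def laplace_transform(f, s):
    # numerically evaluate the integral of f(t) * exp(-s t) over [0, infinity)
    value, _ = quad(lambda t: f(t) * np.exp(-s * t), 0, np.inf)
    return value

a = 2.0
print(laplace_transform(lambda t: np.exp(-a * t), s=1.0))   # roughly 1 / (1 + 2)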



Lagrange Polynomial
In numerical analysis, the Lagrange interpolating polynomial is the unique polynomial of lowest degree that interpolates a given set of data. Given a data set of coordinate pairs (x_j, y_j) with 0 \leq j \leq k, the x_j are called ''nodes'' and the y_j are called ''values''. The Lagrange polynomial L(x) has degree \leq k and assumes each value at the corresponding node, L(x_j) = y_j. Although named after Joseph-Louis Lagrange, who published it in 1795, the method was first discovered in 1779 by Edward Waring. It is also an easy consequence of a formula published in 1783 by Leonhard Euler. Uses of Lagrange polynomials include the Newton–Cotes method of numerical integration and Shamir's secret sharing scheme in cryptography. For equispaced nodes, Lagrange interpolation is susceptible to Runge's phenomenon of large oscillation.
Definition
Given a set of k + ...
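A short Python sketch written directly from the definition (not from the excerpt): L(x) = sum_j y_j * prod_{m != j} (x - x_m)/(x_j - x_m). The nodes and values below are made-up illustration data.

def lagrange_interpolate(x, xs, ys):
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        term = yj
        for m, xm in enumerate(xs):
            if m != j:
                term *= (x - xm) / (xj - xm)   # basis polynomial factor
        total += term
    return total

xs, ys = [0.0, 1.0, 2.0], [1.0, 3.0, 2.0]
print(lagrange_interpolate(1.0, xs, ys))   # reproduces the node value 3.0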


Matrix Exponential
In mathematics, the matrix exponential is a matrix function on square matrices analogous to the ordinary exponential function. It is used to solve systems of linear differential equations. In the theory of Lie groups, the matrix exponential gives the exponential map between a matrix Lie algebra and the corresponding Lie group. Let X be an n \times n real or complex matrix. The exponential of X, denoted by e^X or \exp(X), is the n \times n matrix given by the power series e^X = \sum_{k=0}^\infty \frac{1}{k!} X^k, where X^0 is defined to be the identity matrix I with the same dimensions as X. The above series always converges, so the exponential of X is well-defined. If X is a 1×1 matrix, the matrix exponential of X is a 1×1 matrix whose single element is the ordinary exponential of the single element of X.
Properties
Elementary properties
Let X and Y be n \times n complex matrices and let a and b be arbitrary complex numbers. We denote the identity matrix by I and the zero matrix by 0. The matrix exponential satisfies the following ...
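A hedged sketch (not from the excerpt) comparing a truncated version of the power series above with SciPy's expm, which uses a more robust algorithm in practice; the test matrix is an arbitrary small example.

import numpy as np
from scipy.linalg import expm

def expm_series(X, terms=30):
    result = np.eye(X.shape[0])      # X^0 = I
    term = np.eye(X.shape[0])
    for k in range(1, terms):
        term = term @ X / k          # running value of X^k / k!
        result = result + term
    return result

X = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
print(np.allclose(expm_series(X), expm(X)))   # True for this small, well-behaved matrix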


Column Vector
In linear algebra, a column vector with m elements is an m \times 1 matrix consisting of a single column of m entries, for example, \boldsymbol{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix}. Similarly, a row vector is a 1 \times n matrix for some n, consisting of a single row of n entries, \boldsymbol{a} = \begin{bmatrix} a_1 & a_2 & \dots & a_n \end{bmatrix}. (Throughout this article, boldface is used for both row and column vectors.) The transpose (indicated by T) of any row vector is a column vector, and the transpose of any column vector is a row vector: \begin{bmatrix} x_1 \; x_2 \; \dots \; x_m \end{bmatrix}^{\mathsf{T}} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix} and \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix}^{\mathsf{T}} = \begin{bmatrix} x_1 \; x_2 \; \dots \; x_m \end{bmatrix}. The set of all row vectors with ''n'' entries in a given field (such as the real numbers) forms an ''n''-dimensional vector space; similarly, the set of all column vectors with ''m'' entries forms an ''m''-dimensional vector space. The space of row vectors with ''n'' entries can b ...
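A small NumPy sketch of the row/column distinction (illustrative, not from the excerpt): a column vector is an m x 1 array, a row vector is 1 x n, and transposition swaps the two.

import numpy as np

col = np.array([[1], [2], [3]])      # 3 x 1 column vector
row = col.T                           # 1 x 3 row vector
print(col.shape, row.shape)           # (3, 1) (1, 3)
print(np.array_equal(row.T, col))     # transposing again recovers the column vector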




Density Function
In probability theory, a probability density function (PDF), or density of a continuous random variable, is a function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a ''relative likelihood'' that the value of the random variable would be close to that sample. Probability density is the probability per unit length, in other words, while the ''absolute likelihood'' for a continuous random variable to take on any particular value is 0 (since there is an infinite set of possible values to begin with), the value of the PDF at two different samples can be used to infer, in any particular draw of the random variable, how much more likely it is that the random variable would be close to one sample compared to the other sample. In a more precise sense, the PDF is used to specify the probability of the random variable falling ''within a particular range of values'', as opposed to ...
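A brief sketch of the "probability over a range" reading of a PDF (not part of the excerpt), assuming SciPy: integrating the standard normal density over [-1, 1] gives roughly 0.6827, while the probability of any single point is 0.

from scipy.integrate import quad
from scipy.stats import norm

# integral of the density over the interval equals the probability of landing in it
prob, _ = quad(norm.pdf, -1, 1)
print(prob)    # about 0.6827, matching norm.cdf(1) - norm.cdf(-1)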



Cumulative Distribution Function
In probability theory and statistics, the cumulative distribution function (CDF) of a real-valued random variable X, or just the distribution function of X, evaluated at x, is the probability that X will take a value less than or equal to x. Every probability distribution supported on the real numbers, discrete or "mixed" as well as continuous, is uniquely identified by a ''right-continuous'', ''monotonically non-decreasing'' cumulative distribution function F : \mathbb{R} \rightarrow [0,1] satisfying \lim_{x\to-\infty}F(x)=0 and \lim_{x\to\infty}F(x)=1. In the case of a scalar continuous distribution, it gives the area under the probability density function from minus infinity to x. Cumulative distribution functions are also used to specify the distribution of multivariate random variables.
Definition
The cumulative distribution function of a real-valued random variable X is the function given by F_X(x) = \operatorname{P}(X \leq x), where the right-hand side represents the probability that the random variable X takes on a value less tha ...
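As an illustrative sketch of the defining relation F(x) = P(X <= x) (not from the excerpt), the code below compares an empirical CDF built from samples against SciPy's exact normal CDF; the sample size and evaluation point are arbitrary.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
samples = rng.standard_normal(100_000)

def empirical_cdf(x, samples):
    # fraction of samples not exceeding x approximates P(X <= x)
    return np.mean(samples <= x)

print(empirical_cdf(1.0, samples), norm.cdf(1.0))   # both close to 0.8413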



Exponential Distribution
In probability theory and statistics, the exponential distribution is the probability distribution of the time between events in a Poisson point process, i.e., a process in which events occur continuously and independently at a constant average rate. It is a particular case of the gamma distribution. It is the continuous analogue of the geometric distribution, and it has the key property of being memoryless. In addition to being used for the analysis of Poisson point processes, it is found in various other contexts. The exponential distribution is not the same as the class of exponential families of distributions. This is a large class of probability distributions that includes the exponential distribution as one of its members, but also includes many other distributions, like the normal, binomial, gamma, and Poisson distributions.
Definitions
Probability density function
The probability density function (pdf) of an exponential distribution is
: f(x;\lambda) = \begin{cases} \lambda e^{-\lambda x} & x \geq 0, \\ 0 & x < 0 \end{cases} ...
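A short sketch of the memoryless property mentioned above (not from the excerpt), assuming NumPy: for an exponential variable, P(X > s + t | X > s) equals P(X > t); the rate and the values of s and t are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
lam = 1.5
x = rng.exponential(scale=1.0 / lam, size=1_000_000)

s, t = 0.4, 0.7
lhs = np.mean(x[x > s] > s + t)   # conditional survival past s + t, given survival past s
rhs = np.mean(x > t)              # unconditional survival past t
print(lhs, rhs)                   # both near exp(-lam * t), about 0.35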



Hyper-exponential Distribution
In probability theory, a hyperexponential distribution is a continuous probability distribution whose probability density function of the random variable ''X'' is given by
: f_X(x) = \sum_{i=1}^n f_{Y_i}(x)\;p_i,
where each ''Y''''i'' is an exponentially distributed random variable with rate parameter ''λ''''i'', and ''p''''i'' is the probability that ''X'' will take on the form of the exponential distribution with rate ''λ''''i''. It is named the ''hyper''exponential distribution since its coefficient of variation is greater than that of the exponential distribution, whose coefficient of variation is 1, and the hypoexponential distribution, which has a coefficient of variation smaller than one. While the exponential distribution is the continuous analogue of the geometric distribution, the hyperexponential distribution is not analogous to the hypergeometric distribution. The hyperexponential distribution is an example of a mixture density. An example of a hyperexponential random var ...
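A hedged sketch of the mixture construction (not from the excerpt): pick component i with probability p_i, then draw from an exponential with rate lambda_i. NumPy is assumed, and the rates and weights are example values.

import numpy as np

def sample_hyperexponential(rates, probs, size=100_000, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    components = rng.choice(len(rates), size=size, p=probs)   # which branch each draw uses
    scales = 1.0 / np.asarray(rates)
    return rng.exponential(scale=scales[components])

x = sample_hyperexponential([1.0, 10.0], [0.5, 0.5])
print(x.mean())             # close to 0.5 * 1.0 + 0.5 * 0.1 = 0.55
print(x.std() / x.mean())   # coefficient of variation, greater than 1 for this mixture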