Line Sampling

Line sampling is a method used in reliability engineering to compute small (i.e., rare-event) failure probabilities encountered in engineering systems. The method is particularly suitable for high-dimensional reliability problems in which the performance function exhibits moderate non-linearity with respect to the uncertain parameters. It is suitable for analyzing black box systems and, unlike the importance sampling method of variance reduction, does not require detailed knowledge of the system.

The basic idea behind line sampling is to refine estimates obtained from the first-order reliability method (FORM), which may be inaccurate due to the non-linearity of the limit state function. Conceptually, this is achieved by averaging the results of a number of FORM simulations. In practice, it is made possible by identifying an importance direction \boldsymbol{\alpha} in the input parameter space, which points towards the region that contributes most strongly to the overall failure probability. The importance direction is closely related to the center of mass of the failure region, or to the failure point with the highest probability density, which, once the random variables of the problem have been transformed into the standard normal space, is often the point of the limit state surface closest to the origin. Once the importance direction has been set to point towards the failure region, samples are randomly generated in the standard normal space and, for each sample, a line is drawn parallel to the importance direction; the distance along the line to the limit state surface determines the probability of failure associated with that sample. These per-line failure probabilities are then averaged to obtain an improved estimate.


Mathematical approach

Firstly, the importance direction must be determined. This can be achieved by finding the design point or by computing the gradient of the limit state function in the standard normal space. A set of samples is then generated using Monte Carlo simulation in the standard normal space. For each sample \boldsymbol{x}, the probability of failure along the line through \boldsymbol{x} parallel to the importance direction is defined as:

: p_f(\boldsymbol{x}) = \int_{-\infty}^{+\infty} I(\boldsymbol{x} + \beta \boldsymbol{\alpha}) \, \varphi(\beta) \, \mathrm{d}\beta

where the indicator function I(\cdot) is equal to one for samples falling in the failure domain \Omega_f and zero otherwise:

: I(\boldsymbol{x}) = \begin{cases} 1 & \text{if } \boldsymbol{x} \in \Omega_f \\ 0 & \text{otherwise} \end{cases}

Here \boldsymbol{\alpha} is the importance direction, \varphi is the probability density function of the standard Gaussian distribution, and \beta is a real number parameterizing the position along the line. In practice, the root of a nonlinear function must be found to estimate the partial probability of failure along each line: if \bar{\beta}^{(i)} denotes the distance at which line i crosses the limit state surface, then (for a single crossing) the partial probability of failure along that line is p_f^{(i)} = \Phi(-\bar{\beta}^{(i)}), where \Phi is the standard normal cumulative distribution function. The root is found either by interpolating a few evaluations of the performance function along the line, or by using the Newton–Raphson method. The global probability of failure is the mean of the probabilities of failure on the lines:

: \tilde{p}_f = \frac{1}{N_L} \sum_{i=1}^{N_L} p_f^{(i)}

where N_L is the total number of lines used in the analysis and the p_f^{(i)} are the partial probabilities of failure estimated along the individual lines.

For problems in which the dependence of the performance function on the uncertain parameters is only moderately non-linear, setting the importance direction as the gradient vector of the performance function in the underlying standard normal space leads to highly efficient line sampling. In general it can be shown that the variance of the line sampling estimator is always smaller than that of conventional Monte Carlo simulation, and hence the line sampling algorithm converges more quickly. The rate of convergence can be improved further by recent advancements that allow the importance direction to be updated repeatedly during the simulation; this is known as adaptive line sampling.
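The procedure can be illustrated with a short, self-contained sketch. The following Python code is an illustration under simplifying assumptions, not a reference implementation: it estimates the importance direction from a finite-difference gradient at the origin, draws lines through standard normal samples, finds a single root of the performance function along each line by bracketing, and averages the per-line probabilities \Phi(-\bar{\beta}^{(i)}). The function names and the linear example limit state are hypothetical choices for demonstration only.

```python
# Minimal line sampling sketch (illustrative; not the article's reference code).
# Assumes a user-supplied performance function g(x) with g(x) <= 0 denoting failure.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def line_sampling(g, dim, n_lines=100, beta_bracket=(0.0, 10.0), seed=0):
    rng = np.random.default_rng(seed)

    # Importance direction: negative normalized finite-difference gradient of g
    # at the origin, pointing towards the failure region g <= 0.
    eps = 1e-6
    grad = np.array([(g(eps * e) - g(-eps * e)) / (2 * eps)
                     for e in np.eye(dim)])
    alpha = -grad / np.linalg.norm(grad)

    pf_lines = []
    for _ in range(n_lines):
        x = rng.standard_normal(dim)
        # Project the sample onto the hyperplane orthogonal to alpha, so that
        # beta measures signed distance along the importance direction.
        x_perp = x - (x @ alpha) * alpha
        h = lambda beta: g(x_perp + beta * alpha)
        # Single-root assumption: the line crosses the limit state once within
        # the bracket. Lines with no sign change contribute zero (crude handling).
        if h(beta_bracket[0]) * h(beta_bracket[1]) > 0:
            pf_lines.append(0.0)
            continue
        beta_root = brentq(h, *beta_bracket)
        pf_lines.append(norm.sf(beta_root))  # Phi(-beta_root)

    return np.mean(pf_lines)

# Example: linear limit state g(x) = b - a.x in 10 dimensions; the exact
# failure probability is Phi(-b / ||a||) = Phi(-3.5).
if __name__ == "__main__":
    a = np.ones(10)
    b = 3.5 * np.linalg.norm(a)
    g = lambda x: b - a @ x
    print("line sampling estimate:", line_sampling(g, dim=10))
    print("exact value:           ", norm.sf(3.5))
```

For this linear limit state every line yields the same root, so the estimator has zero variance and reproduces the exact value; this reflects the fact that line sampling is exact for linear performance functions in the standard normal space.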


Industrial application

The algorithm is particularly useful for performing reliability analysis on computationally expensive industrial black box models, since the limit state function can be non-linear and the number of samples required is lower than for other reliability analysis techniques such as subset simulation. The algorithm can also be used to efficiently propagate epistemic uncertainty in the form of probability boxes, or random sets. A numerical implementation of the method is available in the open source software OpenCOSSAN.
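As a rough illustration of the epistemic-uncertainty use just mentioned, the sketch below (reusing the hypothetical line_sampling function from the previous example) bounds the failure probability when the threshold b of the linear example is only known to lie in an interval, a simple special case of a probability box. Since the failure probability of that example is monotone decreasing in b, evaluating at the interval endpoints yields bounds; the interval values here are arbitrary assumptions for demonstration.

```python
import numpy as np

# Interval-valued (epistemic) threshold: b is only known to lie in [b_lo, b_hi].
a = np.ones(10)
b_lo, b_hi = 3.0 * np.linalg.norm(a), 4.0 * np.linalg.norm(a)

# pf decreases as b grows, so the interval endpoints bound the failure probability.
pf_upper = line_sampling(lambda x: b_lo - a @ x, dim=10)  # smallest b -> largest pf
pf_lower = line_sampling(lambda x: b_hi - a @ x, dim=10)  # largest b -> smallest pf
print(f"failure probability bounds: [{pf_lower:.3e}, {pf_upper:.3e}]")
```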


See also

* Rare event sampling
* Curse of dimensionality
* Quantitative risk assessment

