



Cumulative Frequency Analysis
Cumulative frequency analysis is the analysis of the frequency of occurrence of values of a phenomenon less than a reference value. The phenomenon may be time- or space-dependent. Cumulative frequency is also called ''frequency of non-exceedance''. Cumulative frequency analysis is performed to gain insight into how often a certain phenomenon is below a certain value. This may help in describing or explaining a situation in which the phenomenon is involved, or in planning interventions, for example in flood protection (Benson, M.A. 1960. Characteristics of frequency curves based on a theoretical 1000-year record. In: T. Dalrymple (ed.), Flood Frequency Analysis. U.S. Geological Survey Water Supply Paper 1543-A, pp. 51–71). This statistical technique can be used to estimate how likely an event such as a flood is to recur in the future, based on how often it occurred in the past. It can be adapted to account for factors such as climate change causing wetter winters and drier ...
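As an illustration, the sketch below estimates empirical non-exceedance frequencies and the corresponding return periods for a small, hypothetical series of annual maximum discharges; the Weibull plotting position m/(n + 1) is one common choice among several.

# Minimal sketch of an empirical cumulative frequency (non-exceedance) analysis.
# The discharge values (m^3/s) are hypothetical; the Weibull plotting position
# m/(n + 1) is one of several conventions in use.
import numpy as np

discharges = np.array([410.0, 520.0, 380.0, 600.0, 455.0, 700.0, 390.0, 540.0])

ranked = np.sort(discharges)                 # ascending order
n = ranked.size
ranks = np.arange(1, n + 1)                  # m = 1..n
non_exceedance = ranks / (n + 1)             # P(X <= x), Weibull plotting position
exceedance = 1.0 - non_exceedance            # P(X > x)
return_period = 1.0 / exceedance             # average recurrence interval (years)

for x, p, t in zip(ranked, non_exceedance, return_period):
    print(f"x = {x:6.1f}  P(X <= x) = {p:.3f}  return period = {t:5.1f} yr")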



Gumbel Distribution
In probability theory and statistics, the Gumbel distribution (also known as the type-I generalized extreme value distribution) is used to model the distribution of the maximum (or the minimum) of a number of samples of various distributions. This distribution might be used to represent the distribution of the maximum level of a river in a particular year if there were a list of maximum values for the past ten years. It is useful in predicting the chance that an extreme earthquake, flood, or other natural disaster will occur. The potential applicability of the Gumbel distribution to represent the distribution of maxima relates to extreme value theory, which indicates that it is likely to be useful if the distribution of the underlying sample data is of the normal or exponential type. ''This article uses the Gumbel distribution to model the distribution of the maximum value''. ''To model the minimum value, use the negative of the original values.'' The Gumbel distribution is a par ...
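A minimal sketch of how this might look in practice, assuming SciPy is available and using a hypothetical ten-year series of annual maximum water levels: fit a Gumbel distribution and read off an estimated 100-year level.

# Minimal sketch: fit a Gumbel (type-I extreme value) distribution to hypothetical
# annual maximum water levels and estimate the level with a 100-year return period.
# SciPy's gumbel_r models maxima; to model minima, negate the data first.
import numpy as np
from scipy.stats import gumbel_r

annual_maxima = np.array([3.1, 2.8, 3.5, 4.0, 2.9, 3.3, 3.8, 3.0, 3.6, 4.2])

loc, scale = gumbel_r.fit(annual_maxima)          # maximum-likelihood fit

return_period = 100.0
p_non_exceedance = 1.0 - 1.0 / return_period      # P(X <= x) = 0.99
x_100 = gumbel_r.ppf(p_non_exceedance, loc=loc, scale=scale)

print(f"location = {loc:.2f}, scale = {scale:.2f}")
print(f"estimated 100-year level: {x_100:.2f}")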



Confidence Interval
In frequentist statistics, a confidence interval (CI) is a range of estimates for an unknown parameter. A confidence interval is computed at a designated ''confidence level''; the 95% confidence level is most common, but other levels, such as 90% or 99%, are sometimes used. The confidence level represents the long-run proportion of corresponding CIs that contain the true value of the parameter. For example, out of all intervals computed at the 95% level, 95% of them should contain the parameter's true value. Factors affecting the width of the CI include the sample size, the variability in the sample, and the confidence level. All else being the same, a larger sample produces a narrower confidence interval, greater variability in the sample produces a wider confidence interval, and a higher confidence level produces a wider confidence interval. Definition Let ''X'' be a random sample from a probability distribution with statistical parameter ''θ'', which is a quantity to be estima ...
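A minimal sketch of a 95% confidence interval for a population mean, assuming a small hypothetical sample and using the t distribution (appropriate when the population standard deviation is unknown):

# Minimal sketch: 95% confidence interval for a mean from a hypothetical sample.
import numpy as np
from scipy import stats

sample = np.array([9.8, 10.2, 10.1, 9.9, 10.4, 10.0, 9.7, 10.3])

n = sample.size
mean = sample.mean()
sem = stats.sem(sample)                     # standard error of the mean

low, high = stats.t.interval(0.95, df=n - 1, loc=mean, scale=sem)
print(f"95% CI for the mean: ({low:.3f}, {high:.3f})")

With a larger sample or lower confidence level, the same code yields a narrower interval, matching the factors described above.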



Binomial Distribution
In probability theory and statistics, the binomial distribution with parameters ''n'' and ''p'' is the discrete probability distribution of the number of successes in a sequence of ''n'' independent experiments, each asking a yes–no question, and each with its own Boolean-valued outcome: ''success'' (with probability ''p'') or ''failure'' (with probability ''q'' = 1 − ''p''). A single success/failure experiment is also called a Bernoulli trial or Bernoulli experiment, and a sequence of outcomes is called a Bernoulli process; for a single trial, i.e., ''n'' = 1, the binomial distribution is a Bernoulli distribution. The binomial distribution is the basis for the popular binomial test of statistical significance. The binomial distribution is frequently used to model the number of successes in a sample of size ''n'' drawn with replacement from a population of size ''N''. If the sampling is carried out without replacement, the draws are not independent and so the resulting ...
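A minimal sketch of the binomial probability mass function, P(K = k) = C(n, k) p^k (1 − p)^(n − k), computed directly from the formula and checked against SciPy for hypothetical values of ''n'' and ''p'':

# Minimal sketch: binomial PMF computed directly and via SciPy, for hypothetical n, p.
from math import comb
from scipy.stats import binom

n, p = 10, 0.3

for k in range(n + 1):
    pmf_direct = comb(n, k) * p**k * (1 - p) ** (n - k)   # C(n, k) p^k q^(n-k)
    pmf_scipy = binom.pmf(k, n, p)
    print(f"k = {k:2d}  P(K = k) = {pmf_direct:.5f}  (scipy: {pmf_scipy:.5f})")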


Frequency Of Exceedance
The frequency of exceedance, sometimes called the annual rate of exceedance, is the frequency with which a random process exceeds some critical value. Typically, the critical value is far from the mean. It is usually defined in terms of the number of peaks of the random process that are outside the boundary. It has applications related to predicting extreme events, such as major earthquakes and floods. Definition The frequency of exceedance is the number of times a stochastic process exceeds some critical value, usually a critical value far from the process' mean, per unit time. Counting exceedance of the critical value can be accomplished either by counting peaks of the process that exceed the critical value or by counting upcrossings of the critical value, where an ''upcrossing'' is an event where the instantaneous value of the process crosses the critical value with positive slope. This article assumes the two methods of counting exceedance are equivalent and that the process ha ...
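A minimal sketch of the upcrossing-counting approach, using a simulated, hypothetical correlated Gaussian signal in place of real data:

# Minimal sketch: estimate the frequency of exceedance by counting upcrossings of a
# critical value (samples where the process moves from below the level to at/above it).
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01                                        # hypothetical sampling interval
n = 200_000
noise = rng.normal(size=n)
x = np.convolve(noise, np.ones(25) / 25, mode="same")   # a smoothed (correlated) process

critical = 2.0 * x.std()                         # a level well above the mean

upcrossings = np.count_nonzero((x[:-1] < critical) & (x[1:] >= critical))

duration = n * dt
print(f"upcrossings counted: {upcrossings}")
print(f"frequency of exceedance ~ {upcrossings / duration:.2f} per unit time")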



Probability Theory
Probability theory is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory treats the concept in a rigorous mathematical manner by expressing it through a set of axioms. Typically these axioms formalise probability in terms of a probability space, which assigns a measure taking values between 0 and 1, termed the probability measure, to a set of outcomes called the sample space. Any specified subset of the sample space is called an event. Central subjects in probability theory include discrete and continuous random variables, probability distributions, and stochastic processes (which provide mathematical abstractions of non-deterministic or uncertain processes or measured quantities that may either be single occurrences or evolve over time in a random fashion). Although it is not possible to perfectly predict random events, much can be said about their behavior. Two major results in prob ...
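A minimal sketch of a finite probability space (a single fair die roll), with the sample space as a set of outcomes, events as subsets, and a probability measure taking values between 0 and 1:

# Minimal sketch: a finite probability space for one fair die roll.
from fractions import Fraction

sample_space = {1, 2, 3, 4, 5, 6}
measure = {outcome: Fraction(1, 6) for outcome in sample_space}   # uniform measure

def prob(event: set) -> Fraction:
    """Probability of an event, i.e. a subset of the sample space."""
    return sum(measure[o] for o in event & sample_space)

even = {2, 4, 6}
at_least_five = {5, 6}
print(prob(even))                   # 1/2
print(prob(even | at_least_five))   # 2/3, the measure of the union {2, 4, 5, 6}
print(prob(sample_space))           # 1, as the axioms require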




Binomial Distribution Pmf
Binomial may refer to:
In mathematics
*Binomial (polynomial), a polynomial with two terms
*Binomial coefficient, numbers appearing in the expansions of powers of binomials
*Binomial QMF, a perfect-reconstruction orthogonal wavelet decomposition
*Binomial theorem, a theorem about powers of binomials
*Binomial type, a property of sequences of polynomials
In probability and statistics
*Binomial distribution, a type of probability distribution
*Binomial process
*Binomial test, a test of significance
In computing science
*Binomial heap, a data structure
In linguistics
*Binomial pair, a sequence of two or more words or phrases in the same grammatical category, having some semantic relationship and joined by some syntactic device
In biology
*Binomial nomenclature, a Latin two-term name for a species, such as ''Sequoia sempervirens''
In finance
*Binomial options pricing model, a numerical method for the valuation of options
In politics
*Binomial voting system




The Black Swan (Taleb Book)
''The Black Swan: The Impact of the Highly Improbable'' is a 2007 book by Nassim Nicholas Taleb, who is a former options trader. The book focuses on the extreme impact of rare and unpredictable outlier events—and the human tendency to find simplistic explanations for these events, retrospectively. Taleb calls this the Black Swan theory. The book covers subjects relating to knowledge and aesthetics, as well as ways of life, and uses elements of fiction and anecdotes from the author's life to elaborate his theories. It spent 36 weeks on the ''New York Times'' best-seller list. The book is part of Taleb's five-volume series, titled the ''Incerto'', including ''Fooled by Randomness'' (2001), ''The Black Swan'' (2007–2010), ''The Bed of Procrustes'' (2010–2016), ''Antifragile'' (2012), and ''Skin in the Game'' (2018). Coping with Black Swan events A central idea in Taleb's book is not to attempt to predict Black Swan events, but to build robustness to negative events and ...


Random Error
Observational error (or measurement error) is the difference between a measured value of a quantity and its true value (Dodge, Y. (2003) ''The Oxford Dictionary of Statistical Terms'', OUP). In statistics, an error is not necessarily a "mistake". Variability is an inherent part of the results of measurements and of the measurement process. Measurement errors can be divided into two components: ''random'' and ''systematic''. Random errors are errors in measurement that lead to measurable values being inconsistent when repeated measurements of a constant attribute or quantity are taken. Systematic errors are errors that are not determined by chance but are introduced by repeatable processes inherent to the system. Systematic error may also refer to an error with a non-zero mean, the effect of which is not reduced when observations are averaged. Measurement errors can be summarized in terms of accuracy and precision. Measurement error should not be confused with measurement uncert ...
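A minimal sketch, using simulated measurements of a known true value with a hypothetical bias and noise level, of why averaging reduces the random component of error but leaves the systematic component in place:

# Minimal sketch: random vs. systematic error in simulated measurements.
import numpy as np

rng = np.random.default_rng(42)

true_value = 100.0
systematic_bias = 0.5                    # repeatable offset, e.g. a miscalibrated scale
random_sd = 2.0                          # spread of the random errors

measurements = true_value + systematic_bias + rng.normal(0.0, random_sd, size=1000)

mean = measurements.mean()
print(f"mean of 1000 measurements: {mean:.3f}")
print(f"remaining error vs. true value: {mean - true_value:+.3f}  (close to the systematic bias)")
print(f"sample standard deviation: {measurements.std(ddof=1):.3f}  (reflects the random error)")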



El Niño
El Niño is the warm phase of the El Niño–Southern Oscillation (ENSO) and is associated with a band of warm ocean water that develops in the central and east-central equatorial Pacific (approximately between the International Date Line and 120°W), including the area off the Pacific coast of South America. The ENSO is the cycle of warm and cold sea surface temperature (SST) of the tropical central and eastern Pacific Ocean. El Niño is accompanied by high air pressure in the western Pacific and low air pressure in the eastern Pacific. El Niño phases are known to last close to four years; however, records demonstrate that the cycles have lasted between two and seven years. During the development of El Niño, rainfall develops between September and November. The cool phase of ENSO is La Niña (Spanish for "The Girl"), with SSTs in the eastern Pacific below average, and air pressure high in the eastern Pacific and low in the western Pacific. The ENSO cycle, including ...