False Discovery Rate
In statistics, the false discovery rate (FDR) is a method of conceptualizing the rate of type I errors in null hypothesis testing when conducting multiple comparisons. FDR-controlling procedures are designed to control the FDR, which is the expected proportion of "discoveries" (rejected null hypotheses) that are false (incorrect rejections of the null). Equivalently, the FDR is the expected ratio of the number of false positive classifications (false discoveries) to the total number of positive classifications (rejections of the null). The total number of rejections of the null includes both the number of false positives (FP) and true positives (TP). Simply put, FDR = FP / (FP + TP). FDR-controlling procedures provide less stringent control of Type I errors compared to family-wise error rate (FWER) controlling procedures (such as the Bonferroni correction), which control the probability of at least one Type I error. Thus, FDR-controlling procedures have greater power, at the cost of an increased number of Type I errors ...
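A minimal sketch of the ratio above in Python (the function name and counts are illustrative; strictly, the FDR is the expectation of this realized proportion over repeated experiments, with the convention that the ratio is 0 when nothing is rejected):

```python
def false_discovery_rate(fp: int, tp: int) -> float:
    """Realized false discovery proportion: FP / (FP + TP).

    Returns 0.0 when there are no rejections at all, the usual
    convention that avoids division by zero."""
    rejections = fp + tp
    return fp / rejections if rejections > 0 else 0.0

# Hypothetical outcome: 50 nulls rejected, of which 10 were true nulls.
print(false_discovery_rate(fp=10, tp=40))  # 0.2
```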



Statistics
Statistics (from German Statistik, "description of a state, a country") is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data. In applying statistics to a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model to be studied. Populations can be diverse groups of people or objects such as "all people living in a country" or "every atom composing a crystal". Statistics deals with every aspect of data, including the planning of data collection in terms of the design of surveys and experiments. When census data (comprising every member of the target population) cannot be collected, statisticians collect data by developing specific experiment designs and survey samples. Representative sampling assures that inferences and conclusions can reasonably extend from the sample to the population as a whole ...




Bonferroni Adjustment
In statistics, the Bonferroni correction is a method to counteract the multiple comparisons problem. The method is named for its use of the Bonferroni inequalities; its application to confidence intervals was described by Olive Jean Dunn. Statistical hypothesis testing is based on rejecting the null hypothesis when the likelihood of the observed data would be low if the null hypothesis were true. If multiple hypotheses are tested, the probability of observing a rare event increases, and therefore the likelihood of incorrectly rejecting a null hypothesis (i.e., making a Type I error) increases. The Bonferroni correction compensates for that increase by testing each individual hypothesis at a significance level of \alpha/m, where \alpha is the desired overall alpha level and m is the number of hypotheses. For example, if a trial is testing m = 20 hypotheses with a desired overall \alpha = 0.05, then the Bonferroni correction would test each individual hypothesis at \alpha = 0.05/20 = 0.0025 ...
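The correction itself is a one-line threshold change. A sketch in Python, using the trial from the excerpt (the helper name and p-values are made up for illustration):

```python
def bonferroni_reject(p_values, alpha=0.05):
    """Reject the i-th null hypothesis iff p_i <= alpha / m."""
    threshold = alpha / len(p_values)   # alpha/m
    return [p <= threshold for p in p_values]

# m = 20 hypotheses at overall alpha = 0.05: each is tested at 0.0025,
# so only the p = 0.001 result survives the correction.
p_vals = [0.001, 0.004, 0.03] + [0.5] * 17
print(sum(bonferroni_reject(p_vals)))  # 1
```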



Storey-Tibshirani Procedure
In statistical hypothesis testing, specifically multiple hypothesis testing, the q-value in the Storey procedure provides a means to control the positive false discovery rate (pFDR). Just as the p-value gives the expected false positive rate obtained by rejecting the null hypothesis for any result with an equal or smaller p-value, the q-value gives the expected pFDR obtained by rejecting the null hypothesis for any result with an equal or smaller q-value. In statistics, testing multiple hypotheses simultaneously using methods appropriate for testing single hypotheses tends to yield many false positives: the so-called multiple comparisons problem. For example, assume that one were to test 1,000 null hypotheses, all of which are true, and (as is conventional in single hypothesis testing) to reject null hypotheses with a significance level of 0.05; due to random chance, one would expect 5% of the results to appear significant (p < 0.05), yielding 50 false positives ...
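The arithmetic behind that example is easy to check by simulation, since p-values of true null hypotheses are uniformly distributed on [0, 1] (a sketch of the multiple-comparisons effect only, not of the Storey q-value computation itself):

```python
import random

random.seed(1)  # fixed seed so the run is reproducible
# 1,000 true null hypotheses, each yielding a uniform p-value
p_values = [random.random() for _ in range(1000)]
false_positives = sum(p < 0.05 for p in p_values)
print(false_positives)  # about 0.05 * 1000 = 50
```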



Euler–Mascheroni Constant
Euler's constant (sometimes called the Euler–Mascheroni constant) is a mathematical constant, usually denoted by the lowercase Greek letter gamma (\gamma), defined as the limiting difference between the harmonic series and the natural logarithm, denoted here by \log:

\gamma = \lim_{n\to\infty}\left(-\log n + \sum_{k=1}^{n} \frac{1}{k}\right) = \int_1^\infty\left(-\frac{1}{x} + \frac{1}{\lfloor x\rfloor}\right)\,\mathrm{d}x.

Here, \lfloor x\rfloor represents the floor function. The numerical value of Euler's constant, to 50 decimal places, is 0.57721566490153286060651209008240243104215933593992. The constant first appeared in a 1734 paper by the Swiss mathematician Leonhard Euler, titled De Progressionibus harmonicis observationes (Eneström Index 43), where he described it as "worthy of serious consideration". Euler initially calculated the constant's value to 6 decimal places. In 1781, he calculated it to 16 decimal places. Euler used the notations C and O for the constant. The Italian mathematician Lorenzo Mascheroni attempted to calculate the constant to 32 decimal places ...
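The limit definition can be checked numerically; a small Python sketch showing H_n - \log n approaching \gamma \approx 0.5772:

```python
import math

# The difference between the n-th harmonic number and log(n)
# decreases toward Euler's constant as n grows.
for n in (10, 1_000, 100_000):
    h_n = sum(1 / k for k in range(1, n + 1))
    print(n, h_n - math.log(n))
# 10      0.6263...
# 1000    0.5777...
# 100000  0.5772...
```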



Harmonic Number
In mathematics, the n-th harmonic number is the sum of the reciprocals of the first n natural numbers:

H_n = 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n} = \sum_{k=1}^{n} \frac{1}{k}.

Starting from n = 1, the sequence of harmonic numbers begins: 1, \frac{3}{2}, \frac{11}{6}, \frac{25}{12}, \frac{137}{60}, \dots Harmonic numbers are related to the harmonic mean in that the n-th harmonic number is also n times the reciprocal of the harmonic mean of the first n positive integers. Harmonic numbers have been studied since antiquity and are important in various branches of number theory. They are sometimes loosely termed harmonic series, are closely related to the Riemann zeta function, and appear in the expressions of various special functions. The harmonic numbers roughly approximate the natural logarithm function and thus the associated harmonic series grows without limit, albeit slowly. In 1737, Leonhard Euler used the divergence of the harmonic series to provide a new proof of the infinity of prime numbers. His work was extended into the ...
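The definition translates directly into code; a short sketch with exact rational arithmetic reproducing the opening terms of the sequence:

```python
from fractions import Fraction

def harmonic(n: int) -> Fraction:
    """Exact n-th harmonic number: H_n = sum_{k=1}^{n} 1/k."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

print([str(harmonic(n)) for n in range(1, 6)])
# ['1', '3/2', '11/6', '25/12', '137/60']
```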



Statistical Independence
Independence is a fundamental notion in probability theory, as in statistics and the theory of stochastic processes. Two events are independent, statistically independent, or stochastically independent if, informally speaking, the occurrence of one does not affect the probability of occurrence of the other or, equivalently, does not affect the odds. Similarly, two random variables are independent if the realization of one does not affect the probability distribution of the other. When dealing with collections of more than two events, two notions of independence need to be distinguished. The events are called pairwise independent if any two events in the collection are independent of each other, while mutual independence (or collective independence) of events means, informally speaking, that each event is independent of any combination of other events in the collection. A similar notion exists for collections of random variables. Mutual independence implies pairwise independence ...
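A standard worked example of the pairwise/mutual distinction (the event labels are ours): toss two fair coins and let A be "the first coin shows heads", B "the second coin shows heads", and C "exactly one coin shows heads". Then

```latex
\[
P(A)=P(B)=P(C)=\tfrac12, \qquad
P(A\cap B)=P(A\cap C)=P(B\cap C)=\tfrac14=\tfrac12\cdot\tfrac12,
\]
\[
\text{yet}\qquad P(A\cap B\cap C)=0\neq\tfrac18=P(A)\,P(B)\,P(C),
\]
```

so the three events are pairwise independent but not mutually independent.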



Classification Of Multiple Hypothesis Tests
The multiple comparisons, multiplicity, or multiple testing problem occurs in statistics when one considers a set of statistical inferences simultaneously or estimates a subset of parameters selected based on the observed values. The larger the number of inferences made, the more likely erroneous inferences become. Several statistical techniques have been developed to address this problem, for example, by requiring a stricter significance threshold for individual comparisons, so as to compensate for the number of inferences being made. Methods for family-wise error rate give the probability of false positives resulting from the multiple comparisons problem. The problem of multiple comparisons received increased attention in the 1950s with the work of statisticians such as Tukey and Scheffé. Over the ensuing decades, many procedures were developed to address the problem. In 1996, the first international conference on multiple comparison procedures took place in Tel Aviv ...
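To see why a stricter per-comparison threshold is needed, consider the family-wise error rate under the simplifying assumption of independent tests (an assumption of this sketch, not of the methods in general):

```python
# FWER for m independent tests, each run at level alpha:
# P(at least one false positive) = 1 - (1 - alpha)^m
alpha = 0.05
for m in (1, 5, 20, 100):
    print(m, round(1 - (1 - alpha) ** m, 3))
# 1 0.05   5 0.226   20 0.642   100 0.994
```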



Division By Zero
In mathematics, division by zero, division where the divisor (denominator) is zero, is a unique and problematic special case. Using fraction notation, the general example can be written as \tfrac{a}{0}, where a is the dividend (numerator). The usual definition of the quotient in elementary arithmetic is the number which yields the dividend when multiplied by the divisor. That is, c = \tfrac{a}{b} is equivalent to c \cdot b = a. By this definition, the quotient q = \tfrac{a}{0} is nonsensical, as the product q \cdot 0 is always 0 rather than some other number a. Following the ordinary rules of elementary algebra while allowing division by zero can create a mathematical fallacy, a subtle mistake leading to absurd results. To prevent this, the arithmetic of real numbers and more general numerical structures called fields leaves division by zero undefined, and situations where division by zero might occur must be treated with care ...
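The argument that no quotient can work is easy to spot-check in code; Python, like most languages working over exact numbers, simply refuses the operation:

```python
# q * 0 is 0 for every q, so no candidate quotient can
# recover a dividend a != 0 (spot-checked over a finite range).
a = 5
print(any(q * 0 == a for q in range(-1000, 1001)))  # False

try:
    a / 0
except ZeroDivisionError as err:
    print(err)  # "division by zero" -- the case is left undefined
```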



Expected Value
In probability theory, the expected value (also called expectation, expectancy, expectation operator, mathematical expectation, mean, expectation value, or first moment) is a generalization of the weighted average. Informally, the expected value is the mean of the possible values a random variable can take, weighted by the probability of those outcomes. Since it is obtained through arithmetic, the expected value sometimes may not even be included in the sample data set; it is not the value you would expect to get in reality. The expected value of a random variable with a finite number of outcomes is a weighted average of all possible outcomes. In the case of a continuum of possible outcomes, the expectation is defined by integration. In the axiomatic foundation for probability provided by measure theory, the expectation is given by Lebesgue integration. The expected value of a random variable X is often denoted by E(X), E[X], or EX ...
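For the finite case, the weighted average is a one-liner; a sketch using a fair six-sided die, which also shows that the expected value need not be an attainable outcome:

```python
from fractions import Fraction

# E[X] = sum over outcomes x of x * P(X = x)
p = Fraction(1, 6)                      # fair die: uniform probabilities
expected = sum(x * p for x in range(1, 7))
print(expected)  # 7/2 -- a value no single roll can show
```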


Simes Procedure
SIMes (or H2IMes) is an N-heterocyclic carbene. It is a white solid that dissolves in organic solvents. The compound is used as a ligand in organometallic chemistry, the study of chemical compounds containing at least one chemical bond between a carbon atom of an organic molecule and a metal, including alkali, alkaline earth, and transition metals. It is structurally related to the more common ligand IMes but with a saturated backbone (the S of SIMes indicates the saturated backbone); it is slightly more flexible and is a component in Grubbs II. It is prepared by alkylation of trimethylaniline by dibromoethane followed by ring closure and dehydrohalogenation ...