
Consider an example where the probability of developing lung cancer among smokers is 20% and among non-smokers 1%. This situation can be expressed in the following 2 × 2 table:

|  | Lung cancer | No lung cancer | Total |
|---|---|---|---|
| Smokers | a = 20 | b = 80 | 100 |
| Non-smokers | c = 1 | d = 99 | 100 |

Here, a = 20, b = 80, c = 1, and d = 99. The relative risk of cancer associated with smoking is then

${\displaystyle RR={\frac {a/(a+b)}{c/(c+d)}}={\frac {20/100}{1/100}}=20.}$

Smokers would be twenty times as likely as non-smokers to develop lung cancer.
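The calculation above can be sketched directly from the table counts; this is a minimal illustration using the smoking example's values, with `relative_risk` an illustrative helper name:

```python
# Minimal sketch: relative risk from the counts of a 2x2 table,
# with a, b, c, d as defined in the text above.

def relative_risk(a, b, c, d):
    """RR = [a/(a+b)] / [c/(c+d)]: risk in the exposed over risk in the unexposed."""
    return (a / (a + b)) / (c / (c + d))

# Smokers with/without cancer, non-smokers with/without cancer:
a, b, c, d = 20, 80, 1, 99
print(relative_risk(a, b, c, d))  # RR of 20, up to floating point
```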

The alternative term risk ratio is sometimes used because it is the ratio of the risk in the exposed to the risk in the unexposed.

Relative risk contrasts with the actual or absolute risk, and may be confused with it in the media or elsewhere.

## Statistical use and meaning

Relative risk is frequently used in the statistical analysis of binary outcomes where the outcome of interest has relatively low probability. It is thus well suited to clinical trial data, where it is used to compare the risk of developing a disease in people not receiving a new medical treatment (or receiving a placebo) versus people receiving an established (standard-of-care) treatment. Alternatively, it is used to compare the risk of developing a side effect in people receiving a drug with that in people not receiving the treatment (or receiving a placebo). It is particularly attractive because it can be calculated by hand in the simple case, but it is also amenable to regression modelling, typically in a Poisson regression framework.

In a simple comparison between an experimental group and a control group:

• A relative risk of 1 means there is no difference in risk between the two groups.
• An RR < 1 means the event is less likely to occur in the experimental group than in the control group.
• An RR > 1 means the event is more likely to occur in the experimental group than in the control group.

When the event is not necessarily an adverse one, the term relative probability may be used instead.[3][4]

### Bayesian interpretation

Denote disease by ${\displaystyle D}$, no disease by ${\displaystyle D'}$, exposure by ${\displaystyle E}$, and no exposure by ${\displaystyle E'}$. The relative risk can be written as

${\displaystyle RR={\frac {\Pr(D\mid E)}{\Pr(D\mid E')}}={\frac {\Pr(D\cap E)\Pr(E')}{\Pr(D\cap E')\Pr(E)}}}$

Substituting ${\displaystyle \Pr(D\cap E)=\Pr(E\mid D)\Pr(D)}$ and ${\displaystyle \Pr(D\cap E')=\Pr(E'\mid D)\Pr(D)}$

${\displaystyle RR={\frac {\Pr(E\mid D)/\Pr(E'\mid D)}{\Pr(E)/\Pr(E')}}}$

In other words, the relative risk can be read in Bayesian terms as the posterior ratio of exposure (i.e., the ratio after seeing the disease) normalised by the prior ratio of exposure.[5] In simple terms, if the posterior ratio of exposure is similar to the prior ratio, the relative risk is approximately 1, indicating no association with the disease, since observing the disease did not change beliefs about the exposure. If, on the other hand, the posterior ratio of exposure is smaller or larger than the prior ratio, then the disease has changed the view of the exposure's danger, and the magnitude of this change is the relative risk.

The odds ratio can then be viewed in similar terms as

${\displaystyle OR={\frac {\Pr(D\mid E)/(1-\Pr(D\mid E))}{\Pr(D\mid E')/(1-\Pr(D\mid E'))}}}$

or can alternatively be defined in terms of exposure, in parallel with the definition above, as

${\displaystyle OR={\frac {\Pr(E\mid D)/(1-\Pr(E\mid D))}{\Pr(E\mid D')/(1-\Pr(E\mid D'))}}}$
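The Bayesian identities above can be checked numerically. The sketch below uses an invented joint distribution over disease and exposure, and verifies both that RR equals the posterior exposure ratio over the prior exposure ratio and that the two definitions of the odds ratio coincide:

```python
# Numerical check of the identities above, on a toy joint distribution
# over disease (D/D') and exposure (E/E'); the probabilities are invented.
p_DE, p_DEp = 0.08, 0.02      # Pr(D and E), Pr(D and E')
p_DpE, p_DpEp = 0.22, 0.68    # Pr(D' and E), Pr(D' and E')

pr_E = p_DE + p_DpE           # Pr(E)
pr_D = p_DE + p_DEp           # Pr(D)

rr = (p_DE / pr_E) / (p_DEp / (1 - pr_E))       # Pr(D|E) / Pr(D|E')
post_ratio = (p_DE / pr_D) / (p_DEp / pr_D)     # Pr(E|D) / Pr(E'|D)
prior_ratio = pr_E / (1 - pr_E)                 # Pr(E) / Pr(E')
print(rr, post_ratio / prior_ratio)             # equal, as derived above

# The two OR definitions (in terms of disease, and in terms of exposure):
pr_D_E, pr_D_Ep = p_DE / pr_E, p_DEp / (1 - pr_E)
or_disease = (pr_D_E / (1 - pr_D_E)) / (pr_D_Ep / (1 - pr_D_Ep))
pr_E_D, pr_E_Dp = p_DE / pr_D, p_DpE / (1 - pr_D)
or_exposure = (pr_E_D / (1 - pr_E_D)) / (pr_E_Dp / (1 - pr_E_Dp))
print(or_disease, or_exposure)                  # also equal
```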

### Log transformation for approximating a normal distribution

As a consequence of the delta method, the logarithm of the relative risk has a sampling distribution that is approximately normal with variance that can be estimated by a formula involving the number of subjects in each group and the event rates in each group.[6] This permits the construction of a confidence interval (CI) which is symmetric around log(RR), i.e.,

${\displaystyle CI=\log(RR)\pm \mathrm {SE} \times z_{\alpha }}$

where ${\displaystyle z_{\alpha }}$ is the standard score for the chosen level of significance and SE is the standard error. Taking the antilog of the two bounds of the log-CI gives the lower and upper bounds of an asymmetric confidence interval around the relative risk.

In regression models, the treatment is typically included as a dummy variable along with other factors that may affect risk. The relative risk is usually reported as calculated for the mean of the sample values of the explanatory variables.

### Comparison to the odds ratio

Relative risk is different from the odds ratio, although the two asymptotically approach each other for small probabilities. In the example of the association of smoking with lung cancer considered above, if a is substantially smaller than b, then a/(a + b) ≈ a/b, and if c is similarly much smaller than d, then c/(c + d) ≈ c/d. Thus

${\displaystyle RR={\frac {a/(a+b)}{c/(c+d)}}\approx {\frac {a/b}{c/d}}={\frac {ad}{bc}}=OR,}$

which is the odds ratio.

In fact, the odds ratio has much broader use in statistics, since logistic regression, often associated with clinical trials, works with the log of the odds ratio, not the relative risk. Because the log of the odds ratio is estimated as a linear function of the explanatory variables, the estimated odds ratio associated with the type of treatment would be the same for 70-year-olds and 60-year-olds in a logistic regression model in which the outcome is associated with drug and age, although the relative risks might be significantly different. In cases like this, statistical models of the odds ratio often reflect the underlying mechanisms more effectively.

Since relative risk is a more intuitive measure of effectiveness, the distinction is important especially in cases of medium to high probabilities. If action A carries a risk of 99.9% and action B a risk of 99.0% then the relative risk is just over 1, while the odds associated with action A are more than 10 times higher than the odds with B.
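The contrast can be made concrete with a short sketch: for a rare outcome the two measures nearly coincide, while the 99.9% vs 99.0% example above pulls them far apart:

```python
# Sketch: RR and OR nearly coincide for rare events but diverge
# sharply at high probabilities.
def rel_risk(p1, p2):
    return p1 / p2

def odds_ratio(p1, p2):
    return (p1 / (1 - p1)) / (p2 / (1 - p2))

# Rare outcome (invented rates): RR and OR almost equal.
print(rel_risk(0.010, 0.005), odds_ratio(0.010, 0.005))

# The high-risk example from the text: RR barely above 1, OR above 10.
print(rel_risk(0.999, 0.990), odds_ratio(0.999, 0.990))
```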

In epidemiological research, the odds ratio is commonly used for case-control studies, as odds, but not probabilities, are usually estimated.[7] Relative risk is used in randomized controlled trials and cohort studies.[8]

In statistical modelling, approaches like Poisson regression (for counts of events per unit exposure) have relative risk interpretations: the estimated effect of an explanatory variable is multiplicative on the rate and thus leads to a risk ratio or relative risk. Logistic regression (for binary outcomes, or counts of successes out of a number of trials) must be interpreted in odds-ratio terms: the effect of an explanatory variable is multiplicative on the odds and thus leads to an odds ratio.
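The rate-ratio interpretation can be seen without any regression library: for a Poisson model with a single binary covariate, the maximum-likelihood estimate of exp(β) is simply the ratio of the raw event rates. The counts and person-time below are invented for illustration:

```python
import math

# Sketch: for the log-linear model log(rate) = b0 + b1 * exposed, the
# fitted rate ratio exp(b1) equals the ratio of raw event rates, so the
# coefficient has a direct relative-risk interpretation.
events_exposed, time_exposed = 30, 1000.0       # events, person-years
events_unexposed, time_unexposed = 10, 1000.0

rate_exposed = events_exposed / time_exposed
rate_unexposed = events_unexposed / time_unexposed

# Closed-form MLE of b1 for a single binary covariate:
b1 = math.log(rate_exposed / rate_unexposed)
print(math.exp(b1))   # rate ratio (a threefold rate in this example)
```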

### Statistical significance (confidence) and relative risk

Whether a given relative risk can be considered statistically significant depends on the relative difference between the conditions compared, the number of measurements, and the noise associated with the measurements (of the events considered). In other words, the confidence that a given relative risk is non-random (i.e., not a consequence of chance) depends on the signal-to-noise ratio and the sample size.

Expressed mathematically, the following formula, due to Sackett, gives the confidence that a result is not due to random chance:[9]

${\displaystyle {\text{confidence}}={\frac {\text{signal}}{\text{noise}}}\times {\sqrt {\text{sample size}}}.}$

For clarity, the above formula is presented in tabular form below.

Dependence of confidence on noise, signal and sample size (tabular form)

| Parameter | Parameter increases | Parameter decreases |
|---|---|---|
| Noise | Confidence decreases | Confidence increases |
| Signal | Confidence increases | Confidence decreases |
| Sample size | Confidence increases | Confidence decreases |

In words, the confidence is higher if the noise is lower and/or the sample size is larger and/or the effect size (signal) is increased. The confidence of a relative risk value (and its associated confidence interval) is not dependent on effect size alone. If the sample size is large and the noise is low, a small effect size can be measured with high confidence. Whether a small effect size is considered significant is dependent on the context of the events compared.
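Sackett's relation can be illustrated with a standard two-proportion z-statistic, which factors into exactly this (signal / noise) × √(sample size) form; the proportions below are invented:

```python
import math

# Sketch: a two-proportion z-statistic in Sackett's signal/noise form.
def z_stat(p1, p2, n):
    """z for comparing two proportions with n subjects per group."""
    signal = p1 - p2                           # effect size
    pbar = (p1 + p2) / 2
    noise = math.sqrt(2 * pbar * (1 - pbar))   # SD of the per-subject difference
    return (signal / noise) * math.sqrt(n)

# With signal and noise fixed, quadrupling the sample size doubles z:
for n in (25, 100, 400):
    print(n, round(z_stat(0.5, 0.4, n), 2))
```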

In medicine, small effect sizes (reflected by small relative risk values) are usually considered clinically relevant (if there is great confidence in them) and are frequently used to guide treatment decisions. A relative risk of 1.10 may seem very small, but over a large number of patients it will make a noticeable difference. Whether a given treatment is considered a worthwhile endeavour depends on the risks, benefits and costs.

### Tests

The distribution of the log relative risk is approximately normal with

${\displaystyle X\ \sim \ {\mathcal {N}}(\log(RR),\,\sigma ^{2}).\,}$

The standard error for the log(relative risk) is approximately:[10]

${\displaystyle SE(\log(RR))={\sqrt {[1/a+1/c]-[1/(a+b)+1/(c+d)]}}}$

This is an asymptotic approximation.

The risk ratio confidence intervals are based on the sampling distribution of

${\displaystyle \log _{e}{\frac {p_{1}}{p_{2}}}=\log _{e}{\frac {a/(a+b)}{c/(c+d)}}}$

This is considered to be (approximately) normal with

${\displaystyle m=\log _{e}{\frac {p_{1}}{p_{2}}}}$

and

${\displaystyle s^{2}={\frac {b}{a(a+b)}}+{\frac {d}{c(c+d)}}}$

where m is the mean and s2 is the variance. Approximate 95% confidence intervals (CI) for the relative risk are

${\displaystyle CI=e^{m\pm 1.96s}}$

In applications using this estimator, the sample size should typically be at least 25.
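Putting the pieces of this section together, the sketch below computes an approximate 95% confidence interval for the smoking example from the introduction, using the variance formula above:

```python
import math

# Sketch: 95% CI for the relative risk in the smoking example
# (a = 20, b = 80, c = 1, d = 99), using the formulas above.
a, b, c, d = 20, 80, 1, 99

m = math.log((a / (a + b)) / (c / (c + d)))              # log RR
s = math.sqrt(b / (a * (a + b)) + d / (c * (c + d)))     # SE of log RR

low, high = math.exp(m - 1.96 * s), math.exp(m + 1.96 * s)
print(round(low, 1), round(high, 1))   # interval is wide: only one unexposed event
```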

## Worked example

**Example 1: risk reduction**

|  | Experimental group (E) | Control group (C) | Total |
|---|---|---|---|
| Events (E) | EE = 15 | CE = 100 | 115 |
| Non-events (N) | EN = 135 | CN = 150 | 285 |
| Total subjects (S) | ES = EE + EN = 150 | CS = CE + CN = 250 | 400 |
| Event rate (ER) | EER = EE / ES = 0.1, or 10% | CER = CE / CS = 0.4, or 40% |  |

**Example 2: risk increase**

|  | Experimental group (E) | Control group (C) | Total |
|---|---|---|---|
| Events (E) | EE = 75 | CE = 100 | 175 |
| Non-events (N) | EN = 75 | CN = 150 | 225 |
| Total subjects (S) | ES = 150 | CS = 250 | 400 |
| Event rate (ER) | EER = 0.5, or 50% | CER = 0.4, or 40% |  |

| Equation | Variable | Abbr. | Example 1 | Example 2 |
|---|---|---|---|---|
| EER − CER | < 0: absolute risk reduction | ARR | (−)0.3, or (−)30% | N/A |
|  | > 0: absolute risk increase | ARI | N/A | 0.1, or 10% |
| (EER − CER) / CER | < 0: relative risk reduction | RRR | (−)0.75, or (−)75% | N/A |
|  | > 0: relative risk increase | RRI | N/A | 0.25, or 25% |
| 1 / (EER − CER) | < 0: number needed to treat | NNT | (−)3.33 | N/A |
|  | > 0: number needed to harm | NNH | N/A | 10 |
| EER / CER | relative risk | RR | 0.25 | 1.25 |
| (EE / EN) / (CE / CN) | odds ratio | OR | 0.167 | 1.5 |
| EER − CER | attributable risk | AR | (−)0.30, or (−)30% | 0.1, or 10% |
| (RR − 1) / RR | attributable risk percent | ARP | N/A | 20% |
| 1 − RR (or 1 − OR) | preventive fraction | PF | 0.75, or 75% | N/A |
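The Example 1 column of derived measures can be reproduced directly from the counts:

```python
# Sketch: recomputing the Example 1 measures from the table above
# (EE = 15, EN = 135, CE = 100, CN = 150).
EE, EN, CE, CN = 15, 135, 100, 150
ES, CS = EE + EN, CE + CN

EER, CER = EE / ES, CE / CS        # event rates: 0.10 and 0.40
ARR = EER - CER                    # absolute risk reduction: about -0.30
RRR = ARR / CER                    # relative risk reduction: about -0.75
NNT = 1 / ARR                      # number needed to treat: about -3.33
RR = EER / CER                     # relative risk: 0.25
OR = (EE / EN) / (CE / CN)         # odds ratio: about 0.167
print(ARR, RRR, RR, OR)
```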
• Example 3: Ratios are given within the diseased and healthy groups. In the disease-risk 2 × 2 table above, suppose a + c = 1 and b + d = 1, so that a and c are the exposed and unexposed fractions among the diseased, and b and d the corresponding fractions among the healthy, and let the total numbers of patients and healthy people be m and n, respectively. The prevalence is then p = m/(m + n), and we can put q = m/n = p/(1 − p). Thus
${\displaystyle RR={\frac {am/(am+bn)}{cm/(cm+dn)}}={\frac {a(cq+d)}{c(aq+b)}}={\frac {ad\left\lbrace 1+(c/d)q\right\rbrace }{bc\left\lbrace 1+(a/b)q\right\rbrace }}.}$
If p is small enough, then q is small enough that both (c/d)q and (a/b)q can be regarded as 0 compared with 1, and RR reduces to the odds ratio as above.
Among Japanese, a substantial fraction of patients with Behçet's disease carry a specific HLA type, namely the HLA-B51 gene.[11] In one survey, the proportion of carriers is 63% among patients with the disease, while among healthy people it is 21%.[11] If these figures are taken as representative for most Japanese, then using the 12,700 patients in Japan in 1984 and a Japanese population of about 120 million in 1982 gives RR = 6.40. Compare with the odds ratio of 6.41.
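Those two figures can be reproduced by plugging the quoted proportions and population sizes into the formula above:

```python
# Sketch: Example 3 with the Behçet's disease figures quoted above.
a, c = 0.63, 0.37              # HLA-B51 carriers / non-carriers among patients
b, d = 0.21, 0.79              # carriers / non-carriers among healthy people
m, n = 12_700, 120_000_000     # patients (1984) and population (~1982)

RR = (a * m / (a * m + b * n)) / (c * m / (c * m + d * n))
OR = (a * d) / (b * c)
print(round(RR, 2), round(OR, 2))   # nearly equal, since q = m/n is tiny
```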

## References

1. ^ Sistrom CL, Garvan CW (January 2004). "Proportions, odds, and risk". Radiology. 230 (1): 12–19. doi:10.1148/radiol.2301031028. PMID 14695382.
2. ^ Weddell, Angie. "Evidence from Safety Research to Update Cycling Training Materials in Canada" (PDF). University of British Columbia. Retrieved 30 September 2013.
3. ^ Burton, Michael; Rigby, Dan; Young, Trevor (2008). "Analysis of the Determinants of Adoption of Organic Horticultural Techniques in the UK". Journal of Agricultural Economics. 50 (1): 47–63. doi:10.1111/j.1477-9552.1999.tb00794.x. ISSN 0021-857X.
4. ^ Khatri, Pooja; Yeatts, Sharon D; Mazighi, Mikael; Broderick, Joseph P; Liebeskind, David S; Demchuk, Andrew M; Amarenco, Pierre; Carrozzella, Janice; Spilker, Judith; Foster, Lydia D; Goyal, Mayank; Hill, Michael D; Palesch, Yuko Y; Jauch, Edward C; Haley, E Clarke; Vagal, Achala; Tomsick, Thomas A (2014). "Time to angiographic reperfusion and clinical outcome after acute ischaemic stroke: an analysis of data from the Interventional Management of Stroke (IMS III) phase 3 trial". The Lancet Neurology. 13 (6): 567–574. doi:10.1016/S1474-4422(14)70066-3. ISSN 1474-4422.
5. ^ Armitage, P.; Berry, G.; Matthews, J. N. S. (2002). Statistical Methods in Medical Research, Fourth Edition. http://onlinelibrary.wiley.com/book/10.1002/9780470773666
6. ^ See e.g. Stata FAQ on CIs for odds ratios, hazard ratios, IRRs and RRRs at https://www.stata.com/support/faqs/stat/2deltameth.html
7. ^ Deeks J (1998). "When can odds ratios mislead? Odds ratios should be used only in case-control studies and logistic regression analyses". BMJ. 317 (7166): 1155–6. doi:10.1136/bmj.317.7166.1155a. PMID 9784470.
8. ^ "Odds ratio versus relative risk". Medical University of South Carolina. Archived from the original on 30 April 2009. Retrieved September 8, 2005.
9. ^
10. ^ Jewell, Nicholas P. (2004). Statistics for epidemiology. Boca Raton: Chapman & Hall/CRC. ISBN 978-1584884330.
11. ^ a b Ohno S, Ohguchi M, Hirose S, Matsuda H, Wakisaka A, Aizawa M (1982). "Close association of HLA-BW51, MT2 and Behçet's disease," In Inaba, G, ed. (1982). Behçet's Disease : Pathogenetic Mechanism and Clinical Future: Proceedings of the International Conference on Behçet's Disease, held October 23–24, 1981, pp. 73–79, Tokyo: University of Tokyo Press, ISBN 0-86008-322-5.