The randomized weighted majority algorithm is an algorithm in machine learning theory. It improves the mistake bound of the weighted majority algorithm.


Example

Imagine that every morning before the stock market opens, we get a prediction from each of our "experts" about whether the stock market will go up or down. Our goal is to somehow combine this set of predictions into a single prediction that we then use to make a buy or sell decision for the day. The RWMA gives us a way to do this combination such that our prediction record will be nearly as good as that of the single best expert in hindsight.


Motivation

In machine learning, the weighted majority algorithm (WMA) is a meta-learning algorithm which "predicts from expert advice". It is not a randomized algorithm:

    initialize all experts to weight 1.
    for each round:
        poll all the experts and predict based on a weighted majority vote of their predictions.
        cut in half the weights of all experts that make a mistake.

Suppose there are n experts and the best expert makes m mistakes. The weighted majority algorithm (WMA) makes at most 2.4(\log_2 n + m) mistakes, which is not a very good bound. We can do better by introducing randomization.
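
Before introducing randomization, here is a minimal Python sketch of the deterministic procedure above, for binary (0/1) predictions. The function name and the list-based interface are illustrative choices, not part of any standard specification:

    def weighted_majority(expert_preds, outcomes, penalty=0.5):
        # expert_preds[t][i] is expert i's prediction (0 or 1) in round t;
        # outcomes[t] is the correct answer for round t.
        n = len(expert_preds[0])
        weights = [1.0] * n              # initialize all experts to weight 1
        mistakes = 0
        for preds, outcome in zip(expert_preds, outcomes):
            # predict by weighted majority vote
            weight_for_one = sum(w for w, p in zip(weights, preds) if p == 1)
            guess = 1 if weight_for_one >= sum(weights) / 2 else 0
            if guess != outcome:
                mistakes += 1
            # cut in half the weights of all experts that made a mistake
            weights = [w * penalty if p != outcome else w
                       for w, p in zip(weights, preds)]
        return mistakes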


Randomized weighted majority algorithm (RWMA)

The nonrandomized weighted majority algorithm (WMA) only guarantees an upper bound of 2.4(\log_2 n + m), which is problematic for highly error-prone experts (e.g. when the best expert still makes a mistake 20% of the time). Suppose we do N = 100 rounds using n = 10 experts. If the best expert makes m = 20 mistakes, we can only guarantee an upper bound of 2.4(\log_2 10 + 20) \approx 56 on our number of mistakes. As this is a known limitation of WMA, attempts have been made to improve the dependence on m.

Instead of predicting based on a majority vote, the weights are used as probabilities: hence the name randomized weighted majority. If w_i is the weight of expert i, let W = \sum_i w_i. We will follow expert i with probability \frac{w_i}{W}. The goal is to bound the worst-case expected number of mistakes, assuming that the adversary (the world) has to select one of the answers as correct before we make our coin toss. Why is this better in the worst case? The worst case for the deterministic weighted majority algorithm occurred when the weights split 50/50; now that case is not so bad, since we also have a 50/50 chance of predicting correctly. Also, to trade off the dependence on m against the dependence on \log_2 n, we will generalize to multiply the weights of mistaken experts by some \beta < 1, instead of necessarily by \frac{1}{2}.
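
The same sketch, randomized: instead of a vote, an expert is sampled in proportion to its weight (here with Python's random.choices), and wrong experts are penalized by a factor \beta. As above, the interface is an illustrative assumption:

    import random

    def randomized_weighted_majority(expert_preds, outcomes, beta=0.5):
        n = len(expert_preds[0])
        weights = [1.0] * n
        mistakes = 0
        for preds, outcome in zip(expert_preds, outcomes):
            # follow expert i with probability w_i / W
            i = random.choices(range(n), weights=weights)[0]
            if preds[i] != outcome:
                mistakes += 1
            # multiply the weight of every expert that erred by beta < 1
            weights = [w * beta if p != outcome else w
                       for w, p in zip(weights, preds)]
        return mistakes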


Analysis

At the t-th round, define F_t to be the fraction of weight on the wrong answers, so F_t is the probability that we make a mistake on the t-th round. Let M denote the total number of mistakes made so far. Because expectation is additive, the expected number of mistakes satisfies E[M] = \sum_t F_t. On the t-th round, the total weight W becomes W(1-(1-\beta)F_t): the F_t fraction of the weight on wrong answers is multiplied by \beta. So,

W_\text{final} = n(1-(1-\beta)F_1)(1-(1-\beta)F_2)\cdots

Let m be the number of mistakes of the best expert so far. Its weight alone is \beta^m, so we can use the inequality W_\text{final} \geq \beta^m. Now we solve. First, take the natural log of both sides:

m \ln\beta \leq \ln(n) + \sum_t \ln(1-(1-\beta)F_t).

Since \ln(1-x) = -x - \frac{x^2}{2} - \frac{x^3}{3} - \cdots, we have \ln(1-(1-\beta)F_t) \leq -(1-\beta)F_t, so

m \ln\beta \leq \ln(n) - (1-\beta)\sum_t F_t.

Now use E[M] = \sum_t F_t and rearrange; the result is

E[M] \leq \frac{m\ln(1/\beta) + \ln(n)}{1-\beta}.

Let's see if we made any progress. If \beta = \frac{1}{2}, we get E[M] \leq 1.39m + 2\ln(n); if \beta = \frac{3}{4}, we get E[M] \leq 1.15m + 4\ln(n). So we have made progress: the bound is roughly of the form (1+\epsilon)m + \frac{1}{\epsilon}\ln(n).
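
As a quick sanity check, the bound can be evaluated numerically for the scenario from the Motivation section (n = 10 experts, best expert makes m = 20 mistakes); the helper below simply evaluates the inequality derived above:

    import math

    def rwma_bound(m, n, beta):
        # E[M] <= (m * ln(1/beta) + ln(n)) / (1 - beta)
        return (m * math.log(1 / beta) + math.log(n)) / (1 - beta)

    print(rwma_bound(20, 10, 0.5))   # about 32.3, versus about 56 for WMA
    print(rwma_bound(20, 10, 0.75))  # about 32.2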


Uses of Randomized Weighted Majority Algorithm (RWMA)

The randomized weighted majority algorithm can be used to combine multiple algorithms, in which case RWMA can be expected to perform nearly as well as the best of the original algorithms in hindsight. Furthermore, one can apply RWMA in situations where the experts are making choices that cannot be combined (or cannot be combined easily). For example, RWMA can be applied to repeated game playing or to the online shortest path problem. In the online shortest path problem, each expert is telling you a different way to drive to work. You pick one path using RWMA. Later you find out how well you would have done using all of the suggested paths, and penalize appropriately. To do this right, we want to generalize from "losses" of 0 or 1 to losses in [0,1]. The goal is to have an expected loss not much larger than the loss of the best expert. We can generalize RWMA by applying a penalty of \beta^\text{loss}, i.e. two losses of one half result in the same weight as one loss of 1 and one loss of 0. The analysis given in the previous section does not change significantly.
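
A minimal sketch of this generalized update, assuming per-round losses are given as numbers in [0,1] (the helper name is a hypothetical choice):

    def update_weights(weights, losses, beta=0.5):
        # Multiply each expert's weight by beta**loss: a loss of 1 applies the
        # full penalty beta, a loss of 0 leaves the weight unchanged, and two
        # losses of one half compound to the same penalty as one loss of 1.
        return [w * beta ** loss for w, loss in zip(weights, losses)]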


Extensions

* Multi-armed bandit problem.
* Efficient algorithm for some cases with many experts.
* Sleeping experts/"specialists" setting.


See also

* Machine learning
* Weighted majority algorithm
* Game theory
* Multi-armed bandit


Further reading


Weighted Majority & Randomized Weighted Majority.
Avrim Blum (2004). Machine Learning Theory.
Rob Schapire (2006). Foundations of Machine Learning.
Predicting from Expert Advice.
Uri Feige, Robi Krauthgamer, Moni Naor. Algorithmic Game Theory.