Co-training

Co-training is a machine learning algorithm used when there are only small amounts of labeled data and large amounts of unlabeled data. One of its uses is in text mining for search engines. It was introduced by Avrim Blum and Tom Mitchell in 1998.


Algorithm design

Co-training is a semi-supervised learning technique that requires two ''views'' of the data. It assumes that each example is described using two different sets of features that provide complementary information about the instance. Ideally, the two views are conditionally independent (i.e., the two feature sets of each instance are conditionally independent given the class) and each view is sufficient (i.e., the class of an instance can be accurately predicted from each view alone). Co-training first learns a separate classifier for each view using any labeled examples. The most confident predictions of each classifier on the unlabeled data are then used to iteratively construct additional labeled training data.

The original co-training paper described experiments using co-training to classify web pages into "academic course home page" or not; the classifier correctly categorized 95% of 788 web pages using only 12 labeled web pages as examples. The paper has been cited over 1000 times and received the 10-year Best Paper Award at the 25th International Conference on Machine Learning (ICML 2008), a renowned computer science conference.

Krogel and Scheffer showed in 2004 that co-training is only beneficial if the data sets used in classification are independent, that is, if one of the classifiers correctly labels a data point that the other classifier previously misclassified. If the classifiers agree on all of the unlabeled data, i.e. they are dependent, labeling the data does not create new information. In an experiment where the dependence of the classifiers was greater than 60%, results worsened.
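The iterative scheme described above can be sketched in a few lines of Python. The following is a minimal illustration, not the original authors' implementation: the synthetic two-view data, the nearest-centroid classifier, the pool sizes, and the distance-gap confidence measure are all illustrative assumptions. Each round, each view's classifier pseudo-labels its single most confident unlabeled example and adds it to the shared labeled pool.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: each instance has two "views" (feature sets) that are
# conditionally independent given the binary class label, and each view
# alone is informative enough to predict the class.
n = 200
y = np.tile([0, 1], n // 2)
view1 = y[:, None] * 2.0 + rng.normal(0.0, 1.0, (n, 2))
view2 = y[:, None] * 2.0 + rng.normal(0.0, 1.0, (n, 2))

def fit_centroids(X, labels):
    # A deliberately simple "classifier": the mean vector of each class.
    return np.stack([X[labels == c].mean(axis=0) for c in (0, 1)])

def predict_with_confidence(centroids, X):
    # Predict the nearest centroid; use the distance gap as confidence.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1), np.abs(d[:, 0] - d[:, 1])

# Start with only 10 labeled examples; the rest are treated as unlabeled.
X1_l, X2_l, y_l = view1[:10], view2[:10], y[:10]
unlabeled = list(range(10, n))

for _ in range(5):  # a few co-training rounds
    c1 = fit_centroids(X1_l, y_l)
    c2 = fit_centroids(X2_l, y_l)
    # Each view's classifier promotes its most confident unlabeled example
    # into the shared labeled pool under its predicted (pseudo) label.
    for centroids, view in ((c1, view1), (c2, view2)):
        pred, conf = predict_with_confidence(centroids, view[unlabeled])
        best = int(conf.argmax())
        idx = unlabeled.pop(best)
        X1_l = np.vstack([X1_l, view1[idx]])
        X2_l = np.vstack([X2_l, view2[idx]])
        y_l = np.append(y_l, pred[best])

print(len(y_l))  # 20: the 10 seed labels plus 2 pseudo-labels per round
```

In practice the classifiers would be retrained on the enlarged pool after the final round, and each round would typically promote a small batch of confident examples per class rather than a single point.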


Uses

Co-training has been used to classify web pages using the text on the page as one view and the anchor text of hyperlinks on other pages that point to the page as the other view. Simply put, the text in a hyperlink on one page can give information about the page it links to. Co-training can work on "unlabeled" text that has not already been classified or tagged, which is typical for the text appearing on web pages and in emails. According to Tom Mitchell, "The features that describe a page are the words on the page and the links that point to that page. The co-training models utilize both classifiers to determine the likelihood that a page will contain data relevant to the search criteria." The text on a page can thus be used to judge the relevance of the link-based classifier, and vice versa; hence the term "co-training". Mitchell claims that other search algorithms are 86% accurate, whereas co-training is 96% accurate. Co-training was used on FlipDog.com, a job search site, and by the U.S. Department of Labor for a directory of continuing and distance education. It has been used in many other applications, including statistical parsing and visual detection.


References

* Wang, William Yang; Kapil Thadani; Kathleen McKeown (2011). "Identifying Event Descriptions using Co-training with Online News Summaries". 5th International Joint Conference on Natural Language Processing (IJCNLP 2011). AFNLP & ACL. https://www.cs.cmu.edu/~yww/papers/ijcnlp2011.pdf


External links


* Lecture by Tom Mitchell introducing co-training and other semi-supervised machine learning for use on unlabeled data
* Lecture by Avrim Blum on semi-supervised learning, including co-training
* Co-Training group at Pittsburgh Science of Learning Center