
In computer vision, the problem of object categorization from image search is the problem of training a classifier to recognize categories of objects, using only the images retrieved automatically with an Internet search engine. Ideally, automatic image collection would allow classifiers to be trained with nothing but the category names as input. This problem is closely related to that of content-based image retrieval (CBIR), where the goal is to return better image search results rather than training a classifier for image recognition. Traditionally, classifiers are trained using sets of images that are labeled by hand. Collecting such a set of images is often a very time-consuming and laborious process. The use of Internet search engines to automate the process of acquiring large sets of labeled images has been described as a potential way of greatly facilitating computer vision research.


Challenges


Unrelated images

One problem with using Internet image search results as a training set for a classifier is the high percentage of unrelated images within the results. It has been estimated that, when a search engine such as Google Images is queried with the name of an object category (such as "airplane"), up to 85% of the returned images are unrelated to the category.


Intra-class variability

Another challenge posed by using Internet image search results as training sets for classifiers is that there is a high amount of variability within object categories, when compared with categories found in hand-labeled datasets such as Caltech 101 and PASCAL. Images of objects can vary widely in a number of important factors, such as scale, pose, lighting, number of objects, and amount of occlusion.


pLSA approach

In a 2005 paper by Fergus et al., pLSA (probabilistic latent semantic analysis) and extensions of this model were applied to the problem of object categorization from image search. pLSA was originally developed for document classification, but has since been applied to computer vision. It makes the assumption that images are documents that fit the bag-of-words model.


Model

Just as text documents are made up of words, each of which may be repeated within the document and across documents, images can be modeled as combinations of ''visual words''. Just as the entire set of text words is defined by a dictionary, the entire set of visual words is defined in a ''codeword dictionary''. pLSA divides documents into ''topics'' as well. Just as knowing the topic(s) of an article allows you to make good guesses about the kinds of words that will appear in it, the distribution of words in an image depends on the underlying topics. The pLSA model gives the probability of seeing each word \displaystyle w given the document (image) \displaystyle d in terms of topics \displaystyle z:

\displaystyle P(w \mid d) = \sum_{z=1}^{Z} P(w \mid z) P(z \mid d)

An important assumption made in this model is that \displaystyle w and \displaystyle d are conditionally independent given \displaystyle z: given a topic, the probability of a certain word appearing as part of that topic is independent of the rest of the image. Training this model involves finding the \displaystyle P(w \mid z) and \displaystyle P(z \mid d) that maximize the likelihood of the observed words in each document. To do this, the expectation–maximization (EM) algorithm is used, with the following objective function:

\displaystyle L = \prod_{d=1}^{D} \prod_{w=1}^{W} P(w \mid d)^{n(w,d)}

where \displaystyle n(w,d) is the number of times word \displaystyle w occurs in document \displaystyle d.
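For concreteness, these EM updates can be written out in a few lines. The following is a minimal sketch, assuming NumPy, of standard pLSA fitting on a document–word count matrix; it illustrates the alternation between computing topic responsibilities (E-step) and re-estimating P(w|z) and P(z|d) (M-step), and is not the exact implementation used by Fergus et al.

    import numpy as np

    def plsa(counts, n_topics, n_iter=100, seed=0):
        """Fit pLSA by EM on a document-word count matrix of shape (D, W).

        Returns P(w|z) as a (Z, W) array and P(z|d) as a (D, Z) array.
        Minimal illustrative sketch of the standard pLSA updates.
        """
        rng = np.random.default_rng(seed)
        D, W = counts.shape
        p_w_given_z = rng.random((n_topics, W))
        p_w_given_z /= p_w_given_z.sum(axis=1, keepdims=True)
        p_z_given_d = rng.random((D, n_topics))
        p_z_given_d /= p_z_given_d.sum(axis=1, keepdims=True)

        for _ in range(n_iter):
            # E-step: responsibilities P(z|d,w), normalized over topics z.
            joint = p_z_given_d[:, :, None] * p_w_given_z[None, :, :]   # (D, Z, W)
            joint /= joint.sum(axis=1, keepdims=True) + 1e-12

            # M-step: re-estimate the two conditional distributions.
            weighted = counts[:, None, :] * joint                       # n(d,w) * P(z|d,w)
            p_w_given_z = weighted.sum(axis=0)
            p_w_given_z /= p_w_given_z.sum(axis=1, keepdims=True)
            p_z_given_d = weighted.sum(axis=2)
            p_z_given_d /= p_z_given_d.sum(axis=1, keepdims=True)

        return p_w_given_z, p_z_given_d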


Application


ABS-pLSA

Absolute position pLSA (ABS-pLSA) attaches location information to each visual word by localizing it to one of X "bins" in the image. Here, \displaystyle x represents which of the bins the visual word falls into. The new equation is:

\displaystyle P(w, x \mid d) = \sum_{z=1}^{Z} P(w, x \mid z) P(z \mid d)

\displaystyle P(w, x \mid z) and \displaystyle P(z \mid d) can be solved for in a manner similar to the original pLSA problem, using the EM algorithm. A problem with this model is that it is not translation or scale invariant. Since the positions of the visual words are absolute, changing the size of the object in the image or moving it would have a significant impact on the spatial distribution of the visual words into different bins.
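As an illustration of what "localizing a word to one of X bins" means, the sketch below quantizes a visual word's absolute image position on a fixed grid. The grid layout and bin count are assumptions made for the example; the model only requires some fixed partition of the image.

    def absolute_bin(xy, image_size, grid=(3, 3)):
        """Map a visual word's absolute (x, y) position to one of
        grid[0] * grid[1] bins spanning the whole image (illustrative only)."""
        w, h = image_size
        col = min(int(xy[0] / w * grid[0]), grid[0] - 1)
        row = min(int(xy[1] / h * grid[1]), grid[1] - 1)
        return row * grid[0] + col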


TSI-pLSA

Translation and scale invariant pLSA (TSI-pLSA) extends pLSA by adding another latent variable, \displaystyle c, which describes the spatial location of the target object in an image. Now, the position \displaystyle x of a visual word is given relative to this object location, rather than as an absolute position in the image. The new equation is:

\displaystyle P(w, x \mid d) = \sum_{z=1}^{Z} \sum_{c=1}^{C} P(w, x \mid c, z) P(c) P(z \mid d)

Again, the parameters \displaystyle P(w, x \mid c, z) and \displaystyle P(z \mid d) can be solved using the EM algorithm. \displaystyle P(c) can be assumed to be a uniform distribution.
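By contrast with the absolute binning above, under TSI-pLSA the word position is expressed relative to a candidate object location. The hypothetical helper below (same assumed grid as above, plus an extra background bin) shows how the same word position maps to different bins depending on the candidate bounding box, which is what makes the representation translation and scale invariant.

    def relative_bin(xy, box, grid=(3, 3)):
        """Bin a word position relative to a candidate object location.

        box = (cx, cy, sx, sy): centroid and x/y scales of the candidate
        bounding box (the latent variable c in TSI-pLSA). Words falling
        outside the box go to an extra background bin. Illustrative sketch;
        the binning details are assumptions, not taken from the paper.
        """
        cx, cy, sx, sy = box
        u = (xy[0] - (cx - sx / 2)) / sx      # position inside the box, 0..1
        v = (xy[1] - (cy - sy / 2)) / sy
        if not (0 <= u < 1 and 0 <= v < 1):
            return grid[0] * grid[1]          # background bin
        return int(v * grid[1]) * grid[0] + int(u * grid[0])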


Implementation


Selecting words

Words in an image were selected using four different feature detectors:
* Kadir–Brady saliency detector
* Multi-scale Harris detector
* Difference of Gaussians
* Edge-based operator, described in the study
Using these four detectors, approximately 700 features were detected per image. These features were then encoded as scale-invariant feature transform (SIFT) descriptors, and vector quantized to match one of 350 words contained in a codebook. The codebook was precomputed from features extracted from a large number of images spanning numerous object categories.
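A rough sketch of this feature-to-visual-word pipeline is given below. It uses OpenCV's SIFT implementation for both detection and description and scikit-learn's KMeans for the vector quantization; these are simplifications made for illustration (the paper combines four different detectors and a 350-word codebook computed offline).

    import cv2
    from sklearn.cluster import KMeans

    def build_codebook(descriptor_stack, n_words=350, seed=0):
        """Cluster a large stack of SIFT descriptors into a visual-word codebook."""
        return KMeans(n_clusters=n_words, random_state=seed).fit(descriptor_stack)

    def image_to_words(gray_image, codebook):
        """Return the visual-word index and (x, y) position of each detected feature."""
        sift = cv2.SIFT_create()
        keypoints, descriptors = sift.detectAndCompute(gray_image, None)
        if descriptors is None:
            return [], []
        words = codebook.predict(descriptors)     # nearest codebook word per feature
        positions = [kp.pt for kp in keypoints]
        return words, positions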


Possible object locations

One important question in the TSI-pLSA model is how to determine the values that the random variable \displaystyle c can take on. It is a 4-vector whose components describe the object's centroid as well as the x and y scales that define a bounding box around the object, so the space of possible values it can take on is enormous. To limit the number of possible object locations to a reasonable number, normal pLSA is first carried out on the set of images, and for each topic a Gaussian mixture model is fit over the visual words, weighted by \displaystyle P(w \mid z). Up to \displaystyle K Gaussians are tried (allowing for multiple instances of an object in a single image), where \displaystyle K is a constant.
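The sketch below shows one way to turn a topic's word distribution into candidate object locations by fitting mixtures of Gaussians over word positions. Since scikit-learn's GaussianMixture does not accept per-sample weights, the sketch resamples points in proportion to P(w|z); that resampling is an implementation convenience assumed for the example, not part of the published method.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def candidate_locations(positions, word_ids, p_w_given_z, k_max=3, seed=0):
        """Propose candidate (centroid, x-scale, y-scale) boxes for one topic.

        positions   : (N, 2) array of visual-word (x, y) positions in the image
        word_ids    : (N,) array of codebook indices for those words
        p_w_given_z : (W,) word distribution of the topic being considered
        """
        rng = np.random.default_rng(seed)
        weights = p_w_given_z[word_ids]
        weights = weights / weights.sum()
        idx = rng.choice(len(positions), size=5 * len(positions), p=weights)
        sampled = np.asarray(positions)[idx]       # points resampled by topic weight

        boxes = []
        for k in range(1, k_max + 1):              # try up to K mixture components
            gmm = GaussianMixture(n_components=k, covariance_type="diag",
                                  random_state=seed).fit(sampled)
            for mean, var in zip(gmm.means_, gmm.covariances_):
                sx, sy = 2 * np.sqrt(var)          # scales from the std deviations
                boxes.append((mean[0], mean[1], sx, sy))
        return boxes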


Performance

The authors of the Fergus et al. paper compared the performance of the three pLSA algorithms (pLSA, ABS-pLSA, and TSI-pLSA) on handpicked datasets and on images returned from Google searches. Performance was measured as the error rate when classifying images in a test set as either containing the object or containing only background. As expected, training directly on Google data gives higher error rates than training on prepared data. In only about half of the object categories tested do ABS-pLSA and TSI-pLSA perform significantly better than regular pLSA, and in only 2 categories out of 7 does TSI-pLSA perform better than the other two models.


OPTIMOL

OPTIMOL (automatic Online Picture collection via Incremental MOdel Learning) approaches the problem of learning object categories from online image searches by addressing model learning and searching simultaneously. OPTIMOL is an iterative model that updates its model of the target object category while concurrently retrieving more relevant images.


General framework

OPTIMOL was presented as a general iterative framework that is independent of the specific model used for category learning. The algorithm is as follows:
* Download a large set of images from the Internet by searching for a keyword
* Initialize the dataset with seed images
* While more images are needed in the dataset:
** Learn the model with the most recently added dataset images
** Classify downloaded images using the updated model
** Add accepted images to the dataset
Note that only the most recently added images are used in each round of learning. This allows the algorithm to run on an arbitrarily large number of input images.
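The loop can be written as a short skeleton, shown below. The function and method names (download, update, accepts) and the seed and batch sizes are placeholders invented for illustration; they are not interfaces defined by the OPTIMOL paper.

    from itertools import islice

    def optimol(keyword, download, model, seed_size=15, batch_size=50):
        """Skeleton of the iterative OPTIMOL framework described above.

        download(keyword)    -> iterator over candidate images from a web search
        model.update(images) -> incrementally learn from the newest images only
        model.accepts(image) -> classify a candidate as object vs. background
        """
        candidates = download(keyword)
        dataset = list(islice(candidates, seed_size))     # seed images
        new_images = list(dataset)

        while new_images:                                 # stop when a round accepts nothing
            model.update(new_images)                      # learn from the newest batch only
            new_images = []
            for img in islice(candidates, batch_size):
                if model.accepts(img):                    # classify with the updated model
                    new_images.append(img)
                    dataset.append(img)
        return dataset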


Model

The two categories (target object and background) are modeled as hierarchical Dirichlet processes (HDPs). As in the pLSA approach, it is assumed that the images can be described with the bag-of-words model. HDP models the distributions of an unspecified number of topics across images in a category, and across categories. The distribution of topics among images in a single category is modeled as a Dirichlet process (a type of non-parametric probability distribution). To allow the sharing of topics across classes, each of these Dirichlet processes is modeled as a sample from another "parent" Dirichlet process. HDP was first described by Teh et al. in 2005.
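To give a flavour of the non-parametric ingredient, the sketch below draws mixture weights from a truncated Dirichlet process via stick breaking. It is a generic illustration of a Dirichlet process, not the hierarchical inference actually used in OPTIMOL.

    import numpy as np

    def stick_breaking_weights(alpha, n_atoms=50, seed=0):
        """Truncated stick-breaking construction of Dirichlet-process weights.

        Each weight is a Beta(1, alpha) fraction of the stick left over after
        all previous breaks; small alpha concentrates mass on few topics,
        large alpha spreads it over many. Purely illustrative.
        """
        rng = np.random.default_rng(seed)
        betas = rng.beta(1.0, alpha, size=n_atoms)
        leftover = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
        return betas * leftover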


Implementation


Initialization

The dataset must be initialized, or seeded with an original batch of images which serve as good exemplars of the object category to be learned. These can be gathered automatically, using the first page or so of images returned by the search engine (which tend to be better than the subsequent images). Alternatively, the initial images can be gathered by hand.


Model learning

To learn the various parameters of the HDP in an incremental manner, Gibbs sampling is used over the latent variables. It is carried out after each new set of images is incorporated into the dataset. Gibbs sampling involves repeatedly sampling from a set of random variables in order to approximate their distributions. Sampling involves generating a value for the random variable in question, based on the state of the other random variables on which it is dependent. Given sufficient samples, a reasonable approximation of the value can be achieved.
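As a generic illustration of the scheme (not the HDP sampler itself), the sketch below draws from a correlated bivariate normal by alternately sampling each coordinate conditioned on the other, which is exactly the "sample each variable given the rest" pattern described above.

    import numpy as np

    def gibbs_bivariate_normal(rho, n_samples=5000, seed=0):
        """Gibbs sampling from a bivariate standard normal with correlation rho
        (|rho| < 1), used only to illustrate the general sampling pattern."""
        rng = np.random.default_rng(seed)
        x, y = 0.0, 0.0
        samples = np.empty((n_samples, 2))
        sd = np.sqrt(1 - rho ** 2)
        for i in range(n_samples):
            x = rng.normal(rho * y, sd)    # x | y ~ N(rho*y, 1 - rho^2)
            y = rng.normal(rho * x, sd)    # y | x ~ N(rho*x, 1 - rho^2)
            samples[i] = (x, y)
        return samples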


Classification

At each iteration, \displaystyle P(z \mid c) and \displaystyle P(x \mid z, c) can be obtained from the model learned after the previous round of Gibbs sampling, where \displaystyle z is a topic, \displaystyle c is a category, and \displaystyle x is a single visual word. The likelihood of an image \displaystyle I being in a certain class is then:

\displaystyle P(I \mid c) = \prod_i \sum_j P(x_i \mid z_j, c) P(z_j \mid c)

This is computed for each new candidate image at each iteration. The image is classified as belonging to the category with the highest likelihood.
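A direct transcription of this decision rule, assuming the conditional probability tables are available as arrays (the array shapes are assumptions made for the sketch), might look like the following; working in log space avoids underflow when an image contains many words.

    import numpy as np

    def classify(word_ids, p_x_given_zc, p_z_given_c):
        """Pick the category with the highest likelihood P(I|c).

        word_ids     : (N,) codebook indices x_i observed in image I
        p_x_given_zc : (C, Z, W) array of P(x | z, c)
        p_z_given_c  : (C, Z) array of P(z | c)
        """
        # P(x | c) = sum_j P(x | z_j, c) P(z_j | c), for every word and category
        per_word = np.einsum('czw,cz->cw', p_x_given_zc, p_z_given_c)   # (C, W)
        log_lik = np.log(per_word[:, word_ids] + 1e-12).sum(axis=1)     # (C,)
        return int(np.argmax(log_lik)), log_lik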


Addition to the dataset and "cache set"

In order to qualify for incorporation into the dataset, however, an image must satisfy a stronger condition:

\displaystyle \frac{P(I \mid c_f)}{P(I \mid c_b)} > \frac{R_{fp}}{R_{fn}} \, \frac{P(c_b)}{P(c_f)}

where \displaystyle c_f and \displaystyle c_b are the foreground (object) and background categories, respectively, and the ratio of the constants \displaystyle R_{fp} and \displaystyle R_{fn} describes the risk of accepting false positives and false negatives. These constants are adjusted automatically at every iteration, with the cost of a false positive set higher than that of a false negative. This ensures that a better dataset is collected. Once an image is accepted by meeting the above criterion and incorporated into the dataset, however, it needs to meet another criterion before it is incorporated into the "cache set", the set of images to be used for training. This set is intended to be a diverse subset of the set of accepted images. If the model were trained on all accepted images, it might become more and more highly specialized, only accepting images very similar to previous ones.
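In code, the acceptance criterion above amounts to a likelihood-ratio threshold, as in the sketch below; the risk constants and prior term are placeholders standing in for the automatically adjusted values mentioned above.

    import numpy as np

    def accept_image(loglik_fg, loglik_bg, risk_fp=2.0, risk_fn=1.0,
                     log_prior_bg_over_fg=0.0):
        """Likelihood-ratio acceptance test sketched from the criterion above.

        loglik_fg / loglik_bg : log P(I|c_f) and log P(I|c_b)
        risk_fp / risk_fn     : costs of a false positive vs. a false negative
                                (placeholder values; adapted per iteration in OPTIMOL)
        """
        threshold = np.log(risk_fp / risk_fn) + log_prior_bg_over_fg
        return (loglik_fg - loglik_bg) > threshold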


Performance

Performance of the OPTIMOL method is defined by three factors:
* ''Ability to collect images'': OPTIMOL, it is found, can automatically collect large numbers of good images from the web. The size of the OPTIMOL-retrieved image sets surpasses that of large human-labeled image sets for the same categories, such as those found in Caltech 101.
* ''Classification accuracy'': Classification accuracy was compared to the accuracy of the classifiers yielded by the pLSA methods discussed earlier. It was discovered that OPTIMOL achieved slightly higher accuracy, obtaining 74.8% accuracy on 7 object categories, as compared to 72.0%.
* ''Comparison with batch learning'': An important question to address is whether OPTIMOL's incremental learning gives it an advantage over traditional batch learning methods, when everything else about the model is held constant. When the classifier learns incrementally, by selecting the next images based on what it learned from the previous ones, three important results are observed:
** Incremental learning allows OPTIMOL to collect a better dataset
** Incremental learning allows OPTIMOL to learn faster (by discarding irrelevant images)
** Incremental learning does not negatively affect the ROC curve of the classifier; in fact, incremental learning yielded an improvement


Object categorization in content-based image retrieval

Typically, image searches only make use of text associated with images. The problem of content-based image retrieval is that of improving search results by taking into account the visual information contained in the images themselves. Several CBIR methods make use of classifiers trained on image search results to refine the search. In other words, object categorization from image search is one component of the system. OPTIMOL, for example, uses a classifier trained on images collected during previous iterations to select additional images for the returned dataset. Examples of CBIR methods that model object categories from image search are:
* Fergus et al., 2004
* Berg and Forsyth, 2006
* Yanai and Barnard, 2006 (K. Yanai and K. Barnard, "Probabilistic web image gathering", ACM SIGMM Workshop on Multimedia Information Retrieval, 2005, http://portal.acm.org/citation.cfm?id=1101838)




See also

* Probabilistic latent semantic analysis
* Latent Dirichlet allocation
* Machine learning
* Bag-of-words model
* Content-based image retrieval