Online Content Analysis

Online content analysis or online textual analysis refers to a collection of research techniques used to describe and make inferences about online material through systematic coding and interpretation. Online content analysis is a form of content analysis for the analysis of Internet-based communication.


History and definition

Content analysis as a systematic examination and interpretation of communication dates back to at least the 17th century. However, it was not until the rise of the newspaper in the early 20th century that the mass production of printed material created a demand for quantitative analysis of printed words.

Berelson’s (1952) definition provides an underlying basis for textual analysis as a "research technique for the objective, systematic and quantitative description of the manifest content of communication." Content analysis consists of categorizing units of texts (i.e. sentences, quasi-sentences, paragraphs, documents, web pages, etc.) according to their substantive characteristics in order to construct a dataset that allows the analyst to interpret texts and draw inferences. While content analysis is often quantitative, researchers conceptualize the technique as inherently mixed methods because textual coding requires a high degree of qualitative interpretation. Social scientists have used this technique to investigate research questions concerning mass media, media effects and agenda setting.

With the rise of online communication, content analysis techniques have been adapted and applied to internet research. As with the rise of newspapers, the proliferation of online content provides an expanded opportunity for researchers interested in content analysis. While the use of online sources presents new research problems and opportunities, the basic research procedure of online content analysis outlined by McMillan (2000) is virtually indistinguishable from content analysis using offline sources:
# Formulate a research question with a focus on identifying testable hypotheses that may lead to theoretical advancements.
# Define a sampling frame from which a sample will be drawn, and construct a sample (often called a ‘corpus’) of content to be analyzed.
# Develop and implement a coding scheme that can be used to categorize content in order to answer the question identified in step 1. This necessitates specifying a time period, a context unit in which content is embedded, and a coding unit which categorizes the content.
# Train coders to consistently implement the coding scheme and verify reliability among coders. This is a key step in ensuring replicability of the analysis (a minimal reliability check is sketched below).
# Analyze and interpret the data. Test hypotheses advanced in step 1 and draw conclusions about the content represented in the dataset.
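As a concrete illustration of the reliability check in step 4, the sketch below computes Cohen's kappa for two hypothetical coders who categorized the same ten documents. The labels are invented for illustration, and the example assumes the Python scikit-learn library; it is a minimal sketch rather than a prescribed procedure.

```python
# A minimal sketch of an inter-coder reliability check (step 4), assuming
# two human coders independently labelled the same ten documents.
# The category labels below are invented for illustration.
from sklearn.metrics import cohen_kappa_score

coder_a = ["politics", "sports", "politics", "economy", "sports",
           "politics", "economy", "economy", "sports", "politics"]
coder_b = ["politics", "sports", "economy", "economy", "sports",
           "politics", "economy", "politics", "sports", "politics"]

# Cohen's kappa corrects raw percentage agreement for the agreement
# expected by chance; values near 1 indicate reliable coding.
kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")
```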


Content analysis in internet research

Since the rise of online communication, scholars have discussed how to adapt textual analysis techniques to study web-based content. The nature of online sources necessitates particular care in many of the steps of a content analysis compared to offline sources. While offline content such as printed text remains static once produced, online content can change frequently. The dynamic nature of online material, combined with the large and increasing volume of online content, can make it challenging to construct a sampling frame from which to draw a random sample. The content of a site may also differ across users, requiring careful specification of the sampling frame. Some researchers have used search engines to construct sampling frames. This technique has disadvantages because search engine results are unsystematic and non-random, making them unreliable for obtaining an unbiased sample. The sampling frame issue can be circumvented by using an entire population of interest, such as tweets by particular Twitter users or online archived content of certain newspapers, as the sampling frame.

Changes to online material can make categorizing content (step 3) more challenging. Because online content can change frequently, it is particularly important to note the time period over which the sample is collected. A useful step is to archive the sample content in order to prevent changes from being made (a minimal archiving sketch follows below).

Online content is also non-linear. Printed text has clearly delineated boundaries that can be used to identify context units (e.g., a newspaper article). The bounds of online content to be used in a sample are less easily defined. Early online content analysts often specified a ‘Web site’ as a context unit, without a clear definition of what they meant. Researchers recommend clearly and consistently defining what a ‘web page’ consists of, or reducing the size of the context unit to a feature on a website. Researchers have also made use of more discrete units of online communication such as web comments or tweets. King (2008) used an ontology of terms trained from many thousands of pre-classified documents to analyse the subject matter of a number of search engines.
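As a concrete illustration of archiving sampled content, the sketch below downloads and timestamps each sampled page so that later coding is not affected by changes to the live site. The URLs are placeholders, and the approach (Python with the requests library) is one possible implementation rather than part of the original procedure.

```python
# A minimal sketch of archiving sampled web content so that later coding
# is not affected by changes to the live pages. URLs are placeholders.
import datetime
import pathlib

import requests

sample_urls = [
    "https://example.org/article-1",
    "https://example.org/article-2",
]

archive_dir = pathlib.Path("archive")
archive_dir.mkdir(exist_ok=True)

for i, url in enumerate(sample_urls):
    response = requests.get(url, timeout=30)
    # Record the retrieval time so the analysis can state the exact
    # collection window for the sample.
    stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
    out_file = archive_dir / f"doc_{i:04d}_{stamp}.html"
    out_file.write_text(response.text, encoding="utf-8")
```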


Automatic content analysis

The rise of online content has dramatically increased the amount of digital text that can be used in research. The quantity of text available has motivated methodological innovations in order to make sense of textual datasets that are too large to be practically hand-coded, as had been the conventional methodological practice. Advances in methodology, together with the increasing capacity and decreasing expense of computation, have allowed researchers to use techniques that were previously unavailable to analyze large sets of textual content. Automatic content analysis represents a slight departure from McMillan's online content analysis procedure in that human coders are supplemented by a computational method, and some of these methods do not require categories to be defined in advance.

Quantitative textual analysis models often employ 'bag of words' methods that remove word ordering, delete words that are very common and very uncommon, and simplify words through lemmatisation or stemming, which reduces the dimensionality of the text by reducing complex words to their root form. While these methods are fundamentally reductionist in the way they interpret text, they can be very useful if they are correctly applied and validated.

Grimmer and Stewart (2013) identify two main categories of automatic textual analysis: ''supervised'' and ''unsupervised'' methods.

Supervised methods involve creating a coding scheme and manually coding a sub-sample of the documents that the researcher wants to analyze. Ideally, the sub-sample, called a 'training set', is representative of the sample as a whole. The coded training set is then used to 'teach' an algorithm how the words in the documents correspond to each coding category. The algorithm can then be applied to automatically analyze the remainder of the documents in the corpus (a minimal classification sketch follows the lists below).
* Dictionary methods: the researcher pre-selects a set of keywords (n-grams) for each category. The machine then uses these keywords to classify each text unit into a category.
* Individual methods: the researcher pre-labels a sample of texts and trains a machine learning algorithm (e.g. a support vector machine) using those labels. The machine labels the remainder of the observations by extrapolating information from the training set.
* Ensemble methods: instead of using only one machine-learning algorithm, the researcher trains a set of them and uses the resulting multiple labels to label the rest of the observations (see Collingwood and Wilkerson 2011 for more details).
* Supervised ideological scaling (e.g. wordscores): used to place different text units along an ideological continuum. The researcher selects two sets of texts that represent each ideological extreme, which the algorithm can use to identify words that belong to each extreme point. The remainder of the texts in the corpus are scaled depending on how many words of each extreme reference they contain.

Unsupervised methods can be used when a set of categories for coding cannot be well-defined prior to analysis. Unlike supervised methods, human coders are not required to train the algorithm. One key choice for researchers when applying unsupervised methods is selecting the number of categories to sort documents into, rather than defining what the categories are in advance.
* Single membership models: these models automatically cluster texts into mutually exclusive categories, and documents are coded into one and only one category. As pointed out by Grimmer and Stewart (2013), "each algorithm has three components: (1) a definition of document similarity or distance; (2) an objective function that operationalizes an ideal clustering; and (3) an optimization algorithm."
* Mixed membership models: according to Grimmer and Stewart (2013), mixed membership models "improve the output of single-membership models by including additional and problem-specific structure." Mixed membership FAC models classify individual words within each document into categories, allowing the document as a whole to be a part of multiple categories simultaneously. Topic models represent one example of mixed membership FAC that can be used to analyze changes in the focus of political actors or newspaper articles. One of the most widely used topic modeling techniques is latent Dirichlet allocation (LDA); a minimal sketch appears below.
* Unsupervised ideological scaling (e.g. wordfish): algorithms that allocate text units along an ideological continuum depending on shared grammatical content. Contrary to supervised scaling methods such as wordscores, methods such as wordfish do not require the researcher to provide samples of extreme ideological texts.
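To make the supervised workflow concrete, the sketch below builds a bag-of-words representation from a tiny hand-coded training set, trains a support vector machine, and applies it to an unlabelled document. The texts, labels and parameters are invented for illustration and assume the Python scikit-learn library; this is a generic sketch, not a specific published implementation.

```python
# A minimal sketch of supervised classification with a bag-of-words
# representation. Documents and labels are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

# Hand-coded 'training set' (the manually coded sub-sample).
train_texts = [
    "the government passed a new budget bill",
    "the team won the championship final last night",
    "parliament debated the proposed tax reform",
    "the striker scored twice in the second half",
]
train_labels = ["politics", "sports", "politics", "sports"]

# Bag-of-words representation: word order is discarded, only counts remain.
vectorizer = CountVectorizer(lowercase=True, stop_words="english")
X_train = vectorizer.fit_transform(train_texts)

# Train a support vector machine on the hand-coded examples.
classifier = LinearSVC()
classifier.fit(X_train, train_labels)

# Apply the trained model to the remainder of the corpus.
new_texts = ["the senate will vote on the bill tomorrow"]
X_new = vectorizer.transform(new_texts)
print(classifier.predict(X_new))  # e.g. ['politics']
```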
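Similarly, for the unsupervised case, the sketch below fits a small latent Dirichlet allocation model and prints the most heavily weighted words in each discovered topic. The documents and the choice of two topics are illustrative assumptions, again using scikit-learn; real applications require much larger corpora and careful validation.

```python
# A minimal sketch of unsupervised topic modeling with LDA.
# Documents and the number of topics are invented for illustration.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

documents = [
    "budget deficit tax reform parliament vote",
    "match goal striker league championship win",
    "election campaign candidate poll vote",
    "coach transfer player injury season",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term_matrix = vectorizer.fit_transform(documents)

# The researcher chooses the number of topics; the model discovers them.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term_matrix)

# Print the highest-weighted words in each discovered topic.
terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"Topic {topic_idx}: {', '.join(top_terms)}")
```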


Validation

Results of supervised methods can be validated by drawing a distinct sub-sample of the corpus, called a 'validation set'. Documents in the validation set can be hand-coded and compared to the automatic coding output to evaluate how well the algorithm replicated human coding. This comparison can take the form of inter-coder reliability scores like those used to validate the consistency of human coders in traditional textual analysis (a minimal sketch follows the list below).

Validation of unsupervised methods can be carried out in several ways.
* Semantic (or internal) validity represents how well documents in each identified cluster represent a distinct, categorical unit. In a topic model, this would be the extent to which the documents in each cluster represent the same topic. This can be tested by creating a validation set that human coders use to manually validate topic choice or the relatedness of within-cluster documents compared to documents from different clusters.
* Predictive (or external) validity is the extent to which shifts in the frequency of each cluster can be explained by external events. If clusters of topics are valid, the most prominent topics should respond over time in a predictable way to outside events.
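As an illustration of validating supervised output, the sketch below compares hypothetical machine-assigned labels against hand-coded labels from a validation set, reporting per-category precision and recall alongside the same agreement statistic used for human inter-coder reliability. All labels are invented, and the metrics assume scikit-learn.

```python
# A minimal sketch of validating automatic coding against a hand-coded
# validation set. All labels below are invented for illustration.
from sklearn.metrics import classification_report, cohen_kappa_score

human_labels   = ["politics", "sports", "politics", "economy", "sports", "economy"]
machine_labels = ["politics", "sports", "economy", "economy", "sports", "economy"]

# Precision, recall and F1 per category show where the algorithm
# diverges from the human coders.
print(classification_report(human_labels, machine_labels))

# The same agreement statistic used for human inter-coder reliability
# can be applied to the human-machine comparison.
print("kappa:", round(cohen_kappa_score(human_labels, machine_labels), 2))
```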


Challenges in online textual analysis

Despite the continuous evolution of text analysis in the social sciences, there are still some unsolved methodological concerns. The following is a non-exhaustive list of these concerns:
* When should researchers define their categories? Ex-ante, back-and-forth, or ad-hoc? Some social scientists argue that researchers should build their theory, expectations and methods (in this case the specific categories they will use to classify different text units) before they start collecting and studying the data, whereas others argue that defining a set of categories is a back-and-forth process.
* Validation. Although most researchers report validation measurements for their methods (e.g. inter-coder reliability, precision and recall estimates, confusion matrices), some do not. In particular, a growing number of academics are concerned that some topic modeling techniques can hardly be validated (Chuang, Jason, John D. Wilkerson, Rebecca Weiss, Dustin Tingley, Brandon M. Stewart, Margaret E. Roberts, Forough Poursabzi-Sangdeh, Justin Grimmer, Leah Findlater, Jordan Boyd-Graber, and Jeffrey Heer. 2014. "Computer-Assisted Content Analysis: Topic Models for Exploring Multiple Subjective Interpretations." Paper presented at the Conference on Neural Information Processing Systems (NIPS), Workshop on Human-Propelled Machine Learning, Montreal, Canada).
* Random samples. On the one hand, it is extremely hard to know how many units of a given type of text (for example, blog posts) exist on the Internet at a given time. Since the universe is usually unknown, how can researchers select a random sample? If in some cases it is almost impossible to obtain a random sample, should researchers work with samples at all, or should they try to collect every text unit they observe? On the other hand, researchers sometimes have to work with samples provided by search engines (e.g. Google) or online companies (e.g. Twitter) without access to how those samples were generated or whether they are random. Should researchers use such samples?


See also

* Content analysis
* Text mining


References

{{Online research methods}}