History
The idea behind Google Flu Trends was that the large volume of Google search queries gathered from millions of users could be analyzed to reveal the presence of flu-like illness in a population. Google Flu Trends compared these findings to a historic baseline level of influenza activity for the corresponding region and then reported the activity level as minimal, low, moderate, high, or intense. These estimates were generally consistent with conventional surveillance data collected by health agencies, both nationally and regionally. Roni Zeiger helped develop Google Flu Trends.

Methods
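In outline, the procedure this section describes (scoring individual queries against CDC data, aggregating the top scorers, and fitting a logit-linear model) can be sketched as follows. This is a minimal illustration on synthetic series with hypothetical query names, not Google's implementation; the scoring metric and the use of NumPy's `polyfit` are assumptions for the example.

```python
import numpy as np

def logit(x):
    """Log-odds transform applied to both sides of the linear model."""
    return np.log(x / (1.0 - x))

def fit_logit_linear(q_frac, ili_pct):
    """Fit logit(P) = b0 + b1 * logit(Q) by least squares and score the fit.

    q_frac: weekly query fraction Q in (0, 1); ili_pct: weekly ILI
    physician-visit rate P in (0, 1). Returns (b0, b1, r), where r is the
    correlation between fitted and observed logit(P).
    """
    x, y = logit(q_frac), logit(ili_pct)
    b1, b0 = np.polyfit(x, y, 1)          # slope, intercept
    r = np.corrcoef(y, b0 + b1 * x)[0, 1]
    return b0, b1, r

# Synthetic stand-ins for the real series (hypothetical query names):
rng = np.random.default_rng(0)
p = 0.02 + 0.015 * np.sin(np.linspace(0, 4 * np.pi, 104))   # weekly ILI rate
queries = {
    "flu symptoms": np.clip(2 * p + rng.normal(0, 1e-3, 104), 1e-5, 0.999),
    "basketball":   rng.uniform(1e-4, 1e-3, 104),            # unrelated query
}

# Score every candidate query by how well it alone predicts logit(P) ...
scores = {name: fit_logit_linear(q, p)[2] for name, q in queries.items()}
# ... then keep the best scorers (the top 45 in GFT; the top 1 in this toy
# example) and fit the final model on their summed fractions.
top = sorted(scores, key=scores.get, reverse=True)[:1]
agg = np.sum([queries[name] for name in top], axis=0)
b0, b1, r = fit_logit_linear(agg, p)
```

The flu-related query tracks the ILI series closely, so it scores far above the unrelated one and dominates the final fit, mirroring how the query-selection step filters 50 million candidates down to a handful of predictive terms.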
Google Flu Trends was described as using the following method to gather information about flu trends. First, a time series is computed for about 50 million common queries entered weekly within the United States from 2003 to 2008. A query's time series is computed separately for each state and normalized into a fraction by dividing the number of occurrences of the query by the total number of queries in that state. The state in which a query was entered is determined from the IP address associated with the search. A linear model is then used to relate the log-odds of an influenza-like illness (ILI) physician visit to the log-odds of an ILI-related search query:

logit(''P'') = β0 + β1 logit(''Q'') + ε

where ''P'' is the percentage of ILI physician visits and ''Q'' is the ILI-related query fraction computed in the previous steps. β0 is the intercept and β1 is the coefficient, while ε is the error term. Each of the 50 million queries is tested as ''Q'' to see whether the result computed from a single query matches the actual historical ILI data obtained from the U.S. Centers for Disease Control and Prevention (CDC). This process produces a list of the queries that give the most accurate predictions of CDC ILI data under the linear model. The top 45 queries are then chosen because, when aggregated together, they fit the historical data most accurately. Using the sum of the top 45 ILI-related query fractions, the linear model is fitted to the weekly ILI data between 2003 and 2007 to obtain the coefficients. Finally, the trained model is used to predict flu outbreaks across all regions in the United States. This algorithm was subsequently revised by Google, partially in response to concerns about accuracy, and attempts to replicate its results have suggested that the algorithm developers "felt an unarticulated need to cloak the actual search terms identified".

Privacy concerns
Google Flu Trends tries to avoid privacy violations by aggregating millions of anonymous search queries without identifying the individuals who performed the searches. Its search logs contain the IP address of the user, which could be used to trace back the region where the search query was originally submitted. Google runs programs on computers to access and calculate the data, so no human is involved in the process. Google also implemented a policy of anonymizing the IP addresses in its search logs after nine months. Nevertheless, Google Flu Trends has raised privacy concerns among some privacy groups.

Impact
An initial motivation for GFT was that identifying disease activity early and responding quickly could reduce the impact of seasonal and pandemic influenza. One report found that Google Flu Trends was able to predict regional outbreaks of flu up to 10 days before they were reported by the CDC (Centers for Disease Control and Prevention).

Accuracy
The initial Google paper stated that the Google Flu Trends predictions were 97% accurate compared with CDC data. However, subsequent reports asserted that its predictions were sometimes very inaccurate, especially over the interval 2011–2013, when it consistently overestimated relative flu incidence and, over one interval in the 2012–2013 flu season, predicted twice as many doctors' visits as the CDC recorded. One source of problems is that people making flu-related Google searches may know very little about how to diagnose flu; searches for flu or flu symptoms may well be research into disease symptoms that are similar to flu but are not actually flu. Furthermore, analysis of search terms reportedly tracked by Google, such as "fever" and "cough", as well as the effects of changes in its search algorithm over time, has raised concerns about the meaning of its predictions. In fall 2013, Google began attempting to compensate for increases in searches due to the prominence of flu in the news, which was found to have previously skewed results. However, one analysis concluded that "by combining GFT and lagged CDC data, as well as dynamically recalibrating GFT, we can substantially improve on the performance of GFT or the CDC alone." A later study also demonstrated that Google search data can indeed be used to improve estimates, reducing the errors of a model using CDC data alone by up to 52.7%. By re-assessing the original GFT model, researchers found that it was aggregating queries about different health conditions, which could lead to over-prediction of ILI rates; in the same work, a series of better-performing linear and nonlinear approaches to ILI modelling was proposed.
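The "combining GFT and lagged CDC data" idea above can be sketched as a simple regression nowcast. The series below are synthetic stand-ins rather than real GFT or CDC data, and the one-week-lag design is an assumption chosen for illustration, not the cited study's exact specification.

```python
import numpy as np

# Synthetic stand-ins: a CDC-style weekly ILI series and a biased, noisier
# GFT-style estimate of it (neither is real data).
rng = np.random.default_rng(1)
weeks = 104
cdc = 2.0 + 1.5 * np.sin(np.linspace(0, 4 * np.pi, weeks)) + rng.normal(0, 0.1, weeks)
gft = 1.3 * cdc + 0.5 + rng.normal(0, 0.4, weeks)   # systematically biased proxy

# Regress this week's CDC value on last week's CDC value (available despite
# reporting delays) plus the current GFT estimate, so the fit can dynamically
# recalibrate GFT's bias.
X = np.column_stack([np.ones(weeks - 1), cdc[:-1], gft[1:]])
y = cdc[1:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
combined = X @ coef

err_gft = np.mean((gft[1:] - y) ** 2)        # raw GFT squared error
err_combined = np.mean((combined - y) ** 2)  # recalibrated squared error
```

Because the raw GFT prediction lies in the span of the regression's design matrix, the combined fit can never do worse in-sample, and in this sketch it corrects most of the proxy's systematic bias.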
However, follow-up work was able to substantially improve the accuracy of GFT through the use of a random forest regression model trained on both the incidence of influenza-like illness and the output of the original GFT model.

Related systems
Similar projects, such as the flu-prediction project by the Institute of Cognitive Science Osnabrück, carry the basic idea forward.

References
External links
* {{Official website}}