CROWDSOURCING is a specific sourcing model in which individuals or organizations use contributions from a large, relatively open group of Internet users to obtain needed services, ideas, or content.
The term "crowdsourcing" was coined in 2005 by Jeff Howe and Mark
Robinson, editors at _Wired_ , to describe how businesses were using
"Simply defined, crowdsourcing represents the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call. This can take the form of peer-production (when the job is performed collaboratively), but is also often undertaken by sole individuals. The crucial prerequisite is the use of the open call format and the large network of potential laborers."
In a February 1, 2008, article, Daren C. Brabham, "the first to publish scholarly research using the word crowdsourcing" and author of the 2013 book _Crowdsourcing_, defined it as an "online, distributed problem-solving and production model."
After studying more than 40 definitions of crowdsourcing in the scientific and popular literature, Enrique Estellés-Arolas and Fernando González Ladrón-de-Guevara, researchers at the Technical University of Valencia, developed a new integrating definition:
" Crowdsourcingis a type of participative online activity in which an individual, an institution, a nonprofit organization, or company proposes to a group of individuals of varying knowledge, heterogeneity, and number, via a flexible open call, the voluntary undertaking of a task. The undertaking of the task; of variable complexity and modularity, and; in which the crowd should participate, bringing their work, money, knowledge **** experience, always entails mutual benefit. The user will receive the satisfaction of a given type of need, be it economic, social recognition, self-esteem, or the development of individual skills, while the crowdsourcer will obtain and use to their advantage that which the user has brought to the venture, whose form will depend on the type of activity undertaken".
As the definitions of Brabham and of Estellés-Arolas and Ladrón-de-Guevara above suggest, crowdsourcing in the modern conception is an IT-mediated phenomenon, meaning that a form of IT is always used to create and access crowds of people. In this respect, crowdsourcing has been considered to encompass three separate but stable techniques: competition crowdsourcing, virtual labor market crowdsourcing, and open collaboration crowdsourcing.
Henk van Ess, a college lecturer in online communications, emphasizes the need to "give back" the crowdsourced results to the public on ethical grounds. His nonscientific, noncommercial definition is widely cited in the popular press:
" Crowdsourcingis channeling the experts’ desire to solve a problem and then freely sharing the answer with everyone."
Despite the multiplicity of definitions for crowdsourcing, one constant has been the broadcasting of problems to the public and an open call for contributions to help solve them. Members of the public submit solutions that are then owned by the entity that originally broadcast the problem. In some cases, the contributor of the solution is compensated monetarily with prizes or with recognition. In other cases, the only reward may be kudos or intellectual satisfaction. Crowdsourcing may produce solutions from amateurs or volunteers working in their spare time, or from experts or small businesses previously unknown to the initiating organization.
Another consequence of the multiple definitions is the controversy surrounding what kinds of activities may be considered crowdsourcing.
While the term "crowdsourcing" was popularized on the
TIMELINE OF MAJOR EVENTS
A brief timeline of events prior to 2006:
* 1714 – The Longitude Prize: When the British government was trying to find a way to measure a ship's longitudinal position, it offered the public a monetary prize to whoever came up with the best solution.
* 1783 – King Louis XVI offered an award to the person who could 'make the alkali' by decomposing sea salt by the 'simplest and most economic method.'
* 1848 – Matthew Fontaine Maury distributed 5,000 copies of his _Wind and Current Charts_ free of charge, on the condition that sailors returned a standardized log of their voyage to the U.S. Naval Observatory. By 1861, he had distributed 200,000 copies free of charge, on the same condition.
* 1884 – Publication of the _Oxford English Dictionary_, compiled with the help of hundreds of volunteer readers who submitted examples of word usage on paper slips.
EARLY COMPETITIONS

Crowdsourcing has often taken the form of competitions to discover a solution. The French government proposed several such competitions, often rewarded with Montyon Prizes, created for poor Frenchmen who had done virtuous acts. These included the Leblanc process (the Alkali prize), where a reward was provided for a method of separating the salt from the alkali, and Fourneyron's turbine, awarded when the first commercial hydraulic turbine was developed.
In response to a challenge from the French government, Nicolas Appert won a prize for inventing a new way of food preservation that involved sealing food in air-tight jars. The British government provided a similar reward for an easy way to determine a ship's longitude with the Longitude Prize. During the Great Depression, out-of-work clerks tabulated higher mathematical functions in the Mathematical Tables Project as an outreach project. One of the biggest crowdsourcing campaigns was a public design contest in 2010, hosted by the Indian government's finance ministry, to create a symbol for the Indian rupee. Thousands of people sent in entries before the government settled on the final symbol, designed by D. Udaya Kumar.
IN ASTRONOMY

Crowdsourcing in astronomy was used in the early 19th century by astronomer Denison Olmsted. After being awakened late one November night by a meteor shower, Olmsted noticed a pattern in the shooting stars. He wrote a brief report of the shower in the local newspaper: "As the cause of 'Falling Stars' is not understood by meteorologists, it is desirable to collect all the facts attending this phenomenon, stated with as much precision as possible," Olmsted wrote to readers, in a report subsequently picked up and circulated by newspapers nationwide. Responses poured in from many states, along with scientists' observations sent to the _American Journal of Science and Arts_. These responses helped him make a series of scientific breakthroughs, the major discovery being that meteor showers are seen nationwide and fall from space under the influence of gravity. The responses also demonstrated that the showers appeared in yearly cycles, a fact that had often eluded scientists, and allowed him to suggest a velocity for the meteors, although his estimate turned out to be too conservative. If he had taken the responses simply as presented, his conjecture on the meteors' velocity would have been closer to their actual speed.
A more recent version of crowdsourcing in astronomy is NASA's photo organizing project, which asks internet users to browse photos taken from space and try to identify the location the picture is documenting.
IN ENERGY SYSTEM RESEARCH
Energy system models require large and diverse datasets, increasingly so given the trend towards greater temporal and spatial resolution. In response, there have been several initiatives to crowdsource this data. Launched in December 2009, OpenEI is a collaborative website, run by the US government, providing open energy data. While much of its information comes from US government sources, the platform also seeks crowdsourced input from around the world. The semantic wiki and database Enipedia also publishes energy systems data using the concept of crowdsourced open information. Enipedia went live in March 2011.
IN GENEALOGY RESEARCH
Institutes that have records of interest to genealogical research have used crowds of volunteers to create catalogs and indices to records.
IN GENETIC GENEALOGY RESEARCH
Since 2005, the Genographic Project has used the latest genetic technology to expand our knowledge of the human story, and its pioneering use of DNA testing to engage members of the public as participants made it an early large-scale example of crowdsourced genetic genealogy research.
IN JOURNALISM

Crowdsourcing is increasingly used in professional journalism. Journalists crowdsource information from the crowd, typically fact-check the information, and then use it in their articles as they see fit. The leading daily newspaper in Sweden successfully used crowdsourcing to investigate home loan interest rates in the country in 2013–2014, resulting in over 50,000 submissions. The leading daily newspaper in Finland crowdsourced an investigation into stock short-selling in 2011–2012, and the crowdsourced information led to the revelation of a questionable tax-evasion scheme at a Finnish bank. The bank executive was fired and policy changes followed. TalkingPointsMemo in the United States asked its readers to examine 3,000 emails concerning the firing of federal prosecutors in 2008. The British newspaper _The Guardian_ crowdsourced the examination of hundreds of thousands of documents in 2009.
IN LINGUISTICS

Crowdsourcing strategies have been applied to estimate word knowledge and vocabulary size.
IN ORNITHOLOGY

Another early example of crowdsourcing occurred in the field of ornithology. On December 25, 1900, Frank Chapman, an early officer of the National Audubon Society, initiated a tradition dubbed the "Christmas Day Bird Census". The project called on birders from across North America to count and record the number of birds of each species they witnessed on Christmas Day. The project was successful, and the records from 27 contributors were compiled into one bird census tallying around 90 species of birds. This large-scale collection of data constituted an early form of citizen science, the premise on which crowdsourcing is based. In the 2012 census, more than 70,000 individuals participated across 2,369 bird count circles. Christmas 2014 marked the National Audubon Society's 115th annual Christmas Bird Count.
IN PUBLIC POLICY
Crowdsourcing public policy and the production of public services is also referred to as citizen sourcing.
The first conference focusing on crowdsourcing for politics and policy took place at Oxford University in 2014, under the auspices of the Oxford Internet Institute.
Governments across the world are increasingly using crowdsourcing for knowledge discovery and civic engagement. Iceland crowdsourced its constitutional reform process in 2011, and Finland has crowdsourced several law reform processes to address its off-road traffic laws. The Finnish government allowed citizens to go on an online forum to discuss problems and possible resolutions regarding some off-road traffic laws. The crowdsourced information and resolutions would then be passed on to legislators to refer to when making decisions, letting citizens contribute more directly to public policy. The City of Palo Alto is crowdsourcing feedback for its Comprehensive City Plan update, in a process that started in 2015. The House of Representatives in Brazil has used crowdsourcing in policy reforms, and federal agencies in the United States have used crowdsourcing for several years.
IN SEISMOLOGY

The European-Mediterranean Seismological Centre (EMSC) has developed a seismic detection system by monitoring traffic peaks on its website and by analyzing keywords used on Twitter.
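A minimal sketch of how such traffic-based detection might work, assuming a simple rolling-baseline rule; the window size, threshold factor, and per-minute hit feed are illustrative assumptions, not EMSC's published implementation:

```python
from collections import deque

class SpikeDetector:
    """Flag a possible felt earthquake when site traffic jumps above a rolling baseline."""

    def __init__(self, window: int = 60, factor: float = 5.0):
        self.history = deque(maxlen=window)  # recent per-minute hit counts
        self.factor = factor                 # a spike is `factor` times the baseline

    def observe(self, hits_per_minute: int) -> bool:
        baseline = sum(self.history) / len(self.history) if self.history else 0.0
        self.history.append(hits_per_minute)
        # Only report once a nonzero baseline exists to compare against.
        return baseline > 0 and hits_per_minute > self.factor * baseline

detector = SpikeDetector()
for minute, hits in enumerate([40, 38, 45, 41, 400]):
    if detector.observe(hits):
        print(f"minute {minute}: possible seismic event (traffic spike)")
```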
IN LIBRARIES

Crowdsourcing is used in libraries for OCR corrections on digitized texts, for tagging, and for funding.
MODERN METHODS

Currently, crowdsourcing has moved mainly to the Internet, which provides a particularly beneficial venue for crowdsourcing, since individuals tend to be more open in web-based projects, where they are not physically judged or scrutinized and thus can feel more comfortable sharing. This approach ultimately allows for well-designed artistic projects, because individuals are less conscious, or perhaps even less aware, of scrutiny of their work. In an online atmosphere, more attention can be given to the specific needs of a project, rather than spending as much time communicating with other individuals.
According to a definition by Henk van Ess:
"The crowdsourced problem can be huge (epic tasks like finding alien life or mapping earthquake zones) or very small ('where can I skate safely?'). Some examples of successful crowdsourcing themes are problems that bug people, things that make people feel good about themselves, projects that tap into niche knowledge of proud experts, subjects that people find sympathetic or any form of injustice."
Crowdsourcing can take either an explicit or an implicit route. Explicit crowdsourcing lets users work together to evaluate, share, and build different specific tasks, while implicit crowdsourcing means that users solve a problem as a side effect of something else they are doing.
With explicit crowdsourcing, users can evaluate particular items like books or webpages, or share by posting products or items. Users can also build artifacts by providing information and editing other people's work.
Implicit crowdsourcing can take two forms: standalone and piggyback. Standalone allows people to solve problems as a side effect of the task they are actually doing, whereas piggyback takes users' information from a third-party website to gather information.
In his 2013 book, _Crowdsourcing_, Daren C. Brabham puts forth a problem-based typology of crowdsourcing approaches:
* Knowledge discovery and management is used for information management problems where an organization mobilizes a crowd to find and assemble information. It is ideal for creating collective resources.
* Distributed human intelligence tasking is used for information management problems where an organization has a set of information in hand and mobilizes a crowd to process or analyze it. It is ideal for processing large data sets that computers cannot easily handle.
* Broadcast search is used for ideation problems where an organization mobilizes a crowd to come up with a solution to a problem that has an objective, provable right answer. It is ideal for scientific problem solving.
* Peer-vetted creative production is used for ideation problems where an organization mobilizes a crowd to come up with a solution to a problem whose answer is subjective or dependent on public support. It is ideal for design, aesthetic, or policy problems.
Crowdsourcing often allows participants to rank each other's contributions, e.g. in answer to the question "What is one thing we can do to make Acme a great company?" One common ranking method is "like" counting, where the contribution with the most likes ranks first. This method is simple and easy to understand, but it privileges early contributions, which have more time to accumulate likes. In recent years, several crowdsourcing companies have begun to use pairwise comparisons backed by ranking algorithms. Ranking algorithms do not penalize late contributions; they also produce results faster, and have been reported to be at least 10 times faster than manual stack ranking. One drawback, however, is that ranking algorithms are more difficult to understand than like counting.
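As an illustration, the sketch below implements standard Elo updating over pairwise votes, one plausible ranking algorithm of the kind described; the K-factor, the initial rating, and the idea names are illustrative assumptions, not any specific vendor's system:

```python
K = 32             # conventional Elo update step
INITIAL = 1000.0   # starting rating for every contribution

ratings: dict[str, float] = {}

def vote(winner: str, loser: str) -> None:
    """Update two contributions after one voter prefers `winner` over `loser`."""
    ra = ratings.setdefault(winner, INITIAL)
    rb = ratings.setdefault(loser, INITIAL)
    expected = 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))  # chance `winner` "should" win
    ratings[winner] = ra + K * (1.0 - expected)
    ratings[loser] = rb - K * (1.0 - expected)

# A late arrival can overtake earlier entries after only a few comparisons,
# unlike "like" counting, which favors whatever has been visible longest.
for w, l in [("idea-C", "idea-A"), ("idea-C", "idea-B"), ("idea-A", "idea-B")]:
    vote(w, l)
print(sorted(ratings.items(), key=lambda kv: -kv[1]))
```

Because every new contribution starts at the same rating as the rest, it needs only a few favorable comparisons to rise, which is why pairwise schemes do not penalize late entries.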
EXAMPLES

Some common categories of crowdsourcing can be used effectively in the commercial world, including crowdvoting, crowdsolving, crowdfunding, microwork, creative crowdsourcing, crowdsource workforce management, and inducement prize contests. Although this may not be an exhaustive list, the items cover the current major ways in which people use crowds to perform tasks.
CROWDVOTING

Crowdvoting occurs when a website gathers a large group's opinions and judgments on a certain topic. The Iowa Electronic Market is a prediction market that gathers crowds' views on politics and tries to ensure accuracy by having participants pay money to buy and sell contracts based on political outcomes.
Some of the most famous examples have made use of social media channels: Domino's Pizza, Coca-Cola, Heineken, and Sam Adams have thus crowdsourced a new pizza, bottle design, beer, and song, respectively. Threadless.com selects the T-shirts it sells by having users provide designs and vote on the ones they like, which are then printed and available for purchase.
California Report Card (CRC), a program jointly launched in January 2014 by the Center for Information Technology Research in the Interest of Society and Lt. Governor Gavin Newsom, is a mobile-optimized web application that uses crowdvoting to gather public opinion on issues facing the state.
Crowdvoting's value in the movie industry was shown in 2009, when a crowd accurately predicted the success or failure of a movie based on its trailer, a feat that was replicated in 2013 by Google.
On Reddit, users collectively rate web content, discussions, and comments, as well as questions posed to persons of interest in "AMA" and r/AskScience online interviews.
CROWDSOURCING CREATIVE WORK
Main article: Crowdsourcing creative work
Creative crowdsourcing spans sourcing creative projects such as graphic design , crowdsourcing architecture , apparel design, movies, writing, illustration, etc. While crowdsourcing competitions have been used for decades in some creative fields (such as architecture), creative crowdsourcing has proliferated with the recent development of web-based platforms where clients can solicit a wide variety of creative work at lower cost than by traditional means.
CROWDSOURCING LANGUAGE-RELATED DATA COLLECTION
Crowdsourcing has also been used for gathering language-related data. For dictionary work, as mentioned above, it was applied over a hundred years ago by the _Oxford English Dictionary_ editors, using paper and postage. Much later, a call for collecting examples of proverbs on a specific topic (religious pluralism) was printed in a journal. Today, as "crowdsourcing" carries the inherent connotation of being web-based, such language-related data gathering is being conducted on the web at an accelerating pace. Currently, a number of dictionary compilation projects are being conducted on the web, particularly for languages that are not highly documented academically.
CROWDSOLVING

Main article: Crowdsolving
Crowdsolving is a collaborative, yet holistic, way of solving a problem using many people, communities, groups, or resources.
CROWDSEARCHING

Crowdfind, formerly "crowdfynd", uses a version of crowdsourcing best termed crowdsearching, which differs from microwork in that no payment is made for taking part in the search. Its platform, through geographic location anchoring, builds a virtual search party of smartphone and other device users to help find lost items, pets, or people.
TrackR uses a system it calls "crowd GPS", in which Bluetooth identities are uploaded to a central server to track lost or stolen items.
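A minimal sketch of the general "crowd GPS" idea, under the assumption of a simple last-seen registry; the function names and record layout are hypothetical, as TrackR's actual protocol is not described here:

```python
import time

# Hypothetical last-seen registry: any participating phone that detects a
# Bluetooth tag reports it, and the tag's owner queries the registry.
last_seen: dict[str, tuple[float, float, float]] = {}  # tag_id -> (lat, lon, timestamp)

def report_sighting(tag_id: str, lat: float, lon: float) -> None:
    """Called by a bystander's phone whenever it hears a nearby tag."""
    last_seen[tag_id] = (lat, lon, time.time())

def locate(tag_id: str) -> tuple[float, float, float] | None:
    """Owner lookup: the most recent crowd-reported position, if any."""
    return last_seen.get(tag_id)

report_sighting("tag-42", 48.8584, 2.2945)
print(locate("tag-42"))
```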
CROWDFUNDING

Main article: Crowdfunding
Crowdfunding is the process of funding projects by a multitude of people contributing a small amount to attain a certain monetary goal, typically via the Internet. Crowdfunding has been used for both commercial and charitable purposes. The crowdfunding model that has been around the longest is rewards-based crowdfunding, in which people can prepurchase products, buy experiences, or simply donate. While this funding may in some cases go towards helping a business, funders are not allowed to invest and become shareholders via rewards-based crowdfunding.
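A minimal sketch of the accounting behind a rewards-based campaign, assuming the common all-or-nothing rule in which backers are charged only if the goal is met; the campaign figures are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Campaign:
    goal: float                                        # funding target in dollars
    pledges: list[float] = field(default_factory=list)

    def pledge(self, amount: float) -> None:
        self.pledges.append(amount)

    @property
    def funded(self) -> bool:
        # All-or-nothing: backers are charged only if the goal is reached.
        return sum(self.pledges) >= self.goal

campaign = Campaign(goal=10_000)
for amount in [25, 100, 5_000, 4_900]:
    campaign.pledge(amount)
print(f"raised ${sum(campaign.pledges):,.0f}, funded: {campaign.funded}")
```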
Individuals, businesses, and entrepreneurs can showcase their businesses and projects to the entire world by creating a profile, which typically includes a short video introducing the project, a list of rewards per donation, and illustrations through images. The goal is to create a compelling message towards which readers will be drawn. Funders make monetary contributions for numerous reasons:
* They connect to the greater purpose of the campaign, such as being a part of an entrepreneurial community and supporting an innovative idea or product.
* They connect to a physical aspect of the campaign, like rewards and gains from investment.
* They connect to the creative display of the campaign's presentation.
* They want to see new products before the public.
The dilemma for equity crowdfunding in the US as of 2012 was how the Securities and Exchange Commission (SEC) would regulate the entire process. At the time, rules and regulations were being refined by the SEC, which had until January 1, 2013, to tweak the fundraising methods. The regulators were overwhelmed trying to regulate Dodd–Frank and all the other rules and regulations involving public companies and the way they trade. Advocates of regulation claimed that crowdfunding would open up the flood gates for fraud, called it the "wild west" of fundraising, and compared it to the 1980s days of penny stock "cold-call cowboys". The process allows for up to $1 million to be raised without some of the regulations being involved. Under the then-current proposal, companies would have exemptions available and be able to raise capital from a larger pool of persons, which can include lower thresholds for investor criteria, whereas the old rules required that the person be an "accredited" investor. These people are often recruited from social networks, where the funds can be acquired from an equity purchase, loan, donation, or ordering. The amounts collected have become quite high, with requests of over a million dollars for software such as Trampoline Systems, which used crowdfunding to finance the commercialization of its new software.
MOBILE CROWDSOURCING

Mobile crowdsourcing involves activities that take place on smartphones or mobile platforms, frequently characterized by GPS technology. This allows for real-time data gathering and gives projects greater reach and accessibility. However, mobile crowdsourcing can lead to an urban bias, as well as safety and privacy concerns.
MACROWORK

Macrowork tasks typically have these characteristics: they can be done independently, they take a fixed amount of time, and they require special skills. Macrotasks could be part of specialized projects or part of a large, visible project where workers pitch in wherever they have the required skills. The key distinguishing factors are that macrowork requires specialized skills and typically takes longer, while microwork requires no specialized skills.
MICROWORK

Microwork is a crowdsourcing model in which users do small tasks for which computers lack aptitude, for low amounts of money. Amazon's popular Mechanical Turk has created many different projects for users to participate in, where each task requires very little time and offers a very small payment. The Chinese versions of this, commonly called Witkey, are similar and include sites such as Taskcn.com and k68.cn. When choosing tasks, since only certain users "win", users learn to submit later and pick less popular tasks to increase the likelihood of having their work chosen. An example of a Mechanical Turk project is when users searched satellite images for a boat in order to find the lost researcher Jim Gray. Based on an elaborate survey of participants in a microtask crowdsourcing platform, Gadiraju _et al._ have proposed a taxonomy of the different types of microtasks that are crowdsourced.
SIMPLE PROJECTS

Simple projects are those that require a larger amount of time and skill than micro- and macrowork. While an example of macrowork would be writing survey feedback, simple projects include activities like writing a basic line of code or programming a database, both of which require a larger time commitment and skill level. These projects are usually not found on sites like Amazon Mechanical Turk, and are instead posted on platforms like Upwork that call for specific expertise.
COMPLEX PROJECTS

Complex projects generally take the most time, have higher stakes, and call for people with very specific skills. These are generally "one-off" projects that are difficult to accomplish and can include projects like designing a new product that a company hopes to patent. Such tasks are "complex" because design is a meticulous process that requires a large amount of time to perfect, and people doing these projects must have specialized training in design to complete them effectively. These projects usually pay the highest, yet are rarely offered.
INDUCEMENT PRIZE CONTESTS
Web-based idea competitions, or inducement prize contests, often consist of generic ideas, cash prizes, and an Internet-based platform to facilitate easy idea generation and discussion. An example of these competitions is IBM's 2006 "Innovation Jam", attended by over 140,000 international participants and yielding around 46,000 ideas. Another example is the Netflix Prize in 2009. The idea was to ask the crowd to come up with a recommendation algorithm more accurate than Netflix's own. It had a grand prize of US$1,000,000, which was awarded to the BellKor's Pragmatic Chaos team, which bested Netflix's own algorithm for predicting ratings by 10.06%.
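For context, the Netflix Prize measured progress as the percentage reduction in root-mean-square error (RMSE) against Netflix's own baseline. The sketch below shows how such an improvement is computed; the rating arrays are toy data, so the printed improvement is far larger than the 10.06% achieved in the real contest:

```python
import math

def rmse(predicted: list[float], actual: list[float]) -> float:
    """Root-mean-square error between predicted and true ratings."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

actual = [4, 3, 5, 2, 4]                     # true user ratings (toy data)
baseline_pred = [3.5, 3.4, 4.0, 2.9, 3.6]    # stand-in for the incumbent recommender
challenger_pred = [3.9, 3.1, 4.7, 2.3, 3.9]  # stand-in for a crowd-built model

improvement = 1 - rmse(challenger_pred, actual) / rmse(baseline_pred, actual)
print(f"RMSE improvement over baseline: {improvement:.2%}")  # prize required >= 10%
```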
Another example of competition-based crowdsourcing is the 2009 DARPA balloon experiment, in which DARPA placed 10 balloon markers across the United States and challenged teams to compete to be the first to report the location of all the balloons. A collaboration of efforts was required to complete the challenge quickly, and in addition to the competitive motivation of the contest as a whole, the winning team (MIT, in less than nine hours) established its own "collaborapetitive" environment to generate participation in the team. A similar challenge was the Tag Challenge, funded by the US State Department, which required locating and photographing individuals in five cities in the US and Europe within 12 hours based only on a single photograph. The winning team managed to locate three suspects by mobilizing volunteers worldwide using an incentive scheme similar to the one used in the balloon challenge.
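MIT's widely reported balloon strategy used a recursive incentive: the prize money allotted per balloon was split geometrically up the referral chain, with any remainder going to charity. A sketch of that payout rule, with the $4,000-per-balloon figure following published accounts of the MIT scheme:

```python
def recursive_payouts(chain: list[str], per_balloon: float = 4000.0) -> dict[str, float]:
    """Split one balloon's purse up the referral chain, halving at each step.

    `chain` lists people from the finder outward through their recruiters;
    whatever remains after the chain is exhausted goes to charity.
    """
    payouts: dict[str, float] = {}
    share = per_balloon / 2
    for person in chain:
        payouts[person] = share
        share /= 2
    payouts["charity"] = per_balloon - sum(payouts.values())
    return payouts

print(recursive_payouts(["finder", "recruiter", "recruiter's recruiter"]))
# {'finder': 2000.0, 'recruiter': 1000.0, "recruiter's recruiter": 500.0, 'charity': 500.0}
```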
Open innovation platforms are a very effective way of crowdsourcing people's thoughts and ideas for research and development. InnoCentive is a crowdsourcing platform for corporate research and development where difficult scientific problems are posted for crowds of solvers to discover the answer and win a cash prize, which can range from $10,000 to $100,000 per challenge. InnoCentive, of Waltham, MA and London, England, provides access to millions of scientific and technical experts from around the world. The company claims a success rate of 50% in providing successful solutions to previously unsolved scientific and technical problems. IdeaConnection.com challenges people to come up with new inventions and innovations, and Ninesigma.com connects clients with experts in various fields. The X Prize Foundation creates and runs incentive competitions, offering between $1 million and $30 million for solving challenges.
IMPLICIT CROWDSOURCING

Implicit crowdsourcing is less obvious because users do not necessarily know they are contributing, yet it can still be very effective in completing certain tasks. Rather than users actively participating in solving a problem or providing information, implicit crowdsourcing involves users doing another task entirely, while a third party gains information on another topic based on the users' actions.
A good example of implicit crowdsourcing is the ESP game, where users guess what images are and these labels are then used to tag Google images. Another popular use of implicit crowdsourcing is reCAPTCHA, which asks people to solve CAPTCHAs to prove they are human, and then serves CAPTCHAs from old books that cannot be deciphered by computers, in order to digitize them for the web. Like many tasks solved using the Mechanical Turk, CAPTCHAs are simple for humans but often very difficult for computers.
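Both examples rest on inter-user agreement: an answer is accepted once enough independent users produce the same label or transcription. A minimal sketch of that acceptance rule, where the threshold of two matching answers and the item names are illustrative assumptions:

```python
from collections import Counter, defaultdict

AGREEMENT_THRESHOLD = 2  # accept an answer once two independent users agree

answers: defaultdict[str, Counter] = defaultdict(Counter)  # item -> answer tallies
accepted: dict[str, str] = {}

def submit(item_id: str, answer: str) -> None:
    """Record one user's guess and accept it once it reaches the threshold."""
    answers[item_id][answer.strip().lower()] += 1
    top, count = answers[item_id].most_common(1)[0]
    if count >= AGREEMENT_THRESHOLD and item_id not in accepted:
        accepted[item_id] = top

for guess in ["kitten", "cat", "cat"]:
    submit("image-17", guess)
print(accepted)  # {'image-17': 'cat'}
```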
Piggyback crowdsourcing is seen most frequently in websites such as Google that data-mine users' search histories and browsing behavior to discover keywords for ads, spelling corrections, and synonyms. In this way, users unintentionally help to modify existing systems, such as Google's AdWords.
HEALTH-CARE CROWDSOURCING

Research has emerged that outlines the use of crowdsourcing techniques in the public health domain. The collective intelligence outcomes from crowdsourcing are being generated in three broad categories of public health care: health promotion, health research, and health maintenance. Crowdsourcing also enables researchers to move from small homogeneous groups of participants to large heterogeneous groups, beyond convenience samples such as students or more highly educated people.
CROWDSOURCING IN AGRICULTURE
Crowdsourced research also reaches into the field of agriculture, mainly to help farmers and experts identify different types of weeds in fields and to give them the best ways to remove them.
CROWDSOURCING IN CHEATING IN BRIDGE
Boye Brogeland initiated a crowdsourcing investigation of cheating by top-level bridge players that showed several players were guilty, which led to their suspension.
CROWDSOURCERS

A number of motivations exist for businesses to use crowdsourcing to accomplish tasks, find solutions to problems, or gather information. These include the ability to offload peak demand, access cheap labor and information, generate better results, access a wider array of talent than might be present in one organization, and undertake problems that would have been too difficult to solve internally. Crowdsourcing allows businesses to submit problems on which contributors can work, on topics such as science, manufacturing, biotech, and medicine, with monetary rewards for successful solutions. Although crowdsourcing complicated tasks can be difficult, simple work tasks can be crowdsourced cheaply and effectively.
Crowdsourcing also has the potential to be a problem-solving mechanism for government and nonprofit use. Urban and transit planning are prime areas for crowdsourcing. One project to test crowdsourcing's public participation process for transit planning in Salt Lake City was carried out from 2008 to 2009, funded by a U.S. Federal Transit Administration grant. Another notable application of crowdsourcing to government problem solving is the Peer to Patent Community Patent Review project for the U.S. Patent and Trademark Office.
Researchers have used crowdsourcing systems such as Mechanical Turk to aid their research projects by crowdsourcing some aspects of the research process, such as data collection, parsing, and evaluation. Notable examples include using the crowd to create speech and language databases and to conduct user studies. Crowdsourcing systems provide these researchers with the ability to gather large amounts of data. Additionally, using crowdsourcing, researchers can collect data from populations and demographics they may not have had access to locally, improving the validity and value of their work.
Artists have also used crowdsourcing systems. In his project The Sheep Market, Aaron Koblin used Mechanical Turk to collect 10,000 drawings of sheep from contributors around the world. Sam Brown (artist) leverages the crowd by asking visitors of his website explodingdog to send him sentences that he uses as inspiration for paintings. Art curator Andrea Grover argues that individuals tend to be more open in crowdsourced projects because they are not being physically judged or scrutinized. As with other crowdsourcers, artists use crowdsourcing systems to generate and collect data. The crowd can also be used to provide inspiration and to collect financial support for an artist's work.
The crowd is an umbrella term for the people who contribute to crowdsourcing efforts. Though it is sometimes difficult to gather data about the demographics of the crowd, a study by Ross _et al._ surveyed the demographics of a sample of the more than 400,000 registered crowdworkers using Amazon Mechanical Turk to complete tasks for pay. A previous study in 2008 by Ipeirotis found that users at that time were primarily American, young, female, and well-educated, with 40% earning more than $40,000 per year. In November 2009, Ross found a very different Mechanical Turk population, 36% of which was Indian. Two-thirds of Indian workers were male, and 66% had at least a bachelor's degree. Two-thirds had annual incomes less than $10,000, with 27% sometimes or always depending on income from Mechanical Turk to make ends meet.
The average US user of Mechanical Turk earned $2.30 per hour for tasks in 2009, versus $1.58 for the average Indian worker. While the majority of users worked less than five hours per week, 18% worked 15 hours per week or more. This is less than minimum wage in the United States (but not in India), which Ross suggests raises ethical questions for researchers who use crowdsourcing.
The demographics of Microworkers.com differ from Mechanical Turk in that the US and India together account for only 25% of workers; 197 countries are represented among users, with Indonesia (18%) and Bangladesh (17%) contributing the largest share. However, 28% of employers are from the US.
Another study, of the demographics of the crowd at iStockphoto, found a crowd that was largely white, middle- to upper-class, highly educated, worked in a so-called "white-collar job", and had a high-speed Internet connection at home. In a 30-day crowdsourcing diary study in Europe, the participants were predominantly highly educated women.
Studies have also found that crowds are not simply collections of amateurs or hobbyists. Rather, crowds are often professionally trained in a discipline relevant to a given crowdsourcing task and sometimes hold advanced degrees and many years of experience in the profession. Claiming that crowds are amateurs, rather than professionals, is both factually untrue and may lead to marginalization of crowd labor rights.
G. D. Saxton _et al._ (2013) studied the role of community users, among other elements, during his content analysis of 103 crowdsourcing organizations. Saxton _et al._ developed a taxonomy of nine crowdsourcing models (intermediary model, citizen media production, collaborative software development, digital goods sales, product design, peer-to-peer social financing, consumer report model, knowledge base building model, and collaborative science project model) in which to categorize the roles of community users, such as researcher, engineer, programmer, journalist, graphic designer, etc., and the products and services developed.
MOTIVATIONS

Further information: Online participation § Motivations
Many scholars of crowdsourcing suggest that both intrinsic and extrinsic motivations cause people to contribute to crowdsourced tasks and these factors influence different types of contributors. For example, students and people employed full-time rate human capital advancement as less important than part-time workers do, while women rate social contact as more important than men do.
CONTRIBUTORS

Intrinsic motivations are broken down into two categories: enjoyment-based and community-based motivations. Enjoyment-based motivations refer to the fun and enjoyment that contributors experience through their participation; they include skill variety, task identity, task autonomy, direct feedback from the job, and pastime. Community-based motivations refer to motivations related to community participation, and include community identification and social contact. In crowdsourced journalism, the motivation factors are intrinsic: the crowd is driven by the possibility of making a social impact, contributing to social change, and helping their peers.
Extrinsic motivations are broken down into three categories: immediate payoffs, delayed payoffs, and social motivations. Immediate payoffs, through monetary payment, are the immediately received compensations given to those who complete tasks. Delayed payoffs are benefits that can be used to generate future advantages, such as training skills and being noticed by potential employers. Social motivations are the rewards of behaving pro-socially, such as the altruistic motivations of online volunteers. Chandler and Kapelner found that US users of the Amazon Mechanical Turk were more likely to complete a task when told they were going to "help researchers identify tumor cells" than when they were not told the purpose of the task. However, of those who completed the task, quality of output did not depend on the framing of the task.
Motivation factors in crowdsourcing are often a mix of intrinsic and extrinsic factors. In a crowdsourced law-making project, the crowd was motivated by both: intrinsic motivations included fulfilling civic duty, affecting the law for sociotropic reasons, and deliberating with and learning from peers; extrinsic motivations included changing the law for financial gain or other benefits. Participation in crowdsourced policy-making was an act of grassroots advocacy, whether to pursue one's own interest or more altruistic goals, such as protecting nature.
Another form of social motivation is prestige or status. The International Children's Digital Library recruits volunteers to translate and review books. Because all translators receive public acknowledgment for their contributions, Kaufman and Schulz cite this as a reputation-based strategy to motivate individuals who want to be associated with institutions that have prestige. The Mechanical Turk uses reputation as a motivator in a different sense, as a form of quality control. Crowdworkers who frequently complete tasks in ways judged to be inadequate can be denied access to future tasks, providing motivation to produce high-quality work.
REQUESTERS

Using crowdsourcing through means such as Amazon Mechanical Turk can provide researchers and requesters with an already established infrastructure for their projects, allowing them to easily use a crowd and access participants from diverse cultural backgrounds. Using crowdsourcing can also help complete work for projects that would normally face geographical and population-size limitations.
PARTICIPATION IN CROWDSOURCING
Despite the potential global reach of IT applications online, recent research illustrates that differences in location affect participation outcomes in IT-mediated crowds.
LIMITATIONS AND CONTROVERSIES
At least five major topics cover the limitations and controversies about crowdsourcing:
* Impact of crowdsourcing on product quality
* Entrepreneurs contribute less capital themselves
* Increased number of funded ideas
* The value and impact of the work received from the crowd
* The ethical implications of low wages paid to crowdworkers
IMPACT OF CROWDSOURCING ON PRODUCT QUALITY
Crowdsourcing allows anyone to participate, admitting many unqualified participants and generating large quantities of unusable contributions. Companies, or additional crowdworkers, then have to sort through all of these low-quality contributions. The task of sorting through crowdworkers' contributions, along with the necessary job of managing the crowd, requires companies to hire actual employees, thereby increasing management overhead. Crowdsourcing is also susceptible to faulty results caused by targeted, malicious work efforts. Since crowdworkers completing microtasks are paid per task, the financial incentive often causes workers to complete tasks quickly rather than well. Verifying responses is time-consuming, so requesters often depend on having multiple workers complete the same task to correct errors. However, having each task completed multiple times increases time and monetary costs.
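A minimal sketch of the redundancy scheme described above, in which each task is assigned to several workers and the plurality answer is kept; the task names and answers are invented for illustration:

```python
from collections import Counter

def aggregate(task_answers: dict[str, list[str]]) -> dict[str, str]:
    """Keep the plurality answer for each task completed by several workers."""
    return {task: Counter(answers).most_common(1)[0][0]
            for task, answers in task_answers.items()}

submissions = {
    "label-photo-1": ["dog", "dog", "wolf"],  # one hasty worker is outvoted
    "label-photo-2": ["car", "car", "car"],
}
print(aggregate(submissions))  # {'label-photo-1': 'dog', 'label-photo-2': 'car'}
```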
Crowdsourcing quality is also impacted by task design. Lukyanenko _et al._ argue that the prevailing practice of modeling crowdsourcing data collection tasks in terms of fixed classes (options) unnecessarily restricts quality. Their results demonstrate that information accuracy depends on the classes used to model domains, with participants providing more accurate information when classifying phenomena at a more general level (which is typically less useful to sponsor organizations, and hence less common). Further, greater overall accuracy is expected when participants can provide free-form data, compared to tasks in which they select from constrained choices.
Just as limiting, oftentimes not enough skill or expertise exists in the crowd to successfully accomplish the desired task. While this scenario does not affect "simple" tasks such as image labeling, it is particularly problematic for more complex tasks, such as engineering design or product validation. In these cases, it may be difficult or even impossible to find the qualified people in the crowd, as their voices may be drowned out by consistent, but incorrect, crowd members. However, if the difficulty of the task is even "intermediate", estimating crowdworkers' skills and intentions and leveraging them to infer true responses works well, albeit with an additional computational cost.
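One simple way to estimate and leverage worker skill, sketched below under the assumption that each worker's accuracy is already known from gold-standard questions, is to weight votes by the log-odds of that accuracy; published estimators are more elaborate, but the principle is the same:

```python
import math
from collections import defaultdict

# Per-worker accuracy, assumed here to be measured on gold-standard questions;
# in practice it would itself be estimated from the collected data.
worker_accuracy = {"w1": 0.95, "w2": 0.60, "w3": 0.55}

def weighted_answer(votes: dict[str, str]) -> str:
    """Weight each vote by the log-odds of the worker's accuracy.

    Log-odds weighting lets one highly reliable worker outvote several
    near-chance workers, which plain majority voting cannot do.
    """
    scores: defaultdict[str, float] = defaultdict(float)
    for worker, answer in votes.items():
        p = worker_accuracy.get(worker, 0.5)       # unknown workers count as chance
        scores[answer] += math.log(p / (1.0 - p))  # chance (0.5) contributes zero
    return max(scores, key=scores.get)

# The skilled worker overrides two barely-better-than-chance workers.
print(weighted_answer({"w1": "bridge", "w2": "tunnel", "w3": "tunnel"}))  # bridge
```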
Crowdworkers are a nonrandom sample of the population. Many researchers use crowdsourcing to conduct studies quickly and cheaply, with larger sample sizes than would otherwise be achievable. However, due to limited access to the Internet, participation in less developed countries is relatively low. Participation in highly developed countries is similarly low, largely because the low pay is not a strong motivation for most users in those countries. These factors bias the population pool towards users in moderately developed countries, as ranked by the Human Development Index.
One of the problems of crowdsourcing products is the lack of interaction between the crowd and the client. Usually little information is known about the final desired product, and often very limited interaction with the final client occurs. This can decrease the quality of product because client interaction is a vital part of the design process.
An additional cause of the decrease in product quality that can result from crowdsourcing is the lack of collaboration tools. In a typical workplace, coworkers are organized in such a way that they can work together and build upon each other’s knowledge and ideas. Furthermore, the company often provides employees with the necessary information, procedures, and tools to fulfill their responsibilities. However, in crowdsourcing, crowdworkers are left to depend on their own knowledge and means to complete tasks.
A crowdsourced project is usually expected to be unbiased, by incorporating a large population of participants with diverse backgrounds. However, most crowdsourcing work is done by people who are paid or who directly benefit from the outcome (for example, contributors to many open-source projects).
ENTREPRENEURS CONTRIBUTE LESS CAPITAL THEMSELVES
To turn an idea into reality, the first component needed is capital. Depending on the scope and complexity of the crowdsourced project, the amount of necessary capital can range from a few thousand dollars to hundreds of thousands, if not more. The capital-raising process can take from days to months, depending on variables including the entrepreneur's network and the amount of initial self-generated capital.
The crowdsourcing process allows entrepreneurs to access a wide range of investors who can take different stakes in the project. In effect, crowdsourcing simplifies the capital-raising process and allows entrepreneurs to spend more time on the project itself and on reaching milestones, rather than on getting it started. Overall, the simplified access to capital can save time in starting projects and potentially increase the efficiency of projects.
Opponents argue that easier access to capital through a large number of smaller investors can hurt the project and its creators. With a simplified capital-raising process involving more investors with smaller stakes, investors are more risk-seeking, because they can take on an investment size with which they are comfortable. Entrepreneurs thereby lose the experience of convincing investors who are wary of potential risks, because they no longer depend on a single investor for the survival of their project. Instead of entrepreneurs being forced to assess risks and convince large institutional investors why their project can be successful, wary investors can be replaced by others who are willing to take on the risk.
There are translation companies and several users of translations who purport to use crowdsourcing as a means of drastically cutting costs, instead of hiring professional translators. This practice has been systematically denounced by IAPTI and other translator organizations.
INCREASED NUMBER OF FUNDED IDEAS
The raw number of ideas that get funded, and the quality of those ideas, is a major point of controversy in crowdsourcing.
Proponents argue that crowdsourcing is beneficial because it allows niche ideas that would not survive venture capitalist or angel funding, often the primary sources of startup investment, to be started. Many ideas are killed in their infancy due to insufficient support and lack of capital, but crowdsourcing allows such ideas to be started if an entrepreneur can find a community to take interest in the project.
Crowdsourcing allows those who would benefit from a project to fund and become a part of it, which is one way for small niche ideas to get started. However, as the raw number of projects grows, the number of possible failures also increases. Crowdsourcing helps niche and high-risk projects to start because of a perceived need from a select few who seek the product. With high risk and small target markets, the pool of crowdsourced projects faces a greater possible loss of capital, lower returns, and lower levels of success.
CONCERNS

Typically, no written contracts, nondisclosure agreements, or employee agreements are made with crowdworkers. For users of the Amazon Mechanical Turk, this means that requesters decide whether users' work is acceptable and reserve the right to withhold pay if it does not meet their standards. Critics say that crowdsourcing arrangements exploit individuals in the crowd, and a call has been made for crowds to organize for their labor rights.
Collaboration between crowd members can also be difficult or even discouraged, especially in the context of competitive crowdsourcing. The crowdsourcing site InnoCentive allows organizations to solicit solutions to scientific and technological problems; only 10.6% of respondents report working in a team on their submission. Amazon Mechanical Turk workers collaborated with academics to create a platform, WeAreDynamo.org, that allows them to organize and run campaigns to better their work situation.
IRRESPONSIBLE CROWDSOURCING

The popular forum website Reddit came under the spotlight during the first few days after the Boston Marathon bombing, when users attempted to crowdsource the identification of the bombers and misidentified innocent people as suspects.
REFERENCES

* ^ a b Safire, William (February 5, 2009). "On Language". _New York Times Magazine_. Retrieved May 19, 2013.
* ^ Schenk, Eric; Guittard, Claude (2009). "Crowdsourcing: What Can Be Outsourced to the Crowd, and Why?"
* ^ Hirth, Matthias; Hoßfeld, Tobias; Tran-Gia, Phuoc (June 2011). "Anatomy of a Crowdsourcing Platform – Using the Example of Microworkers.com". 5th IEEE International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS 2011).
* ^ a b Estellés-Arolas, Enrique; González-Ladrón-de-Guevara, Fernando (2012). "Towards an Integrated Crowdsourcing Definition" (PDF). _Journal of Information Science_. 38 (2): 189–200. doi:10.1177/0165551512437638.
* ^ a b c Howe, Jeff (2006). "The Rise of Crowdsourcing".
* ^ Brabham, D. C. (2013). _Crowdsourcing_. Cambridge, Massachusetts; London, England: The MIT Press.
* ^ Brabham, D. C. (2008). "Crowdsourcing as a Model for Problem Solving: An Introduction and Cases". _Convergence: The International Journal of Research into New Media Technologies_. 14 (1): 75–90.
* ^ Prpić, J.; Shukla, P. (2016).
* ^ a b c Mason, W.; Suri, S. (2010). "Conducting Behavioral Research on Amazon's Mechanical Turk". _Behavior Research Methods_. SSRN 1691163.
* ^ Koblin, A. (2008). "The Sheep Market". Creativity and Cognition.
* ^ "explodingdog 2015". Explodingdog.com. Retrieved July 2, 2015.
* ^ Linver, D. (2010). _Crowdsourcing and the Evolving Relationship between Art and Artist_.
* ^ "Why". INRIX.com. September 13, 2014. Retrieved July 2, 2015.
* ^ a b Ross, J.; Irani, L.; Silberman, M. S.; Zaldivar, A.; Tomlinson, B. (2010). "Who are the Crowdworkers? Shifting Demographics in Mechanical Turk" (PDF). _CHI 2010_. Archived from the original (PDF) on March 30, 2012.
* ^ "Crowdvoting: How Elo Limits Disruption". _thevisionlab.com_. May 25, 2017.
* ^ Hirth, M.; Hoßfeld, T.; Tran-Gia, P. (2011). Human Cloud as