Human computation enables you to have a thousand research assistants.
Human computation projects combine the work of many non-experts to solve easy-task-big-scale problems that are not easily solved by computers. They use the split-apply-combine strategy to break a big problem into lots of simple microtasks that can be solved by people without specialized skills. Computer-assisted human computation systems also use machine learning in order to amplify the human effort.
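To make the split-apply-combine strategy concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the item names, the number of labels per item, and the simulated workers are illustrative assumptions rather than part of any real project. It shows the shape of the approach: split the big problem into microtasks, apply many non-expert judgments, and combine them.

```python
# A minimal sketch of the split-apply-combine strategy for a labeling task.
# All names and data here are hypothetical; a real project would replace the
# simulated "workers" with volunteers or a microtask labor market.

import random
from collections import Counter

# Split: a big problem (many items to classify) becomes many small microtasks.
items = [f"image_{i}" for i in range(1000)]   # e.g., galaxy images
labels_per_item = 5                           # redundancy for quality

def ask_worker(item):
    """Apply: one non-expert answers one simple question about one item.
    Here we just simulate a noisy answer."""
    true_label = "spiral" if hash(item) % 2 == 0 else "elliptical"
    if random.random() < 0.8:
        return true_label
    return random.choice(["spiral", "elliptical"])

# Apply: each microtask is answered by several independent workers.
raw_labels = {item: [ask_worker(item) for _ in range(labels_per_item)]
              for item in items}

# Combine: aggregate the redundant answers, e.g., by majority vote.
combined = {item: Counter(votes).most_common(1)[0][0]
            for item, votes in raw_labels.items()}

print(combined["image_0"], raw_labels["image_0"])
```

A computer-assisted version would go one step further and use the combined labels as training data for a machine learning classifier that then labels the items that no human has seen yet.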
In social research, human computation projects are most likely to be used in situations where researchers want to classify, code, or label images, video, or texts. These classifications are usually not the final product of the research; instead, they are the raw material for analysis. For example, the crowd-coding of political manifestos could be used as part of an analysis of the dynamics of political debate. These kinds of classification microtasks are likely to work best when they do not require specialized training and when there is broad agreement about the correct answer. If the classification task is more subjective—such as “Is this news story biased?”—then it becomes increasingly important to understand who is participating and what biases they might bring. In the end, the quality of the output of human computation projects rests on the quality of the inputs that the human participants provide: garbage in, garbage out.
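Because the output is only as good as the inputs, most projects build in some form of quality control. One common approach, sketched below with a hypothetical agreement threshold, is to keep an item’s label only when enough independent coders agree and to flag the rest for re-coding or review; this filters obvious noise, although it does not remove biases that the participants share.

```python
# A minimal sketch of one common quality check (the threshold is a
# hypothetical choice): only keep an item's label when enough coders agree,
# and flag the rest for review or additional coding.

from collections import Counter

def combine_with_agreement(votes, min_agreement=0.8):
    """Return (label, agreement) if the top label reaches the threshold,
    otherwise (None, agreement) so the item can be re-coded or reviewed."""
    top_label, top_count = Counter(votes).most_common(1)[0]
    agreement = top_count / len(votes)
    label = top_label if agreement >= min_agreement else None
    return label, agreement

print(combine_with_agreement(
    ["biased", "biased", "biased", "not biased", "biased"]))      # ('biased', 0.8)
print(combine_with_agreement(
    ["biased", "not biased", "biased", "not biased", "biased"]))  # (None, 0.6)
```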
In order to further build your intuition, table 5.1 provides additional examples of how human computation has been used in social research. This table shows that, unlike Galaxy Zoo, many other human computation projects use microtask labor markets (e.g., Amazon Mechanical Turk) and rely on paid workers rather than volunteers. I’ll return to this issue of participant motivation when I provide advice about creating your own mass collaboration project.
Table 5.1: Examples of how human computation has been used in social research

Summary | Data | Participants | Reference |
---|---|---|---|
Code political party manifestos | Text | Microtask labor market | Benoit et al. (2016) |
Extract event information from news articles on the Occupy Protests in 200 US cities | Text | Microtask labor market | Adams (2016) |
Classify newspaper articles | Text | Microtask labor market | Budak, Goel, and Rao (2016) |
Extract event information from diaries of soldiers in World War I | Text | Volunteers | Grayson (2016) |
Detect changes in maps | Images | Microtask labor market | Soeller et al. (2016) |
Check algorithmic coding | Text | Microtask labor market | Porter, Verdery, and Gaddis (2016) |
Finally, the examples in this section show that human computation can have a democratizing impact on science. Recall that Schawinski and Lintott were graduate students when they started Galaxy Zoo. Prior to the digital age, a project to classify a million galaxies would have required so much time and money that it would have been practical only for well-funded and patient professors. That’s no longer true. Human computation projects combine the work of many non-experts to solve easy-task-big-scale problems. Next, I’ll show you that mass collaboration can also be applied to problems that require expertise, expertise that even the researcher herself might not have.