Building your own experiment might be costly, but it will enable you to create the experiment that you want.
In addition to overlaying experiments on top of existing environments, you can also build your own experiment. The main advantage of this approach is control; if you are building the experiment, you can create the environment and treatments that you want. These bespoke experimental environments can create opportunities to test theories that are impossible to test in naturally occurring environments. The main drawbacks of building your own experiment are that it can be expensive and that the environment that you are able to create might not have the realism of a naturally occurring system. Researchers building their own experiment must also have a strategy for recruiting participants. When working in existing systems, researchers are essentially bringing the experiments to their participants. But, when researchers build their own experiment, they need to bring participants to it. Fortunately, services such as Amazon Mechanical Turk (MTurk) can provide researchers with a convenient way to bring participants to their experiments.
One example that illustrates the virtues of bespoke environments for testing abstract theories is the digital lab experiment by Gregory Huber, Seth Hill, and Gabriel Lenz (2012). The experiment explores a possible practical limitation to the functioning of democratic governance. Earlier non-experimental studies of actual elections suggest that voters are not able to accurately assess the performance of incumbent politicians. In particular, voters appear to suffer from three biases: 1) they focus on recent rather than cumulative performance; 2) they can be manipulated by rhetoric, framing, and marketing; and 3) they are influenced by events unrelated to incumbent performance, such as the success of local sports teams and the weather. In these earlier studies, however, it was hard to isolate any of these factors from all the other stuff that happens in real, messy elections. Therefore, Huber and colleagues created a highly simplified voting environment in order to isolate, and then experimentally study, each of these three possible biases.
As I describe the experimental set-up below, it is going to sound very artificial, but remember that realism is not a goal in lab-style experiments. Rather, the goal is to clearly isolate the process that you are trying to study, and this tight isolation is sometimes not possible in studies with more realism (Falk and Heckman 2009). Further, in this particular case, the researchers argued that if voters cannot effectively evaluate performance in this highly simplified setting, then they are not going to be able to do it in a more realistic, more complex setting.
Huber and colleagues used Amazon Mechanical Turk (MTurk) to recruit participants. Once a participant provided informed consent and passed a short test, she was told that she was participating in a 32-round game to earn tokens that could be converted into real money. At the beginning of the game, each participant was told that she had been assigned an “allocator” that would give her free tokens each round and that some allocators were more generous than others. Further, each participant was also told that she would have a chance to either keep her allocator or be assigned a new one after 16 rounds of the game. Given what you know about Huber and colleagues’ research goals, you can see that the allocator represents a government and this choice represents an election, but participants were not aware of the general goals of the research. In total, Huber and colleagues recruited about 4,000 participants who were paid about $1.25 for a task that took about 8 minutes.
Recall that one of the findings from earlier research was that voters reward and punish incumbents for outcomes that are clearly beyond their control, such as the success of local sports teams and the weather. To assess whether participants’ voting decisions could be influenced by purely random events in their setting, Huber and colleagues added a lottery to their experimental system. At either the 8th round or the 16th round (i.e., right before the chance to replace the allocator) participants were randomly placed in a lottery where some won 5,000 points, some won 0 points, and some lost 5,000 points. This lottery was intended to mimic good or bad news that is independent of the performance of the politician. Even though participants were explicitly told that the lottery was unrelated to the performance of their allocator, the outcome of the lottery still impacted participants’ decisions. Participants who benefited from the lottery were more likely to keep their allocator, and this effect was stronger when the lottery happened in round 16—right before the replacement decision—than when it happened in round 8 (Figure 4.14). These results, along with the results of several other experiments in the paper, led Huber and colleagues to conclude that even in a simplified setting, voters have difficulty making wise decisions, a result that impacted future research about voter decision making (Healy and Malhotra 2013). The experiment of Huber and colleagues shows that MTurk can be used to recruit participants for lab-style experiments to precisely test very specific theories. It also shows the value of building your own experimental environment: it is hard to imagine how these same processes could have been isolated so cleanly in any other setting.
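To make the structure of this design concrete, here is a minimal simulation sketch in Python. The 32-round game, the replacement decision after round 16, and the +5,000/0/-5,000 lottery at round 8 or 16 come from the description above; everything else (the generosity range, the recency-weighted evaluation, and the decision threshold) is a hypothetical illustration for exposition, not Huber and colleagues' actual implementation or model of voter behavior.

```python
import random

# Illustrative sketch of the allocator game described above.  Only the first
# 16 rounds are simulated, up to the keep-or-replace decision.  The generosity
# range, the recency-weighted evaluation, and the threshold are hypothetical.

REPLACEMENT_ROUND = 16

def simulate_participant(rng):
    generosity = rng.uniform(100, 500)        # hypothetical tokens per round
    lottery_round = rng.choice([8, 16])       # long before, or right before, the decision
    lottery_prize = rng.choice([5000, 0, -5000])

    evaluation = 0.0
    for round_number in range(1, REPLACEMENT_ROUND + 1):
        income = generosity
        if round_number == lottery_round:
            income += lottery_prize           # outcome unrelated to the allocator
        # Hypothetical recency bias: recent rounds count more in the evaluation.
        evaluation = 0.8 * evaluation + income
    keep_allocator = evaluation >= 1500       # hypothetical decision threshold
    return keep_allocator, lottery_prize, lottery_round

rng = random.Random(0)
results = [simulate_participant(rng) for _ in range(20000)]

# Share of simulated participants who keep the allocator, by lottery outcome
# and lottery timing: the lottery matters, and matters more in round 16.
for prize in (5000, 0, -5000):
    for timing in (8, 16):
        kept = [k for k, p, t in results if p == prize and t == timing]
        print(f"prize={prize:6d} round={timing:2d} keep rate={sum(kept)/len(kept):.2f}")
```

In this toy version, the lottery shifts the keep-or-replace decision even though only the generosity reflects the allocator's performance, and a lottery in round 16 shifts it more than one in round 8, mirroring the two biases the experiment was built to detect.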
In addition to building lab-like experiments, researchers can also build experiments that are more field-like. For example, Centola (2010) built a digital field experiment to study the effect of social network structure on the spread of behavior. His research question required him to observe the same behavior spreading in populations that had different social network structures but were otherwise indistinguishable. The only way to do this was with a bespoke, purpose-built experiment. In this case, Centola built a web-based health community.
Centola recruited about 1,500 participants through advertising on health websites. When participants arrived at the online community—which was called the Healthy Lifestyle Network—they provided informed consent and were then assigned “health buddies.” Because of the way Centola assigned these health buddies, he was able to knit together different social network structures in different groups. Some groups were built to have random networks (where everyone was equally likely to be connected), and other groups were built to have clustered networks (where connections are more locally dense). Then, Centola introduced a new behavior into each network: the chance to register for a new website with additional health information. Whenever anyone signed up for this new website, all of her health buddies received an email announcing this behavior. Centola found that this behavior—signing up for the new website—spread further and faster in the clustered networks than in the random networks, a finding that was contrary to some existing theories.
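To illustrate the structural contrast that Centola engineered, here is a small sketch using the networkx library. The group size, the number of health buddies, and the graph-construction functions are hypothetical choices for illustration; Centola's actual construction held each person's number of buddies exactly fixed, whereas this sketch only does so approximately for the random network.

```python
import networkx as nx

# Illustrative contrast between a clustered network and a random network of
# the same size and roughly the same density.  These are not Centola's actual
# networks, just a sketch of the structural difference his design relied on.

n_people = 128    # hypothetical group size
n_buddies = 6     # hypothetical number of health buddies per person

# Clustered network: a ring lattice, where a person's buddies also tend to be
# buddies with each other (locally dense connections).
clustered = nx.watts_strogatz_graph(n_people, n_buddies, p=0.0)

# Random network: the same lattice with every connection rewired at random,
# so buddies of the same person are rarely connected to each other.
random_net = nx.watts_strogatz_graph(n_people, n_buddies, p=1.0, seed=0)

print("average clustering, clustered network:", nx.average_clustering(clustered))
print("average clustering, random network:   ", nx.average_clustering(random_net))
```

The point of the comparison is that the two kinds of networks have the same number of people and roughly the same number of connections, so any difference in how the behavior spreads can be attributed to clustering rather than to network size or density.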
Overall, building your own experiment gives you much more control; it enables you to construct the best possible environment to isolate what you want to study. It is hard to imagine how either of these experiments could have been performed in an already existing environment. Further, building your own system sidesteps some of the ethical concerns that arise when experimenting in existing systems. When you build your own experiment, however, you run into many of the problems that are encountered in lab experiments: recruiting participants and concerns about realism. A final downside is that building your own experiment can be costly and time-consuming, although as these examples show, the experiments can range from relatively simple environments (such as the study of voting by Huber, Hill, and Lenz (2012)) to relatively complex environments (such as the study of networks and contagion by Centola (2010)).