Partnering can reduce costs and increase scale, but it can alter the kinds of participants, treatments, and outcomes that you can use.
The alternative to doing it yourself is partnering with a powerful organization such as a company, government, or NGO. The advantage of working with a partner is that they can enable you to run experiments that you just can’t do by yourself. For example, one of the experiments that I’ll tell you about below involved 61 million participants; no individual researcher could achieve that scale. But at the same time that partnering increases what you can do, it also constrains you. For example, most companies will not allow you to run an experiment that could harm their business or their reputation. Working with partners also means that when it comes time to publish, you may come under pressure to “re-frame” your results, and some partners might even try to block the publication of your work if it makes them look bad. Finally, partnering also comes with costs related to developing and maintaining these collaborations.
The core challenge that has to be solved to make these partnerships successful is finding a way to balance the interests of both parties, and a helpful way to think about that balance is Pasteur’s Quadrant (Stokes 1997). Many researchers think that if they are working on something practical—something that might be of interest to a partner—then they cannot be doing real science. This mindset will make it very difficult to create successful partnerships, and it also happens to be completely wrong. The problem with this way of thinking is wonderfully illustrated by the path-breaking research of biologist Louis Pasteur. While working on a commercial fermentation project to convert beet juice into alcohol, Pasteur discovered a new class of microorganism that eventually led to the germ theory of disease. This discovery solved a very practical problem—it helped improve the process of fermentation—and it led to a major scientific advance. Thus, rather than thinking about research with practical applications as being in conflict with true scientific research, it is better to think of these as two separate dimensions. Research can be motivated by use (or not), and research can seek fundamental understanding (or not). Critically, some research—like Pasteur’s—can be motivated by use and seek fundamental understanding (Figure 4.16). Research in Pasteur’s Quadrant—research that inherently advances two goals—is ideal for collaborations between researchers and partners. Given that background, I’ll describe two experimental studies with partnerships: one with a company and one with an NGO.
Large companies, particularly tech companies, have developed incredibly sophisticated infrastructure for running complex experiments. In the tech industry, these experiments are often called A/B tests (because they test the effectiveness of two treatments: A and B). These experiments are frequently run for things like increasing click-through rates on ads, but the same experimental infrastructure can also be used for research that advances scientific understanding. An example that illustrates the potential of this kind of research is a study conducted by a partnership between researchers at Facebook and the University of California, San Diego, on the effects of different messages on voter turnout (Bond et al. 2012).
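To give a sense of how this kind of infrastructure typically works, here is a minimal sketch of one common approach to A/B assignment: hashing a user ID so that each user is deterministically, but approximately randomly, mapped to a condition. This is an illustration of the general technique, not Facebook’s actual system; the function name, experiment label, and weights are all hypothetical.

```python
import hashlib

def assign_condition(user_id, experiment, weights):
    """Deterministically assign a user to a condition by hashing.

    weights: dict mapping condition name -> probability (should sum to 1).
    The same (user_id, experiment) pair always maps to the same condition,
    so users see a consistent experience across visits.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    u = int(digest[:8], 16) / 0xFFFFFFFF  # approximately uniform in [0, 1]
    cumulative = 0.0
    for condition, w in weights.items():
        cumulative += w
        if u <= cumulative:
            return condition
    return condition  # guard against floating-point rounding at the boundary

# Hypothetical weights echoing the voting study's heavily skewed split:
weights = {"info+social": 0.98, "info": 0.01, "control": 0.01}
assign_condition(12345, "voting-2010", weights)
```

Hash-based assignment like this is popular in industry because it requires no stored assignment table: any server can recompute a user’s condition on the fly.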
On November 2, 2010—the day of the US congressional elections—all 61 million Facebook users who lived in the US and were over 18 took part in an experiment about voting. Upon visiting Facebook, users were randomly assigned to one of three groups, which determined what banner (if any) was placed at the top of their News Feed (Figure 4.17):

- control: no banner
- info: a banner encouraging users to vote, with a clickable “I Voted” button
- info + social: the same banner, plus the profile pictures of friends who had already clicked “I Voted”
Bond and colleagues studied two main outcomes: reported voting behavior and actual voting behavior. First, they found that people in the info + social group were about 2 percentage points more likely than people in the info group to click “I Voted” (about 20% vs 18%). Further, after the researchers merged their data with publicly available voting records for about 6 million people, they found that people in the info + social group were 0.39 percentage points more likely to actually vote than people in the control condition and that people in the info group were just as likely to vote as people in the control condition (Figure 4.17).
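A back-of-the-envelope version of this kind of two-group comparison can be sketched as follows. The counts below are made-up illustrative numbers echoing the roughly 20% vs 18% click rates, not the study’s actual data, and the Wald interval is just one simple choice of method.

```python
import math

def diff_in_proportions(successes_a, n_a, successes_b, n_b):
    """Estimate the difference in two proportions (a - b) with a Wald 95% CI."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    diff = p_a - p_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return diff, (diff - 1.96 * se, diff + 1.96 * se)

# Illustrative (hypothetical) counts: ~20% vs ~18% clicking "I Voted"
diff, ci = diff_in_proportions(200_000, 1_000_000, 180_000, 1_000_000)
print(f"estimated difference: {diff:.3f}, 95% CI: ({ci[0]:.4f}, {ci[1]:.4f})")
```

With samples this large, even a 2-percentage-point difference comes with a very tight confidence interval, which is part of why massive experiments can detect small effects.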
This experiment shows that some online get-out-the-vote messages are more effective than others, and it shows that researchers’ estimates of the effectiveness of a treatment can depend on whether they study reported or actual behavior. Unfortunately, this experiment does not offer any clues about the mechanisms through which the social information—which some researchers have playfully called a “face pile”—increased voting. It could be that the social information increased the probability that someone noticed the banner, or that it increased the probability that someone who noticed the banner actually voted, or both. Thus, this experiment provides an interesting finding that future research will likely explore (see e.g., Bakshy, Eckles, et al. (2012)).
In addition to advancing the goals of the researchers, this experiment also advanced the goal of the partner organization (Facebook). If you change the behavior studied from voting to buying soap, then you can see that the study has the exact same structure as an experiment to measure the effect of online ads (see e.g., Lewis and Rao (2015)). These ad effectiveness studies frequently measure the effect of exposure to online ads—the treatments in Bond et al. (2012) are basically ads for voting—on offline behavior. Thus, this study could advance Facebook’s ability to study the effectiveness of online ads and could help Facebook convince potential advertisers that Facebook ads are effective.
Even though the interests of the researchers and partners were mostly aligned in this study, they were also partially in tension. In particular, the allocation of participants to the three conditions—control, info, and info + social—was tremendously imbalanced: 98% of the sample was assigned to info + social. This imbalanced allocation is statistically inefficient, and a much better allocation for the researchers would have been 1/3 of the participants in each group. But the imbalanced allocation happened because Facebook wanted everyone to receive the info + social treatment. Fortunately, the researchers convinced them to hold back 1% of participants for a related treatment and 1% for a control group. Without the control group it would have been basically impossible to measure the effect of the info + social treatment because it would have been a “perturb and observe” experiment rather than a randomized controlled experiment. This example provides a valuable practical lesson for working with partners: sometimes you create an experiment by convincing someone to deliver a treatment and sometimes you create an experiment by convincing someone not to deliver a treatment (i.e., to create a control group).
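The statistical cost of this imbalance can be sketched with the usual variance formula for a difference in proportions. The 20% response rate below is an assumed illustrative value, not a figure from the study; the point is the ratio between the two designs, which holds regardless of the exact rate.

```python
import math

def se_diff(p1, p2, n1, n2):
    """Wald standard error of a difference between two proportions."""
    return math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

N = 61_000_000
p = 0.20  # assumed outcome rate, for illustration only

# Roughly the study's split: 98% treatment vs 1% control
se_imbalanced = se_diff(p, p, 0.98 * N, 0.01 * N)

# Same total sample (99% of N) split evenly between the two groups
se_balanced = se_diff(p, p, 0.495 * N, 0.495 * N)

print(se_imbalanced / se_balanced)  # about 5: imbalance inflates the SE ~5x
```

In other words, the 98/1 split made the key comparison about five times noisier than an even split of the same users would have been, which is why the researchers pushed to keep a control group at all.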
Partnership does not always need to involve tech companies and A/B tests with millions of participants. For example, Alexander Coppock, Andrew Guess, and John Ternovski (2016) partnered with an environmental NGO (League of Conservation Voters) to run experiments testing different strategies for promoting social mobilization. The researchers used the NGO’s Twitter account to send out both public tweets and private direct messages that attempted to prime different types of identities. The researchers then measured which of these messages were most effective for encouraging people to sign a petition and retweet information about a petition.
Table 4.3: Examples of experiments run through partnerships between researchers and organizations

| Topic | Citation |
|---|---|
| Effect of Facebook News Feed on information sharing | Bakshy, Rosenn, et al. (2012) |
| Effect of partial anonymity on behavior on an online dating website | Bapna et al. (2016) |
| Effect of Home Energy Reports on electricity usage | Allcott (2011); Allcott and Rogers (2014); Allcott (2015); Costa and Kahn (2013); Ayres, Raseman, and Shih (2013) |
| Effect of app design on viral spread | Aral and Walker (2011) |
| Effect of spreading mechanism on diffusion | Taylor, Bakshy, and Aral (2013) |
| Effect of social information in advertisements | Bakshy, Eckles, et al. (2012) |
| Effect of catalog frequency on sales through catalog and online for different types of customers | Simester et al. (2009) |
| Effect of popularity information on potential job applications | Gee (2015) |
| Effect of initial ratings on popularity | Muchnik, Aral, and Taylor (2013) |
| Effect of message content on political mobilization | Coppock, Guess, and Ternovski (2016) |
Overall, partnering with the powerful enables you to operate at a scale that is hard to achieve otherwise, and Table 4.3 provides other examples of partnerships between researchers and organizations. Partnering can be much easier than building your own experiment. But these advantages come with disadvantages: partnerships can limit the kinds of participants, treatments, and outcomes that you can study. Further, these partnerships can lead to ethical challenges. The best way to spot an opportunity for a partnership is to notice a real problem that you can solve while you are doing interesting science. If you are not used to this way of looking at the world, it can be hard to spot problems in Pasteur’s Quadrant, but with practice, you’ll start to notice them more and more.