Beneficence is about understanding and improving the risk/benefit profile of your study, and then deciding whether it strikes the right balance.
The Belmont Report argues that the principle of Beneficence is an obligation that researchers have to participants, and that it involves two parts: (1) do not harm and (2) maximize possible benefits and minimize possible harms. The Belmont Report traces the idea of “do not harm” to the Hippocratic tradition in medical ethics, and it can be expressed in a strong form where researchers “should not injure one person regardless of the benefits that might come to others” (Belmont Report 1979). However, the Belmont Report also acknowledges that learning what is beneficial may involve exposing some people to risk. Therefore, the imperative of not doing harm can be in conflict with the imperative to learn, leading researchers to occasionally make difficult decisions about “when it is justifiable to seek certain benefits despite the risks involved, and when the benefits should be foregone because of the risks” (Belmont Report 1979).
In practice, the principle of Beneficence has been interpreted to mean that researchers should undertake two separate processes: a risk/benefit analysis and then a decision about whether the risks and benefits strike an appropriate ethical balance. The first process is largely a technical matter requiring substantive expertise, while the second is largely an ethical matter where such expertise may be less valuable, or even detrimental.
A risk/benefit analysis involves both understanding and improving the risks and benefits of a study. Analysis of risk should include two elements: the probability of adverse events and the severity of those events. As a result of a risk/benefit analysis, a researcher could adjust the study design to reduce the probability of an adverse event (e.g., screen out participants who are vulnerable) or reduce the severity of an adverse event if it occurs (e.g., make counseling available to participants who request it). Further, during the risk/benefit analysis researchers need to keep in mind the impact of their work not just on participants but also on nonparticipants and social systems. For example, consider the experiment by Restivo and van de Rijt (2012) on the effect of awards on Wikipedia editors (discussed in chapter 4). In this experiment, the researchers gave awards to a small number of editors whom they considered deserving and then tracked their contributions to Wikipedia, compared with a control group of equally deserving editors to whom the researchers did not give an award. Imagine if, instead of giving a small number of awards, Restivo and van de Rijt had flooded Wikipedia with many, many awards. Although this design might not harm any individual participant, it could disrupt the entire award ecosystem in Wikipedia. In other words, when doing a risk/benefit analysis, you should think about the impacts of your work not just on participants but on the world more broadly.
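To make the two elements of risk analysis concrete, here is a minimal sketch, not a formal method prescribed by the Belmont Report, of how a researcher might tabulate anticipated adverse events and compare candidate designs; the events, probabilities, and severity scores are all hypothetical.

```python
# A minimal sketch of one way to structure a risk analysis: list each
# anticipated adverse event with its probability and its severity (here
# scored on a 0-10 scale), then compare candidate designs by
# probability-weighted severity. All events and numbers are hypothetical.

design_original = [
    ("distress from treatment", 0.05, 4),   # no screening of participants
    ("privacy breach from data release", 0.001, 9),
]
design_with_screening = [
    ("distress from treatment", 0.01, 4),   # vulnerable participants screened out
    ("privacy breach from data release", 0.001, 9),
]

def expected_harm(events):
    """Sum of probability-weighted severities across anticipated adverse events."""
    return sum(prob * severity for _, prob, severity in events)

print(expected_harm(design_original))        # 0.209
print(expected_harm(design_with_screening))  # 0.049
```

A tabulation like this does not settle the ethical question of whether the remaining risk is acceptable, but it makes explicit which design changes reduce probability, which reduce severity, and by how much.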
Next, once the risks have been minimized and the benefits maximized, researchers should assess whether the study strikes a favorable balance. Ethicists do not recommend a simple summation of costs and benefits. In particular, some risks render the research impermissible no matter the benefits (e.g., the Tuskegee Syphilis Study described in the historical appendix). Unlike the risk/benefit analysis, which is largely technical, this second step is deeply ethical and may in fact be enriched by people who do not have specific subject-area expertise. In fact, because outsiders often notice things that insiders miss, IRBs in the United States are required to include at least one nonresearcher. In my experience serving on an IRB, these outsiders can be helpful in preventing group-think. So if you are having trouble deciding whether your research project strikes an appropriate risk/benefit balance, don’t just ask your colleagues; try asking some nonresearchers. Their answers might surprise you.
Applying the principle of Beneficence to the three examples that we are considering suggests some changes that might improve their risk/benefit balance. For example, in Emotional Contagion, the researchers could have attempted to screen out people under 18 years old and people especially likely to react badly to the treatment. They could also have tried to minimize the number of participants by using efficient statistical methods (as described in detail in chapter 4). Further, they could have attempted to monitor participants and offer assistance to anyone who appeared to have been harmed. In Tastes, Ties, and Time, the researchers could have put extra safeguards in place when they released the data (although their procedures were approved by Harvard’s IRB, which suggests that they were consistent with common practice at that time); I’ll offer some more specific suggestions about data release later when I describe informational risk (section 6.6.2). Finally, in Encore, the researchers could have attempted to minimize the number of risky requests created in order to achieve the measurement goals of the project, and they could have excluded participants who were most in danger from repressive governments. Each of these possible changes would introduce trade-offs into the design of these projects, and my goal is not to suggest that these researchers should have made these changes. Rather, it is to show the kinds of changes that the principle of Beneficence can suggest.
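To illustrate what “efficient statistical methods” can mean in practice, here is a minimal sketch of a standard power calculation, which shows how design choices change the number of participants who must be exposed to a treatment; the effect sizes and thresholds are hypothetical, and a power calculation is only one of many ways to reduce sample size.

```python
# A minimal sketch of using a power calculation so that a study enrolls no
# more participants than needed. The effect sizes, significance level, and
# power target below are hypothetical.
from statsmodels.stats.power import tt_ind_solve_power

# Participants per arm needed to detect a small standardized effect (d = 0.1)
# with 80% power at the 5% significance level, using a two-sample t-test.
n_small_effect = tt_ind_solve_power(effect_size=0.1, alpha=0.05, power=0.8)
print(round(n_small_effect))  # roughly 1,571 per arm

# A design that targets a larger effect (d = 0.5), for example by using a
# less noisy outcome measure, exposes far fewer participants to risk.
n_large_effect = tt_ind_solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(round(n_large_effect))  # roughly 64 per arm
```

The point of such a calculation, from the perspective of Beneficence, is that every participant beyond the number needed to answer the research question is exposed to risk without any corresponding gain in knowledge.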
Finally, although the digital age has generally made the weighing of risks and benefits more complex, it has actually made it easier for researchers to increase the benefits of their work. In particular, the tools of the digital age greatly facilitate open and reproducible research, where researchers make their research data and code available to other researchers and make their papers available through open access publishing. This change to open and reproducible research, while by no means simple, offers a way for researchers to increase the benefits of their research without exposing participants to any additional risk (data sharing is an exception that will be discussed in detail in section 6.6.2 on informational risk).