This historical appendix provides a very brief review of research ethics in the United States.
Any discussion of research ethics needs to acknowledge that, in the past, researchers have done awful things in the name of science. One of the worst of these was the Tuskegee Syphilis Study (table 6.4). In 1932, researchers from the US Public Health Service (PHS) enrolled about 400 black men infected with syphilis in a study to monitor the effects of the disease. These men were recruited from the area around Tuskegee, Alabama. From the outset the study was nontherapeutic; it was designed merely to document the history of the disease in black males. The participants were deceived about the nature of the study—they were told that it was a study of “bad blood”—and they were offered false and ineffective treatment, even though syphilis is a deadly disease. As the study progressed, safe and effective treatments for syphilis were developed, but the researchers actively intervened to prevent the participants from getting treatment elsewhere. For example, during World War II, the research team secured draft deferments for all the men in the study in order to prevent them from receiving the treatment they would have received had they entered the Armed Forces. Researchers continued to deceive participants and deny them care for 40 years.
The Tuskegee Syphilis Study took place against a backdrop of racism and extreme inequality that was common in the southern part of the United States at the time. But, over its 40-year history, the study involved dozens of researchers, both black and white. And, in addition to the researchers directly involved, many more must have read one of the 15 reports of the study published in the medical literature (Heller 1972). In the mid-1960s—about 30 years after the study began—a PHS employee named Peter Buxtun began pushing within the PHS to end the study, which he considered morally outrageous. In response to Buxtun, in 1969, the PHS convened a panel to do a complete ethical review of the study. Shockingly, the ethical review panel decided that researchers should continue to withhold treatment from the infected men. During the deliberations, one member of the panel even remarked: “You will never have another study like this; take advantage of it” (Brandt 1978). The all-white panel, which was mostly made up of doctors, did decide that some form of informed consent should be obtained. But the panel judged the men themselves incapable of providing informed consent because of their age and low level of education. The panel recommended, therefore, that the researchers receive “surrogate informed consent” from local medical officials. So, even after a full ethical review, the withholding of care continued. Eventually, Buxtun took the story to a journalist, and, in 1972, Jean Heller wrote a series of newspaper articles that exposed the study to the world. It was only after widespread public outrage that the study was finally ended and care was offered to the men who had survived.
Date | Event
---|---
1932 | Approximately 400 men with syphilis are enrolled in the study; they are not informed of the nature of the research |
1937-38 | The PHS sends mobile treatment units to the area, but treatment is withheld from the men in the study
1942-43 | To prevent the men in the study from receiving treatment, the PHS intervenes to keep them from being drafted for WWII
1950s | Penicillin becomes a widely available and effective treatment for syphilis; the men in the study are still not treated (Brandt 1978) |
1969 | The PHS convenes an ethical review of the study; the panel recommends that the study continue |
1972 | Peter Buxtun, a former PHS employee, tells a reporter about the study, and the press breaks the story |
1972 | The US Senate holds hearings on human experimentation, including the Tuskegee Study
1973 | The government officially ends the study and authorizes treatment for survivors |
1997 | US President Bill Clinton publicly and officially apologizes for the Tuskegee Study |
The victims of this study included not just the 399 men, but also their families: at least 22 wives, 17 children, and 2 grandchildren may have contracted syphilis as a result of the withholding of treatment (Yoon 1997). Further, the harm caused by the study continued long after it ended. The study—justifiably—decreased the trust that African Americans had in the medical community, an erosion of trust that may have led them to avoid medical care to the detriment of their health (Alsan and Wanamaker 2016). This lack of trust also hindered efforts to treat HIV/AIDS in the 1980s and 90s (Jones 1993, chap. 14).
Although it is hard to imagine research so horrific happening today, I think there are three important lessons from the Tuskegee Syphilis Study for people conducting social research in the digital age. First, it reminds us that there are some studies that simply should not happen. Second, it shows us that research can harm not just participants, but also their families and entire communities long after the research has been completed. Finally, it shows that researchers can make terrible ethical decisions. In fact, I think it should induce some fear in researchers today that so many people involved in this study made such awful decisions over such a long period of time. And, unfortunately, Tuskegee is by no means unique; there were several other examples of problematic social and medical research during this era (Katz, Capron, and Glass 1972; Emanuel et al. 2008).
In 1974, in response to the Tuskegee Syphilis Study and these other ethical failures by researchers, the US Congress created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research and tasked it with developing ethical guidelines for research involving human subjects. After four years of meeting at the Belmont Conference Center, the group produced the Belmont Report, a document that has had a tremendous impact on both abstract debates in bioethics and the everyday practice of research.
The Belmont Report has three sections. In the first—Boundaries Between Practice and Research—the report sets out its purview. In particular, it argues for a distinction between research, which seeks generalizable knowledge, and practice, which includes everyday treatment and activities. Further, it argues that the ethical principles of the Belmont Report apply only to research. It has been argued that this distinction between research and practice is one way that the Belmont Report is not well suited to social research in the digital age (Metcalf and Crawford 2016; boyd 2016).
The second and third parts of the Belmont Report lay out three ethical principles—Respect for Persons; Beneficence; and Justice—and describe how these principles can be applied in research practice. These are the principles that I described in more detail in the main text of this chapter.
The Belmont Report sets broad goals, but it is not a document that can be easily used to oversee day-to-day activities. Therefore, the US Government created a set of regulations that are colloquially called the Common Rule (their official name is Title 45 Code of Federal Regulations, Part 46, Subparts A-D) (Porter and Koski 2008). These regulations describe the process for reviewing, approving, and overseeing research, and they are the regulations that institutional review boards (IRBs) are tasked with enforcing. To understand the difference between the Belmont Report and the Common Rule, consider how each discusses informed consent: the Belmont Report describes the philosophical reasons for informed consent and broad characteristics that would represent true informed consent, while the Common Rule lists the eight required and six optional elements of an informed consent document. By law, the Common Rule governs almost all research that receives funding from the US Government. Further, institutions that receive funding from the US Government typically apply the Common Rule to all research happening at that institution, regardless of the funding source. But the Common Rule does not automatically apply to companies that do not receive research funding from the US Government.
I think that almost all researchers respect the broad goals of ethical research as expressed in the Belmont Report, but there is widespread annoyance with the Common Rule and the process of working with IRBs (Schrag 2010, 2011; Hoonaard 2011; Klitzman 2015; King and Sands 2015; Schneider 2015). To be clear, those critical of IRBs are not against ethics. Rather, they believe that the current system does not strike an appropriate balance or that it could better achieve its goals through other methods. I, however, will take the current IRB system as given. If you are required to follow the rules of an IRB, then you should do so. However, I would encourage you to also take a principles-based approach when considering the ethics of your research.
This background very briefly summarizes how we arrived at the rules-based system of IRB review in the United States. When considering the Belmont Report and the Common Rule today, we should remember that they were created in a different era and were—quite sensibly—responding to the problems of that era, in particular breaches in medical ethics during and after World War II (Beauchamp 2011).
In addition to efforts by medical and behavioral scientists to create ethical codes, there were also smaller and less well-known efforts by computer scientists. In fact, the first researchers to run into the ethical challenges created by digital-age research were not social scientists: they were computer scientists, specifically researchers in computer security. During the 1990s and 2000s, computer security researchers conducted a number of ethically questionable studies that involved things like taking over botnets and hacking into thousands of computers with weak passwords (Bailey, Dittrich, and Kenneally 2013; Dittrich, Carpenter, and Karir 2015). In response to these studies, the US Government—specifically the Department of Homeland Security—created a blue-ribbon commission to write a guiding ethical framework for research involving information and communication technologies (ICT). The result of this effort was the Menlo Report (Dittrich, Kenneally, and others 2011). Although the concerns of computer security researchers are not exactly the same as those of social researchers, the Menlo Report provides three important lessons for social researchers.
First, the Menlo Report reaffirms the three Belmont principles—Respect for Persons, Beneficence, and Justice—and adds a fourth: Respect for Law and Public Interest. I described this fourth principle and how it should be applied to social research in the main text of this chapter (section 6.4.4).
Second, the Menlo Report calls on researchers to move beyond the narrow definition of “research involving human subjects” from the Belmont Report to a more general notion of “research with human-harming potential.” The limitations of the scope of the Belmont Report are well illustrated by Encore. The IRBs at Princeton and Georgia Tech ruled that Encore was not “research involving human subjects,” and therefore was not subject to review under the Common Rule. However, Encore clearly has human-harming potential; at its most extreme, Encore could potentially result in innocent people being jailed by repressive governments. A principles-based approach means that researchers should not hide behind a narrow, legal definition of “research involving human subjects,” even if IRBs allow it. Rather, they should adopt a more general notion of “research with human-harming potential” and they should subject all of their own research with human-harming potential to ethical consideration.
Third, the Menlo Report calls on researchers to expand the stakeholders that are considered when applying the Belmont principles. As research has moved from a separate sphere of life to something that is more embedded in day-to-day activities, ethical considerations must be expanded beyond just specific research participants to include nonparticipants and the environment in which the research takes place. In other words, the Menlo Report calls for researchers to broaden their ethical field of view beyond just their participants.
This historical appendix has provided a very brief review of research ethics in the social and medical sciences and in computer science. For a book-length treatment of research ethics in medical science, see Emanuel et al. (2008) or Beauchamp and Childress (2012).