[] In arguing against the Emotional Contagion experiment, Kleinsman and Buckley (2015) wrote:
“Even if it is true that the risks for the Facebook experiment were low and even if, in hindsight, the results are judged to be useful, there is an important principle at stake here that must be upheld. In the same way that stealing is stealing no matter what amounts are involved, so we all have a right not to be experimented on without our knowledge and consent, whatever the nature of the research.”
[] Maddock, Mason, and Starbird (2015) consider the question of whether researchers should use tweets that have been deleted. Read their paper to learn about the background.
[] In an article on the ethics of field experiments, Humphreys (2015) proposed the following hypothetical experiment to highlight the ethical challenges of interventions that are done without the consent of all impacted parties and that harm some and help others.
“Say a researcher is contacted by a set of community organizations that want to figure out whether placing street lights in slums will reduce violent crime. In this research the subjects are the criminals: seeking informed consent of the criminals would likely compromise the research and it would likely not be forthcoming anyhow (violation of respect for persons); the criminals will likely bear the costs of the research without benefiting (violation of justice); and there will be disagreement regarding the benefits of the research – if it is effective, the criminals in particular will not value it (producing a difficulty for assessing benevolence). . . . The special issues here are not just around the subjects however. Here there are also risks that obtain to non-subjects, if for example criminals retaliate against the organizations putting the lamps in place. The organization may be very aware of these risks but be willing to bear them because they erroneously put faith in the ill-founded expectations of researchers from wealthy universities who are themselves motivated in part to publish.”
[] In the 1970s, 60 men participated in a field experiment that took place in a men’s bathroom at a university in the midwestern part of the US (the researchers don’t name the university) (Middlemist, Knowles, and Matter 1976). The researchers were interested in how people respond to violations of their personal space, which Sommer (1969) defined as the “area with invisible boundaries surrounding a person’s body into which intruders may not come.” More specifically, the researchers chose to study how a man’s urination was impacted by the presence of others nearby. After conducting a purely observational study, the researchers conducted a field experiment. Participants were forced to use the left-most urinal in a three-urinal bathroom (the researchers do not explain exactly how this happened). Next, participants were assigned to one of three levels of interpersonal distance. For some men, a confederate used the urinal directly next to them; for some men, a confederate used a urinal one space away; and for some men, no confederate entered the bathroom. The researchers measured their outcome variables—delay time and persistence—by stationing a research assistant inside the toilet stall adjacent to the participant’s urinal. Here’s how the researchers described the measurement procedure:
“An observer was stationed in the toilet stall immediately adjacent to the subjects’ urinal. During pilot tests of these procedures it became clear that auditory cues could not be used to signal the initiation and cessation of [urination]. . . . Instead, visual cues were used. The observer used a periscopic prism imbedded in a stack of books lying on the floor of the toilet stall. An 11-inch (28-cm) space between the floor and the wall of the toilet stall provided a view, through the periscope, of the user’s lower torso and made possible direct visual sightings of the stream of urine. The observer, however, was unable to see a subject’s face. The observer started two stop watches when a subject stepped up to the urinal, stopped one when urination began, and stopped the other when urination was terminated.”
The researchers found that decreased physical distance led to increased delay of onset and decreased persistence (Figure 6.7).
[] In August 2006, about 10 days prior to a primary election, 20,000 people living in Michigan received a mailing that showed their own voting behavior and the voting behavior of their neighbors (Figure 6.8). (As discussed in the chapter, in the US, state governments keep records of who votes in each election, and this information is available to the public.) This particular treatment produced the largest effect ever seen up to that point for a single-piece mailing: it increased the turnout rate by 8.1 percentage points (Gerber, Green, and Larimer 2008). To put this in context, single-piece mailings typically produce increases of about one percentage point (Gerber, Green, and Larimer 2008). The effect was so large that a political operative named Hal Malchow offered Donald Green $100,000 not to publish the result of the experiment (presumably so that Malchow could make use of the information himself) (Issenberg 2012, p. 304). But Alan Gerber, Donald Green, and Christopher Larimer did publish the paper in 2008 in the American Political Science Review.
When you carefully inspect the mailer in Figure 6.8, you may notice that the researchers’ names do not appear on it. Rather, the return address is to Practical Political Consulting. In the acknowledgments of the paper, the authors explain: “Special thanks go to Mark Grebner of Practical Political Consulting, who designed and administered the mail program studied here.”
[] Building on the previous question, once these 20,000 mailers were sent (Figure 6.8), as well as 60,000 other potentially less sensitive mailers, there was a backlash from participants. In fact, Issenberg (2012, p. 198) reports that “Grebner [the director of Practical Political Consulting] was never able to calculate how many people took the trouble to complain by phone, because his office answering machine filled so quickly that new callers were unable to leave a message.” Grebner noted that the backlash could have been even larger if they had scaled up the treatment. He said to Alan Gerber, one of the researchers, “Alan if we had spent five hundred thousand dollars and covered the whole state you and I would be living with Salman Rushdie.” (Issenberg 2012, p. 200)
[] In practice, most ethical debate centers on studies in which researchers do not have true informed consent from participants (e.g., the three case studies in this chapter). However, ethical debate can also arise for studies that have true informed consent. Design a hypothetical study where you would have true informed consent from participants but which you still think would be unethical. (Hint: If you are struggling, you can try reading Emanuel, Wendler, and Grady (2000).)
[] Researchers often struggle to describe their ethical thinking to each other and to the general public. After it was discovered that Taste, Ties, and Time had been re-identified, Jason Kaufman, the leader of the research team, made a few public comments about the ethics of the project. Read Zimmer (2010) and then rewrite Kaufman’s comments using the principles and ethical frameworks that are described in this chapter.
[] Banksy is one of the most famous contemporary artists in the United Kingdom, and he is known for politically oriented street graffiti (Figure 6.9). His precise identity, however, is a mystery. Banksy has a personal website, so he could make his identity public if he wanted, but he has chosen not to. In 2008, the Daily Mail, a newspaper, published an article claiming to identify Banksy’s real name. Then, in 2016, Michelle Hauge, Mark Stevenson, D. Kim Rossmo, and Steven C. Le Comber (2016) attempted to verify this claim using a Dirichlet process mixture model of geographic profiling. More specifically, they collected the geographic locations of Banksy’s public graffiti in Bristol and London. Next, by searching through old newspaper articles and public voting records, Hauge and colleagues found past addresses of the named individual, his wife, and his football (i.e., soccer) team. The authors summarize the findings of their paper as follows:
“With no other serious ‘suspects’ [sic] to investigate, it is difficult to make conclusive statements about Banksy’s identity based on the analysis presented here, other than saying the peaks of the geoprofiles in both Bristol and London include addresses known to be associated with [name redacted].”
Following Metcalf and Crawford (2016), I have decided not to include the name of the individual when discussing this study.
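To build intuition for the geographic-profiling idea, here is a minimal sketch in Python. It is not the Dirichlet process mixture model that Hauge and colleagues actually used; it simply scores every point on a grid with a distance-decay (Gaussian) kernel around a handful of made-up site coordinates and reports the peak of the resulting surface, which plays the role of the “peaks of the geoprofiles” in the quote above.

```python
import numpy as np

# Hypothetical graffiti coordinates (in km on a local grid); these are
# made-up values for illustration, not Banksy's actual sites.
sites = np.array([[1.2, 3.4], [2.1, 2.9], [1.8, 4.0], [2.5, 3.1]])

# Evaluate a simple distance-decay geoprofile on a 200 x 200 grid:
# each location's score sums Gaussian kernels centered on the sites.
xs = np.linspace(0, 5, 200)
ys = np.linspace(0, 5, 200)
xx, yy = np.meshgrid(xs, ys)
grid = np.stack([xx.ravel(), yy.ravel()], axis=1)  # shape (40000, 2)

sigma = 0.5  # decay bandwidth, in the same units as the coordinates
d2 = ((grid[:, None, :] - sites[None, :, :]) ** 2).sum(axis=2)
score = np.exp(-d2 / (2 * sigma ** 2)).sum(axis=1)

# The highest-scoring grid cell is the peak of the surface; an analyst
# would then check which known addresses fall near it.
peak = grid[score.argmax()]
print(f"Geoprofile peak near x={peak[0]:.2f}, y={peak[1]:.2f}")
```

The real model is far more sophisticated, but the basic logic is the same: combine the locations of many incidents into a surface and compare its peaks against addresses associated with a suspect.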
[] In an interesting article, Metcalf (2016) makes the argument that “publicly available datasets containing private data are among the most interesting to researchers and most risky to subjects.”
[] In this chapter, I proposed the rule of thumb that all data are potentially identifiable and all data are potentially sensitive. Table 6.5 provides a list of examples of data that have no obviously personally identifying information but that can still be linked to specific people; a minimal sketch of such a linkage attack appears after the table.
Data | Citation |
---|---|
Health insurance records | Sweeney (2002) |
Credit card transaction data | Montjoye et al. (2015) |
Netflix movie rating data | Narayanan and Shmatikov (2008) |
Phone call meta-data | Mayer, Mutchler, and Mitchell (2016) |
Search log data | Barbaro and Zeller Jr (2006) |
Demographic, administrative, and social data about students | Zimmer (2010) |
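To see why entries like these end up in such a table, here is a minimal sketch of a Sweeney-style linkage attack using tiny made-up dataframes. Even though the “de-identified” health records contain no names, joining them with a public voter list on the shared quasi-identifiers (ZIP code, birth date, and sex) re-attaches identities; Sweeney (2002) estimated that these three fields alone uniquely identify most of the US population.

```python
import pandas as pd

# Hypothetical "de-identified" health records: no names, but
# quasi-identifiers (ZIP code, birth date, sex) remain.
health = pd.DataFrame({
    "zip": ["02138", "02139", "02141"],
    "birth_date": ["1945-07-31", "1960-01-15", "1932-03-02"],
    "sex": ["F", "M", "F"],
    "diagnosis": ["hypertension", "diabetes", "asthma"],
})

# Hypothetical public voter list: names plus the same quasi-identifiers.
voters = pd.DataFrame({
    "name": ["Alice Smith", "Bob Jones", "Carol White"],
    "zip": ["02138", "02139", "02141"],
    "birth_date": ["1945-07-31", "1960-01-15", "1932-03-02"],
    "sex": ["F", "M", "F"],
})

# Joining on the quasi-identifiers re-attaches names to diagnoses.
linked = health.merge(voters, on=["zip", "birth_date", "sex"])
print(linked[["name", "diagnosis"]])
```

The same logic drives each row of the table: the auxiliary dataset differs from row to row (voter rolls, IMDb ratings, and so on), but the attack is always a join on attributes that the “anonymous” data still contain.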
[] Putting yourself in everyone’s shoes means considering your participants and the general public, not just your peers. This distinction is illustrated by the case of the Jewish Chronic Disease Hospital (Katz, Capron, and Glass 1972, Ch. 1; Lerner 2004; Arras 2008).
Dr. Chester M. Southam was a distinguished physician and researcher at the Sloan-Kettering Institute for Cancer Research and an Associate Professor of Medicine at the Cornell University Medical College. On July 16, 1963, Southam and two colleagues injected live cancer cells into the bodies of 22 debilitated patients at the Jewish Chronic Disease Hospital in New York. These injections were part of Southam’s research to understand the immune systems of cancer patients. In earlier research, Southam had found that healthy volunteers were able to reject injected cancer cells in roughly four to six weeks, whereas patients who already had cancer took much longer. Southam wondered whether the delayed response in the cancer patients was because they had cancer or because they were elderly and already debilitated. To address these possibilities, Southam decided to inject live cancer cells into a group of people who were elderly and debilitated but who did not have cancer.

When word of the study spread, triggered in part by the resignation of three physicians who were asked to participate, some made comparisons to the Nazi concentration camp experiments, but others—based in part on assurances by Southam—found the research unproblematic. Eventually, the New York State Board of Regents reviewed the case in order to decide whether Southam should be allowed to continue to practice medicine. Southam argued in his defense that he was acting in “the best tradition of responsible clinical practice.” His defense rested on a number of claims, all of which were supported by distinguished experts who testified on his behalf: (1) his research was of high scientific and social merit; (2) there were no appreciable risks to participants, a claim based in part on Southam’s 10 years of prior experience with more than 600 subjects; (3) the level of disclosure should be adjusted according to the level of risk posed by the research; and (4) the research was in conformity with the standard of medical practice at that time. Ultimately, the Board of Regents found Southam guilty of fraud, deceit, and unprofessional conduct, and suspended his medical license for one year. Yet, just a few years later, Southam was elected president of the American Association for Cancer Research.
[] In a paper titled “Crowdseeding in Eastern Congo: Using Cell Phones to Collect Conflict Events Data in Real Time”, Van der Windt and Humphreys (2016) describe a distributed data collection system (see Chapter 5) that they created in Eastern Congo. Describe how the researchers dealt with the uncertainty about possible harms to participants.
[] In October 2014, three political scientists sent mailers to 102,780 registered voters in Montana as part of an experiment to measure whether voters who are given more information are more likely to vote. The mailers—which were labeled “2014 Montana General Election Voter Information Guide”—placed the candidates for Montana Supreme Court Justice, a nonpartisan office, on a scale from liberal to conservative that included Barack Obama and Mitt Romney as comparisons. The mailer also included a reproduction of the Great Seal of the State of Montana (Figure 6.10).
The mailers generated complaints from Montana voters, and they caused Linda McCulloch, Montana’s Secretary of State, to file a formal complaint with the Montana state government. The universities that employed the researchers—Dartmouth and Stanford—sent a letter to everyone who had received the mailer, apologizing for any potential confusion and making clear that the mailer “was not affiliated with any political party, candidate or organization, and was not intended to influence any race.” The letter also clarified that the ranking “relied upon public information about who had donated to each of the campaigns” (Figure 6.11).
In May 2015, the Commissioner of Political Practices of the State of Montana, Jonathan Motl, determined that the researchers violated Montana law: “The Commissioner determines that there are sufficient facts to show that Stanford, Dartmouth and/or its researchers violated Montana campaign practice laws requiring registration, reporting and disclosure of independent expenditures.” (Sufficient Finding Number 3 in Motl (2015)). The Commissioner also recommended that the County Attorney investigate whether the unauthorized use of the Great Seal of Montana violated Montana state law (Motl 2015).
Stanford and Dartmouth disagreed with Motl’s ruling. A Stanford spokeswoman named Lisa Lapin said “Stanford…does not believe any election laws were violated” and that the mailing “did not contain any advocacy supporting or opposing any candidate.” She pointed out that the mailer explicitly stated that it “is nonpartisan and does not endorse any candidate or party.” (Richman 2015)
Candidates | Votes received | Percentage |
---|---|---|
Supreme Court Justice #1 | | |
W. David Herbert | 65,404 | 21.59% |
Jim Rice | 236,963 | 78.22% |
Supreme Court Justice #2 | | |
Lawrence VanDyke | 134,904 | 40.80% |
Mike Wheat | 195,303 | 59.06% |
[] On May 8, 2016, two researchers—Emil Kirkegaard and Julius Bjerrekaer—scraped information from the online dating site OkCupid and publicly released a dataset of about 70,000 users, including username, age, gender, location, religion-related opinions, astrology-related opinions, dating interests, and number of photos, as well as answers given to the top 2,600 questions on the site. In a draft paper accompanying the released data, the authors stated that “Some may object to the ethics of gathering and releasing this data. However, all the data found in the dataset are or were already publicly available, so releasing this dataset merely presents it in a more useful form.”
In response to the data release, one of the authors was asked on Twitter: “This data set is highly re-identifiable. Even includes usernames? Was any work at all done to anonymize it?” His response was “No. Data is already public.” (Zimmer 2016; Resnick 2016)
[] In 2010, an intelligence analyst with the U.S. Army gave 250,000 classified diplomatic cables to the organization WikiLeaks, and they were subsequently posted online. Gill and Spirling (2015) argue that “the WikiLeaks disclosure potentially represents a trove of data that might be tapped to test subtle theories in international relations”, and then statistically characterize the sample of leaked documents. For example, the authors estimate that the leaked cables represent about 5% of all diplomatic cables during that time period, but that this proportion varies from embassy to embassy (see Figure 1 of their paper).
[] In order to study how companies respond to complaints, a researcher sent fake complaint letters to 240 high-end restaurants in New York City. Here’s an excerpt from the fictitious letter:
“I am writing this letter to you because I am outraged about a recent experience I had at your restaurant. Not long ago, my wife and I celebrated our first anniversary. … The evening became soured when the symptoms began to appear about four hours after eating. Extended nausea, vomiting, diarrhea, and abdominal cramps all pointed to one thing: food poisoning. It makes me furious just thinking that our special romantic evening became reduced to my wife watching me curl up in a fetal position on the tiled floor of our bathroom in between rounds of throwing up. …Although it is not my intention to file any reports with the Better Business Bureau or the Department of Health, I want you, [name of the restaurateur], to understand what I went through in anticipation that you will respond accordingly.”
[] Building on the previous question, I’d like you to compare that study to a completely different study that also involved restaurants. In this other study, Neumark and colleagues (1996) sent two male and two female college students with fabricated resumes to apply for jobs as waiters and waitresses at 65 restaurants in Philadelphia, in order to investigate sex discrimination in restaurant hiring. The 130 applications led to 54 interviews and 39 job offers. The study found statistically significant evidence of sex discrimination against women in high-price restaurants.
[] Sometime around 2010, 6,548 professors in the United States received emails similar to this one:
“Dear Professor Salganik,
I am writing you because I am a prospective Ph.D. student with considerable interest in your research. My plan is to apply to Ph.D. programs this coming fall, and I am eager to learn as much as I can about research opportunities in the meantime.
I will be on campus today, and although I know it is short notice, I was wondering if you might have 10 minutes when you would be willing to meet with me to briefly talk about your work and any possible opportunities for me to get involved in your research. Any time that would be convenient for you would be fine with me, as meeting with you is my first priority during this campus visit.
Thank you in advance for your consideration.
Sincerely, Carlos Lopez"
These emails were part of a field experiment to measure whether professors were more likely to respond depending on (1) the time frame of the request (today vs. next week) and (2) the name of the sender, which was varied to signal ethnicity and gender (e.g., Meredith Roberts, Raj Singh, etc.). The researchers found that when the requests were to meet in one week, Caucasian males were granted access to faculty members about 25% more often than were women and minorities. But when the fictitious students requested meetings that same day, these patterns were essentially eliminated (Milkman, Akinola, and Chugh 2012). The researchers later sent the professors the following debriefing message:
“Recently, you received an email from a student asking for 10 minutes of your time to discuss your Ph.D. program (the body of the email appears below). We are emailing you today to debrief you on the actual purpose of that email, as it was part of a research study. We sincerely hope our study did not cause you any disruption and we apologize if you were at all inconvenienced. Our hope is that this letter will provide a sufficient explanation of the purpose and design of our study to alleviate any concerns you may have about your involvement. We want to thank you for your time and for reading further if you are interested in understanding why you received this message. We hope you will see the value of the knowledge we anticipate producing with this large academic study.”
After explaining the purpose and design of the study, they further noted:
“As soon as the results of our research are available, we will post them on our websites. Please rest assured that no identifiable data will ever be reported from this study, and our between subject design ensures that we will only be able to identify email responsiveness patterns in aggregate – not at the individual level. No individual or university will be identifiable in any of the research or data we publish. Of course, any one individual email response is not meaningful as there are multiple reasons why an individual faculty member might accept or decline a meeting request. All data has already been de-identified and the identifiable email responses have already been deleted from our databases and related server. In addition, during the time when the data was identifiable, it was protected with strong and secure passwords. And as is always the case when academics conduct research involving human subjects, our research protocols were approved by our universities’ Institutional Review Boards (the Columbia University Morningside IRB and the University of Pennsylvania IRB).
If you have any questions about your rights as a research subject, you may contact the Columbia University Morningside Institutional Review Board at 212-851-7040 or by email at askirb@columbia.edu and/or the University of Pennsylvania Institutional Review Board at 215-898-2614.
Thank you again for your time and understanding of the work we are doing."