Traditional surveys are closed, boring, and removed from life. Now we can ask questions that are more open, more fun, and more embedded in life.
The total survey error framework encourages researchers to think about survey research as a two-part process: recruiting respondents and asking them questions. In section 3.4, I discussed how the digital age changes how we recruit respondents, and now I’ll discuss how it enables researchers to ask questions in new ways. These new approaches can be used with either probability samples or non-probability samples.
A survey mode is the environment in which the questions are asked, and it can have important effects on measurement (Couper 2011). In the first era of survey research, the most common mode was face to face, while in the second era, it was telephone. Some researchers view the third era of survey research as just an expansion of survey modes to include computers and mobile phones. However, the digital age is more than just a change in the pipes through which questions and answers flow. Instead, the transition from analog to digital enables—and will likely require—researchers to change how they ask questions.
A study by Michael Schober and colleagues (2015) illustrates the benefits of adjusting traditional approaches to better match digital-age communication systems. In this study, Schober and colleagues compared different approaches to asking people questions via a mobile phone. They compared collecting data via voice conversations, which would have been a natural translation of second-era approaches, to collecting data via many microsurveys sent through text messages, an approach with no obvious precedent. They found that microsurveys sent through text messages led to higher-quality data than voice interviews. In other words, simply transferring the old approach into the new medium did not produce the highest-quality data. Instead, by thinking clearly about the capabilities and social norms around mobile phones, Schober and colleagues were able to develop a better way of asking questions, one that led to higher-quality responses.
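To make the contrast concrete, here is a minimal sketch, not Schober and colleagues' actual system, of what decomposing a questionnaire into text-message microsurveys might look like in code. The send_sms function, the phone number, and the example questions are all hypothetical placeholders, not a real SMS gateway or instrument.

```python
# Minimal sketch of a text-message microsurvey: one short question per
# message, spaced out over time, instead of a single long voice interview.

import time

# Hypothetical example questions; a real instrument would come from the study design.
QUESTIONS = [
    "1/3: In the past week, how many days did you exercise? Reply with a number.",
    "2/3: On a scale of 1-5, how satisfied are you with your sleep?",
    "3/3: In one or two words, how are you feeling today?",
]


def send_sms(phone_number: str, body: str) -> None:
    """Placeholder for whatever SMS gateway you actually use; here it just prints."""
    print(f"To {phone_number}: {body}")


def run_microsurvey(phone_number: str, wait_seconds: int = 3600) -> None:
    """Send one question at a time so respondents can answer at their convenience."""
    for question in QUESTIONS:
        send_sms(phone_number, question)
        time.sleep(wait_seconds)  # pause between microsurveys


if __name__ == "__main__":
    run_microsurvey("+15555550123", wait_seconds=2)  # short delay for demonstration
```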
There are many dimensions along which researchers can categorize survey modes, but I think the most critical feature of digital-age survey modes is that they are computer-administered, rather than interviewer-administered (as in telephone and face-to-face surveys). Taking human interviewers out of the data collection process offers enormous benefits and introduces some drawbacks. In terms of benefits, removing human interviewers can reduce social desirability bias, the tendency for respondents to try to present themselves in the best possible way by, for example, under-reporting stigmatized behavior (e.g., illegal drug use) and over-reporting encouraged behavior (e.g., voting) (Kreuter, Presser, and Tourangeau 2008). Removing human interviewers can also eliminate interviewer effects, the tendency for responses to be influenced in subtle ways by the characteristics of the human interviewer (West and Blom 2016). In addition to potentially improving accuracy for some types of questions, removing human interviewers also dramatically reduces costs—interviewer time is one of the biggest expenses in survey research—and increases flexibility because respondents can participate whenever they want, not just when an interviewer is available. However, removing the human interviewer also creates some challenges. In particular, interviewers can develop a rapport with respondents that can increase participation rates, clarify confusing questions, and maintain respondents' engagement while they slog through a long (potentially tedious) questionnaire (Garbarski, Schaeffer, and Dykema 2016). Thus, switching from an interviewer-administered survey mode to a computer-administered one creates both opportunities and challenges.
Next, I’ll describe two approaches showing how researchers can take advantage of the tools of the digital age to ask questions differently: measuring internal states at a more appropriate time and place through ecological momentary assessment (section 3.5.1) and combining the strengths of open-ended and closed-ended survey questions through wiki surveys (section 3.5.2). However, the move toward computer-administered, ubiquitous asking will also mean that we need to design ways of asking that are more enjoyable for participants, a process sometimes called gamification (section 3.5.3).