As these examples show, researchers exercise considerable control over what takes place in experimental research. They can choose the participants and set up the environment to support the research goals. When researchers take part in constructing or customising interactive systems, they can also control what participants can do and through what kinds of interactions and processes they take part in the experiment. However, online experiments conducted via computer-mediated tools, such as online survey experiments and consumer-facing services used in the wild, are more limited: researchers do not have the same control over who the subjects are and how they participate in the research.
To help generalise experimental responses to wider society, scholars often seek to conduct random sampling. That is, respondents should be drawn at random from the base population, the population to which the results are meant to generalise. This way, researchers ensure that the results are not biased by respondents' characteristics. On some crowdwork platforms, researchers can control who participates in the experimental study: platforms allow researchers to set quotas based on demographic characteristics or other relevant background variables. The process thus follows what many survey companies do in their data collection. On other platforms, or with other research approaches, researchers' ability to control the participant pool is more limited. Some of these questions about who is responding can be addressed through a post-stratification strategy: computing weights for respondents to ensure the sample represents the base population, as is done in traditional survey research. In the best cases, online platforms allow convenient access to a versatile pool of people that is more representative than bachelor's students, a common study population in many experimental studies.
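To make the post-stratification step concrete, the sketch below computes a weight for each respondent as the ratio of a stratum's population share to its sample share. It is a minimal illustration assuming hypothetical population figures and a single stratification variable (age group); real applications would typically use several background variables and census-based targets.

```python
import pandas as pd

# Hypothetical population shares, e.g. from census data (illustrative values).
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Hypothetical survey responses with an age-group column; young respondents
# are over-represented here, as is common in online samples.
responses = pd.DataFrame({
    "age_group": ["18-34"] * 60 + ["35-54"] * 25 + ["55+"] * 15,
    "answer":    [1, 0] * 50,
})

# Observed share of each stratum in the sample.
sample_share = responses["age_group"].value_counts(normalize=True)

# Post-stratification weight: population share divided by sample share,
# so over-represented groups are down-weighted and vice versa.
responses["weight"] = responses["age_group"].map(
    lambda g: population_share[g] / sample_share[g]
)

# Weighted estimate of the outcome, adjusted towards the base population.
weighted_mean = (responses["answer"] * responses["weight"]).sum() / responses["weight"].sum()
print(weighted_mean)
```

The same idea extends to multiple variables by defining strata over their combinations, although sparse strata then require care, for example through raking or regularised weighting.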
Online experiments raise additional concerns that ought to be accounted for. For example, if a survey is marketed primarily through social networks, respondents may be acquaintances of previous respondents. This is not always a problem and can even be part of the survey design: Itkonen (2015) examined whether responses to climate-change-related questions were similar across people who were Facebook friends, and in that case the goal was precisely to get respondents to share the survey invitation within their networks. However, the analysis techniques commonly used with these kinds of data assume that respondents are independent (i.e., one respondent's answers do not influence another's). Ethnographic studies among Turkers, people working on Amazon's Mechanical Turk platform, have shown that they form extensive social collectives (Gupta et al., 2014; Martin et al., 2014). Gupta et al. (2014) show that Turkers help each other when working on human-intelligence tasks, such as recognising and digitising content, to ensure they complete these tasks correctly. Naturally, survey research differs from such human-intelligence tasks, where participants are expected to return a correct answer. Still, these ethnographic findings demonstrate that it can be a mistake to assume responses are independent: survey respondents on such platforms may well be dependent on one another. Therefore, depending on the experimental approach and deployment strategy, post-stratification alone may not be sufficient. In these cases, it may be better to treat the data not as a random sample but as a form of convenience sample, which limits the claims that can be made from the data.
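One way to reason about how such dependence weakens inferences is the Kish design effect from cluster-sampling theory, which estimates how much correlated responses inflate variance relative to an independent sample. The sketch below is a generic illustration with invented numbers, not a procedure from the studies cited above; the cluster sizes and intraclass correlation are assumptions for demonstration only.

```python
def design_effect(avg_cluster_size: float, icc: float) -> float:
    """Kish design effect: variance inflation when responses within
    a cluster (e.g. a friend network or worker forum) are correlated."""
    return 1 + (avg_cluster_size - 1) * icc

def effective_sample_size(n: int, avg_cluster_size: float, icc: float) -> float:
    """Number of independent responses the clustered sample is worth."""
    return n / design_effect(avg_cluster_size, icc)

# Illustrative numbers: 500 respondents recruited in friend groups of about 5,
# with a modest within-group correlation of 0.1.
print(design_effect(5, 0.1))               # 1.4
print(effective_sample_size(500, 5, 0.1))  # ~357 independent responses
```

Even a modest within-group correlation thus shrinks the effective sample noticeably, which is one reason weighting alone cannot repair a sample recruited through dependent channels.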