Online Data Collection
Recommended experimental protocol:
- Please ensure your IRB protocol covers your experimental procedures
- Use Qualtrics to code the survey/experiment and then post that link to Amazon Mechanical Turk (MTurk; for a general how-to on MTurk, see the guides linked here)
- For a great how-to on Qualtrics and programming a psychological experiment, see the guide linked here
- For tips on coding in HTML, see this guide
- In the Human Neuroscience Laboratory, we have direct experience coding up experiments employing both word and picture stimuli to test old-new recognition memory, recall, and source memory (e.g., for spatial location), as well as a variety of creativity tasks (e.g., Alternate Uses Task and Remote Associates Task). For templates, contact Preston. We also have experience with counterbalancing and looping in Qualtrics (e.g., see this guide), which is necessary for appropriate and efficient experimental designs!
- If using Qualtrics, there are FOUR things that are critical:
- 1a) Ensure that you monitor off-task performance (for a related publication, see Permut et al., 2019); a minimal tracking sketch is given after item 1b below
- a) See here for the code to add to your survey header (under the 'Look and Feel' and then 'General' section of your survey)
- b) See here for the code to add to your survey footer (under the 'Look and Feel' and then 'General' section of your survey)
- c) Make sure to add the Embedded Data fields as detailed in Permut et al. (2019)
- 1b) The code listed in 1a may, in certain cases, not work due to compatibility issues in Qualtrics
- See the linked page for another option
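For reference, here is a minimal sketch of this kind of off-task tracking that can be pasted into a question's JavaScript editor in Qualtrics. This is not the Permut et al. (2019) code; the embedded data field names (offTaskCount, offTaskMs) are placeholders that you would declare in your Survey Flow.

```javascript
// Minimal sketch (not the published code): count how often the participant leaves the
// page and how long they stay away, and write both values to embedded data.
// Assumes embedded data fields "offTaskCount" and "offTaskMs" exist in the Survey Flow.
Qualtrics.SurveyEngine.addOnload(function () {
  var hiddenCount = 0;   // number of times the page has been hidden (tab switch, minimize)
  var hiddenMs = 0;      // total milliseconds spent away from the page
  var hiddenSince = null;

  document.addEventListener("visibilitychange", function () {
    if (document.hidden) {
      hiddenCount += 1;
      hiddenSince = Date.now();
    } else if (hiddenSince !== null) {
      hiddenMs += Date.now() - hiddenSince;
      hiddenSince = null;
    }
    // Save running totals so they appear as columns in the exported data
    Qualtrics.SurveyEngine.setEmbeddedData("offTaskCount", hiddenCount);
    Qualtrics.SurveyEngine.setEmbeddedData("offTaskMs", hiddenMs);
  });
});
```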
- 2) At the end of an online experiment, each MTurk worker is provided a unique code, which they are asked to enter into the code box on the MTurk HIT. They must do this before submitting the Human Intelligence Task (HIT) to receive payment. If participants do not enter this code, payment is not given.
- For example, build a final question that displays this code, like the example shown here (a minimal scripting sketch follows below)
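One way to do this (a minimal sketch; the embedded data field mturkCode and the code format are assumptions) is to generate the code with question JavaScript on the final page, save it to embedded data so it appears in your export, and display it to the participant:

```javascript
// Minimal sketch: generate a random completion code, store it as embedded data
// (so it can be matched against what the worker enters on MTurk), and show it.
// The field name "mturkCode" and the "HNL-" prefix are placeholders.
Qualtrics.SurveyEngine.addOnload(function () {
  var code = "HNL-" + Math.random().toString(36).slice(2, 10).toUpperCase();
  Qualtrics.SurveyEngine.setEmbeddedData("mturkCode", code);

  var note = document.createElement("p");
  note.innerHTML = "Your completion code is: <b>" + code + "</b>";
  this.getQuestionContainer().appendChild(note);
});
```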
- 3) Create a Qualification in MTurk to ensure MTurkers cannot retake your experiment (an API-based sketch for this is given at the end of this item)
- Why do you need this? See the explanations linked here
- To get your unique MTurk code, see the instructions linked here
- There are additional qualifications/restrictions you can set up in your MTurk requester account when posting your study; see here for the Human Neuroscience Laboratory's setup (as shown in the picture, a restriction is set so that workers who have already participated in the study, here called 'Remembering Lists', cannot take it again)
- E.g., we like recruiting participants with a HIT approval rate of >95% (meaning that those who participate are likely to finish, given their prior history) and with at least 50 HITs approved, meaning they have completed a considerable number of MTurk studies (note that these settings follow other published studies).
- Note also that some restrictions, like age, cost additional money (so-called 'Premium Restrictions')
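If you prefer to script the qualification step rather than use the requester website, the sketch below shows one way to create and assign a "has participated" qualification with the AWS SDK for JavaScript (v3). The qualification name, worker ID, and sandbox endpoint are placeholders, and this is an illustration rather than the lab's own workflow.

```javascript
// Minimal sketch using @aws-sdk/client-mturk (AWS SDK for JavaScript v3).
// Run in an ES module / async context with valid AWS credentials configured.
import {
  MTurkClient,
  CreateQualificationTypeCommand,
  AssociateQualificationWithWorkerCommand,
} from "@aws-sdk/client-mturk";

const client = new MTurkClient({
  region: "us-east-1",
  // Remove the endpoint line to target the live marketplace instead of the sandbox
  endpoint: "https://mturk-requester-sandbox.us-east-1.amazonaws.com",
});

// 1) Create the qualification once
const { QualificationType } = await client.send(
  new CreateQualificationTypeCommand({
    Name: "Remembering Lists - already participated",   // placeholder name
    Description: "Assigned to workers who have completed this study",
    QualificationTypeStatus: "Active",
  })
);

// 2) After each batch, assign it to every worker who submitted the HIT
await client.send(
  new AssociateQualificationWithWorkerCommand({
    QualificationTypeId: QualificationType.QualificationTypeId,
    WorkerId: "A1EXAMPLEWORKERID",                       // placeholder worker ID
    IntegerValue: 1,
    SendNotification: false,
  })
);
```

When posting the HIT, you would then require that workers do not already hold this qualification.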
- 4) How to save/extract your data (e.g., as a .csv file)
- Turn on variable naming using the gear button next to the question: check the box called 'variable naming' so that the actual response selected is saved, as opposed to the position of the click or choice.
- In the export section, make sure to export the .csv with 'choice text' selected (see also additional options, like removing blank responses)
- The point is: do not assume all your data/responses will be saved. Ensure the exported output captures everything you need! (A small export-cleanup sketch follows below.)
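As a concrete check, note that Qualtrics .csv exports contain two extra header rows (question text and an ImportId row) beneath the column names, which most analysis software will choke on. A rough Node sketch for stripping them is below; file names are placeholders, and a real pipeline should use a proper CSV parser that handles quoted commas and newlines.

```javascript
// Rough sketch: drop the two extra Qualtrics header rows before analysis.
// Assumes no quoted newlines in the header rows; use a real CSV parser for production.
import { readFileSync, writeFileSync } from "node:fs";

const lines = readFileSync("qualtrics_export.csv", "utf8").split("\n");
const cleaned = [lines[0], ...lines.slice(3)].join("\n"); // keep column names, skip rows 2-3
writeFileSync("qualtrics_export_clean.csv", cleaned);
```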
Alternatives to Qualtrics
- You can code in JavaScript/HTML directly in MTurk; however, that is not GUI-based and can be time-consuming to learn and do
- If interested in this approach, see these very helpful resources
- In addition, you can also use REDCap or PsyToolkit
- REDCap is more survey-based. Research Services at BC are available to meet with members of the BC community to discuss individual REDCap projects. Individual consultations or customized class consultations are available by emailing researchservices@bc.edu or dalgin@bc.edu.
- PsyToolkit has an amazing library of classic cognitive psychology experiment templates, some of which you might not think would be suitable for online data collection (e.g., Stroop Task, Mental Rotation, Posner Cueing Task, Wisconsin Card Sorting Task, Go/No-Go Task, Inhibition of Return, Multitasking, etc.)
(Not Comprehensive) Checklist for Online Data Collection
These are suggestions and not rules.
1) Hiding sensitive content in your client-side code
- Check your client-side code for sensitive content. Open your experimental procedure in a browser window. If you are using Chrome, click View > Developer > View Source. In Firefox, click Tools > Web Developer > Page Source. (Google "how to view page source in [browser name]" to find instructions for other browsers.) A new browser window will open with the source code for the particular page of your experiment you are viewing. Scroll through this code, and make sure there isn't anything in there you wouldn't want participants to see (e.g. hidden payoff values, logic that reveals your experimental manipulation, names of different conditions, etc.).
- Obfuscate any JavaScript code that you are sending client-side so participants can't view hidden values or the logic of your procedure by viewing the source code. Open-source tools like javascript-obfuscator, created by Timofey Kachalov, can be used to obfuscate your JavaScript code; a web UI for javascript-obfuscator@0.18.6 is also available online. (A better option is to keep all sensitive content server-side, but this isn't always possible.) A usage sketch follows below.
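A quick sketch of what that looks like with the javascript-obfuscator npm package; the file names and option values here are illustrative, not a recommended configuration.

```javascript
// Obfuscate a client-side script before embedding it in your experiment pages.
const JavaScriptObfuscator = require("javascript-obfuscator");
const fs = require("fs");

const source = fs.readFileSync("experiment.js", "utf8");          // placeholder file name
const result = JavaScriptObfuscator.obfuscate(source, {
  compact: true,
  controlFlowFlattening: true,  // makes program logic harder to follow
  stringArray: true,            // hides literal strings (e.g. condition names)
});
fs.writeFileSync("experiment.obfuscated.js", result.getObfuscatedCode());
```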
2) Testing your procedure
- Create a Mechanical Turk worker account so you can test your procedure as a participant before launching publicly
- Set a custom qualification and assign it only to your own worker account
- Launch your experiment with the custom qualification assigned to your worker account, preventing other workers from viewing the HIT
- Log into your worker account, and complete your HIT at least 8 times, each time using one of the following 8 configurations (Browser + Operating System):
- Chrome + Mac
- Chrome + Windows
- Firefox + Mac
- Firefox + Windows
- Edge + Mac
- Edge + Windows
- Safari + Mac
- Safari + Windows
- Either block mobile devices, or test your HIT on mobile to ensure your procedure functions as intended. (You can simulate a mobile request in Chrome by selecting View > Developer Tools, clicking on the Toggle Device Toolbar icon, selecting "mid-tier mobile", and selecting a mobile device model in the device toolbar that pops up. More information can be found in the Chrome DevTools documentation.) A small screening sketch appears at the end of this list.
- Check the URLs for each page in your procedure to ensure they do not contain any information that might reveal your experimental manipulation
- Check your logged data to make sure all activity is being tracked appropriately
- Have a few friends complete your survey and time the duration of their responses to each question type, so you have some sense of how long it *should* take the average participant to complete the procedure
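Relatedly, if you decide to block mobile devices (see the mobile item above), a minimal sketch for flagging or screening out mobile participants at the first question is shown below. The embedded data field isMobile and the user-agent heuristic are assumptions; Qualtrics's built-in "Browser Meta Info" question is another option.

```javascript
// Minimal sketch: flag mobile participants and stop them at the first question.
// The field name "isMobile" and the user-agent test are placeholders/heuristics.
Qualtrics.SurveyEngine.addOnload(function () {
  var isMobile = /Android|iPhone|iPad|iPod|Mobile/i.test(navigator.userAgent);
  Qualtrics.SurveyEngine.setEmbeddedData("isMobile", isMobile ? 1 : 0);

  if (isMobile) {
    // Replace the question with an explanation instead of letting them continue
    this.getQuestionContainer().innerHTML =
      "<p>This study requires a desktop or laptop computer. Please return the HIT.</p>";
    this.hideNextButton();
  }
});
```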
3) Protecting yourself against low quality data
- Always include a neutral answer (e.g. "Choose here") as the preselected or default response in form elements such as drop down lists
- Test multiple response styles (e.g. radio buttons, slider, drop down) to determine whether response type may bias results
- Do not advertise eligibility requirements as part of the study sign-up process; instead, use qualification restrictions (such as country, number of studies completed, etc.) or a pre-survey to restrict eligibility (Siegel et al., 2015)
- Prevent retakes by assigning custom qualifications through the Mechanical Turk API, or using Qualtrics's built-in system
- Collect IP addresses and location data, as these can be used to detect bot activity
- Use a transcription task at the start of your experiment that requires participants to transcribe a photograph of handwritten words
- Use a comprehension check with a non-obvious pattern (i.e. answers to multiple choice comprehension questions should not all be assigned to the top radio button in each choice list)
- Include a mandatory free-form response at the end of your survey (you can instruct participants simply to type "na" if they have no response/comments)
- Use custom JavaScript to count the number of times a subject leaves the page, to keep track of how long the subject spends away from the page, or even to display a pop-up message when the participant leaves the page, warning her that the HIT will not be approved if she leaves the procedure before completing it. (Sample code implementing this sort of tracking in Qualtrics is sketched under item 1a above; a pop-up warning sketch follows this list.)
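A minimal sketch of the pop-up warning mentioned in the last item, using the standard beforeunload handler (note that modern browsers show their own generic message and ignore custom text):

```javascript
// Minimal sketch: ask the browser for a leave-confirmation dialog while the HIT is open.
Qualtrics.SurveyEngine.addOnload(function () {
  window.addEventListener("beforeunload", function (e) {
    e.preventDefault();
    e.returnValue = ""; // required by Chrome to trigger the confirmation dialog
  });
});
```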
4) Reducing the negative impact of attrition
- Ask personal demographic questions, and announce chances for additional financial incentives at the start of your procedure, as both of these measures have been shown to reduce dropouts (Reips, 2002a)
- Introduce a high hurdle task by concentrating motivationally adverse factors toward the start of the procedure
- Ask participants to estimate the likelihood they will complete the whole experiment before they begin the procedure
- Use a prewarning with an appeal to conscience (e.g. "Our research depends on good quality data, and that quality is compromised whenever participants fail to complete the full procedure.") (Zhou & Fishbach, 2016)
- Use practice trials, pilot new experimental protocols, or ask participants to complete a boring task prior to starting the focal experimental procedure
5) Analyzing your data
- Wait until all data have posted to your server, or deactivate your Qualtrics survey, before beginning your analysis; some platforms (e.g. Qualtrics) do not post unfinished responses to the server for up to one week after a participant initiates her response, and you will not be able to measure attrition rates until these data have been posted
- Check every free-form response you have collected, looking for suspicious patterns (e.g. one script I've detected in my data copies text from the surrounding page, mixes up the words, then pastes the result into the free-form response box)
- Check for repeated IP addresses and latitude/longitude coordinates, as significant numbers of such repeated responses may be a sign of bot activity (Bai, 2018); a simple tally sketch follows this list
- Include browser and operating system as factors in your analysis, as system configurations may impact participants' interaction with your procedure and have been found to be correlated with important individual differences that may affect your results (Buchanan & Reips, 2001)
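For the duplicate-location check above, a rough Node sketch that tallies repeated IP / latitude-longitude combinations in a cleaned Qualtrics export is given below. The column names (IPAddress, LocationLatitude, LocationLongitude) are Qualtrics defaults, the file name is a placeholder, and the naive comma-splitting assumes these columns contain no quoted commas.

```javascript
// Rough sketch: flag IP / lat-long combinations that appear more than once.
import { readFileSync } from "node:fs";

const rows = readFileSync("qualtrics_export_clean.csv", "utf8")
  .trim()
  .split("\n")
  .map((line) => line.split(",").map((cell) => cell.replace(/^"|"$/g, ""))); // naive parse

const header = rows[0];
const col = (name) => header.indexOf(name);

const counts = new Map();
for (const row of rows.slice(1)) {
  const key = [
    row[col("IPAddress")],
    row[col("LocationLatitude")],
    row[col("LocationLongitude")],
  ].join("|");
  counts.set(key, (counts.get(key) || 0) + 1);
}

for (const [key, n] of counts) {
  if (n > 1) console.log(`${n} responses share ${key}`); // candidates for closer review
}
```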
Arechar, A. A., Kraft-Todd, G. T., & Rand, D. G. (2017). Turking overtime: how participant characteristics and behavior vary over time and day on Amazon Mechanical Turk. Journal of the Economic Science Association, 3(1), 1-11.
Bai, H. (2018). Evidence that A Large Amount of Low Quality Responses on MTurk Can Be Detected with Repeated GPS Coordinates. Retrieved from: https://www.maxhuibai.com/blog/evidence-that-responses-from-repeating-gps-are-random
Behrend, T. S., Sharek, D. J., Meade, A. W., & Wiebe, E. N. (2011). The viability of crowdsourcing for survey research. Behavior Research Methods, 43(3), 800.
Buchanan, T., & Reips, U.-D. (2001). Platform-dependent biases in Online Research: Do Mac users really think different? In K. J. Jonas, P. Breuer, B. Schauenburg, & M. Boos (Eds.), Perspectives on Internet Research: Concepts and Methods.
Casey, L. S., Chandler, J., Levine, A. S., Proctor, A., & Strolovitch, D. Z. (2017). Intertemporal differences among MTurk workers: Time-based sample variations and implications for online data collection. SAGE Open, 7(2), 2158244017712774.
Chandler, J., Mueller, P., & Paolacci, G. (2014). Nonnaïveté among Amazon Mechanical Turk workers: Consequences and solutions for behavioral researchers. Behavior Research Methods, 46, 112-130.
Chandler, J. J., & Paolacci, G. (2017). Lie for a dime: When most prescreening responses are honest but most study participants are impostors. Social Psychological and Personality Science, 8(5), 500-508.
Dennis, S. A., Goodson, B. M., & Pearson, C. (2018). MTurk workers' use of low-cost 'virtual private servers' to circumvent screening methods: A research note. Available at SSRN.
Krantz, J. H., & Reips, U. D. (2017). The state of web-based research: A survey and call for inclusion in curricula. Behavior Research Methods, 49(5), 1621-1629.
Necka, E. A., Cacioppo, S., Norman, G. J., & Cacioppo, J. T. (2016). Measuring the prevalence of problematic respondent behaviors among MTurk, campus, and community participants. PLoS ONE, 11(6), e0157732.
Plant, R. (2016). A reminder on millisecond timing accuracy and potential replication failure in computer-based psychology experiments: An open letter. Behavior Research Methods, 48(1), 408-411.
Reips, U. D. (2002a). Standards for Internet-based experimenting. Experimental psychology, 49(4), 243.
Reips, U. D. (2002b). Internet-based psychological experimenting: Five dos and five don'ts. Social Science Computer Review, 20(3), 241-249.
Reips, U. D. (2010). Design and formatting in Internet-based research. Advanced methods for conducting online behavioral research, 29-43.
Sharpe Wessling, K., Huber, J., & Netzer, O. (2017). MTurk character misrepresentation: Assessment and solutions. Journal of Consumer Research, 44(1), 211-230.
Siegel, J. T., Navarro, M. A., & Thomson, A. L. (2015). The impact of overtly listing eligibility requirements on MTurk: An investigation involving organ donation, recruitment scripts, and feelings of elevation. Social Science & Medicine, 142, 256-260.
Zhou, H., & Fishbach, A. (2016). The pitfall of experimenting on the web: How unattended selective attrition leads to surprising (yet false) research conclusions. Journal of Personality and Social Psychology, 111(4), 493.