The following guide to developing questionnaire items and organising the questionnaire is based on best practice (Gehlbach & Brinkworth, 2011; Gehlbach & Artino Jr., 2018). These practices have been tested across more than 40 years of survey research (Krosnick & Presser, 2010; Schwarz, 1999).
Best practice for creating items
- Word items as questions rather than statements and avoid 'agree-disagree' response options
- Use verbal labels for each response option
- Ask about one idea at a time
- Phrase questions with positive language
- Use at least five response options per scale
- Maintain equal spacing between response options. Use additional space to visually separate non-substantive response options (e.g. 'Not applicable')
Best practice for organising the whole questionnaire
This guidance has been summarised from Gehlbach and Artino (2018).
- Ask the more important items earlier in the questionnaire
- Ensure each item applies to each respondent
- Use scales rather than single items
- Maintain a consistent visual layout of the questionnaire
- Place sensitive items (e.g. demographic questions) later in the questionnaire
References
Artino Jr., A. R., & Gehlbach, H. (2012). AM Last Page: Avoiding Four Visual-Design Pitfalls in Survey Development. Academic Medicine, 87(10), 1452. Retrieved from https://www.researchgate.net/profile/Hunter_Gehlbach/publication/231210670_AM_Last_Page_Avoiding_Four_Visual-Design_Pitfalls_in_Survey_Development/links/5a835de6aca272d6501eb6a3/AM-Last-Page-Avoiding-Four-Visual-Design-Pitfalls-in-Survey-Development.pdf
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method (4th ed.). Hoboken, New Jersey: John Wiley & Sons, Inc.
Gehlbach, H., & Artino Jr., A. R. (2018). The survey checklist (manifesto). Academic Medicine, 93(3), 360-366. Retrieved from https://journals.lww.com/academicmedicine/fulltext/2018/03000/The_Survey_Checklist__Manifesto_.18.aspx#pdf-link
Gehlbach, H., & Brinkworth, M. E. (2011). Measure twice, cut down error: A process for enhancing the validity of survey scales. Review of General Psychology, 15(4), 380-387. Retrieved from https://dash.harvard.edu/bitstream/handle/1/8138346/Gehlbach%20-%20Measure%20twice%208-31-11.pdf?sequence=1&isAllowed=y
Krosnick, J. A., & Presser, S. (2010). Question and questionnaire design. In P. V. Marsden, & J. D. Wright (Eds.), Handbook of Survey Research. Bingley, England: Emerald Group Publishing.
Nielsen, T., Makransky, G., Vang, M. L., & Dammeyer, J. (2017). How specific is specific self-efficacy? A construct validity study using Rasch measurement models. Studies in Educational Evaluation, 53, 87-97.
Saris, W. E., Revilla, M., Krosnick, J. A., & Shaeffer, E. M. (2010). Comparing questions with agree/disagree response options to questions with item-specific response options. Survey Research Methods, 4, 61-79.
Schwarz, N. (1999). Self-reports: How the questions shape the answers. American Psychologist, 54, 93-105.
Swain, S. D., Weathers, D., & Niedrich, R. W. (2008). Assessing three sources of misresponse to reversed Likert items. Journal of Marketing Research, 45, 116-131.
Weng, L. -J. (2004). Impact of the number of response categories and anchor labels on coefficient alpha and test-retest reliability. Educational and Psychological Measurement, 64, 956-972. Retrieved from https://journals.sagepub.com/doi/pdf/10.1177/0013164404268674
Wright, J. D. (1975). Does acquiescence bias the 'Index of Political Efficacy'? The Public Opinion Quarterly, 39(2), 219-226.