
20 Amazing health survey questions for questionnaires

Surveys are an excellent way to acquire data that lab results do not reveal and casual conversation does not surface. Patients can be reluctant to offer personal feedback face to face, but surveys allow them to do so confidently. Online surveys also encourage communication by collecting opinions from patients and staff alike.

A health assessment plays a significant role in determining a person's health status. Healthcare organizations frequently use health assessment survey questions to gather patient data more effectively, quickly, and conveniently. This article will explain what a health survey is, how you can create one quickly on forms.app, and give examples of health survey questions you can use in your own survey.

What is a health survey?

Health surveys are a crucial and practical decision-making tool when creating a health plan. Health studies provide detailed information about the chronic illnesses that patients have, as well as about patient perspectives on health trends, way of life, and use of healthcare services.

A patient satisfaction survey is a collection of questions designed to get feedback from patients and gauge their satisfaction with the service and quality of their healthcare provider. The patient satisfaction survey questionnaire assists in identifying critical indicators of patient care that help medical institutions understand the quality of treatment offered and potential service issues.


How to write better questions in your health survey

The proper application of a health survey is its most crucial component, and timing is critical. Patients in the hospital rarely have the uninterrupted time needed to complete survey questions; instead, they should complete the surveys after their visit. Here are some tips on how to write good health survey questions:

1. Ask clear questions

In general, people avoid completing long, confusing surveys. Patients want to understand the questions clearly when sharing their views and ideas. If you keep the health survey questions clear and short, you can increase the number of respondents and get more effective results. To make your questions clearer, you can add descriptions under question titles.


2. Use visual power

The use of visuals in surveys positively affects the number of participants. By using images in health surveys, you can enable patients to respond more quickly and accurately. For example, for a question asking patients which region of the body they feel pain in, you can make it easier for them to answer by using visuals.


3. Reserve a section in the questionnaire for patient suggestions

Opinions and suggestions of patients are essential to improving treatment, health, and hospital systems. In the last part of the questionnaire, you can ask patients to present their opinions and suggestions. In this way, patients can feel more valued, and you can access their views more effectively.


4. Include the 'other' option in the answer choices

There may not be an answer choice that suits every patient. This can cause the patient to leave the question blank or give an inaccurate answer. In this case, you can let patients write their own reply by adding an 'Other' option to the answer choices.


20 excellent health survey question examples

A health survey question asks respondents about their general health and condition. Researchers can use these questions to gather data about patients' general health, disease risk factors, feelings about their medical care, and other relevant information.

A health survey effectively gathers information from a large population or a specific target group. You can collect critical data from patients by asking the appropriate questions at the right time. Here are 20 great health survey question examples:

1  - How healthy do you feel on a scale of 1 to 10?

2  - How often do you go to the hospital?

  a) Once a week

  b) Once every two weeks

  c) Once a month

  d) Once every three months

  e) Once a year

  f) Other (Please write your answer)

3  - Do you have any chronic diseases?

  a) Yes 

  b) No 

4  - Do you have any genetic diseases?

  a) Diabetes

  b) High blood pressure

  c) Huntington

  d) Thalassemia

  e) Hemophilia

  f) Other (Please specify)

5  - Do you regularly use alcohol and/or drugs?

  a) Both

  b) Drugs only

  c) Alcohol only

  d) Neither

6  - How frequently do you get your health checkup?

  a) Once in 2 months

  b) Once in 6 months

  c) Once a year

  d) Only when needed

  e) Never get it done

7  - Does anyone in your family have a hereditary disease?

  a) Yes

  b) No

8  - How often do you exercise?

  a) Every day

  b) Once in two days

  c) Once a week

  d) Once a month

  e) Never

9  - Have you had an allergic reaction or received treatment for it?

  a) Yes, I did. I also received treatment.

  b) I had it but did not receive treatment

  c) I've never had one.

10  - How well are you able to carry out routine tasks?

  a) Excellent level

  b) Good level

  c) Intermediate level

  d) Poor level

  e) Very poor level

11  - Have you experienced depression or psychological distress in the last four weeks?

  a) Yes, very much

  b) Sometimes

  c) Never

12  - How much have your emotional issues impacted your interactions with friends and family over the past four weeks?

  a) It didn't affect me at all

  b) Very little

  c) Moderate

  d) Quite a bit

  e) Too much

13  - How would you rate your treatment process?

  a) Wonderful

  b) Above average

  c) Average

  d) Below average

  e) Very poor

14  - Do you use any medication regularly?

15  - What various medications have you used over the last 24 hours?

16  - How was the doctor's attitude towards you on a scale of 1 to 10?

17  - How do you rate the local hospitals in your area?

  a) Excellent

  b) Good

  c) Poor

18  - Please rate (1-10) your agreement with the following: Health insurance is affordable.

19  - Which of the following have you experienced pain in the past month?

  a) Heart

  b) Kidney

  c) Lung

  d) Stomach

  e) Other (Please specify)

20  - Do you recommend this health facility to your family and friends?

  a) Definitely yes

  b) Yes

  c) No  

  d) Definitely not
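Once responses like those above are collected, they can be tallied programmatically. Below is a minimal, hypothetical Python sketch (the field names and sample data are invented for illustration) showing how the 1-10 rating from question 1 and the multiple-choice answers from question 20 might be aggregated:

```python
from collections import Counter

# Hypothetical sample data: each dict is one respondent's answers.
# "q1_health_score" holds the 1-10 self-rated health score (question 1);
# "q20_recommend" holds the chosen option for question 20.
responses = [
    {"q1_health_score": 8, "q20_recommend": "Definitely yes"},
    {"q1_health_score": 6, "q20_recommend": "Yes"},
    {"q1_health_score": 4, "q20_recommend": "No"},
    {"q1_health_score": 9, "q20_recommend": "Definitely yes"},
]

# Average of the 1-10 ratings (question 1).
avg_score = sum(r["q1_health_score"] for r in responses) / len(responses)

# Distribution of the multiple-choice answers (question 20).
recommend_counts = Counter(r["q20_recommend"] for r in responses)

print(f"Average self-rated health: {avg_score:.2f}")   # 6.75 for this sample
print(recommend_counts.most_common())
```

The same pattern extends to any of the rating or multiple-choice questions in the list; most survey tools export responses as spreadsheets or JSON that can be loaded into a structure like `responses` above.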

How to create a health survey on forms.app

forms.app is one of the best survey makers. It offers its users a wide variety of ready-to-use forms, surveys, and quizzes, and its free health survey templates are easy to use. Here is a step-by-step guide to creating a health questionnaire on forms.app:

1  - Sign up or log in to forms.app: To create health surveys quickly and easily on forms.app, you must first log in. If you do not have an existing account, you can register for free in minutes.


2  - Choose a template or start from scratch: On forms.app, you can select from a wide selection of templates covering a wide range of topics. You can edit an existing survey template by selecting it and making the necessary changes, or you can start with a blank form and add fields as you see fit.


3  - Select a theme or manually customize your form: You can also select a different theme from the many options offered by forms.app.


4  - Complete the settings: Finish the settings and save. After completing all the settings, the survey is ready to use! You can now save it and share it with participants.


Free health survey templates

A hospital or health center can gather patients' feedback on their care and services by conducting a health survey. You can quickly and efficiently get answers from patients using the forms.app questionnaire you created. This survey tool enables medical professionals to pinpoint risk factors in the community surrounding hospitals or healthcare facilities, including prevalent health practices like drug use, smoking, poor dietary choices, and inactivity.

Hospitals can determine whether patients' diagnoses are accurate and whether their medications are sufficient to treat them. These surveys will move more quickly and contribute more to improving health services if they ask each patient the right questions. You can get started using the free templates below.

Mental Health Quiz

Mental Health Evaluation Form

Telemental Health Consent Form Template

Sena is a content writer at forms.app. She likes to read and write articles on different topics. Sena also likes to learn about different cultures and travel. She likes to study and learn different languages. Her specialty is linguistics, surveys, survey questions, and sampling methods.


Medical Health Questionnaire

Streamline health assessments with our Medical Health Questionnaire to ensure accurate and efficient patient information gathering.


By Joshua Napilay on Apr 08, 2024.

Fact Checked by Nate Lacson.


Why is the Medical Health Questionnaire essential for accurate patient assessment?

The Medical Health Questionnaire is indispensable for precise patient evaluation, particularly regarding high blood pressure, general health, and diabetes. This crucial form helps hospitals determine the overall health of clients, ensuring accurate data on blood pressure, diabetes status, and the presence of conditions like depression. Respondents, including employees and regular clients, are prompted to complete the form regularly, providing a comprehensive snapshot of their health.

By incorporating fields such as age, birth date, tobacco use, and job-related details, the form helps gauge users' average health levels. The data collected aids in the early detection of conditions like high blood pressure and diabetes, enabling hospitals to offer timely interventions. Moreover, assessing the number of days people experience conditions like depression or heart-related issues allows medical professionals to tailor care plans accordingly.

Efficiently organized sections save time for respondents and medical staff, promoting ease of use. Users can easily share relevant information about their health status, ensuring that hospitals can read and interpret the data promptly. This form is an essential tool in the healthcare system, offering a systematic approach to gathering crucial health data, ultimately contributing to better patient outcomes.

Printable Medical Health Questionnaire

Download this Medical Health Questionnaire for precise patient evaluation, particularly about high blood pressure, general health, and diabetes.

What key details should be gathered to establish a comprehensive medical profile?

To establish a comprehensive medical profile, it is crucial to gather critical details systematically. The process begins by determining the person's basic information, such as name, age, and address. The following step involves eliciting information about their medical history, starting with the year they started seeking medical attention regularly. This information is essential to assess the progression of their health over time.

The person's present condition is a focal point, requiring a detailed exploration of ongoing health issues or concerns. Understanding the day-to-day impact of these conditions is essential, as it provides insights into their daily life and activities. Additionally, inquiring about the week-to-week variations in their health allows for a more nuanced understanding of their overall well-being.

In the section dedicated to work, it's essential to determine the level of physical activity and any occupational hazards that may contribute to their medical profile. Adding details about their home environment further enriches the understanding of factors influencing their health.

As you gather information, consider incorporating a section on lifestyle factors, including habits, diet, and exercise routines. This holistic approach ensures that the medical profile is comprehensive and offers a more nuanced understanding of the person's overall health. Starting with basic information and systematically adding pertinent details, this approach allows healthcare professionals to create a thorough and accurate medical profile.

Have there been any significant medical events or conditions in the patient's history?

When delving into a patient's medical history, the quest for significant events or conditions is paramount for a comprehensive understanding of their health trajectory. The inquiry begins with meticulously examining the patient's past, aiming to identify any noteworthy occurrences that may have left a lasting impact. This involves exploring major medical events, such as surgeries, hospitalizations, or significant illnesses that have shaped the patient's health narrative.

Chronic conditions are central to this investigation, as they often play a defining role in a patient's overall well-being. Uncovering conditions like diabetes, hypertension, or cardiovascular issues provides essential context for current health concerns. A detailed exploration of significant injuries, accidents, or allergic reactions also contributes valuable insights into the patient's medical history.

It is crucial to inquire about hereditary factors that might influence the patient's health, as a family history of certain conditions can significantly contribute to the overall risk assessment. Moreover, lifestyle-related events, such as changes in habits, diet, or exercise routines, are vital to understanding the patient's holistic health approach.

Pursuing significant medical events or conditions in a patient's history is integral to crafting a thorough medical profile. This comprehensive exploration enables healthcare professionals to tailor their approach, offering personalized care and interventions based on the patient's unique health journey.

How do the patient's daily habits, such as diet and exercise, impact their health?

The patient's daily habits, including diet and exercise, profoundly influence their health and well-being. Diet, as a cornerstone of health, significantly shapes the body's nutritional intake, playing a pivotal role in various physiological functions.

A balanced and nutritious diet fosters optimal organ function, immune system strength, and energy levels. Conversely, poor dietary choices may contribute to nutritional deficiencies, obesity, or the development of chronic conditions such as diabetes and cardiovascular diseases.

Exercise, another crucial component, contributes not only to physical fitness but also to mental health. Regular physical activity promotes cardiovascular health, muscular strength, and flexibility. It aids in weight management, reduces the risk of chronic diseases, and enhances overall mood by releasing endorphins. Conversely, a sedentary lifestyle may lead to weight gain, muscle atrophy, and an increased susceptibility to health issues.

The symbiotic relationship between diet and exercise further underscores the importance of a holistic approach to health. Healthy dietary choices synergize with regular exercise to create a robust foundation for overall wellness.

Healthcare professionals consider these daily habits when crafting personalized care plans, emphasizing the significance of lifestyle modifications in preventive healthcare. Recognizing the impact of diet and exercise empowers individuals to make informed choices, fostering a proactive approach to maintaining and enhancing their health.

Medical Health Questionnaire example (sample)

Empower your healthcare practice with our free Medical Health Questionnaire example to enhance patient care and streamline information gathering. Download now to access a user-friendly template that prioritizes accuracy and efficiency, ensuring a seamless healthcare experience for providers and patients.

Take a proactive step towards comprehensive health assessments and improved medical outcomes. Your journey to enhanced patient care starts with a simple click – download your free guide today.

Download this free Medical Health Questionnaire example here.

How does the family's medical history contribute to understanding the patient's health?

The family's medical history is a valuable lens through which healthcare professionals gain insights into the patient's health predispositions, potential risks, and genetic susceptibilities. Examining the health trajectory of close relatives aids in understanding familial patterns of certain conditions, offering crucial information for risk assessment and preventive care.

Genetic factors play a significant role in determining an individual's susceptibility to various illnesses. A family history of diabetes, cardiovascular diseases, or certain cancers can highlight potential genetic links, informing healthcare providers about the patient's inherent risk factors. This knowledge enables a proactive approach to preventive measures, early screenings, and targeted interventions.

Furthermore, understanding the family's medical history aids in identifying hereditary conditions or genetic disorders that may affect the patient's health. This information allows healthcare professionals to tailor their care plans, screenings, and diagnostic approaches to account for the familial context.

A comprehensive understanding of the family's medical history contributes to a more holistic and personalized approach to healthcare. It empowers healthcare providers to proactively anticipate and address potential health issues, emphasizing the importance of integrating genetic and familial factors into assessing a patient's health and well-being.

What role do habits like smoking and alcohol consumption play in the patient's health?

Habits such as smoking and alcohol consumption play a pivotal role in shaping the patient's overall health, exerting both immediate and long-term impacts. Smoking, a well-established health risk, is linked to a myriad of detrimental health outcomes.

It significantly increases the risk of respiratory conditions like chronic obstructive pulmonary disease (COPD) and lung cancer, as well as cardiovascular diseases, and compromises overall lung function. The harmful effects extend beyond the respiratory system, affecting nearly every organ in the body.

Alcohol consumption similarly influences health outcomes. While moderate alcohol intake may have certain cardiovascular benefits, excessive or chronic consumption poses serious health risks. Long-term alcohol abuse is associated with liver diseases, cardiovascular issues, increased susceptibility to infections, and mental health disorders.

Both smoking and excessive alcohol consumption contribute to the development of chronic conditions, compromising the immune system and overall well-being. These habits are often intertwined, amplifying their collective impact on health. Moreover, they can exacerbate existing health conditions and hinder the efficacy of medical treatments.

Understanding the role of these habits is crucial for healthcare providers to develop targeted interventions and counseling strategies. Addressing smoking and alcohol consumption within the context of a patient's health allows for a more comprehensive and tailored approach to preventive care and health management. Encouraging lifestyle modifications forms an integral part of promoting overall well-being and preventing the onset of severe health conditions.

Who should be contacted in case of a medical emergency, and what are their details?

In the event of a medical emergency, it is imperative to have immediate access to individuals who can provide crucial information and make decisions on behalf of the patient. The primary contact is typically the patient's designated emergency contact person. This individual should be someone close to the patient, aware of their medical history, and capable of making prompt decisions in critical situations. It is vital to provide the emergency contact person's name, relationship to the patient, and a reachable phone number.

Additionally, their details should be included if the patient has a designated healthcare proxy or power of attorney. These individuals can make healthcare decisions for the patient if they cannot do so themselves. Providing the healthcare proxy's name, relationship, and contact information ensures a seamless emergency communication channel.

For minors, it is essential to list the contact details of parents or legal guardians. Schools, childcare providers, or relevant institutions should have this information readily available.

Ensuring that emergency contacts are well-informed and easily reachable is paramount for swift and effective medical intervention. These details are crucial in facilitating communication between healthcare providers and the patient's support network during critical moments.

Research and evidence

The Medical Health Questionnaire has a rich history rooted in the evolution of healthcare practices and the growing recognition of the importance of comprehensive patient assessments (Bhat, 2023). Over the years, medical professionals and researchers have continually refined and expanded the scope of health questionnaires to enhance diagnostic accuracy, treatment planning, and overall patient care (Akman, 2023).

The development of medical questionnaires can be traced back to early efforts to systematize patient information. As medical science progressed, clinicians recognized the need for standardized tools to collect relevant data efficiently. This led to the creation of the first health questionnaires, designed to capture a holistic view of an individual's health status, medical history, and lifestyle factors.

The evolution of these questionnaires has been heavily influenced by ongoing research in various medical disciplines (Cowley et al., 2022). Evidence-based practices and clinical studies have played a crucial role in shaping the questions included in health assessments, ensuring that they align with the latest medical knowledge and diagnostic criteria. The constant feedback loop between research findings and questionnaire refinement has resulted in more accurate and insightful tools for healthcare practitioners.

Today, the Medical Health Questionnaire is a testament to the collaboration between medical professionals, researchers, and technological advancements. Incorporating evidence-based elements ensures that the questionnaire remains a dynamic and reliable resource, adapting to the ever-changing healthcare landscape. As a result, healthcare providers can trust the historical foundation and ongoing research supporting the Medical Health Questionnaire as an invaluable instrument in promoting proactive and personalized patient care.

Why use Carepatron as your Medical Health Questionnaire software?

Elevate your healthcare practice with Carepatron, the ultimate solution for streamlined practice management . Our Medical Health Questionnaire software boasts a user-friendly interface, ensuring accessibility for all, regardless of technical expertise.

Benefit from customizable templates catering to various health questionnaires, including daily symptom surveys, medical assessments, and health risk evaluations. The platform's robust features, such as Electronic Health Record (EHR) integration , secure messaging, and automated appointment reminders, enhance practice efficiency and compliance with HIPAA regulations.

Experience unparalleled support and guidance from the team, assisting healthcare providers in interpreting results and developing personalized care plans. Revolutionize your clinical workflows, improve care delivery efficiency, and boost patient engagement.

Choose Carepatron as your go-to Medical Health Questionnaire software and embark on a journey towards optimized care and proactive patient well-being.


Akman, S. (2023, May 25). 35+ essential questions to ask in a health history questionnaire. forms.app. https://forms.app/en/blog/health-history-questionnaire-questions

Bhat, A. (2023, June 30). Health history questionnaire: 15 must-have questions. QuestionPro. https://www.questionpro.com/blog/health-history-questionnaire/

Cowley, D. S., Burke, A., & Lentz, G. M. (2022). Additional considerations in gynecologic care. In Elsevier eBooks (pp. 148-187.e6). https://doi.org/10.1016/b978-0-323-65399-2.00018-8

Commonly asked questions

The specific questions on a health questionnaire vary but generally cover medical history, lifestyle, and current health status.

A medical questionnaire is a document that gathers information about an individual's medical history, conditions, and lifestyle for healthcare assessment.

Health assessment questions typically inquire about an individual's overall health, symptoms, lifestyle choices, and any relevant medical history.


Dementia Treatment Guidelines

Explore comprehensive dementia treatment guidelines and use Carepatron's free PDF download of an example plan. Learn about effective strategies and interventions for dementia care.

Coping with Auditory Hallucinations Worksheet

Use our Coping with Auditory Hallucinations Worksheet to help your patients manage and differentiate real sensations from hallucinations effectively.

Tight Hip Flexors Test

Assess the tightness of your patient's hip flexor muscles with a tight hip flexors test. Click here for a guide and free template!

Gender Dysphoria DSM 5 Criteria

Explore the DSM-5 criteria for Gender Dysphoria with our guide and template, designed to assist with understanding and accurate diagnosis.

Bronchitis Treatment Guidelines

Explore bronchitis treatment guidelines for both acute and chronic forms, focusing on diagnosis, management, and preventive measures.

Patient Safety Checklist

Enhance safety in healthcare processes with the Patient Safety Checklist, ensuring comprehensive adherence to essential safety protocols.

Fatigue Assessment

Improve workplace safety and employee well-being with this valuable resource. Download Carepatron's free PDF of a fatigue assessment tool here to assess and manage fatigue in various settings.

AC Shear Test

Get access to a free AC Shear Test. Learn how to perform this assessment and record findings using our PDF template.

Hip Flexor Strain Test

Access our Hip Flexor Strain Test template, designed for healthcare professionals to diagnose and manage hip flexor strains, complete with a detailed guide.

Integrated Treatment Plan

Explore the benefits of Integrated Treatment for dual diagnosis, combining care for mental health and substance abuse for holistic recovery.

Nursing Assessment of Eye

Explore comprehensive guidelines for nursing eye assessments, including techniques, common disorders, and the benefits of using Carepatron.

Nursing Care Plan for Impaired Memory

Discover our Nursing Care Plan for Impaired Memory to streamline your clinical documentation. Download a free PDF template here.

Clinical Evaluation

Explore the process of conducting clinical evaluations and its importance in the therapeutic process. Access a free Clinical Evaluation template to help you get started.

Finger Prick Blood Test

Learn how to perform the Finger Prick Blood Test in this guide. Download a free PDF and sample here.

Discover PROMIS 29, a comprehensive tool for measuring patient-reported health outcomes, facilitating better healthcare decision-making.

Criteria for Diagnosis of Diabetes

Streamline diabetes management with Carepatron's templates for accurate diagnosis, early treatment, care plans, and effective patient strategies.

Psych Nurse Report Sheet

Streamline patient care with our comprehensive Psych Nurse Report Sheet, designed for efficient communication and organization. Download now!

Promis Scoring

Introduce a systematic approach to evaluating patient health through the PROMIS measures and learn how to interpret its scores.

Chronic Illnesses List

Access this helpful guide on chronic illnesses you can refer to when working on prevention, diagnosis, and treatment planning.

List of Tinctures and Uses

Discover the power of herbal tinctures with our List of Tinctures and Uses, detailing their uses, benefits, and ways to incorporate them into your life.

Ankle Bump Test

Learn how to do an Ankle Bump Test and how to interpret the results. Download a free PDF template here.

Workout Form

Master the proper way to exercise with the Workout Form. Avoid serious injury and achieve your fitness goals effectively. Start your journey now!

Critical Thinking Worksheets

Unlock the power of critical thinking with our expertly crafted Critical Thinking Worksheets, designed to foster analytical skills and logical reasoning in students.

Therapy Letter for Court

Explore our guide on writing Therapy Letters for Court, offering templates and insights for therapists to support clients' legal cases effectively.

Nursing Nutrition Assessment

Learn about nursing nutrition assessments, including examples and Carepatron's free PDF download to help you understand the process and improve patient care.

Facial Massage Techniques PDF

Unlock the secrets of rejuvenating facial massage techniques with Carepatron's comprehensive PDF guide. Learn how to enhance your skincare routine and achieve a radiant, glowing complexion.

Blood Test for Heart Attack

Learn about the importance of blood tests for detecting heart attacks and download Carepatron's free PDF example for reference. Find crucial information to help you understand the process.

Substance Use Disorder DSM 5 Criteria

Understanding substance use disorder, its symptoms, withdrawal symptoms, causes, and diagnosis through DSM 5 criteria. Download our free Substance Use Disorder DSM 5 Criteria

Mental Health Handout

Learn key insights into mental health conditions, warning signs, and resources. Access a free Mental Health Handout today!

SNAP Assessment

Learn more about SNAP Assessment, its purpose, and how to use it effectively. Download a free example and learn about scoring, interpretation, and next steps.

Procedure Note Template

Ensure patient identity, consent, anesthesia, vital signs, and complications are documented. Download our accessible Procedure Note Template today!

Nursing Home Report Sheet

Discover a comprehensive guide on creating a Nursing Home Report Sheet. Includes tips, examples, and a free PDF download to streamline healthcare reporting.

Obesity Chart

Explore our free Obesity Chart and example, designed to help practitioners and patients monitor overall health risks associated with obesity.

Cognitive Processing Therapy Worksheets

Download our free CPT Worksheet to tackle traumatic beliefs and foster recovery with structured exercises for emotional well-being.

Charge Nurse Duties Checklist

Take charge with our comprehensive Charge Nurse Duties Checklist. Free PDF download available!

Pre-employment Medical Exam

Ensure job applicants' fitness for duty with our Pre-Employment Medical Exam Template. Comprehensive guide for thorough health assessments.

Medical Clearance Form

Get your health clearance certificate easily with our Medical Clearance Form. Download for free for a streamlined process and hassle-free experience.

Whipple Test

Access a free Whipple Test PDF template for your physical therapy practice. Streamline your documentation with our free form.

Shoulder Mobility Test

Get access to a free Shoulder Mobility Test PDF. Learn how to assess your patient's shoulder mobility and streamline your clinical documentation.

Level of Care Assessment

Discover what a Level of Care Assessment entails and access Carepatron's free PDF download with an example to help you better understand the process.

Standard Intake Questionnaire Template

Access a standard intake questionnaire tool to help you enhance the initial touchpoint with patients in their healthcare process.

Patient Workup Template

Optimize patient care with our comprehensive Patient Workup Template. Streamline assessments and treatment plans efficiently.

Stroke Treatment Guidelines

Explore evidence-based Stroke Treatment Guidelines for effective care. Expert recommendations to optimize stroke management.

Counseling Theories Comparison Chart

Explore a tool to differentiate counseling theories and select approaches that can work best for each unique client.

Long Term Care Dietitian Cheat Sheet

Discover how a Long-Term Care Dietitian Cheat Sheet can streamline nutritional management to ensure personalized and efficient dietary planning for patients.

Perio Chart Form

Streamline patient care with detailed periodontal assessments, early disease detection, and personalized treatment plans. Download our Perio Chart Forms.

Nursing Registration Form

Learn what a nurse registry entails, its significance to registered nurses, and the application process completed through a Nursing Registration Form.

Straight Leg Test for Herniated Disc

Download a free Straight Leg Test for Herniated Disc template. Learn how to perform the test and streamline your clinical documentation.

Physical Therapy Plan of Care

Download Carepatron's free PDF example of a comprehensive Physical Therapy Plan of Care. Learn how to create an effective treatment plan to optimize patient outcomes.

ABA Intake Form

Access a free PDF template of an ABA Intake Form to improve your initial touchpoint in the therapeutic process.

Safety Plan for Teenager Template

Discover our comprehensive Safety Plan for Teenagers Template with examples. Download your free PDF!

Speech Language Pathology Evaluation Report

Get Carepatron's free PDF download of a Speech Language Pathology Evaluation Report example to track therapy progress and communicate with team members.

Home Remedies for Common Diseases PDF

Explore natural and effective Home Remedies for Common Diseases with our guide, and educate patients and caretakers to manage ailments safely at home.

HIPAA Policy Template

Get a comprehensive HIPAA policy template with examples. Ensure compliance, protect patient privacy, and secure health information. Free PDF download available.

Health Appraisal Form

Download a free Health Appraisal Form for young patients. Streamline your clinical documentation with our PDF template and example.

Consent to Treat Form for Adults

Discover the importance of the Consent to Treat Form for adults with our comprehensive guide and example. Get your free PDF download today!

Nursing Skills Assessment

Know how to evaluate nursing skills and competencies with our comprehensive guide. Includes an example template for a Nursing Skills Assessment. Free PDF download available.

Medical Record Request Form Template

Discover how to streamline medical record requests with our free template & example. Ensure efficient, compliant processing. Download your PDF today.

Personal Training Questionnaire

Access a comprehensive Personal Training Questionnaire to integrate when onboarding new clients to ensure a personalized fitness plan.

Agoraphobia DSM 5 Criteria

Explore a helpful documentation tool to help screen for the symptoms of agoraphobia among clients. Download a free PDF resource here.

Dental Inventory List

Streamline your dental practice's inventory management with our Dental Inventory List & Example, available for free PDF download.

Personal Trainer Intake Form

Discover how to create an effective Personal Trainer Intake Form with our comprehensive guide & free PDF example. Streamline your fitness assessments now.

Join 10,000+ teams using Carepatron to be more productive

  • Open access
  • Published: 11 January 2010

Questionnaires in clinical trials: guidelines for optimal design and administration

  • Phil Edwards

Trials volume 11, Article number: 2 (2010)


A good questionnaire design for a clinical trial will minimise bias and maximise precision in the estimates of treatment effect within budget. Attempts to collect more data than will be analysed may risk reducing recruitment (reducing power) and increasing losses to follow-up (possibly introducing bias). The mode of administration can also impact on the cost, quality and completeness of data collected. There is good evidence for design features that improve data completeness but further research is required to evaluate strategies in clinical trials. Theory-based guidelines for style, appearance, and layout of self-administered questionnaires have been proposed but require evaluation.


Introduction

With fixed trial resources there will usually be a trade-off between the number of participants that can be recruited into a trial and the quality and quantity of information that can be collected from each participant [ 1 ]. Although half a century ago there was little empirical evidence for optimal questionnaire design, Bradford Hill suggested that for every question asked of a study participant the investigator should be required to answer three himself, perhaps to encourage the investigator to keep the number of questions to a minimum [ 2 ].

To assess the empirical evidence for how questionnaire length and other design features might influence data completeness in a clinical trial, a systematic review of randomised controlled trials (RCTs) was conducted, and has recently been updated [ 3 ]. The strategies found to be effective in increasing response to postal and electronic questionnaires are summarised in the section on increasing data completeness below.

Clinical trial investigators have also relied on principles of questionnaire design that do not have an established empirical basis, but which are nonetheless considered to present 'good practice', based on expert opinion. The section on questionnaire development below includes some of that advice and presents general guidelines for questionnaire development which may help investigators who are about to design a questionnaire for a clinical trial.

As this paper concerns the collection of outcome data by questionnaire from trial participants (patients, carers, relatives or healthcare professionals) it begins by introducing the regulatory guidelines for data collection in clinical trials. It does not address the parallel (and equally important) needs of data management, cleaning, validation or processing required in the creation of the final clinical database.

Regulatory guidelines

The International Conference on Harmonisation (ICH) of technical requirements for registration of pharmaceuticals for human use states:

'The collection of data and transfer of data from the investigator to the sponsor can take place through a variety of media, including paper case record forms, remote site monitoring systems, medical computer systems and electronic transfer. Whatever data capture instrument is used, the form and content of the information collected should be in full accordance with the protocol and should be established in advance of the conduct of the clinical trial. It should focus on the data necessary to implement the planned analysis, including the context information (such as timing assessments relative to dosing) necessary to confirm protocol compliance or identify important protocol deviations. 'Missing values' should be distinguishable from the 'value zero' or 'characteristic absent'...' [ 4 ].

This suggests that the choice of variables that are to be measured by the questionnaire (or case report form) is constrained by the trial protocol, but that the mode of data collection is not. The trial protocol is unlikely, however, to list all of the variables that may be required to evaluate the safety of the experimental treatment. The choice of variables to assess safety will depend on the possible consequences of treatment, on current knowledge of possible adverse effects of related treatments, and on the duration of the trial [ 5 ]. In drug trials there may be many possible reactions due to the pharmacodynamic properties of the drug. The Council for International Organisations of Medical Sciences (CIOMS) advises that:

'Safety data that cannot be categorized and succinctly collected in predefined data fields should be recorded in the comment section of the case report form when deemed important in the clinical judgement of the investigator' [ 5 ].

Safety data can therefore initially be captured on a questionnaire as text responses to open-ended questions that will subsequently be coded using a common adverse event dictionary, such as the Medical Dictionary for Regulatory Activities (MedDRA). The coding of text responses should be performed by personnel who are blinded to treatment allocation. Both ICH and CIOMS warn against investigators collecting too much data that will not be analysed, potentially wasting time and resources, reducing the rate of recruitment, and increasing losses to follow-up.

Before questionnaire design begins, the trial protocol should be available at least in draft. This will state which outcomes are to be measured and which parameters are of interest (for example, percentage, mean, and so on). Preferably, a statistical analysis plan will also be available that makes explicit how each variable will be analysed, including how precisely each is to be measured and how each variable will be categorised in analysis. If these requirements are known in advance, the questionnaire can be designed in such a way that will reduce the need for data to be coded once questionnaires have been completed and returned.

Questionnaire development

If a questionnaire has previously been used in similar trials to the one planned, its use will bring the added advantage that the results will be comparable and may be combined in a meta-analysis. However, if the mode of administration of the questionnaire will change (for example, questions developed for administration by personal interview are to be included in a self-administered questionnaire), the questionnaire should be piloted before it is used (see section on piloting below). To encourage the consistent reporting of serious adverse events across trials, the CIOMS Working Group has prepared an example of the format and content of a possible questionnaire [ 5 ].

If a new questionnaire is to be developed, testing will establish that it measures what is intended to be measured, and that it does so reliably. The validity of a questionnaire may be assessed in a reliability study that assesses the agreement (or correlation) between the outcome measured using the questionnaire with that measured using the 'gold standard'. However, this will not be possible if there is no recognised gold standard measurement for outcome. The reliability of a questionnaire may be assessed by quantifying the strength of agreement between the outcomes measured using the questionnaire on the same patients at different times. The methods for conducting studies of validity and reliability are covered in depth elsewhere [ 6 ]. If new questions are to be developed, the reading ease of the questions can be assessed using the Flesch reading ease score. This score assesses the number of words in sentences, and the number of syllables in words. Higher Flesch reading ease scores indicate material that is easier to read [ 7 ].
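As a rough illustration of how such a score works, the standard Flesch formula combines average sentence length and average syllables per word: 206.835 - 1.015 x (words/sentences) - 84.6 x (syllables/words). The sketch below uses a naive vowel-group heuristic for syllable counting (an assumption for illustration; published tools use dictionaries), so scores are approximate.

```python
import re

def count_syllables(word):
    """Naive syllable estimate: count vowel groups, with a crude
    adjustment for a silent trailing 'e'. A heuristic, not a dictionary."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1 and not word.endswith(("le", "ee")):
        n -= 1
    return max(n, 1)

def flesch_reading_ease(text):
    """Flesch reading ease: higher scores indicate easier text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

Short words in short sentences score high (scores above roughly 60 are usually considered plain English), while long, polysyllabic sentences can score far lower, even below zero.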

Types of questions

Open-ended questions offer participants a space into which they can answer by writing text. These can be used when there are a large number of possible answers and it is important to capture all of the detail in the information provided. If answers are not factual, open-ended questions might increase the burden on participants. The text responses will subsequently need to be reviewed by the investigator, who will (whilst remaining blind to treatment allocation) assign one or more codes that categorise the response (for example, applying an adverse event dictionary) before analysis. Participants will need sufficient space so that full and accurate information can be provided.

Closed-ended questions should either offer mutually exclusive response options only, or include a clear instruction that participants may select more than one response option (for example, 'tick all that apply'). There is some evidence that answers to closed questions are influenced by the values chosen by investigators for each response category offered and that respondents may avoid extreme categories [ 8 ]. Closed-ended questions where participants are asked to 'tick all that apply' can alternatively be presented as separate questions, each with a 'yes' or 'no' response option (this design may be suitable if the analysis planned will treat each response category as a binary variable).
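The point about treating each response category as a binary variable can be sketched in code: a 'tick all that apply' response expands into one 0/1 indicator per category. The category names below are illustrative, not from the original text.

```python
def to_binary_indicators(selected, categories):
    """Expand a 'tick all that apply' response into one 0/1 variable
    per category, as needed when each category is analysed as a
    separate binary outcome. Category names are illustrative."""
    return {f"has_{c}": int(c in selected) for c in categories}

categories = ["headache", "nausea", "dizziness"]
response = {"headache", "dizziness"}   # boxes the participant ticked
print(to_binary_indicators(response, categories))
# {'has_headache': 1, 'has_nausea': 0, 'has_dizziness': 1}
```

Coding the data this way also makes a skipped item ('box not ticked') explicit in the dataset, rather than leaving it ambiguous between 'no' and 'missing'.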

Asking participants subsidiary questions (that is, 'branching off') depending on their answers to core questions will provide further detail about outcomes, but will increase questionnaire length and could make a questionnaire harder to follow. Similarly 'matrix' style questions (that is, multiple questions with common response option categories) might seem complicated to some participants, adding to the data collection burden [ 9 ].

Style, appearance and layout

The way that a self-administered questionnaire looks is considered to be as important as the questions that are asked [ 9 , 10 ]. There is good evidence that in addition to the words that appear on the page (verbal language) the questionnaire communicates meaning and instructions to participants via symbols and graphical features (non-verbal language). The evidence from several RCTs of alternative question response styles and layouts suggests that participants view the middle (central) response option as the one that represents the midpoint of an outcome scale. Participants then expect response options to appear in an order of increasing or decreasing progression, beginning with the leftmost or uppermost category; and they expect response options that are closer to each other to also have values that are 'conceptually closer'. The order, spacing and grouping of response options are therefore important design features, as they will affect the quality of data provided on the questionnaire, and the time taken by participants to provide it [ 10 ].

Some attempts have been made to develop theory-based guidelines for self-administered questionnaire design [ 11 ]. Based on a review of psychological and sociological theories about graphic language, cognition, visual perception and motivation, five principles have been derived:

'Use the visual elements of brightness, colour, shape, and location in a consistent manner to define the desired navigational path for respondents to follow when answering the questionnaire;

When established format conventions are changed in the midst of a questionnaire use prominent visual guides to redirect respondents;

Place directions [instructions] where they are to be used and where they can be seen;

Present information in a manner that does not require respondents to connect information from separate locations in order to comprehend it;

Ask people to answer only one question at a time' [ 11 ].

Adherence to these principles may help to ensure that when participants complete a questionnaire they understand what is being asked, how to give their response, and which question to answer next. This will help participants to give all the information being sought and reduce the chances that they become confused or frustrated when completing the questionnaire. These principles require evaluation in RCTs.

Font size and colour may further affect the legibility of a questionnaire, which may also impact on data quality and completeness. Questionnaires for trials that enrol older participants may therefore require the use of a larger font (for example, 11 or 12 point minimum) than those for trials including younger participants. The legibility and comprehension of the questionnaire can be assessed during the pilot phase (see section on piloting below).

Perhaps most difficult to define are the factors that make a questionnaire more aesthetically pleasing to participants, and that may potentially increase compliance. The use of space, graphics, underlining, bold type, colour and shading, and other qualities of design may affect how participants react and engage with a questionnaire. Edward Tufte's advice for achieving graphical excellence [ 12 ] might be adapted to consider how to achieve excellence in questionnaire design, viz.: ask the participant the simplest, clearest questions in the shortest time using the fewest words on the fewest pages; above all else ask only what you need to know.

Further research is therefore needed (as will be seen in the section on increasing data completeness) into the types of question and the aspects of style, appearance and layout of questionnaires that are effective in increasing data quality and completeness.

Mode of administration

Self-administered questionnaires are usually cheaper to use as they require no investigator input other than that for their distribution. Mailed questionnaires require correct addresses to be available for each participant, and resources to cover the costs of delivery. Electronically distributed questionnaires require correct email addresses as well as access to computers and the internet. Mailed and electronically distributed questionnaires have the advantage that they give participants time to think about their responses to questions, but they may require assistance to be available for participants (for example, a telephone helpline).

As self-administered questionnaires have the least investigator involvement they are less susceptible to information bias (for example, social desirability bias) and interviewer effects, but are more susceptible to item non-response [ 8 ]. Evidence from a systematic review of 57 studies comparing self-reported versus clinically verified compliance with treatment suggests that questionnaires and diaries may be more reliable than interviews [ 13 ].

In-person administration allows a rapport with participants to be developed, for example through eye contact, active listening and body language. It also allows interviewers to clarify questions and to check answers. Telephone administration may still provide the aural dimension (active listening) of an in-person interview. A possible disadvantage of telephone interviews is that participants may become distracted by other things going on around them, or decide to end the call [ 9 ].

A mixture of modes of administration may also be considered: for example, participant follow-up might commence with postal or email administration of the questionnaire, with subsequent telephone calls to non-respondents. The offer of an in-person interview may also be necessary, particularly if translation to a second language is required, or if participants are not sufficiently literate. Such approaches may risk introducing selection bias if participants in one treatment group are more or less likely than the other group to respond to one mode of administration used (for example, telephone follow-up in patients randomised to a new type of hearing aid) [ 14 ].

An advantage of electronic and web-based questionnaires is that they can be designed to screen and filter participant responses automatically. Movement from one question to the next can then appear seamless, reducing the data collection burden on participants who are only asked questions relevant to previous answers. Embedded algorithms can also check the internal consistency of participant responses so that data are internally valid when submitted, reducing the need for data queries to be resolved later. However, collection of data from participants using electronic means may discriminate against participants without access to a computer or the internet. Choice of mode of administration must therefore take into account its acceptability to participants and any potential for exclusion of eligible participants that may result.
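The branching and consistency-checking logic described above can be sketched as follows. The question ids, branching rule and plausibility range are all illustrative assumptions, not part of the original text.

```python
def next_question(answers):
    """Branching: ask the subsidiary question only when the core
    answer makes it relevant. Question ids are illustrative."""
    if "smokes" not in answers:
        return "smokes"                      # core question first
    if answers["smokes"] == "yes" and "cigarettes_per_day" not in answers:
        return "cigarettes_per_day"          # subsidiary question
    return None                              # questionnaire complete

def consistency_errors(answers):
    """Internal-consistency checks run before submission, so that
    data queries do not need to be resolved later."""
    errors = []
    if answers.get("smokes") == "no" and "cigarettes_per_day" in answers:
        errors.append("cigarettes_per_day given for a non-smoker")
    n = answers.get("cigarettes_per_day")
    if n is not None and not 0 <= n <= 200:
        errors.append("cigarettes_per_day outside plausible range")
    return errors
```

In a web form, `next_question` would drive which screen is shown next, and `consistency_errors` would block submission until the contradictions are resolved, so the submitted record is internally valid.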

Piloting

Piloting is a process whereby new questionnaires are tested, revised and tested further before they are used in the main trial. It is an iterative process that usually begins by asking other researchers who have some knowledge and experience in a similar field to comment on the first draft of the questionnaire. Once the questionnaire has been revised, it can then be piloted in a non-expert group, such as among colleagues. A further revision of the questionnaire can be piloted with individuals who are representative of the population who will complete it in the main trial. In-depth 'cognitive interviewing' might also provide insights into how participants comprehend questions, process and recall information, and decide what answers to give [ 15 ]. Here participants are read each question and are either asked to 'think aloud' as they consider what their answer will be, or are asked further 'probing' questions by the interviewer.

For international multicentre trials it will be necessary to translate a questionnaire. Although a simple translation to, and translation back from the second language might be sufficient, further piloting and cognitive interviews may be required to identify and correct for any cultural differences in interpretation of the translated questionnaire. Translation into other languages may alter the layout and formatting of words on the page from the original design and so further redesign of the questionnaire may be required. If a questionnaire is to be developed for a clinical trial, sufficient resources are therefore required for its design, piloting and revision.

Increasing data completeness

Loss to follow-up will reduce statistical power by reducing the effective sample size. Losses may also introduce bias if the trial treatment is an effect modifier for the association between outcome and participation at follow-up [ 16 ].

There may be exceptional circumstances for allowing participants to skip certain questions (for example, sensitive questions on sexual lifestyle) to ensure that the remainder of the questionnaire is still collected; the data that are provided may then be used to impute the values of variables that were not provided. Although the impact of missing outcome data and missing covariates on study results can be reduced through the use of multiple imputation techniques, no method of analysis can be expected to overcome them completely [ 17 ].
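One deliberately simple way to use the data that are provided, as described above, is person-mean (prorated) imputation of skipped items on a multi-item scale. This is single imputation only; full multiple imputation, as mentioned in the text, instead draws repeatedly from a model of the missing data, and the 50%-answered threshold below is an illustrative assumption.

```python
def person_mean_impute(item_scores, min_answered=0.5):
    """Prorate a participant's skipped scale items (None) from the
    mean of the items they did answer. Returns None when too few
    items were answered for proration to be credible."""
    answered = [s for s in item_scores if s is not None]
    if len(answered) < min_answered * len(item_scores):
        return None                      # too incomplete to prorate
    mean = sum(answered) / len(answered)
    return [mean if s is None else s for s in item_scores]

print(person_mean_impute([4, None, 2, 3]))  # [4, 3.0, 2, 3]
```

As the text notes, no such technique can be expected to overcome missing data completely; proration in particular assumes the skipped items resemble the answered ones.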

Longer and more demanding tasks might be expected to have fewer volunteers than shorter, easier tasks. The evidence from randomised trials of questionnaire length in a range of settings seems to support the notion that when it comes to questionnaire design 'shorter is better' [ 18 ]. Recent evidence that a longer questionnaire achieved the same high response proportion as that of a shorter alternative might cast doubt on the importance of the number of questions included in a questionnaire [ 19 ]. However, under closer scrutiny the results of this study (96.09% versus 96.74%) are compatible with an average 2% reduction in odds of response for each additional page added to the shorter version [ 18 ]. The main lesson seems to be that when the baseline response proportion is very high (for example, over 95%) then few interventions are likely to have effects large enough to increase it further.
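The claim that 96.09% versus 96.74% is compatible with a 2%-per-page reduction in odds can be checked with a short calculation. The page difference between the two versions is not stated here, so the calculation below instead derives the page difference that a 2%-per-page effect would imply.

```python
import math

# Response proportions reported for the longer and shorter questionnaires
p_long, p_short = 0.9609, 0.9674

odds_long = p_long / (1 - p_long)      # roughly 24.6
odds_short = p_short / (1 - p_short)   # roughly 29.7
odds_ratio = odds_long / odds_short    # roughly 0.83

# Number of extra pages for which a 2%-per-page odds reduction
# (0.98 per page) would yield this odds ratio; the actual page
# difference is not given in the text.
implied_pages = math.log(odds_ratio) / math.log(0.98)
print(round(odds_ratio, 2), round(implied_pages, 1))
```

The point of the calculation is that near-ceiling response proportions compress large odds differences into tiny percentage differences, which is why few interventions can raise response much further once it exceeds 95%.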

There is a trade-off between increased measurement error from using a simplified outcome scale and increased power from achieving measurement on a larger sample of participants (from fewer losses to follow-up). If a shorter version of an outcome scale provides measures of an outcome that are highly correlated with the longer version, then it will be more efficient for the trial to use the shorter version [ 1 ]. A moderate reduction to the length of a shorter questionnaire will be more effective in reducing losses to follow-up than a moderate change to the length of a longer questionnaire [ 18 ].
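One way to make this trade-off concrete is to compare the information retained by the short form against the respondents retained by it. The heuristic below is an illustrative assumption for this sketch, not the formula from the cited reference: a short scale correlated r with the long scale is treated as carrying roughly r squared of its information per respondent.

```python
def relative_efficiency(r, n_short, n_long):
    """Rough comparison (an illustrative heuristic, not the cited
    formula): a short scale correlated r with the long scale carries
    about r**2 of its information per respondent, so compare
    r**2 * n_short against n_long."""
    return (r ** 2 * n_short) / n_long

# If the short form correlates 0.95 with the long form but retains
# 200 more respondents (numbers are hypothetical), the short form
# comes out ahead:
print(relative_efficiency(0.95, 1000, 800) > 1)  # → True
```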

In studies that seek to collect information on many outcomes, questionnaire length will necessarily be determined by the number of items required from each participant. In very compliant populations there may be little lost by using a longer questionnaire. However, using a longer questionnaire to measure more outcomes may also increase the risk of false positive findings that result from multiple testing (for example, measuring 100 outcomes may produce 5 that are significantly associated with treatment by chance alone) [ 4 , 20 ].
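The multiple-testing arithmetic behind that example is easy to verify:

```python
alpha, n_outcomes = 0.05, 100
expected_false_positives = alpha * n_outcomes  # 5 expected by chance

# Probability of at least one spuriously 'significant' outcome when
# all 100 null hypotheses are true (tests assumed independent):
p_at_least_one = 1 - (1 - alpha) ** n_outcomes
print(expected_false_positives, round(p_at_least_one, 3))  # → 5.0 0.994
```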

Other strategies to increase completeness

A recently updated Cochrane systematic review presents evidence from RCTs of methods to increase response to postal and electronic questionnaires in a range of health and non-health settings [ 3 ]. The review includes 481 trials that evaluated 110 different methods for increasing response to postal questionnaires and 32 trials that evaluated 27 methods for increasing response to electronic questionnaires. The trials evaluate aspects of questionnaire design, the introductory letter, packaging and methods of delivery that might influence the tendency for participants to open the envelope (or email) and to engage with its contents. A summary of the results follows.

What participants are offered

Postal questionnaires

The evidence favours offering monetary incentives and suggests that money is more effective than other types of incentive (for example, tokens, lottery tickets, pens, and so on). The relationship between the amount of monetary incentive offered and questionnaire response is non-linear with diminishing marginal returns for each additional amount offered [ 21 ]. Unconditional incentives appear to be more effective, as are incentives offered with the first rather than a subsequent mailing. There is less evidence for the effects of offering the results of the study (when complete) or offering larger non-monetary incentives.

Electronic questionnaires

The evidence favours non-monetary incentives (for example, Amazon.com gift cards), immediate notification of lottery results, and offering study results. Less evidence exists for the effect of offering monetary rather than non-monetary incentives.

How questionnaires look

Postal questionnaires

The evidence favours using personalised materials, a handwritten address, and printing single sided rather than double sided. There is also evidence that inclusion of a participant's name in the salutation at the start of the cover letter increases response and that the addition of a handwritten signature on letters will further increase response [ 22 ]. There is less evidence for positive effects of using coloured or higher quality paper, identifying features (for example, identity number), study logos, brown envelopes, coloured ink, coloured letterhead, booklets, larger paper, larger fonts, pictures in the questionnaire, matrix style questions, or questions that require recall in order of time period.

Electronic questionnaires

The evidence favours using a personalised approach, a picture in emails, a white background for emails, a simple header, and textual rather than visual presentation of response categories. Response may be reduced when 'survey' is mentioned in the subject line. Less evidence exists for sending emails in text format or HTML, including a topic in email subject lines, or including a header in emails.

How questionnaires are received or returned

The evidence favours sending questionnaires by first class or recorded delivery, using stamped return envelopes, and using several stamps. There is less evidence for effects of mailing soon after discharge from hospital, mailing or delivering on a Monday, sending to work addresses, using stamped outgoing envelopes (rather than franked), using commemorative or first class stamps on return envelopes, including a prepaid return envelope, using window or larger envelopes, or offering the option of response by internet.

Methods and number of requests for participation

The evidence favours contacting participants before sending questionnaires, follow-up contact with non-responders, providing another copy of the questionnaire at follow-up and sending text message reminders rather than postcards. There is less evidence for effects of precontact by telephone rather than by mail, telephone follow-up rather than by mail, and follow-up within a month rather than later.

Nature and style of questions included

Postal questionnaires

The evidence favours placing more relevant questions and easier questions first, user friendly and more interesting or salient questionnaires, horizontal orientation of response options rather than vertical, factual questions only, and including a 'teaser'. Response may be reduced when sensitive questions are included or when a questionnaire for carers or relatives is included. There is less evidence for asking general questions or asking for demographic information first, using open-ended rather than closed questions, using open-ended questions first, including 'don't know' boxes, asking participants to 'circle answer' rather than 'tick box', presenting response options in increasing order, using a response scale with 5 levels rather than 10 levels, or including a supplemental questionnaire or a consent form.

Electronic questionnaires

The evidence favours using a more interesting or salient e-questionnaire.

Who sent the questionnaire

Postal questionnaires

The evidence favours questionnaires that originate from a university rather than a government department or commercial organisation. Less evidence exists for the effects of precontact by a medical researcher (compared to non-medical), letters signed by more senior or well known people, sending questionnaires in university-printed envelopes, questionnaires that originate from a doctor rather than a research group, names that are ethnically identifiable, or questionnaires that originate from male rather than female investigators.

Electronic questionnaires

The evidence suggests that response is reduced when e-questionnaires are signed by male rather than female investigators. There is less evidence for the effectiveness of e-questionnaires originating from a university or when sent by more senior or well known people.

What participants are told

Postal questionnaires

The evidence favours assuring confidentiality and mentioning an obligation to respond in follow-up letters. Response may be reduced when the study is endorsed by an 'eminent professional' or when participants are asked not to remove ID codes. Less evidence exists for the effects of stating that others have responded, a choice to opt out of the study, providing instructions, giving a deadline, providing an estimate of completion time, requesting a telephone number, stating that participants will be contacted if they do not respond, requesting an explanation for non-participation, an appeal or plea, requesting a signature, stressing benefits to sponsor, participants or society, or assuring anonymity rather than participants being identifiable.

Electronic questionnaires

The evidence favours stating that others have responded and giving a deadline. There is less evidence for the effect of an appeal (for example, 'request for help') in the subject line of an email.

So although uncertainty remains about whether some strategies increase data completeness, there is sufficient evidence to produce some guidelines. Where there is a choice, a shorter questionnaire can reduce the size of the task and the burden on respondents. Begin a questionnaire with the easier and most relevant questions, and make it user friendly and interesting for participants. A monetary incentive can be included as a small, unexpected 'thank you for your time'. Participants are more likely to respond when given advance warning (by letter, email or phone call before being sent a questionnaire); this is a simple courtesy that tells participants they are soon to be given a task and may need to set aside some time to complete it. The relevance and importance of participation in the trial can be emphasised by addressing participants by name, signing letters by hand, and using first class postage or recorded delivery. University sponsorship may add credibility, as might the assurance of confidentiality. Follow-up contact and reminders to non-responders are likely to be beneficial; include another copy of the questionnaire so participants do not have to remember where they put the original, or in case they have thrown it away.

The effects of some strategies to increase questionnaire response may differ when used in a clinical trial compared with a non-health setting. Around half of the trials included in the Cochrane review were health related (patient groups, population health surveys and surveys of healthcare professionals); the other included trials were conducted among business professionals, consumers, and the general population. Assessing whether the size of the effects of each strategy on questionnaire response differs in health settings will require a sufficiently sophisticated analysis that controls for covariates (for example, number of pages in the questionnaire, use of incentives, and so on). Unfortunately, these details are seldom included by investigators in the published reports [ 3 ].

However, a review of 15 RCTs of methods to increase response in healthcare professionals and patients found evidence for using some strategies (for example, shorter questionnaires and sending reminders) in the health-related setting [ 23 ]. There is also evidence that incentives do improve questionnaire response in clinical trials [ 24 , 25 ]. The offer of monetary incentives to participants for completion of a questionnaire may, however, be unacceptable to some ethics committees if they are deemed likely to exert pressure on individuals to participate [ 26 ]. Until further studies establish whether other strategies are also effective in the clinical trial setting, the results of the Cochrane review may be used as guidelines for improving data completeness. More discussion on the design and administration of questionnaires is available elsewhere [ 27 ].

Risk factors for loss to follow-up

Irrespective of questionnaire design it is possible that some participants will not respond because: (a) they have never received the questionnaire or (b) they no longer wish to participate in the study. An analysis of the information collected at randomisation can be used to identify any factors (for example, gender, severity of condition) that are predictive of loss to follow-up [ 28 ]. Follow-up strategies can then be tailored for those participants most at risk of becoming lost (for example, additional incentives for 'at risk' participants). Interviews with a sample of responders and non-responders may also identify potential improvements to the questionnaire design, or to participant information. The need for improved questionnaire saliency, explanations of trial procedures, and stressing the importance of responding have all been identified using this method [ 29 ].
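As a minimal sketch of such an analysis (the factor, field names and counts below are hypothetical, and a real analysis would typically use logistic regression with several covariates), a crude odds ratio can screen a baseline factor for association with loss to follow-up:

```python
def odds_ratio_for_loss(records, factor):
    """Crude screen for a baseline predictor of loss to follow-up:
    odds ratio of non-response for factor present vs absent.
    `records` is a list of dicts with the factor and a 'responded'
    flag (hypothetical field names for illustration)."""
    def odds_of_loss(group):
        lost = sum(1 for r in group if not r["responded"])
        kept = len(group) - lost
        return lost / kept
    exposed = [r for r in records if r[factor]]
    unexposed = [r for r in records if not r[factor]]
    return odds_of_loss(exposed) / odds_of_loss(unexposed)

# Hypothetical randomisation data: 30% of young men lost versus 10%
# of everyone else:
participants = (
    [{"young_male": True, "responded": False}] * 30
    + [{"young_male": True, "responded": True}] * 70
    + [{"young_male": False, "responded": False}] * 10
    + [{"young_male": False, "responded": True}] * 90
)
print(round(odds_ratio_for_loss(participants, "young_male"), 2))  # → 3.86
```

A factor with an odds ratio well above 1 flags a group for tailored follow-up effort.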

Further research

Few clinical trials appear to have nested trials of methods that might increase the quality and quantity of the data collected by questionnaire, or of participation in trials more generally. Trials of alternative strategies that may increase the quality and quantity of data collected by questionnaire in clinical trials are needed. Reports of these trials must include details of the alternative instruments used (for example, number of items, number of pages, opportunity to save data electronically and resume completion at another time), mean or median time to completion of electronic questionnaires, material costs and the amount of staff time required. Data collection in clinical trials is costly, so care is needed to design data collection instruments that will provide sufficiently reliable measures of outcomes whilst ensuring high levels of follow-up. Determining whether shorter 'quick and dirty' outcome measures (for example, a few simple questions) are better than more sophisticated questionnaires will require assessing their impact on bias, precision, trial completion time, and overall costs.

A good questionnaire design for a clinical trial will minimise bias and maximise precision in the estimates of treatment effect within budget. Attempts to collect more data than will be analysed may risk reducing recruitment (reducing power) and increasing losses to follow-up (possibly introducing bias). Questionnaire design remains as much an art as a science, but the evidence base for improving the quality and completeness of data collection in clinical trials is growing.

References

1. Armstrong BG: Optimizing power in allocating resources to exposure assessment in an epidemiologic study. Am J Epidemiol. 1996, 144: 192-197.
2. Hill AB: Observation and experiment. N Engl J Med. 1953, 248: 995-1001.
3. Edwards PJ, Roberts I, Clarke MJ, DiGuiseppi C, Wentz R, Kwan I, Cooper R, Felix LM, Pratap S: Methods to increase response to postal and electronic questionnaires. Cochrane Database Syst Rev. 2009, 3: MR000008.
4. International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use: ICH harmonised tripartite guideline, statistical principles for clinical trials E9. http://www.ich.org/LOB/media/MEDIA485.pdf
5. CIOMS: Management of safety information from clinical trials: report of CIOMS working group VI. 2005, Geneva, Switzerland: Council for International Organisations of Medical Sciences (CIOMS).
6. Streiner DL, Norman GR: Health measurement scales: a practical guide to their development and use. 2004, Oxford University Press, 3.
7. Farr JN, Jenkins JJ, Paterson DG: Simplification of Flesch reading ease formula. J Appl Psychol. 1951, 35: 333-337.
8. Armstrong BK, White E, Saracci R: Principles of exposure measurement in epidemiology. Monographs in Epidemiology and Biostatistics. 1995, New York, NY: Oxford University Press, 21.
9. Nieuwenhuijsen M: Design of exposure questionnaires for epidemiological studies. Occup Environ Med. 2005, 62: 272-280.
10. Tourangeau R, Couper MP, Conrad F: Spacing, position, and order: interpretive heuristics for visual features of survey questions. Pub Opin Quart. 2004, 68: 368-393.
11. Jenkins CR, Dillman DA: Towards a theory of self-administered questionnaire design. http://www.census.gov/srd/papers/pdf/sm95-06.pdf
12. Tufte E: The visual display of quantitative information. 1999, Cheshire, CT: Graphics Press.
13. Garber MC, Nau DP, Erickson SR, Aikens JE, Lawrence JB: The concordance of self-report with other measures of medication adherence: a summary of the literature. Med Care. 2004, 42: 649-652.
14. Heerwegh D: Mode differences between face-to-face and web surveys: an experimental investigation of data quality and social desirability effects. Int J Pub Opin Res. 2009, 21: 111-121.
15. Willis GB: Cognitive interviewing: a how-to guide. http://www.appliedresearch.cancer.gov/areas/cognitive/interview.pdf
16. Greenland S: Response and follow-up bias in cohort studies. Am J Epidemiol. 1977, 106: 184-187.
17. Kenward MG, Carpenter J: Multiple imputation: current perspectives. Stat Methods Med Res. 2007, 16: 199-218.
18. Edwards P, Roberts I, Sandercock P, Frost C: Follow-up by mail in clinical trials: does questionnaire length matter?. Contr Clin Trials. 2004, 25: 31-52.
19. Rothman K, Mikkelsen EM, Riis A, Sørensen HT, Wise LA, Hatch EE: Randomized trial of questionnaire length. Epidemiology. 2009, 20: 154.
20. Sterne JAC, Davey Smith G: Sifting the evidence - what's wrong with significance tests?. BMJ. 2001, 322: 226-231.
21. Edwards P, Cooper R, Roberts I, Frost C: Meta-analysis of randomised trials of monetary incentives and response to mailed questionnaires. J Epidemiol Comm Health. 2005, 59: 987-999.
22. Scott P, Edwards P: Personally addressed hand-signed letters increase questionnaire response: a meta-analysis of randomised controlled trials. BMC Health Serv Res. 2006, 6: 111.
23. Nakash RA, Hutton JL, Jørstad-Stein EC, Gates S, Lamb SE: Maximising response to postal questionnaires - a systematic review of randomised trials in health research. BMC Med Res Methodol. 2006, 6: 5.
24. Kenyon S, Pike K, Jones D, Taylor D, Salt A, Marlow N, Brocklehurst P: The effect of a monetary incentive on return of a postal health and development questionnaire: a randomised trial. BMC Health Serv Res. 2005, 5: 55.
25. Gates S, Williams MA, Withers E, Williamson E, Mt-Isa S, Lamb SE: Does a monetary incentive improve the response to a postal questionnaire in a randomised controlled trial? The MINT incentive study. Trials. 2009, 10: 44.
26. McColl E: Commentary: methods to increase response rates to postal questionnaires. Int J Epidemiol. 2007, 36: 968.
27. McColl E, Jacoby A, Thomas L, Soutter J, Bamford C, Steen N, Thomas R, Harvey E, Garratt A, Bond J: Design and use of questionnaires: a review of best practice applicable to surveys of health service staff and patients. Health Technol Assess. 2001, 5: 1-256.
28. Edwards P, Fernandes J, Roberts I, Kuppermann N: Young men were at risk of becoming lost to follow-up in a cohort of head-injured adults. J Clin Epidemiol. 2007, 60: 417-424.
29. Nakash R, Hutton JL, Lamb SE, Gates S, Fisher J: Response and non-response to postal questionnaire follow-up in a clinical trial - a qualitative study of the patient's perspective. J Eval Clin Prac. 2008, 14: 226-235.

Acknowledgements

I would like to thank Lambert Felix for his help with updating the Cochrane review summarised in this article, and Graham Try for his comments on earlier drafts of the manuscript.

Author information

Authors and affiliations

Department of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, London, UK

Phil Edwards

Corresponding author

Correspondence to Phil Edwards .

Additional information

Competing interests

The author declares that he has no competing interests.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Edwards, P. Questionnaires in clinical trials: guidelines for optimal design and administration. Trials 11 , 2 (2010). https://doi.org/10.1186/1745-6215-11-2

Received : 29 July 2009

Accepted : 11 January 2010

Published : 11 January 2010

DOI : https://doi.org/10.1186/1745-6215-11-2

Keywords

  • Monetary Incentive
  • Questionnaire Design
  • Electronic Questionnaire
  • Text Response
  • Flesch Reading Ease

ISSN: 1745-6215

Questionnaire Design | Methods, Question Types & Examples

Published on July 15, 2021 by Pritha Bhandari . Revised on June 22, 2023.

A questionnaire is a list of questions or items used to gather data from respondents about their attitudes, experiences, or opinions. Questionnaires can be used to collect quantitative and/or qualitative information.

Questionnaires are commonly used in market research as well as in the social and health sciences. For example, a company may ask for feedback about a recent customer service experience, or psychology researchers may investigate health risk perceptions using questionnaires.

Table of contents

  • Questionnaires vs. surveys
  • Questionnaire methods
  • Open-ended vs. closed-ended questions
  • Question wording
  • Question order
  • Step-by-step guide to design
  • Other interesting articles
  • Frequently asked questions about questionnaire design

A survey is a research method where you collect and analyze data from a group of people. A questionnaire is a specific tool or instrument for collecting the data.

Designing a questionnaire means creating valid and reliable questions that address your research objectives , placing them in a useful order, and selecting an appropriate method for administration.

But designing a questionnaire is only one component of survey research. Survey research also involves defining the population you’re interested in, choosing an appropriate sampling method , administering questionnaires, data cleansing and analysis, and interpretation.

Sampling is important in survey research because you’ll often aim to generalize your results to the population. Gather data from a sample that represents the range of views in the population for externally valid results. There will always be some differences between the population and the sample, but minimizing these will help you avoid several types of research bias , including sampling bias , ascertainment bias , and undercoverage bias .
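As a minimal sketch of one common approach, a simple random sample can be drawn from a sampling frame (the frame, its size and the sample size here are hypothetical, and simple random sampling is only one of several probability sampling methods):

```python
import random

# Simple random sample of 500 respondents from a hypothetical
# sampling frame of 10,000; seeding makes the draw reproducible.
population = [f"participant_{i}" for i in range(10_000)]
sample = random.Random(0).sample(population, k=500)
print(len(sample))  # → 500
```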

Questionnaires can be self-administered or researcher-administered . Self-administered questionnaires are more common because they are easy to implement and inexpensive, but researcher-administered questionnaires allow deeper insights.

Self-administered questionnaires

Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or through mail. All questions are standardized so that all respondents receive the same questions with identical wording.

Self-administered questionnaires can be:

  • cost-effective
  • easy to administer for small and large groups
  • anonymous and suitable for sensitive topics

But they may also be:

  • unsuitable for people with limited literacy or verbal skills
  • susceptible to a nonresponse bias (most people invited may not complete the questionnaire)
  • biased towards people who volunteer because impersonal survey requests often go ignored.

Researcher-administered questionnaires

Researcher-administered questionnaires are interviews that take place by phone, in-person, or online between researchers and respondents.

Researcher-administered questionnaires can:

  • help you ensure the respondents are representative of your target audience
  • allow clarifications of ambiguous or unclear questions and answers
  • have high response rates because it’s harder to refuse an interview when personal attention is given to respondents

But researcher-administered questionnaires can be limiting in terms of resources. They are:

  • costly and time-consuming to perform
  • more difficult to analyze if you have qualitative responses
  • likely to contain experimenter bias or demand characteristics
  • likely to encourage social desirability bias in responses because of a lack of anonymity

Your questionnaire can include open-ended or closed-ended questions or a combination of both.

Using closed-ended questions limits your responses, while open-ended questions enable a broad range of answers. You’ll need to balance these considerations with your available time and resources.

Closed-ended questions

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. Closed-ended questions are best for collecting data on categorical or quantitative variables.

Categorical variables can be nominal or ordinal. Quantitative variables can be interval or ratio. Understanding the type of variable and level of measurement means you can perform appropriate statistical analyses for generalizable results.

Examples of closed-ended questions for different variables

Nominal variables include categories that can’t be ranked, such as race or ethnicity. This includes binary or dichotomous categories.

It’s best to include categories that cover all possible answers and are mutually exclusive. There should be no overlap between response items.

In binary or dichotomous questions, you’ll give respondents only two options to choose from.

  • White
  • Black or African American
  • American Indian or Alaska Native
  • Asian
  • Native Hawaiian or Other Pacific Islander

Ordinal variables include categories that can be ranked. Consider how wide or narrow a range you’ll include in your response items, and their relevance to your respondents.

Likert scale questions collect ordinal data using rating scales with 5 or 7 points.

When you have four or more Likert-type questions, you can treat the composite data as quantitative data on an interval scale . Intelligence tests, psychological scales, and personality inventories use multiple Likert-type questions to collect interval data.
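For instance, a composite from four Likert-type items (the item names and responses below are hypothetical) might be computed as a mean:

```python
# Four Likert-type items scored 1-5; the mean composite is treated
# as approximately interval data (item names are hypothetical):
responses = {"item_1": 4, "item_2": 5, "item_3": 3, "item_4": 4}
composite = sum(responses.values()) / len(responses)
print(composite)  # → 4.0
```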

With interval or ratio scales , you can apply strong statistical hypothesis tests to address your research aims.

Pros and cons of closed-ended questions

Well-designed closed-ended questions are easy to understand and can be answered quickly. However, you might still miss important answers that are relevant to respondents. An incomplete set of response items may force some respondents to pick the closest alternative to their true answer. These types of questions may also miss out on valuable detail.

To solve these problems, you can make questions partially closed-ended, and include an open-ended option where respondents can fill in their own answer.

Open-ended questions

Open-ended, or long-form, questions allow respondents to give answers in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered. For example, respondents may want to answer “multiracial” for the question on race rather than selecting from a restricted list.

  • How do you feel about open science?
  • How would you describe your personality?
  • In your opinion, what is the biggest obstacle for productivity in remote work?

Open-ended questions have a few downsides.

They require more time and effort from respondents, which may deter them from completing the questionnaire.

For researchers, understanding and summarizing responses to these questions can take a lot of time and resources. You’ll need to develop a systematic coding scheme to categorize answers, and you may also need to involve other researchers in data analysis for high reliability .
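Agreement between two coders applying such a coding scheme is commonly summarised with Cohen's kappa; a small sketch (the response categories and codes are made up for illustration):

```python
def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for agreement between two coders assigning
    categories to the same set of open-ended responses."""
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    categories = set(codes_a) | set(codes_b)
    # Chance agreement expected from each coder's marginal frequencies:
    expected = sum(
        (codes_a.count(c) / n) * (codes_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned to six open-ended answers:
coder_1 = ["work", "health", "work", "family", "health", "work"]
coder_2 = ["work", "health", "family", "family", "health", "work"]
print(round(cohens_kappa(coder_1, coder_2), 2))  # → 0.75
```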

Question wording can influence your respondents’ answers, especially if the language is unclear, ambiguous, or biased. Good questions need to be understood by all respondents in the same way ( reliable ) and measure exactly what you’re interested in ( valid ).

Use clear language

You should design questions with your target audience in mind. Consider their familiarity with your questionnaire topics and language and tailor your questions to them.

For readability and clarity, avoid jargon or overly complex language. Don’t use double negatives because they can be harder to understand.

Use balanced framing

Respondents often answer in different ways depending on the question framing. Positive frames are interpreted as more neutral than negative frames and may encourage more socially desirable answers.

Use a mix of both positive and negative frames to avoid research bias , and ensure that your question wording is balanced wherever possible.

Unbalanced questions focus on only one side of an argument. Respondents may be less likely to oppose the question if it is framed in a particular direction. It's best practice to provide a counterargument within the question as well.

Avoid leading questions

Leading questions guide respondents towards answering in specific ways, even if that’s not how they truly feel, by explicitly or implicitly providing them with extra information.

It’s best to keep your questions short and specific to your topic of interest.

  • The average daily work commute in the US takes 54.2 minutes and costs $29 per day. Since 2020, working from home has saved many employees time and money. Do you favor flexible work-from-home policies even after it’s safe to return to offices?
  • Experts agree that a well-balanced diet provides sufficient vitamins and minerals, and multivitamins and supplements are not necessary or effective. Do you agree or disagree that multivitamins are helpful for balanced nutrition?

Keep your questions focused

Ask about only one idea at a time and avoid double-barreled questions. Double-barreled questions ask about more than one item at a time, which can confuse respondents.

A question that asks about both at once (for example, whether the government should provide clean drinking water and high-speed internet) could be difficult to answer for respondents who feel strongly about the right to clean drinking water but not high-speed internet. They might answer only about the topic they feel passionate about, or give a neutral answer instead, but neither option captures their true views.

Instead, you should ask two separate questions to gauge respondents’ opinions.

Do you agree or disagree that the government should be responsible for providing high-speed internet to everyone?

  • Strongly agree
  • Agree
  • Undecided
  • Disagree
  • Strongly disagree

You can organize the questions logically, with a clear progression from simple to complex. Alternatively, you can randomize the question order between respondents.

Logical flow

Using a logical flow to your question order means starting with simple questions, such as behavioral or opinion questions, and ending with more complex, sensitive, or controversial questions.

The question order that you use can significantly affect the responses by priming them in specific directions. Question order effects, or context effects, occur when earlier questions influence the responses to later questions, reducing the validity of your questionnaire.

While demographic questions are usually unaffected by order effects, questions about opinions and attitudes are more susceptible to them.

  • How knowledgeable are you about Joe Biden’s executive orders in his first 100 days?
  • Are you satisfied or dissatisfied with the way Joe Biden is managing the economy?
  • Do you approve or disapprove of the way Joe Biden is handling his job as president?

It’s important to minimize order effects because they can be a source of systematic error or bias in your study.

Randomization

Randomization involves presenting individual respondents with the same questionnaire but with different question orders.

When you use randomization, order effects will be minimized in your dataset. But a randomized order may also make it harder for respondents to process your questionnaire. Some questions may need more cognitive effort, while others are easier to answer, so a random order could require more time or mental capacity for respondents to switch between questions.
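The per-respondent randomization described above can be sketched in a few lines of Python; the question IDs are placeholders, and seeding per respondent is just one way to make each shuffle reproducible for later analysis.

```python
import random

def randomized_order(questions, seed=None):
    """Return a shuffled copy of the questionnaire for one respondent.

    The original list is left intact so answers can be mapped back
    to their canonical question IDs during analysis.
    """
    rng = random.Random(seed)
    order = list(questions)
    rng.shuffle(order)
    return order

# Hypothetical question IDs standing in for real questionnaire items.
questions = ["Q1", "Q2", "Q3", "Q4", "Q5"]

# Each respondent sees the same questions, in a different order.
respondent_a = randomized_order(questions, seed=1)
respondent_b = randomized_order(questions, seed=2)
```

Because every respondent still answers the same set of items, order effects average out across the sample rather than biasing it in one direction.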

Step 1: Define your goals and objectives

The first step of designing a questionnaire is determining your aims.

  • What topics or experiences are you studying?
  • What specifically do you want to find out?
  • Is a self-report questionnaire an appropriate tool for investigating this topic?

Once you’ve specified your research aims, you can operationalize your variables of interest into questionnaire items. Operationalizing concepts means turning them from abstract ideas into concrete measurements. Every question needs to address a defined need and have a clear purpose.

Step 2: Use questions that are suitable for your sample

Create appropriate questions by taking the perspective of your respondents. Consider their language proficiency and available time and energy when designing your questionnaire.

  • Are the respondents familiar with the language and terms used in your questions?
  • Would any of the questions insult, confuse, or embarrass them?
  • Do the response items for any closed-ended questions capture all possible answers?
  • Are the response items mutually exclusive?
  • Do the respondents have time to respond to open-ended questions?

Consider all possible options for responses to closed-ended questions. From a respondent’s perspective, a lack of response options reflecting their point of view or true answer may make them feel alienated or excluded. In turn, they’ll become disengaged or inattentive to the rest of the questionnaire.

Step 3: Decide on your questionnaire length and question order

Once you have your questions, make sure that the length and order of your questions are appropriate for your sample.

If respondents are not being incentivized or compensated, keep your questionnaire short and easy to answer. Otherwise, your sample may be biased with only highly motivated respondents completing the questionnaire.

Decide on your question order based on your aims and resources. Use a logical flow if your respondents have limited time or if you cannot randomize questions. Randomizing questions helps you avoid bias, but it can take more complex statistical analysis to interpret your data.

Step 4: Pretest your questionnaire

When you have a complete list of questions, you’ll need to pretest it to make sure what you’re asking is always clear and unambiguous. Pretesting helps you catch any errors or points of confusion before performing your study.

Ask friends, classmates, or members of your target audience to complete your questionnaire using the same method you’ll use for your research. Find out if any questions were particularly difficult to answer or if the directions were unclear or inconsistent, and make changes as necessary.

If you have the resources, running a pilot study will help you test the validity and reliability of your questionnaire. A pilot study is a practice run of the full study, including sampling, data collection , and analysis. It can reveal whether your procedures are infeasible or susceptible to bias in time to make changes, but you can’t test a hypothesis with this type of study because it’s usually statistically underpowered .
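Once pilot data are in, the internal-consistency reliability of a multi-item scale is commonly summarized with Cronbach's alpha. A minimal Python sketch, using toy pilot data:

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).

    item_scores: one row per respondent, each row holding that
    respondent's score on every item of the scale.
    """
    k = len(item_scores[0])                      # number of items
    item_cols = list(zip(*item_scores))          # transpose: one column per item
    item_vars = sum(variance(col) for col in item_cols)
    total_var = variance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Toy data: five pilot respondents answering a three-item scale.
pilot = [
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
]
alpha = cronbach_alpha(pilot)
```

Values above roughly 0.7 are conventionally treated as acceptable reliability, though the threshold depends on the research context.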

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Student’s  t -distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Prospective cohort study

Research bias

  • Implicit bias
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic
  • Social desirability bias

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analyzing data from people using questionnaires.

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.

Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviors. It is made up of 4 or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with 5 or 7 possible responses, to capture their degree of agreement.
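Combining a respondent's Likert answers into one composite score can be sketched as follows; the scale labels and the reverse-coded item index are illustrative assumptions, not taken from the article.

```python
# Map the 5-point agreement scale to numeric scores.
SCALE = {"Strongly Disagree": 1, "Disagree": 2, "Undecided": 3,
         "Agree": 4, "Strongly Agree": 5}

def likert_score(responses, reverse_items=()):
    """Sum one respondent's Likert items into a single trait score.

    Negatively worded items (listed by 0-based index in reverse_items)
    are reverse-coded so that a high total always means the same thing.
    """
    total = 0
    for i, answer in enumerate(responses):
        value = SCALE[answer]
        if i in reverse_items:
            value = 6 - value        # flip a 1-5 scale
        total += value
    return total

# Four-item scale where item 2 (0-indexed) is negatively worded.
answers = ["Agree", "Strongly Agree", "Disagree", "Agree"]
score = likert_score(answers, reverse_items={2})   # 4 + 5 + (6-2) + 4 = 17
```

Reverse-coding matters: without it, agreement with a negatively worded statement would cancel out agreement with the positively worded ones.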

You can organize the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire easier and quicker, but it may lead to bias. Randomization can minimize the bias from order effects.

Questionnaires can be self-administered or researcher-administered.

Researcher-administered questionnaires are interviews that take place by phone, in-person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

Bhandari, P. (2023, June 22). Questionnaire Design | Methods, Question Types & Examples. Scribbr. Retrieved April 10, 2024, from https://www.scribbr.com/methodology/questionnaire/

Selecting, designing, and developing your questionnaire


Data supplement

Posted as supplied by author

Further illustrative examples

Table A Examples of research questions for which a questionnaire may not be the most appropriate design

Table B Pros and cons of open and closed-ended questions

Table C Checklist for developing a questionnaire

Table D Types of sampling techniques for questionnaire research

Table E Critical appraisal checklist for a questionnaire study




21 Questionnaire Templates: Examples and Samples


Questionnaire: Definition

A questionnaire is defined as a market research instrument that consists of questions or prompts designed to elicit and collect responses from a sample of respondents. A questionnaire is typically a mix of open-ended questions and close-ended questions; the former allow respondents to express their views in detail.

A questionnaire can be used in both qualitative market research and quantitative market research, with different types of questions suited to each.


Types of Questionnaires

A questionnaire can be either structured or unstructured (free-flowing). To explain this further:

  • Structured Questionnaires: A structured questionnaire helps collect quantitative data . In this case, the questionnaire is designed to collect a very specific type of information. It can be used to initiate a formal enquiry or to collect data to prove or disprove a prior hypothesis.
  • Unstructured Questionnaires: An unstructured questionnaire collects qualitative data . The questionnaire in this case has a basic structure and some branching questions, but nothing that limits the responses of a respondent. The questions are more open-ended.


Types of Questions used in a Questionnaire

A questionnaire can consist of many types of questions . Some of the most widely used question types are:

  • Open-Ended Questions: One of the most commonly used question types in a questionnaire is the open-ended question . These questions help collect in-depth data from a respondent, as there is wide scope to respond in detail.
  • Dichotomous Questions: The dichotomous question is a “yes/no” close-ended question . It is generally used when basic validation is needed and is the easiest question type in a questionnaire.
  • Multiple-Choice Questions: An easy question type to administer and respond to is the multiple-choice question . These are close-ended questions that allow either a single selection or multiple selections. Each multiple-choice question consists of an incomplete stem (the question), the right answer or answers, close alternatives, and distractors (incorrect answers). Depending on the objective of the research, a mix of these option types can be used.
  • Net Promoter Score (NPS) Question: Another commonly used question type is the Net Promoter Score (NPS) question , in which a single question collects data on how likely respondents are to recommend the topic in question.
  • Scaling Questions: Scaling questions are widely used in questionnaires because they make responding very easy. These questions are based on the principles of the four measurement scales – nominal, ordinal, interval, and ratio .
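As an illustration of the NPS question mentioned above, the standard scoring rule (percentage of promoters rating 9-10 minus percentage of detractors rating 0-6) can be computed like this; the ratings are toy data:

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 'how likely are you to recommend' ratings.

    NPS = %promoters (9-10) - %detractors (0-6), yielding a value
    between -100 and 100; passives (7-8) are counted in the total only.
    """
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / n)

# Toy data: ten respondents' recommendation ratings.
ratings = [10, 9, 8, 7, 6, 9, 10, 3, 8, 9]
nps = net_promoter_score(ratings)   # 5 promoters, 2 detractors -> 30
```

A single-number summary like this is easy to track over time, which is part of why the NPS question is so widely used.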

Questionnaires help enterprises collect valuable data to make well-informed business decisions. Powerful tools available in the market allow using multiple question types, ready-to-use survey templates, robust analytics, and many more features to conduct comprehensive market research.


For example, suppose an enterprise wants to conduct market research to understand what pricing would be best for a new product to capture a higher market share. In such a case, a questionnaire for competitor analysis can be sent to the targeted audience using powerful market research survey software , helping the enterprise conduct 360-degree market research and make strategic business decisions.

Now that we have learned what a questionnaire is and its use in market research , below are examples and samples of widely used questionnaire templates on the QuestionPro platform:


Customer Questionnaire Templates: Examples and Samples

QuestionPro specializes in end-to-end Customer Questionnaire Templates that can be used to evaluate the customer journey, from first engaging with a brand to continued use and willingness to recommend it. These templates are excellent samples from which to build your own questionnaire and begin testing customer satisfaction and experience based on customer feedback.



Employee & Human Resource (HR) Questionnaire Templates: Examples and Samples

QuestionPro has built a huge repository of employee questionnaires and HR questionnaires that can be readily deployed to collect feedback from the workforce of an organization on multiple parameters like employee satisfaction, benefits evaluation, manager evaluation , and exit formalities. These templates provide a holistic overview for collecting actionable data from employees.

Community Questionnaire Templates: Examples and Samples

The QuestionPro repository of community questionnaires helps collect varied data on all community aspects. This template library includes popular questionnaires such as community service, demographic questionnaires, psychographic questionnaires, personal questionnaires and much more.

Academic Evaluation Questionnaire Templates: Examples and Samples

Another widely used section of QuestionPro questionnaire templates is the academic evaluation questionnaires . These questionnaires are crafted to collect in-depth data about academic institutions, the quality of teaching provided, and extra-curricular activities, as well as feedback about other educational activities.

  • Open access
  • Published: 27 January 2022

The most used questionnaires for evaluating satisfaction, usability, acceptance, and quality outcomes of mobile health

  • Sadrieh Hajesmaeel-Gohari 1 ,
  • Firoozeh Khordastan 2 ,
  • Farhad Fatehi 3 , 4 ,
  • Hamidreza Samzadeh 5 &
  • Kambiz Bahaadinbeigy 6  

BMC Medical Informatics and Decision Making volume  22 , Article number:  22 ( 2022 ) Cite this article


Various questionnaires are used for evaluating satisfaction, usability, acceptance, and quality outcomes of mobile health (mHealth) services. Using the best one to meet the needs of an mHealth study is a challenge for researchers. Therefore, this study aimed to review and determine the frequently used questionnaires for evaluating the mentioned outcomes of mHealth services.

The PubMed database was searched for conducting this review in April 2021. Papers that used a referenced questionnaire to evaluate the satisfaction, usability, acceptance, or quality outcomes of mHealth were included. The first author’s name, year of publication, evaluation outcome, and evaluation questionnaire were extracted from relevant papers. Data were analyzed using descriptive statistics.

In total, 247 papers were included in the study. Questionnaires were used for usability (40%), quality (34.5%), acceptance (8.5%), and satisfaction (4%) outcomes, respectively. System usability scale (36.5%), mobile application rating scale (35.5%), post study system usability questionnaire (6%), user mobile application rating scale (5%), technology acceptance model (4.5%), computer system usability questionnaire (2.5%), net promoter score (2%), health information technology usability evaluation scale (2%), the usefulness, satisfaction, and ease of use (1.5%), client satisfaction questionnaire (1.5%), unified theory of acceptance and use of technology (1.5%), questionnaire for user interaction satisfaction (1%), user experience questionnaire (1%), and after-scenario questionnaire (1%) were the most used questionnaires, respectively.

Despite the existence of special questionnaires for evaluating several outcomes of mHealth, general questionnaires with fewer items and higher reliability have been used more frequently. Researchers should pay more attention to questionnaires with a goal-based design.


In recent years, mobile phones have found a special role in people's daily lives because of their portability and availability. Mobile phones are also used in the healthcare field for different purposes [ 1 ]. The use of mobile and wireless communication technologies to improve disease management, medication adherence, medical decision-making, medical education, and research is named mobile health (mHealth) [ 2 , 3 ]. mHealth includes the use of simple capabilities of a mobile device such as voice call and short messaging service (SMS) as well as more complex applications designed for medical, fitness, and public health purposes [ 4 ].

mHealth could help patients to monitor and control their health when they do not have access to healthcare providers [ 1 ]. Along with the potential benefits of mHealth, some factors such as perceived ease of use, perceived usefulness, content quality and accuracy, and consumer attitude can influence the use of this technology [ 5 ]. Therefore, evaluating mHealth services in terms of different aspects such as usability, user satisfaction and acceptance, and quality is important.

Usability is defined as “the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use” by ISO 9241-11 [ 6 ]. A review study showed that about 88% of the studies that evaluated the usability of mobile applications used the above-mentioned definition [ 7 ]. There are two general methods for usability evaluation, including user evaluation and expert inspection [ 8 ]. User satisfaction is defined as “the net feeling of pleasure or displeasure that results from aggregating all the benefits that a person hopes to receive from interaction with the information system” [ 9 ]. The Cambridge Dictionary defines acceptance as a “general agreement that something is satisfactory or right” [ 10 ]. In the technology acceptance lifecycle, acceptance is measured in both the initial and sustained use stages of mHealth services [ 11 ]. As Stoyanov et al. indicated in their study, the quality of mHealth applications is evaluated in different categories, including engagement, functionality, aesthetics, information quality, and subjective quality [ 12 ].

There are various methods for evaluating mHealth services, such as questionnaires, interviews, and observation [ 11 , 13 , 14 ]. Researchers use a variety of general and specified questionnaires for evaluating different aspects of mHealth services. Studies usually use previously designed questionnaires [ 15 , 16 ] and sometimes design a new one with compliance to their needs [ 12 , 17 ]. The validity and reliability of the used questionnaires are important in any scientific project.

Due to the existence of a large number of questionnaires, selecting and using the appropriate one to meet the needs of an mHealth study is a challenge for researchers. To the best of our knowledge, no study has reviewed and listed the most appropriate questionnaires for evaluating different outcomes of mHealth services including satisfaction, usability, acceptance, and quality. Therefore, this study aimed to review and introduce the frequently used questionnaires for evaluating the mentioned outcomes. The results of this study will help other investigations to select the appropriate goal-based questionnaire.

Database and date

The PubMed database was searched for this review. The search was performed on 18 April 2021 without date restriction.

Search strategy

We used three categories of keywords to build the search strategy (Table 1 ). The keywords within each category were combined with the OR Boolean operator, and the results of these searches were then combined with the AND Boolean operator to retrieve relevant papers. The search was conducted in the Title/Abstract field and filtered to English-language papers.
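The Boolean structure of such a search strategy can be sketched in Python; the keyword categories below are hypothetical stand-ins, since Table 1 is not reproduced here.

```python
# Hypothetical keyword categories standing in for the paper's Table 1.
categories = [
    ["mHealth", "mobile health", "mobile application"],
    ["questionnaire", "survey", "scale"],
    ["usability", "satisfaction", "acceptance", "quality"],
]

def build_query(categories):
    """OR the synonyms within each category, then AND the categories,
    mirroring the Boolean structure described in the search strategy."""
    groups = ["(" + " OR ".join(f'"{kw}"[Title/Abstract]' for kw in cat) + ")"
              for cat in categories]
    return " AND ".join(groups)

query = build_query(categories)
```

Building the query programmatically makes it easy to document, reuse, and adapt the strategy when the review is updated.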

Inclusion criteria

Studies meeting the following criteria were included:

Original observational and interventional research papers in which a referenced questionnaire, or a questionnaire that had been used at least twice in previous studies, was used to evaluate the satisfaction, usability, acceptance, or quality outcomes of mHealth.

App review studies that used the Mobile Application Rating Scale (MARS) for the evaluation of mHealth applications.

Exclusion criteria

Studies meeting any of the following criteria were excluded:

Review, protocol, conference, and report papers

Papers without full text

Papers that did not use mHealth services

Papers that did not evaluate satisfaction, usability, acceptance, or quality outcomes

Papers that did not use a referenced questionnaire

Papers that did not include details about the questionnaires used

Paper selection

In the first stage, two authors (S.H, F.Kh) reviewed all the retrieved papers based on title and abstract. Next, the same authors assessed the full text of the selected papers. In cases of disagreement, the opinion of another author (K.B) was sought. Finally, a list of included papers was compiled.

Data extraction

The first author’s name, year of publication, evaluation outcome, and evaluation questionnaire were extracted from the included papers.

Data analysis

Data were analyzed using descriptive statistics including frequency and frequency percentage.
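A minimal sketch of this kind of descriptive analysis, counting frequencies and frequency percentages over toy extraction data:

```python
from collections import Counter

def frequency_table(values):
    """Frequency and percentage of each category, as in the paper's
    descriptive analysis of outcomes and questionnaires."""
    counts = Counter(values)
    n = len(values)
    return {cat: (count, round(100 * count / n, 1))
            for cat, count in counts.most_common()}

# Toy extraction data: the evaluation outcome coded for each paper.
outcomes = (["usability"] * 4 + ["quality"] * 3
            + ["acceptance"] * 2 + ["satisfaction"])
table = frequency_table(outcomes)
```

Each entry maps a category to its count and its share of the total, which is exactly the form reported in the Results section below.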

Searching the PubMed database resulted in 1028 papers. The title and abstract of all these papers were screened. A total of 683 papers were excluded. After that, the full text of the 345 remaining papers was reviewed. Finally, 247 papers were included for extracting data (Fig.  1 ).

figure 1

The process of finding and including the papers

The extracted data from the included papers are presented in Additional file 1 : Appendix 1. The main results were as follows:

Year of publication

The included papers have been published since 2014, and the number of papers has increased since then (Fig.  2 ). About two-thirds of the papers (67%) were published in the last three years (2019, 2020, and 2021).

figure 2

The number of papers based on the year of publication

Evaluation outcome

The evaluation outcomes in this study were the usability, satisfaction, acceptance, and quality of mHealth services. Usability was the outcome most often assessed by questionnaire (n = 99, 40%), followed by quality (n = 85, 34.5%), acceptance (n = 21, 8.5%), and satisfaction (n = 10, 4%). The remaining papers evaluated more than one outcome: usability and satisfaction (n = 10, 4%), usability and quality (n = 9, 3.5%), usability and acceptance (n = 9, 3.5%), satisfaction and quality (n = 3, 1%), and satisfaction and acceptance (n = 3, 1%).

Evaluation questionnaire

The most used questionnaires (more than two times) for evaluating mHealth services are shown in Table 2 . The other questionnaires have been used in 17 papers (7%). Forty-three (17.5%) papers used more than one questionnaire.

This study was performed to review the most frequently used questionnaires for evaluating satisfaction, usability, acceptance, and quality outcomes of mHealth services. Usability is the most evaluated outcome in the mHealth studies. SUS, PSSUQ, and CSUQ were the top three most used questionnaires for evaluating the usability of mHealth services, respectively. The two most used questionnaires for evaluating the quality of mHealth applications were MARS and uMARS. In addition, TAM and UTAUT were the most used questionnaires for measuring the user acceptance of mHealth services. The three most used questionnaires for evaluating user satisfaction were NPS, CSQ, and GEQ.

Usability evaluation questionnaires

The present study showed that the SUS questionnaire has been used much more than similar questionnaires such as PSSUQ and CSUQ for evaluating the usability of mHealth services. SUS is a general questionnaire used for evaluating the usability of electronic systems such as mobile devices. Compared with questionnaires such as CSUQ, SUS is a quicker tool for judging the perceived usability of a system because it has fewer items and fewer scale points. It also includes a question on the user's satisfaction with the digital solution; dedicated satisfaction questionnaires evaluate only that outcome, whereas satisfaction is also contained within the usability outcome [18, 19]. Because of these features and its reproducibility, reliability, and validity, researchers and evaluators of mHealth services have frequently used the SUS questionnaire. Another study that reviewed the most used questionnaires for evaluating telemedicine services also showed that SUS is the most used general questionnaire after the Telehealth Usability Questionnaire (TUQ), a specific questionnaire for evaluating the usability of telemedicine systems [35].
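For context, the standard SUS scoring rule (a well-known convention, not specific to this review) converts ten 1-5 ratings into a 0-100 score:

```python
def sus_score(responses):
    """Standard SUS scoring: 10 items rated 1-5. Odd-numbered items
    contribute (rating - 1), even-numbered items contribute (5 - rating),
    and the sum is multiplied by 2.5 to give a 0-100 score."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Toy data: one respondent's ratings for the 10 SUS items.
score = sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1])
```

The alternating rule exists because SUS alternates positively and negatively worded statements, so even-numbered items must be reverse-coded before summing.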

Although MAUQ was specifically designed for evaluating the usability of mHealth applications and considered both interactive and standalone mHealth applications [ 17 ], it was rarely used in the studies that were included in our review. This lack of use might be due to the fact that MAUQ was introduced 2 years ago, and researchers are less familiar with this questionnaire. It is recommended that researchers and evaluators of mHealth services use such questionnaires that were specifically designed for evaluating these services.

Quality evaluation questionnaires

MARS and its user version (uMARS) were the most used questionnaires for assessing the quality of mHealth applications. To use MARS for evaluating mHealth applications, raters should be mHealth professionals. Because of this limitation, uMARS was designed to be administered by end-users without special expertise. The importance of the quality and reliability of the information and content provided in mHealth applications, and the impact that this content has on people's health, led to the design of MARS [12]. MARS prompted researchers to look at another dimension of mHealth that significantly impacts the practical and safe use of mHealth applications, which has led to the use of these questionnaires in many studies.

Acceptance evaluation questionnaires

This study revealed that TAM and UTAUT were the most used questionnaires for measuring mHealth acceptance. These questionnaires were derived from two models of the same names. Generally, TAM and UTAUT are the most used acceptance models in health informatics because of their simplicity [36]. Both models focus on the usefulness and ease of use of technology. Since UTAUT derives from eight models, including TAM, it evaluates two additional factors, the social environment and organizational infrastructure, that may affect the adoption of a new technology [36]. However, since TAM and UTAUT were not developed in healthcare settings, the emotional, organizational, and cultural factors that may influence technology acceptance in such settings are not covered by these two questionnaires [23, 30]. Therefore, researchers in health informatics would do better to design acceptance questionnaires tailored to the target systems.

Satisfaction evaluation questionnaires

The present research revealed that NPS is the most widely used tool for measuring the satisfaction of mHealth users. NPS is a very short instrument for evaluating client satisfaction, consisting of only one question [25]. The fact that this scale has only one item has probably contributed to its wide use, but it should be taken into account that a single question cannot identify the various factors that affect user satisfaction with a service. After NPS, CSQ and GEQ were the most used questionnaires in the reviewed articles. Two characteristics of CSQ may explain its usage: first, it considers the quality of different aspects, such as procedure, environment, staff, service, and outcome; second, despite this comprehensiveness, it has only eight items [29]. Studies that used mobile-based games to provide mHealth services used GEQ [37, 38] because it is a specific, comprehensive, and practical questionnaire that measures game user satisfaction [34]. Melin et al. presented a questionnaire for assessing the satisfaction of mHealth application users [39], but none of the papers included in our study used it, probably because it is a new tool with which researchers are less familiar. We recommend that mHealth researchers use this specific questionnaire in future studies.

Evaluation outcomes

Most of the included papers evaluated the usability of mHealth services using a questionnaire. Usability is a critical issue that affects willingness to use a system, so it is essential to evaluate this outcome in different phases of system development. The questionnaire is the most commonly used method for evaluating the usability of a mobile application because of its simplicity of administration and data analysis [ 17 ]. A review study also showed that the usability of mHealth applications is mostly assessed using questionnaires [ 40 ]. Another study revealed that questionnaires were mostly used for evaluating the user satisfaction outcome of telemedicine [ 35 ]. The difference between our results and those of that study may stem from the fact that mHealth services are mostly delivered through an application; evaluating the application's user interface is therefore very important and should be considered for effective use [ 40 ].
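The simplicity of questionnaire-based analysis can be seen in the scoring of SUS, the usability instrument mentioned most often in this review. A minimal sketch of the standard published SUS scoring rule (ten items answered 1–5, odd items positively worded, even items negatively worded, total rescaled to 0–100); the example answers are invented:

```python
def sus_score(answers):
    """Score a single System Usability Scale response (standard rule).

    Odd-numbered items contribute (answer - 1), even-numbered items
    contribute (5 - answer); the sum of contributions is multiplied
    by 2.5 to give a 0-100 score.
    """
    if len(answers) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = 0
    for i, a in enumerate(answers, start=1):
        if not 1 <= a <= 5:
            raise ValueError("answers must be on a 1-5 scale")
        total += (a - 1) if i % 2 == 1 else (5 - a)
    return total * 2.5

# Invented example response
answers = [4, 2, 5, 1, 4, 2, 4, 2, 5, 1]
print(sus_score(answers))  # 85.0
```

Scoring a whole study sample is just a map over responses followed by a mean, which is part of why questionnaires dominate usability evaluation in this field.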

Limitations

To the best of our knowledge, this is the first study to review the most used questionnaires for evaluating the satisfaction, usability, acceptance, and quality outcomes of mHealth services. Nevertheless, this study has some limitations. We searched only the PubMed database to retrieve relevant papers, and we restricted our search to the Title/Abstract field. Moreover, we excluded review papers and included only the app review studies that used MARS. These restrictions may have caused some relevant papers to be missed.

Conclusions

This study showed that usability and quality were the most frequently considered outcomes in the mHealth field. Since user acceptance of and satisfaction with mHealth services lead to more engagement with these applications, these outcomes deserve more attention. Although questionnaires designed specifically for measuring several mHealth outcomes exist, general questionnaires such as SUS, PSSUQ, TAM, CSUQ, Health-ITUES, USE, CSQ, UTAUT, QUIS, UEQ, and ASQ are mostly used for evaluating mHealth services. Moreover, the results showed that researchers prefer questionnaires with high reliability and fewer items. Therefore, when selecting the best-fitting questionnaires for evaluating different outcomes of mHealth services, it is advisable to pay particular attention to reliability and to the number of questions and items.
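The recommendation above is to weigh reliability when choosing an instrument. The text does not name a specific reliability coefficient, so purely as an illustration, here is a minimal sketch of Cronbach's alpha, the internal-consistency statistic most commonly reported for multi-item questionnaires (the data are made up):

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for item_scores: one list of respondent scores
    per item, all the same length.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(item_scores)
    n = len(item_scores[0])

    def pvariance(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[j] for item in item_scores) for j in range(n)]
    sum_item_var = sum(pvariance(item) for item in item_scores)
    return k / (k - 1) * (1 - sum_item_var / pvariance(totals))

# Three items, four respondents (invented data)
items = [[3, 4, 4, 5], [2, 4, 3, 5], [3, 5, 4, 4]]
print(round(cronbach_alpha(items), 3))  # 0.857
```

Values of alpha near or above 0.8, as in this toy example, are conventionally read as good internal consistency; combined with a low item count, that is the profile of the questionnaires this review found to be preferred.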

Availability of data and materials

Not applicable.

Abbreviations

  • mHealth: Mobile health
  • ISO: International Organization for Standardization
  • MARS: Mobile application rating scale
  • SUS: System usability scale
  • PSSUQ: Post-study system usability questionnaire
  • uMARS: User mobile application rating scale
  • TAM: Technology acceptance model
  • CSUQ: Computer system usability questionnaire
  • NPS: Net promoter score
  • Health-ITUES: Health information technology usability evaluation scale
  • USE: Usefulness, satisfaction, and ease of use
  • CSQ: Client satisfaction questionnaire
  • UTAUT: Unified theory of acceptance and use of technology
  • QUIS: Questionnaire for user interaction satisfaction
  • UEQ: User experience questionnaire
  • ASQ: After-scenario questionnaire
  • MAUQ: mHealth app usability questionnaire
  • GEQ: Game experience questionnaire
  • TUQ: Telehealth usability questionnaire
  • SUTAQ: Service user technology acceptability questionnaire

Kao CK, Liebovitz DM. Consumer mobile health apps: current state, barriers, and future directions. PM & R J Inj Funct Rehabil. 2017;9(5s):S106–15.


Park Y-T. Emerging new era of mobile health technologies. Healthc Inform Res. 2016;22(4):253–4.


Singh K, Landman AB. Mobile health. In: Sheikh A, Bates DW, Wright A, Cresswell K, editors. Key advances in clinical informatics: Transforming health care through health information technology. London: Academic Press; 2018. p. 183–96.

World Health Organization. mHealth: New horizons for health through mobile technologies. 2011. Available from: https://www.who.int/goe/publications/goe_mhealth_web.pdf .

Mangkunegara C, Azzahro F, Handayani P. Analysis of factors affecting user's intention in using mobile health application: a case study of Halodoc. 2018. p. 87–92.

ISO9241-11. Ergonomics of human–system interaction—Part 11: usability: definitions and concepts 2018. Available from: https://www.iso.org/obp/ui/#iso:std:iso:9241:-11:ed-2:v1:en .

Weichbroth P. Usability of mobile applications: a systematic literature study. IEEE Access. 2020;8:55563–77.

Swaid S. Usability of mobile apps: an integrated approach. AHFE; July 17–21, 2017.

Seddon PB. A respecification and extension of the DeLone and McLean model of IS success. Inf Syst Res. 1997;8(3):240–53.

Cambridge Dictionary-Cambridge University Press. Acceptance. 2020. Available from: https://dictionary.cambridge.org/dictionary/english/acceptance .

Nadal C, Sas C, Doherty G. Technology acceptance in mobile health: scoping review of definitions, models, and measurement. J Med Internet Res. 2020;22(7):e17256.

Stoyanov SR, Hides L, Kavanagh DJ, Zelenko O, Tjondronegoro D, Mani M. Mobile app rating scale: a new tool for assessing the quality of health mobile apps. JMIR mHealth uHealth. 2015;3(1):e27.

Jake-Schoffman DE, Silfee VJ, Waring ME, Boudreaux ED, Sadasivam RS, Mullen SP, et al. Methods for evaluating the content, usability, and efficacy of commercial mobile health apps. JMIR mHealth uHealth. 2017;5(12):e190.

Maramba I, Chatterjee A, Newman C. Methods of usability testing in the development of eHealth applications: a scoping review. Int J Med Inform. 2019;126:95–104.

Alanzi T, Istepanian R, Philip N. Design and usability evaluation of social mobile diabetes management system in the gulf region. JMIR Res Protoc. 2016;5(3):e93.

Bakogiannis C, Tsarouchas A, Mouselimis D, Lazaridis C, Theofillogianakos EK, Billis A, et al. A patient-oriented app (ThessHF) to improve self-care quality in heart failure: from evidence-based design to pilot study. JMIR mHealth uHealth. 2021;9(4):e24271.

Zhou L, Bao J, Setiawan IMA, Saptono A, Parmanto B. The mHealth app usability questionnaire (MAUQ): development and validation study. JMIR mHealth uHealth. 2019;7(4):e11500.

Lewis JR. The system usability scale: past, present, and future. Int J Hum Comput Interact. 2018;34(7):577–90.

Brooke J. SUS: a "quick and dirty" usability scale. In: Usability evaluation in industry. 1996. p. 189–94.

Terhorst Y, Philippi P, Sander LB, Schultchen D, Paganini S, Bardus M, et al. Validation of the mobile application rating scale (MARS). PLoS ONE. 2020;15(11):e0241480.


Lewis JR, editor. Psychometric evaluation of the post-study system usability questionnaire: the PSSUQ. In: Proceedings of the human factors society annual meeting. Los Angeles: Sage Publications; 1992.

Stoyanov SR, Hides L, Kavanagh DJ, Wilson H. Development and validation of the user version of the mobile application rating scale (uMARS). JMIR mHealth uHealth. 2016;4(2):e72.

Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. J MIS Q. 1989;13:319–40.

Lewis JR. IBM computer usability satisfaction questionnaires: psychometric evaluation and instructions for use. Int J Hum Comput Interact. 1995;7(1):57–78.

Reichheld FF. The one number you need to grow. Harv Bus Rev. 2003;81(12):46–54.


Yen P-Y, Wantland D, Bakken S. Development of a customizable health it usability evaluation scale. In: AMIA annual symposium proceedings/AMIA symposium, vol. 2010. 2010. p. 917–21.

Lund A. Measuring usability with the USE questionnaire. Usability and user experience newsletter of the STC Usability SIG (Usability Interface). 2001;8:3–6.

Gao M, Kortum P, Oswald F, editors. Psychometric evaluation of the use (usefulness, satisfaction, and ease of use) questionnaire for reliability and validity. In: Proceedings of the human factors and ergonomics society annual meeting. Los Angeles: SAGE Publications; 2018

Larsen DL, Attkisson CC, Hargreaves WA, Nguyen TD. Assessment of client/patient satisfaction: development of a general scale. Eval Program Plan. 1979;2(3):197–207.

Venkatesh V, Morris MG, Davis GB, Davis FD. User acceptance of information technology: toward a unified view. MIS Q. 2003;27:425–78.

Chin JP, Diehl VA, Norman KL, editors. Development of an instrument measuring user satisfaction of the human-computer interface. In: Proceedings of the SIGCHI conference on human factors in computing systems; 1988.

Laugwitz B, Held T, Schrepp M, editors. Construction and evaluation of a user experience questionnaire. HCI and usability for education and work. Berlin: Springer; 2008.

Lewis J. Psychometric evaluation of an after-scenario questionnaire for computer usability studies: the ASQ. SIGCHI Bull. 1991;23:78–81.

Poels K, de Kort YAW, IJsselsteijn WA. D3.3: Game Experience Questionnaire. Eindhoven: Technische Universiteit Eindhoven; 2007.

Hajesmaeel-Gohari S, Bahaadinbeigy K. The most used questionnaires for evaluating telemedicine services. BMC Med Inform Decis Mak. 2021;21(1):36.

Ammenwerth E. Technology acceptance models in health informatics: TAM and UTAUT. Stud Health Technol Inform. 2019;263:64–71.

De Cock N, Van Lippevelde W, Vangeel J, Notebaert M, Beullens K, Eggermont S, et al. Feasibility and impact study of a reward-based mobile application to improve adolescents’ snacking habits. Public Health Nutr. 2018;21(12):2329–44.

Lawitschka A, Buehrer S, Bauer D, Peters K, Silbernagl M, Zubarovskaya N, et al. A web-based mobile app (INTERACCT App) for adolescents undergoing cancer and hematopoietic stem cell transplantation aftercare to improve the quality of medical information for clinicians: observational study. JMIR mHealth uHealth. 2020;8(6):e18781.

Melin J, Bonn SE, Pendrill L, Lagerros YT. A questionnaire for assessing user satisfaction with mobile health apps: development using rasch measurement theory. JMIR mHealth uHealth. 2020;8(5):e15909.

Ansaar MZ, Hussain J, Bang J, Lee S, Shin KY, Woo KY, editors. The mHealth applications usability evaluation review. In: 2020 International conference on information networking (ICOIN). IEEE; 2020.


Acknowledgements

We would like to express our gratitude to the Institute for Future Studies in Health of Kerman University of Medical Sciences for providing the research environment.

This study was funded by Kerman University of Medical Sciences with the research ID 400000266.

Author information

Authors and Affiliations

Medical Informatics Research Center, Institute for Futures Studies in Health, Kerman University of Medical Sciences, Kerman, Iran

Sadrieh Hajesmaeel-Gohari

Department of Medical Informatics, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran

Firoozeh Khordastan

Centre for Health Services Research, The University of Queensland, Brisbane, Australia

Farhad Fatehi

School of Psychological Sciences, Monash University, Melbourne, Australia

Department of Health Information Sciences, Faculty of Management and Medical Information Sciences, Kerman University of Medical Sciences, Kerman, Iran

Hamidreza Samzadeh

Gastroenterology and Hepatology Research Center, Institute of Basic and Clinical Physiology Sciences, Kerman University of Medical Sciences, Kerman, Iran

Kambiz Bahaadinbeigy


Contributions

SH, FF, and KB contributed to designing the study. The selection and evaluation of the papers and data extraction were done by SH and FKh. SH, HS, and KB participated in drafting the manuscript. All authors read and approved the final version of the manuscript.

Corresponding author

Correspondence to Kambiz Bahaadinbeigy .

Ethics declarations

Ethics approval and consent to participate.

This research was approved by the Ethics Committee of Kerman University of Medical Sciences with the Ethical ID IR.KMU.REC.1400.197.

Consent to publish

Competing interests.

The authors declare that there are no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

. The extracted data from the included papers.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Hajesmaeel-Gohari, S., Khordastan, F., Fatehi, F. et al. The most used questionnaires for evaluating satisfaction, usability, acceptance, and quality outcomes of mobile health. BMC Med Inform Decis Mak 22 , 22 (2022). https://doi.org/10.1186/s12911-022-01764-2


Received: 03 August 2021

Accepted: 21 January 2022

Published: 27 January 2022

DOI: https://doi.org/10.1186/s12911-022-01764-2


Keywords: Questionnaire

BMC Medical Informatics and Decision Making

ISSN: 1472-6947


Questionnaire surveys in medical research

Affiliation

  • 1 Gastrointestinal Research Unit, Leicester General Hospital, UK.
  • PMID: 11133122
  • DOI: 10.1046/j.1365-2753.2000.00263.x

MeSH terms

  • Health Care Surveys / methods*
  • Interviews as Topic
  • Research Design
  • Surveys and Questionnaires / standards*

Examples

Medical Questionnaires


Are you stuck between creating a questionnaire template from scratch and downloading a pre-designed one? Or maybe you don't have enough time to prepare the document and just need a quick solution? Whatever the reason, here is a list of 10+ sample questionnaire templates that you can download for free and use straight away.

Best Medical Questionnaires Examples & Templates

1. New Patient Medical Questionnaire Template

New Patient Questionnaire

Size: 42 KB

If you are a new patient at a hospital, note that a physician in that healthcare facility will require you to present a medical report summary. You can ask a doctor from the hospital you visit frequently to write the report. However, they may take hours, days, or even weeks to provide the summary, which is a long wait before the new doctor can begin consultation, testing, and treatment. The best thing to do is fill out a new patient questionnaire form and present it to the new doctor. This form will help them understand your medical history and give them the confidence to go ahead with evaluation and treatment.

2. Patient Medical Health History Questionnaire Template

Patient Health History Example

Size: 127 KB

Every healthcare facility asks patients to present a medical summary report. Doctors want to know not only what you are going through right now but also your past diagnoses and recommended treatments. The primary goal of a medical history is to enable physicians to understand your health status, order accurate tests, and recommend the best treatments. The best way to present your medical history to a physician is by filling out a medical questionnaire. You can download this PDF file and use it to present the relevant medical information; click the link above to get the document for free.

3. Childhood Medical Health Assessment Questionnaire

Childhood Health Assessment

Before you take your child to a healthcare facility for examination and treatment, it is important to prepare their medical history and bring it with you to the hospital. This document is extremely important, so make sure the information you present is as accurate as possible. You could write a letter about your child's health and present it to your doctor, but this isn't as effective as using a questionnaire template. A questionnaire is easy to fill out, and a pre-built one is a good way to get started. Download this DOC file, use it to fill out answers to questions about your child's health, then print the document and present it to your doctor.

4. Children Health Sample Medical Questionnaire

Children's Health Sample

Size: 136 KB

Every parent wants to raise a happy, healthy child. Unfortunately, not every parent meets this goal, often for a simple reason: the family lacks a healthy nutrition program for their young ones. If you are a parent who doesn't yet have a good diet plan for your child, this is the best time to seek the help of a nutritionist. You will need to fill out a questionnaire form that helps the nutritionist understand the diet structure you have in place. If an adjustment is needed, the nutritionist will advise you accordingly. Here's a sample questionnaire template that you can download and use.

5. Student Health Assessment Medical Questionnaire Template

Student Health Assessment

Size: 12 KB

The purpose of this sample evaluation template is to help your doctor understand your current health status, so make sure you give an accurate answer to every question asked. Once your doctor understands your past and current health records and status, they'll be able to recommend the best treatment procedure and medication for your condition. Note that this questionnaire contains confidential information that unauthorized persons shouldn't access. Also, make sure that the information you present to your doctor is accurate, because they will use it to recommend the best treatment options for you.

6. Sample Health Risk Assessment Questionnaire

Health Risk Assessment Questionnaire

Size: 15 KB

Answering a risk assessment questionnaire about your health has two benefits. First, it helps you understand yourself as a person: by examining your mental wellness, emotional awareness, intellectual abilities, spiritual beliefs, personal values, fitness goals, and nutritional habits, you can develop a healthy lifestyle for the good of your own health. Second, the information can help your doctor determine whether you are healthy or susceptible to illness. It is important to answer the questionnaire carefully; the information you provide must be precise and true. You can download this questionnaire template to save time.

7. Personal Medical Questionnaire Template

Personal Health Assessment Example

Size: 93 KB

Just how well do you know your own health? Do you ever sit down to do a self-evaluation to determine if you are okay? If not, this may be the best time to start. At the end of the day, a personal medical evaluation helps you determine whether you are physically fit and mentally well or in need of medical assistance. Use this sample template to do the analysis.

8. Pre Employment Medical Questionnaire Template

Pre Employment Health Screening

Size: 407 KB

Before a potential employer hires you, they'll ask you to provide a medical report summary from your doctor. It is in your best interest to provide this report because it helps the employer determine whether you are fit for the job. You can download and use this questionnaire template for this purpose.

9. Printable Medical Questionnaire Template

Printable Medical Questionnaire Template

Size: 27 KB

Are you looking for a very simple medical questionnaire template that you can download and use straight away? Or, maybe you don’t have time to create one from scratch and a pre-built option can work just fine for you? Here is a print-ready sample that you can download.

10. Free Health Medical Questionnaire Template

Free Health Medical Questionnaire Template

Size: 82 KB

You can create a medical questionnaire template from scratch, but that will take a lot of your time. The better option is to download and use a pre-designed template. This not only saves you time but also ensures that your questionnaires are ready on time.



An official website of the Department of Health & Human Services

AHRQ: Agency for Healthcare Research and Quality


Informing Improvement in Care Quality, Safety, and Efficiency



Questionnaire/Survey

National Survey of Physicians on Practice Experience

This is a questionnaire designed to be completed by physicians in ambulatory and inpatient settings. The tool includes questions to assess the current state of clinical decision support systems, electronic health records, practice management systems, and secure messaging.

2009 International Survey of Primary Care Doctors

This is a questionnaire designed to be completed by physicians in an ambulatory setting. The tool includes questions to assess the usability of electronic health records and electronic prescribing.

Massachusetts Survey of Physicians and Computer Technology

This is a questionnaire designed to be completed by physicians in an ambulatory setting. The tool includes questions to assess user's perceptions of electronic health records.

Sharing Electronic Behavior Health Records: A Nebraska Perspective

This is a questionnaire designed to be completed by physicians, implementers, and nurses across a health care system setting. The tool includes questions to assess the benefit, current state, and usability of electronic health records and health information exchange, as well as users' perceptions and attitudes toward them.

Canada Health Infoway System And Use Assessment Survey

This is a questionnaire designed to be completed by administrators, clinical staff, and pharmacists across a health care system. The tool includes questions to assess the usability of clinical decision support systems, electronic health records, and enterprise systems.

Community Chronic Care Network (CCCN) Online Publication and Education: User Needs Survey

This is a questionnaire designed to be completed by clinical staff in an ambulatory setting. The tool includes questions to assess the usability of disease registries.

Community Chronic Care Network (CCCN) Stakeholder Survey

This is a questionnaire designed to be completed by administrators, clinical staff, and office staff in an ambulatory setting. The tool includes questions to assess user's perceptions of disease registry.

Community Chronic Care Network (CCCN) Online Registry: User Interviews and Survey Questions

This is a questionnaire designed to be completed by administrators, clinical staff, and office staff in an ambulatory setting. The tool includes questions to assess functionality of disease registries.

Clinical Portal Survey: Mt. Ascutney Hospital and Health Center

This is a questionnaire designed to be completed by nurses, physicians, and hospital staff in an inpatient setting. The tool includes questions to assess user's needs of electronic health records.

Clinician Survey on Quality Improvement, Best Practice Guidelines and Information Technology

This is a questionnaire designed to be completed by physicians, clinical staff, and nurses across a health care system. The tool includes questions to assess user's perceptions and the current state of electronic health records.


50+ SAMPLE Medical Questionnaires in PDF | MS Word


  • Complete Medical Questionnaire
  • Basic Medical Questionnaire Template
  • Health and Medical Questionnaire
  • General Medical Questionnaire
  • Medical Group Questionnaire
  • Medical History Questionnaire Template
  • Physician Medical Questionnaire
  • Initial Medical Questionnaire
  • Medical Exam Questionnaire
  • Workers Medical Status Questionnaire
  • Diabetes Medical Questionnaire
  • Detailed Medical Questionnaire
  • Employee and Family Medical Questionnaire
  • Travel Guard Medical Questionnaire
  • Pre-Employment Medical Questionnaire
  • Confidential Medical Questionnaire
  • Corporate Medical Questionnaire
  • Medical Screening Questionnaire
  • Driver Health Questionnaire Template
  • Individual Medical Questionnaire Template
  • Employment Medical Evaluation Questionnaire
  • Medical Questionnaire Format
  • New Employee Medical Questionnaire
  • International Travel Medical Questionnaire
  • Simple Medical Questionnaire Template
  • Occupational Medical Health Questionnaire
  • Student Medical Questionnaire
  • Transat Medical Questionnaire
  • Medical Business Questionnaire
  • Trail Medical Questionnaire
  • Respirator Medical Evaluation Questionnaire
  • Exposure and Medical Questionnaire
  • Adventures Medical Questionnaire
  • Confidential Medical Questionnaire Template
  • Group Employer Medical Questionnaire
  • Formal Medical Questionnaire Template
  • Respirator Medical Questionnaire Template
  • Medical and Dental Questionnaire
  • Adult New Patient Questionnaire
  • Adult New Patient Questionnaire Example
  • Post Offer Medical Questionnaire
  • Medical Questionnaire for Respiratory Equipment
  • Family Medical History Questionnaire
  • Respiratory Medical Evaluation Questionnaire
  • Coronavirus Medical Questionnaire
  • Pre-Admission Medical Questionnaire
  • Accessibility and Medical Questionnaire
  • Staff Medical Questionnaire
  • Medical and Disability Questionnaire
  • Medical Surveillance Questionnaire
  • Medical Information Questionnaire

How to Create a Medical Questionnaire

Step 1: Consider your purpose
Step 2: Insert the medical questionnaire's parts
Step 3: Prepare clear and direct questions
Step 4: Use an easy-to-answer questionnaire
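The four steps above can be sketched as a minimal data structure. The field names ("purpose", "sections", "items") and the example questions are purely illustrative, not a standard schema; a real form builder would define its own format:

```python
# Step 1: state the purpose; Step 2: organize the parts as sections;
# Step 3: keep questions short and direct; Step 4: prefer closed,
# easy-to-answer item types over free text.
questionnaire = {
    "purpose": "New-patient medical history intake",
    "sections": [
        {
            "title": "Personal details",
            "items": [
                {"question": "What is your date of birth?", "type": "date"},
            ],
        },
        {
            "title": "Medical history",
            "items": [
                {"question": "Do you have any known allergies?",
                 "type": "yes_no"},
                {"question": "Which conditions have you been diagnosed with?",
                 "type": "checkbox",
                 "options": ["Diabetes", "Hypertension", "Asthma", "None"]},
            ],
        },
    ],
}

# Automated check for Step 4: every item uses a closed answer format.
closed = {"date", "yes_no", "checkbox", "choice"}
for section in questionnaire["sections"]:
    for item in section["items"]:
        assert item["type"] in closed, item["question"]
print("all items easy to answer")
```

Encoding the questionnaire this way makes Step 4 mechanically checkable: any free-text item added later would fail the assertion instead of silently making the form harder to answer.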


  • Open access
  • Published: 03 April 2024

Perception, practice, and barriers toward research among pediatric undergraduates: a cross-sectional questionnaire-based survey

  • Canyang Zhan 1 &
  • Yuanyuan Zhang 2  

BMC Medical Education volume  24 , Article number:  364 ( 2024 ) Cite this article



Abstract

Background

Scientific research activities are crucial for the development of clinician-scientists. However, little attention has been paid to the current state of medical research among pediatric medical students in China. This study aims to assess the perceptions, practices, and barriers toward medical research among pediatric undergraduates.

Methods

This cross-sectional study was conducted among third-, fourth-, and fifth-year pediatric students from Zhejiang University School of Medicine in China via an anonymous online questionnaire. Questionnaires were also collected from fifth-year students majoring in other medicine programs [clinical medicine ("5 + 3") and clinical medicine (5-year)].

The response rate of pediatric undergraduates was 88.3% (68/77). The total sample of students enrolled in the study was 124, including 36 students majoring in clinical medicine (“5 + 3”) and 20 students majoring in clinical medicine (5-year). Most students from pediatrics (“5 + 3”) recognized that research was important. Practices in scientific research activities were not satisfactory. A total of 51.5%, 35.3% and 36.8% of the pediatric students participated in research training, research projects and scientific article writing, respectively. Only 4.4% of the pediatric students contributed to publishing a scientific article, and 14.7% had attended medical congresses. None of them had given a presentation at a congress. When compared with fifth-year students in the other medicine programs, the frequency of practices toward research projects and training was lower among the pediatric fifth-year students. Lack of time, lack of guidance and lack of training were perceived as the main barriers to scientific work. Limited English was another obvious barrier for pediatric undergraduates. Pediatric undergraduates preferred to participate in clinical research (80.9%) rather than basic research.

Conclusions

Although pediatric undergraduates recognized the importance of medical research, interest and practices in research still require improvement. Lack of time, lack of guidance, lack of training and limited English were the common barriers to scientific work. Therefore, research training and English improvement were recommended for pediatric undergraduates.


Medical education includes the learning of basic clinical medical knowledge and the cultivation of scientific research abilities. Scientific research, an essential part of medical education, is increasingly important, as it can greatly improve medical care [ 1 , 2 ]. Scientific research activities are crucial for the development of clinician-scientists, who have key roles in clinical research and translational medicine. Therefore, medical education increasingly emphasizes the cultivation of scientific research abilities. Strengthening scientific research training helps students develop independent critical thinking, improve observational skills, and foster problem-solving skills. It has been suggested that developing undergraduate research benefits the students, the faculty mentors, the university or institution, and eventually society [ 2 , 3 ]. As a result, there is a growing trend to integrate scientific research training into undergraduate medical education. Early exposure to scientific research has been recommended for undergraduate medical students [ 4 , 5 ]. In fact, an international questionnaire study showed that among 1625 responses collected from 38 countries, less than half (42.7%) agreed/strongly agreed that their medical schools provided “sufficient training in medical research” [ 6 ]. Training and practice in medical research among undergraduates are not universal. In China, little attention has been paid to the current situation of medical research among undergraduates, especially pediatric medical students.

Due to changes in China’s birth policy (the two-child policy in 2016 and the three-child policy in 2021), child health needs are increasing [ 7 ]. The shortage of pediatricians in China is alarming. Therefore, numerous policies have been implemented to meet this challenge, including reinstating pediatrics as an independent discipline in medical school enrollment and increasing pediatric enrollment. The number of pediatricians has increased year by year, from 118,500 in 2015 (0.52 pediatricians per 1000 children under the age of 14) to 206,000 in 2021 (0.78 pediatricians per 1000 children under the age of 14). With the increase in pediatric enrollment, pediatric medical education is facing new challenges. It is urgent to study the current situation of the cultivation of pediatric medical students, one aspect of which is scientific research ability [ 8 , 9 ]. However, given the particular background of pediatrics, very little is known about the perception, practice, and barriers toward medical research in pediatric undergraduates. The purpose of this study was to address this gap by assessing the practices, perceptions, and barriers toward medical research of pediatric undergraduates at Zhejiang University. The results can help improve the mode of cultivating scientific research abilities among pediatric medical students.

The study was conducted from March to April 2023. The study was approved by the Ethics Review Committee of the Children’s Hospital of Zhejiang University School of Medicine and was undertaken according to the Helsinki declaration. Participants provided written informed consent upon applying to participate in the study.

Study design and setting

This is a cross-sectional study conducted via an online questionnaire administered simultaneously to all students. The study aimed to investigate the perception, practices and barriers toward research in pediatric undergraduates from Zhejiang University School of Medicine, and to investigate the differences in research among undergraduate students from clinical medicine (“5 + 3” integrated program, pediatrics) [pediatrics (“5 + 3”)], clinical medicine (“5 + 3” integrated program) [clinical medicine (“5 + 3”)] and clinical medicine (5-year).

Clinical medicine at Zhejiang University School of Medicine (ZUSM) includes a 5-year program, a “5 + 3” integrated program, and an 8-year MD program. The clinical medicine (5-year) program is the basis of clinical medicine education. Graduates need to complete 3 years of standardized residency training to become doctors. The clinical medicine (“5 + 3”) model combines 5-year medical undergraduate education with 3-year standardized residency training and postgraduate education. Since 2015, 20 to 30 students interested in pediatrics have been selected each year from second-year undergraduates in clinical medicine (“5 + 3”) to continue their studies in pediatrics (“5 + 3”). In 2019, ZUSM established an independent pediatrics (“5 + 3”) program, which has enrolled 20 to 30 students every year since.

Participants

All of the third-, fourth-, and fifth-year undergraduate students in pediatrics (“5 + 3”) and some of the fifth-year undergraduate students from clinical medicine (“5 + 3”) and clinical medicine (5-year) who expressed an interest in participating in the study were enrolled.

Data collection

The questionnaire was self-designed after reviewing the literature and consulting senior faculty. For the purpose of testing its clarity and reliability, the questionnaire was pilot tested among 36 undergraduate students. Their feedback was mainly related to the structure of the questionnaire. To address these comments, the questionnaire was modified to reach the final draft, which was distributed to the student sample included in the study. The reliability coefficient was assessed by Cronbach’s alpha, and the validity was evaluated by Kaiser-Meyer-Olkin (KMO).

There are four sections of the questionnaire used in this study:

The first part covered 3 statements (gender, grade and major).

The second part examined the participants’ perceptions of medical research, including 5 statements (importance, enhancement of competitiveness, exercising thinking ability, solving clinical problems, and being interesting).

The third part examined practices in medical research, including 6 statements (participating in projects, receiving training, writing papers, publishing papers, attending academic conferences, and presenting at conferences).

The barriers to medical research were assessed in the last part, including 7 statements.

Perception and barriers toward medical research were evaluated using a five-point Likert scale ranging from 1 to 5 (1 = strongly disagree, 2 = disagree, 3 = uncertain, 4 = agree, 5 = strongly agree).

Statistical analysis

Categorical data are represented as numbers and frequencies. For ease of reporting and analyzing data, the responses of “agree” and “strongly agree” were grouped and reported as agreements, and “disagree” and “strongly disagree” were grouped as disagreements. The chi-square test was used to test differences in the frequency of participation in research practices. Students’ perception scores across grades were compared using Fisher’s exact test, and attitudes across years of study were compared by ANOVA or a nonparametric test (Kruskal-Wallis H test). The statistical analysis was performed using IBM SPSS version 26. P < 0.05 was considered significant.
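The response grouping and frequency comparison described above can be sketched in Python with SciPy. The collapse rule follows the Likert coding given earlier; the contingency counts are illustrative, not the study's data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Collapse 5-point Likert responses as described in the analysis:
# 4/5 -> agreement, 1/2 -> disagreement, 3 stays "uncertain".
def collapse(response: int) -> str:
    if response >= 4:        # 4 = agree, 5 = strongly agree
        return "agreement"
    if response <= 2:        # 1 = strongly disagree, 2 = disagree
        return "disagreement"
    return "uncertain"

# Hypothetical 2x2 table: participated in research projects (yes/no)
# by program (pediatrics vs. clinical medicine) -- made-up counts.
table = np.array([[9, 18],    # pediatrics: yes, no
                  [24, 12]])  # clinical medicine: yes, no

# chi2_contingency applies Yates' continuity correction for 2x2 tables.
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
```

With these made-up counts the test yields p below 0.05, i.e. the kind of significant difference in participation frequency the chi-square test is used to detect here.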

The reliability coefficient of the questionnaire was assessed by Cronbach’s alpha; it was 0.73 for perception and 0.78 for barriers. KMO was 0.80 for perception (Bartlett’s sphericity test: χ2 = 200.4, p < 0.001) and 0.73 for barriers (Bartlett’s sphericity test: χ2 = 278.4, p < 0.001), indicating the appropriateness of the factor analysis. The factor analysis was carried out using principal component analysis with varimax rotation. For perception, a single factor explained 58.2% of the variance. For barriers, a two-factor solution explained 60.2% of the variance.
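Cronbach's alpha, the reliability coefficient reported above, can be computed directly from raw item responses. A minimal sketch with made-up Likert-scale data (not the study's responses):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative 5-item Likert responses (1-5) from 6 hypothetical respondents.
scores = np.array([
    [4, 5, 4, 4, 5],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5],
    [2, 2, 3, 2, 2],
    [4, 4, 4, 5, 4],
    [3, 4, 3, 3, 3],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

Because these hypothetical respondents answer consistently across items, the resulting alpha is high; values around 0.7-0.8, as reported for this questionnaire, are conventionally taken as acceptable internal consistency.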

The response rate was 79.2% (19/24) among third-year, 88% (22/25) among fourth-year, and 96.4% (27/28) among fifth-year students in pediatrics (“5 + 3”); the total response rate was 88.3% (68/77). The numbers of fifth-year students majoring in clinical medicine (“5 + 3”) and clinical medicine (5-year) were 36 and 20, respectively. Thus, a total of 124 students participated in the questionnaire. Among the participants, approximately 46% were male and 54% were female.

Perception regarding scientific research among the students majoring in pediatrics (“5 + 3”)

The majority of students in pediatrics (“5 + 3”) recognized that research was important (92.6%), such as increasing competitiveness, solving clinical problems and improving thinking (Fig.  1 ). Approximately half of the students in pediatrics (“5 + 3”) were interested in the research.

Figure 1: Perception regarding scientific research among the students majoring in pediatrics

Among the third-, fourth-, and fifth-year students in pediatrics (“5 + 3”), there was a significant difference in the effect of research on thinking ability (Table  1 ). A stronger understanding of the importance of research for thinking abilities was found in students from the fifth year.

Comparing the perception of medical research among the fifth-year students from the different medicine programs, there was a significant difference in the interest in research (Table  2 ). The fifth-year undergraduates from clinical medicine (5-year) received the highest score for interest in scientific research, followed by pediatrics (“5 + 3”).

Practices regarding scientific research among students majoring in pediatrics (“5 + 3”)

More than half of the students in pediatrics (“5 + 3”) participated in research training. Approximately 36.8% of them were involved in writing scientific articles, and 35.3% participated in research projects (Table  3 ). Only 4.4% of the students in pediatrics (“5 + 3”) contributed to publishing a scientific article, and 14.7% of the students in pediatrics (“5 + 3”) had attended medical congresses. However, none of the students had made a presentation at congresses.

A statistically significant difference was observed among different grades in the pediatrics (“5 + 3”) program, with fifth-year students having a much higher rate of participation in conferences. However, no significant differences were observed in other forms of medical research practices.

When compared with fifth-year students from other programs (clinical medicine “5 + 3” or 5-year), the students in pediatrics (“5 + 3”) had a lower rate of participation in the projects (Table  4 ). The rate of participation in the research training of the pediatric students was lower than that of clinical medicine (5-year) (44.44% vs. 75%). There were no significant differences in other research practices, such as writing articles and attending congress.

Barriers regarding scientific research among the students majoring in pediatrics (“5 + 3”)

The most common barriers to research work for pediatric students were lack of training (85.3%), lack of time (83.9%), and lack of mentorship (82.4%).

However, the top three barriers to research work among fifth-year pediatric students were lack of training (96.3%), limited English (88.89%) and lack of time (88.89%). The barrier of “lack of training” became increasingly apparent with grade and was significantly more pronounced among fifth-year pediatric students than in the other grades (Table 5). The other barriers showed no significant differences among the three grades of the pediatrics (“5 + 3”) program.

When compared with fifth-year students from other programs (clinical medicine “5 + 3” or 5-year), the rate of agreement about the barrier of “limited English” was significantly higher in fifth-year students from the pediatrics (“5 + 3”) program. There were no significant differences in other barriers among fifth-year students from different majors (Table  6 ).

Types of future research activities that students majoring in pediatrics (“5 + 3”) were willing to be involved in

A total of 88.2% of students in pediatrics (“5 + 3”) wanted to participate in the training of scientific research activities. Furthermore, when asked about the type of future scientific research activities, 80.9% of students wanted to participate in clinical research, and only 19.1% of students wanted to be involved in basic research. There was no significant difference in the different grades of the students from the pediatrics (“5 + 3”) program (Fig.  2 A).

Figure 2: Types of research activities that students majoring in pediatrics are willing to be involved with in the future (A). Types of research activities that the students from different programs are willing to be involved with in the future (B). When compared with students in clinical medicine (“5 + 3”), fifth-year students in pediatrics (“5 + 3”) were significantly less likely to participate in basic research (*P = 0.001)

Compared with students in clinical medicine (“5 + 3”), fifth-year students in pediatrics (“5 + 3”) were significantly less likely to participate in basic research (Fig.  2 B).

In China, to solve the shortage of pediatricians, pediatric programs have resumed in some medical schools, including Zhejiang University, in recent years. In this study, we focused on the perceptions, practices and barriers to scientific research in pediatric undergraduates from Zhejiang University.

With global progress, more research is required to advance knowledge and innovation in all fields. Likewise, research activity is now a highly important skill for medical practitioners. Medical students are encouraged to take an active part in scientific research and prepare for today’s knowledge-driven world [ 2 ]. In the current study, we found an overall positive perception of scientific research among pediatric undergraduates. More than 90% of pediatric students agreed (“strongly agree” and “agree”) that scientific research was important, as it could make them more competitive and improve their thinking.

Although the students had a positive perception of medical research, their practice of conducting research remained unsatisfactory. Compared with the fifth-year undergraduates from clinical medicine (“5 + 3”) (66.67%) and clinical medicine (5-year) (75%), only 33.33% of the fifth-year undergraduates in pediatrics (“5 + 3”) had participated in scientific research projects. The number of paper publications was very small (third-year pediatrics (“5 + 3”): 0; fourth-year: 4.5%; fifth-year: 7.4%). This was significantly lower than the publication rate of final-year students in the United States (46.5%) and Australia (roughly one-third) [ 10 , 11 ]. In a study in Romania, 31% of fifth-year students declared that they had prepared a scientific presentation for a medical congress at least once [ 12 ]. Moreover, none of the students in our study had presented a paper in a scientific forum. A study in India also found that only 5% of undergraduate students had presented a paper in a scientific forum and only 5.6% had published [ 13 ]. As part of the curriculum, some Indian universities require postgraduates to present papers and submit manuscripts for publication. Nevertheless, undergraduates’ practice of scientific research is still relatively poor. Lack of time, lack of guidance and lack of training for research careers were found to be the major obstacles to medical research for both pediatric students and others, which is consistent with previous reports [ 5 , 14 , 15 ]. A questionnaire study among residents also found that lack of time was a critical problem for scientific research [ 16 ]. There is no common practice for solving this difficulty. In the literature, it is usually recommended that scientific research training be integrated into the curricular requirements for undergraduates or into residency programs [ 7 , 14 , 17 , 18 ]. An increasing number of medical schools include individual projects or mandatory medical research projects in their curricula to develop research competencies [ 19 , 20 ].

Interestingly, among fifth-year pediatric undergraduates (“5 + 3”), limited English was found to be one of the most common barriers. This barrier became more pronounced as grade increased among pediatric students. We speculated that this was related to growing awareness of the importance of scientific research and participation in research activities, which increases the demand for reading English literature and writing English articles. Furthermore, the English-limitation barrier was more obvious for pediatric students than for students from clinical medicine (“5 + 3”) and clinical medicine (5-year); they worried about academic English. Horwitz et al. first proposed the concept of “foreign language anxiety” [ 21 ]. Deng and Zhou explored medical students’ medical English anxiety in Sichuan, China, and found that 85.2% of the students surveyed suffered moderate or higher medical English anxiety [ 22 ]. In our questionnaire, 88.89% of the fifth-year pediatric students believed that limited English was one of the most important barriers to scientific research. Currently, English is the chief language of communication in medical science, including correspondence, conferences, writing scientific articles, and reading the literature. Ma Y noted that medical English should be the most important component of college English teaching for medical students [ 23 ]. At Zhejiang University, all students, including those majoring in pediatrics (“5 + 3”), clinical medicine (“5 + 3”) and clinical medicine (5-year), take a medical English course during the undergraduate period. However, this course alone cannot satisfy the demands of scientific research, such as reading English literature, writing English papers and giving oral presentations in English. To address this barrier, we suggest assessing pediatric students’ requirements for medical English learning and offering more courses on medical English or English writing training. Furthermore, undergraduates should be encouraged to participate in local, regional or national conferences conducted in Chinese rather than English, which can increase interest in participating in scientific research.

Most of the pediatric students tended to choose clinical research, while only 19.1% wanted to take part in basic research. The proportion of fifth-year students in pediatrics (“5 + 3”) choosing basic research was much lower than that of students from the clinical medicine (“5 + 3”) program. We speculate that pediatricians in China usually carry heavier clinical workloads and have relatively less scientific practice than doctors in other clinical departments, so they are more likely to focus on clinical research. The students in pediatrics might not obtain sufficient scientific guidance from their clinician teachers compared with those from other medicine programs. According to these data, the Pediatric College could conduct more scientific research training directed at clinical research, such as the design, conduct and administration of clinical trials. A simulation-based clinical research curriculum is considered a better approach for training clinician-scientists than traditional clinical research teaching [ 24 ]. On the other hand, more might need to be done to improve pediatric undergraduates’ interest in basic research.

The major limitation of the present study is the small sample size: only 20 to 30 students have been enrolled in pediatrics (“5 + 3”) at ZUSM each year. Therefore, multicenter studies (across multiple medical schools) would better characterize the perception, practice, and barriers of medical research among pediatric undergraduates. Even so, the findings of this study indicate that lack of time, lack of guidance, lack of training and limited English might be common barriers to scientific work for pediatric undergraduates. Furthermore, a questionnaire for teachers and administrators could be conducted in the future to identify concrete solutions.

Data availability

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.

Abbreviations

ZUSM: Zhejiang University School of Medicine

KMO: Kaiser-Meyer-Olkin

Hanney SR, González-Block MA. Health research improves healthcare: now we have the evidence and the chance to help the WHO spread such benefits globally. Health Res Policy Syst. 2015;13:12.


Adebisi YA. Undergraduate students’ involvement in research: values, benefits, barriers and recommendations. Ann Med Surg (Lond). 2022;81:104384.


Petrella JK, Jung AP. Undergraduate research: importance, benefits, and challenges. Int J Exerc Sci. 2008;1(3):91–5.

Stone C, Dogbey GY, Klenzak S, Van Fossen K, Tan B, Brannan GD. Contemporary global perspectives of medical students on research during undergraduate medical education: a systematic literature review. Med Educ Online. 2018;23(1):1537430.

El Achi D, Al Hakim L, Makki M, Mokaddem M, Khalil PA, Kaafarani BR, et al. Perception, attitude, practice and barriers towards medical research among undergraduate students. BMC Med Educ. 2020;17(1):195.

Funston G, Piper RJ, Connell C, Foden P, Young AM, O’Neill P. Medical student perceptions of research and research-orientated careers: an international questionnaire study. Med Teach. 2016;38(10):1041–8.

Tatum M. China’s three-child policy. Lancet. 2021;397:2238.

Rivkees SA, Kelly M, Lodish M, Weiner D. The Pediatric Medical Student Research Forum: fostering interest in Pediatric Research. J Pediatr. 2017;188:3–4.

Barrett KJ, Cooley TM, Schwartz AL, Hostetter MK, Clapp DW, Permar SR. Addressing gaps in Pediatric Scientist Development: the Department Chair View of 2 AMSPDC-Sponsored Programs. J Pediatr. 2020;222:7–e124.

Jacobs CD, Cross PC. The value of medical student research: the experience at Stanford University School of Medicine. Med Educ. 1995;29(5):342–6.

Muhandiramge J, Vu T, Wallace MJ, Segelov E. The experiences, attitudes and understanding of research amongst medical students at an Australian medical school. BMC Med Educ. 2021;21(1):267.

Pop AI, Lotrean LM, Buzoianu AD, Suciu SM, Florea M. Attitudes and practices regarding Research among Romanian Medical Undergraduate Students. Int J Environ Res Public Health. 2022;19(3):1872.

Pallamparthy S, Basavareddy A. Knowledge, attitude, practice, and barriers toward research among medical students: a cross-sectional questionnaire-based survey. Perspect Clin Res. 2019;10:73–8.

Assar A, Matar SG, Hasabo EA, Elsayed SM, Zaazouee MS, Hamdallah A, et al. Knowledge, attitudes, practices and perceived barriers towards research in undergraduate medical students of six arab countries. BMC Med Educ. 2022;22(1):44.

Kharraz R, Hamadah R, AlFawaz D, Attasi J, Obeidat AS, Alkattan W, et al. Perceived barriers towards participation in undergraduate research activities among medical students at Alfaisal University-College of Medicine: a Saudi Arabian perspective. Med Teach. 2016;38(Suppl 1):S12–8.

Fournier I, Stephenson K, Fakhry N, Jia H, Sampathkumar R, Lechien JR, et al. Barriers to research among residents in Otolaryngology - Head & Neck surgery around the world. Eur Ann Otorhinolaryngol Head Neck Dis. 2019;136(3S):S3–7.

Abu-Zaid A, Alkattan K. Integration of scientific research training into undergraduate medical education: a reminder call. Med Educ Online. 2013;18:22832.

Eyigör H, Kara CO. Otolaryngology residents’ attitudes, experiences, and barriers regarding the Medical Research. Turk Arch Otorhinolaryngol. 2021;59(3):215–22.

Möller R, Shoshan M. Medical students’ research productivity and career preferences; a 2-year prospective follow-up study. BMC Med Educ. 2017;17(1):51.

Laidlaw A, Aiton J, Struthers J, Guild S. Developing research skills in medical students: AMEE Guide 69. Med Teach. 2012;34(9):e754–71.

Horwitz EK, Horwitz MBH, Cope J. Foreign Language Classroom anxiety. Mod Lang J. 1986;70(2):125–32.

Deng J, Zhou K, Al-Shaibani GKS. Medical English anxiety patterns among medical students in Sichuan, China. Front Psychol. 2022;13:895117.

Ma Y. Exploring medical English curriculum and teaching from the perspective of ESP-A case study of a medical English teaching. Technol Enhan Lang Educ. 2009;125(1):60–3.

Yan S, Huang Q, Huang J, Wang Y, Li X, Wang Y, et al. Clinical research capability enhanced for medical undergraduates: an innovative simulation-based clinical research curriculum development. BMC Med Educ. 2022;22(1):543.


Acknowledgements

The authors thank all the students who participated as volunteers for their contribution to the study.

This work was supported by grants from the “14th Five-Year Plan” teaching reform project of an ordinary undergraduate university in Zhejiang Province (jg20220041) and project of graduate education research in Zhejiang University (20210317).

Author information

Authors and affiliations

Department of Neonatology, Children’s Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, China

Canyang Zhan

Department of Pulmonology, Children’s Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, China

Yuanyuan Zhang


Contributions

CZ designed and supervised the study progress. CZ and YZ wrote the manuscript and collected and analyzed the questionnaire data. All the authors have read and approved the manuscript prior to submission.

Corresponding author

Correspondence to Yuanyuan Zhang .

Ethics declarations

Ethics approval and consent to participate

Our study was approved by the Ethics Review Committee of the Children’s Hospital of Zhejiang University School of Medicine and was undertaken according to the Helsinki declaration. Written informed consent was obtained from each participant upon applying to participate in the study.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Zhan, C., Zhang, Y. Perception, practice, and barriers toward research among pediatric undergraduates: a cross-sectional questionnaire-based survey. BMC Med Educ 24 , 364 (2024). https://doi.org/10.1186/s12909-024-05361-x


Received : 14 October 2023

Accepted : 27 March 2024

Published : 03 April 2024

DOI : https://doi.org/10.1186/s12909-024-05361-x


  • Undergraduate research
  • Medical research

BMC Medical Education

ISSN: 1472-6920


ORIGINAL RESEARCH article

Do future healthcare professionals advocate for pharmacogenomics? A study on medical and health sciences undergraduate students

Hanan Al-Suhail

  • 1 College of Medicine, University of Sharjah, Sharjah, United Arab Emirates
  • 2 Department of Pharmacy, Laboratory of Pharmacogenomics and Individualized Therapy, School of Health Sciences, University of Patras, Patras, Greece
  • 3 United Arab Emirates University, College of Medicine and Health Sciences, Department of Genetics and Genomics, Abu Dhabi, United Arab Emirates
  • 4 United Arab Emirates University, Zayed Center for Health Sciences, Abu Dhabi, United Arab Emirates
  • 5 Erasmus University Medical Center, Faculty of Medicine and Health Sciences, Department of Pathology, Clinical Bioinformatics Unit, Rotterdam, Netherlands

Pharmacogenomics (PGx) is a rapidly changing field of genomics in which healthcare professionals play an important role in clinical implementation; however, the level of PGx adoption remains low. This study aims to investigate attitude, self-confidence and level of knowledge, and their impact on health sciences undergraduate students’ intentions to adopt PGx in clinical practice, using a questionnaire developed based on the Theory of Planned Behavior (TPB). A model was proposed and a questionnaire was developed and distributed to 467 undergraduate students of all academic years from four different departments of the University of Sharjah (UoS), including medical, dental, nursing and pharmacy students, from September 2022 to November 2022. Descriptive statistics along with factor analysis and regression analysis were conducted. The proposed model had good internal consistency and fit. Attitude was the factor with the greatest impact on students’ intentions, followed by self-confidence and barriers. The level of knowledge had a negligible impact. The majority of students shared a positive attitude and were aware of PGx benefits. Almost 60% of the respondents showed a high level of knowledge, while 50% of them were confident about implementing PGx in their clinical practice. Many students were prone to adopt PGx in their future careers. PGx testing cost and the lack of reimbursement were the most important barriers. Overall, students shared a positive intention and were prone to adopt PGx. In the future, it would be important to investigate differences by gender, year of study and area of study, and their impact on students’ intentions.
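The regression step, comparing how strongly attitude, self-confidence, and knowledge predict intention, can be sketched as ordinary least squares on standardized scores, where the coefficients are directly comparable. The data below are simulated to mirror the reported ordering (attitude largest, knowledge negligible) and are not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical standardized predictor scores for n respondents.
attitude = rng.normal(size=n)
confidence = rng.normal(size=n)
knowledge = rng.normal(size=n)

# Simulated intention: attitude weighted most, knowledge near zero.
intention = (0.6 * attitude + 0.3 * confidence + 0.05 * knowledge
             + rng.normal(scale=0.5, size=n))

# Ordinary least squares with an intercept column.
X = np.column_stack([attitude, confidence, knowledge, np.ones(n)])
coefs, *_ = np.linalg.lstsq(X, intention, rcond=None)
print(dict(zip(["attitude", "confidence", "knowledge", "intercept"],
               coefs.round(2))))
```

With standardized predictors, the estimated coefficients recover the simulated ordering: the attitude coefficient dominates, matching the reported finding that attitude had the greatest impact on intentions.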

Introduction

Pharmacogenomics (PGx) is a scientific discipline that merges pharmacology and genomics ( Abdela et al., 2017 ). PGx testing is becoming more widely available in healthcare settings, and a growing body of actionable high-level evidence for clinical utility mandates the provision of sustainable PGx education to healthcare professionals. As a component of personalized medicine, PGx uses genetic information to optimize therapeutic benefits, enhance clinical outcomes, and reduce adverse drug reactions (ADRs) ( Klein et al., 2017 ; Barker et al., 2022 ). Thus, PGx can potentially improve a patient’s therapeutic strategy by selecting the appropriate medication at the correct dosage ( Verbelen et al., 2017 ). Although sub-optimal response to medication and the presence of ADRs can be partially explained by other factors, including sub-dosing, drug allergy, drug-drug interactions, or lack of patient compliance, a person’s genetic makeup remains an important factor to consider ( Rollinson et al., 2020 ). Many clinical studies in adult patients have demonstrated the clinical utility of PGx in drug management ( Klein et al., 2017 ; Verbelen et al., 2017 ; Swen et al., 2023 ).

PGx applications have gained momentum in recent decades thanks to the completion of whole-genome sequencing efforts and the All of Us program in the United States, which have increased its popularity and led to the launch of other clinical projects such as the Emirati Genome Project in the United Arab Emirates (UAE) ( Klein et al., 2017 ; Al-Ali et al., 2018 ). Healthcare professionals, including physicians, pharmacists, dentists, and nurses, are welcoming the PGx concept and are trying to incorporate it into their clinical practice, but at a slow pace ( Hansen et al., 2022 ). Their role in PGx clinical application has been thoroughly investigated in many studies ( Albassam et al., 2018 ; Algahtani, 2020 ; Alhaddad et al., 2022 ). Indeed, it was shown that healthcare professionals had a rather positive attitude and were willing to adopt PGx, but they lacked proper knowledge and training along with self-confidence ( Abdela et al., 2017 ; Albassam et al., 2018 ; Algahtani, 2020 ; Smith et al., 2020 ; Albitar and Alchamat, 2021 ; Alhaddad et al., 2022 ; Hansen et al., 2022 ; Hayashi and Bousman, 2022 ).

Several challenges and barriers impede PGx’s widespread implementation despite its proven clinical and economic effectiveness ( Chenoweth et al., 2020 ; Koufaki et al., 2021 ). Examples include the lack of healthcare knowledge and training in the field, the high cost of PGx testing, the lack of reimbursement, and moral and bioethical concerns. This slow PGx adoption rate in clinical practice must change. To do so, future healthcare professionals must receive proper education in PGx aimed at overcoming their concerns and reluctance. A previous publication by Rahma and coworkers (2020) examined UAE undergraduate and postgraduate students’ attitudes and level of knowledge related to genomics and PGx ( Rahma et al., 2020 ). In contrast to that study, our project concentrated on undergraduate medical and health science students and investigated the impact of four different factors, rather than two, on their intention to adopt PGx ( Rahma et al., 2020 ). In addition, the survey instrument used in this project was developed based on a behavioral theory and not only on the literature.

In this study, we investigated the attitudes, self-confidence, level of knowledge, barriers, and intentions of health sciences undergraduate students from different colleges of the University of Sharjah (UoS) to adopt PGx applications in clinical practice, using a questionnaire developed based on the Theory of Planned Behavior (TPB). Our objectives were to evaluate the impact of different factors on students’ intentions and to highlight any correlations or relations among factors.

Materials and methods

Research framework

Based on the TPB, we created a modified framework for assessing the effect of several variables on health science students’ intention to adopt PGx testing in their clinical practice. This behavioral theory enables investigation of the correlation of beliefs, attitudes, and intentions with a behavior, since it assumes that behavioral intention is the key determinant of behavior ( U.S. Department of Health and Human Services, 2012 ). The TPB posits that three main factors affect a person’s intention: behavioral beliefs (attitudes), normative beliefs (subjective norms), and control beliefs (perceived behavioral control) ( Bosnjak et al., 2020 ). In parallel, it is assumed that other external factors do not independently affect a person’s behavior ( Godin and Kok, 1996 ; Bosnjak et al., 2020 ). In our case, we included four factors, namely attitude (attitudes, compatibility of PGx, PGx clinical benefits), level of knowledge, self-confidence/self-efficacy, and barriers and concerns, along with one moderator (demographics). The proposed model along with the factors’ relationships is depicted in Figure 1 .


Figure 1 . The framework of the proposed model on which the study’s survey was based.

Study design

A descriptive cross-sectional survey was conducted from September 2022 to November 2022. This study used a validated 41-item questionnaire developed by the Laboratory of Pharmacogenomics and Individualized Therapy at the Department of Pharmacy, University of Patras, Greece, and previously published ( Siamoglou et al., 2021a ; Siamoglou et al., 2021b ; Koufaki et al., 2022 ). The questionnaire was written in English. It consisted of six main sections: demographics (5 questions), general knowledge related to PGx interventions (11 questions), attitudes (6 questions), self-confidence in applying PGx in a professional setting (6 questions), barriers and concerns (7 questions), and willingness to adopt PGx in clinical practice (6 questions). All items were measured on a seven-point Likert scale, with one being “totally disagree” and seven being “totally agree”. Only the knowledge section was measured on a three-point scale (agree, disagree, not sure). The study was approved by the UoS Research Ethics Committee (REC-22-06–06-01-S). An informed consent form was provided, and participants had to give their approval before proceeding with the questionnaire.
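For reporting, seven-point responses of this kind are typically collapsed into disagree/neutral/agree bands, as in the percentages quoted in the Results. A minimal sketch of that bucketing, using hypothetical responses rather than the study’s data:

```python
from collections import Counter

def likert_summary(responses):
    """Collapse 7-point Likert answers (1 = totally disagree, 7 = totally agree)
    into disagree (1-3), neutral (4), and agree (5-7) percentages."""
    n = len(responses)
    counts = Counter(
        "disagree" if r <= 3 else "neutral" if r == 4 else "agree"
        for r in responses
    )
    return {k: round(100 * counts.get(k, 0) / n, 1)
            for k in ("disagree", "neutral", "agree")}

# Hypothetical answers from ten respondents to one attitude item
item = [7, 6, 5, 5, 4, 3, 6, 7, 5, 4]
summary = likert_summary(item)  # {'disagree': 10.0, 'neutral': 20.0, 'agree': 70.0}
```

The same bucketing convention (1–3 negative, 4 neutral, 5–7 positive) is assumed throughout the percentage summaries below.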

Study sample

The study sample consisted of 467 undergraduate students from four different departments of UoS, including medical, dental, nursing, and pharmacy students. An online questionnaire was distributed via Google Forms to all enrolled undergraduate health science students of all academic years. Students could participate only once using their academic email, and they could update their answers before final submission. Almost two-thirds of the sample were female students, as expected based on students’ representation in each department, where almost 70% are women. Moreover, 35% of participants came from the Department of Medicine. Participants were distributed fairly evenly across academic years, although representation of first- and fifth-year students was low. The multinational environment of UoS was reflected in the examined cohort as well. Indeed, students from 35 different nationalities were included in the study, with 17% from Syria, 15.4% from Jordan, 14.3% from Egypt, 11% from the United Arab Emirates, and 13% from other countries. Countries represented by fewer than 10 students each were grouped together as “others”; these included Afghanistan, Algeria, Australia, Bahrain, Bangladesh, Canada, Comoros, Djibouti, Dominica, Finland, India, Iran, Japan, Kenya, Kuwait, Lebanon, Mauritania, Morocco, Nigeria, Oman, Pakistan, Philippines, Saudi Arabia, Spain, Sri Lanka, Sweden, and the USA. Finally, only 135 out of 467 students had attended a PGx lecture in the past. Table 1 summarizes the demographics of the sample.


Table 1 . Students’ demographics.

Data analysis

The SPSS statistical package (version 28; IBM, NY, USA) was used. Frequencies, the proportion of correct replies, descriptive statistics (mean value, standard deviation (SD)), and regression analysis were included in the data analysis. Cronbach’s alpha analysis was used to assess the internal consistency of the scales of our five-factor questionnaire, covering demographics, level of knowledge, attitudes, self-confidence, and willingness to adopt PGx in clinical practice; results are illustrated in graphs. Goodness-of-fit measures such as the Chi-square test, Comparative Fit Index (CFI), Goodness of Fit Index (GFI), Tucker-Lewis Index (TLI), and Root Mean Square Error of Approximation (RMSEA) were also used to confirm the survey’s validity and reproducibility.
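The analysis was run in SPSS, but Cronbach’s alpha can be computed directly from its defining formula. A minimal sketch with hypothetical Likert data (not the study’s):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) response matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of each respondent's total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 7-point Likert responses: 6 respondents x 4 items of one scale
responses = [
    [5, 6, 5, 6],
    [7, 7, 6, 7],
    [4, 5, 4, 4],
    [6, 6, 7, 6],
    [3, 4, 3, 4],
    [6, 7, 6, 6],
]
alpha = cronbach_alpha(responses)  # consistent answers give alpha well above 0.8
```

A value of 0.7 or higher is conventionally read as acceptable internal consistency, which matches the threshold applied in the Results.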

Results

This study’s results are shown in Table 2 and Figures 1 – 7 . Cronbach’s alpha analysis was performed. As demonstrated in Table 2 , four out of the five factors of this study’s instrument had a Cronbach’s alpha value above 0.8; only the level of knowledge had a value of 0.509. Since Cronbach’s alpha measures the internal consistency of a scale and a coefficient of 0.7 or higher is generally considered acceptable, the overall internal consistency of the questionnaire scale is acceptable. The level of knowledge factor is less consistent, probably due to the different measuring scale used.


Table 2 . Cronbach’s alpha values for each section.


Figure 2 . Students’ attitudes towards PGx clinical application. As shown in the bar plot, the majority of students had a positive attitude towards PGx application in clinical practice and were aware of PGx benefits.


Figure 3 . Differences in attitudes based on students’ gender. Female students had a slightly more positive attitude regarding PGx testing in clinical practice. Almost 75% of female students claimed that counseling patients on their PGx results is relevant to their profession, and they believed that patients will undergo PGx testing in the future.


Figure 4 . Level of knowledge as reported by participants, expressed on a three-level Likert scale. Overall, respondents had a good level of knowledge, especially on theoretical items. 83% of the students were aware that PGx will optimize drug dosing and improve drug efficacy.


Figure 5 . Main barriers and concerns about PGx clinical application. Respondents agreed with most of the noted barriers and concerns. Moral and religious issues were not considered important barriers to PGx implementation.


Figure 6 . Students’ self-confidence towards PGx clinical application, expressed in percentages. Respondents showed good self-confidence and were prepared to implement PGx in their future careers.


Figure 7 . Students’ willingness to adopt PGx in the future. The majority of respondents were prone to recommend PGx testing to family members or patients, while 67% would implement PGx testing in their future careers. However, one-third of students were not interested in continuing their studies in PGx.

Upon confirming instrument validity, a confirmatory factor analysis was performed. As shown in Tables 3 – 5 , the fit indices indicated that the proposed model has an acceptable fit. However, the GFI, NFI, and RFI indexes were close to 0.8 rather than 0.9, which highlights that the proposed model is not a perfect fit. The model was applied to a highly diverse cohort in terms of nationalities, scientific backgrounds, and years of study; thus, with at least a few parameters within acceptable ranges, it can be concluded that the model is of good fit ( Doll et al., 1994 ; Baumgartner and Homburg, 1996 ).


Table 3 . Results of factor analysis.


Table 4 . Results of factor analysis (RMSEA).


Table 5 . Model’s fitness check.

Regression analysis

Moreover, to investigate the interaction between items and factors, a multiple regression analysis was conducted. Most of the item estimates (items related to demographics were not included in the analysis) were found to be close to 0.7; items related to the level of knowledge were below 0.6. The multiple correlation coefficient (R) was 0.769 and the coefficient of determination (R²) was close to 0.6, signifying that the model was a good fit. A positive R-value indicates a linear relationship between the dependent variable and the independent variables, while a negative R-value indicates an inverse relationship; the absolute value of R indicates the strength of the relationship. R² indicates the proportion of the variability of the dependent variable that is explained by the independent variables in a multiple regression analysis.

In the current regression analysis, the standardized beta coefficients for the factors level of knowledge, attitudes, self-confidence, and barriers and concerns were found to be 0.050, 0.514, 0.243, and 0.147, respectively. A positive beta coefficient indicates that the dependent variable increases as the independent variable increases. Therefore, attitudes exert the greatest effect on students’ intentions to adopt PGx, followed by their level of self-confidence and by barriers and concerns in PGx adoption. The level of knowledge appeared not to exert a significant impact, although it is unclear whether this observation is reliable, since this factor was measured on a different scale. The effect of attitude is the largest per the standardized regression weights, suggesting a strong positive impact of this factor on students’ intentions. Self-confidence and barriers shared a similar, smaller effect, while the level of knowledge had a very low effect. Correlations between items and factors were very high, showing a great fit (see Supplementary Tables S1–3 ). The level of knowledge is positively correlated with attitudes with an estimate of 0.541, and barriers are positively correlated with attitudes with an estimate of 0.679.
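Standardized beta coefficients of this kind can be obtained by z-scoring both the predictors and the outcome before an ordinary least-squares fit. A sketch on simulated data, with factor weights chosen only to mimic the study’s pattern (attitude dominant); all names and numbers here are illustrative, not the study’s data:

```python
import numpy as np

def standardized_betas(X, y):
    """Standardized regression coefficients: z-score predictors and outcome,
    then solve ordinary least squares (no intercept needed after centering)."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    yz = (y - y.mean()) / y.std(ddof=1)
    betas, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return betas

rng = np.random.default_rng(0)
n = 200
knowledge, attitude, confidence, barriers = rng.normal(size=(4, n))
# Simulated intention: attitude weighted most, knowledge least (assumed weights)
intention = (0.05 * knowledge + 0.50 * attitude + 0.25 * confidence
             + 0.15 * barriers + rng.normal(scale=0.5, size=n))

X = np.column_stack([knowledge, attitude, confidence, barriers])
betas = standardized_betas(X, intention)  # attitude has the largest beta
```

Because all variables are on a common (z-score) scale, the betas are directly comparable across factors, which is why the text ranks attitude above self-confidence and barriers.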

Attitudes

Students shared an overall positive attitude towards PGx testing in clinical practice, and it was demonstrated that they were aware of its clinical benefits. More precisely, 82% of students agreed that PGx testing will improve drug efficacy and optimize drug dosage, while the majority of them (around 70%) agreed that it will lead to a significant decrease in the incidence rate of ADRs and improve patients’ quality of life during drug therapy. The same trend was observed when respondents were asked about PGx’s role in medication expenditures: almost 80% agreed with this item and 17% had a neutral opinion. Finally, two-thirds of respondents considered PGx relevant to their professional setting, and the same proportion believed that part of their professional role is to counsel patients regarding PGx information ( Figure 2 ). When respondents’ answers were analyzed by gender, both male and female students demonstrated a similarly positive attitude, with only slight differences in two items. Indeed, female students presented a more positive attitude by 12% compared to their male counterparts when asked about the relevance of PGx to their profession, and they were more convinced that more patients will undergo PGx testing in the future ( Figure 3 ).

Level of knowledge

Furthermore, the level of knowledge was also investigated. Students were found to have a moderate to good level of knowledge. Almost 60% of the respondents answered the relevant questions correctly, especially those related to general theoretical knowledge. This trend did not hold for more specific, lab-based questions ( Figure 4 ). Two-thirds of participants gave the wrong answer when asked whether genetic determinants of drug response change over a person’s lifetime. Almost half of the participants were not sure whether PGx testing is available for all medications or whether the gene in question is involved in warfarin metabolism; more precisely, 23% were neutral and only 9% gave the correct answer.

Barriers and concerns

Participants were aware of the most cited barriers and concerns related to PGx testing implementation, as shown in Figure 5 ; Table 6 . Indeed, the most important barriers based on the respondents’ feedback were the lack of trained personnel (79%), followed by the cost of PGx testing (73%), data privacy concerns (64%), and the lack of reimbursement. A large percentage of participants (54%) believed that PGx can cause psychological distress to patients (mean = 4.63 and SD = 1.63). Finally, students did not consider moral and religious concerns a very significant barrier (mean = 4.38 and SD = 1.77): only 44% agreed, while 29% were neutral and 27% disagreed.


Table 6 . Mean and Standard Deviation of Items used in the study’s questionnaire.

Self-confidence

When students were asked to characterize their self-confidence, that is, to describe their readiness and capability to practice PGx in their future clinical work, it was rated as moderate, as illustrated in Figure 6 . Indeed, around 50% of participants felt competent to identify therapeutic areas or medications with PGx recommendations, and almost half of the respondents stated that they were comfortable formulating a patient’s treatment scheme based on PGx results (mean = 4.34 and SD = 1.84). The vast majority believed that they would efficiently discuss PGx testing information with their healthcare colleagues. Their level of readiness dropped when it came to implementing PGx results in drug therapy selection, dosing, and monitoring, and a third of students did not feel well prepared to inform patients about the benefits and risks of PGx testing (mean = 4.09 and SD = 1.91). Finally, students were not confident that their educational training enabled them to identify the proper sources of clinical information on this topic: 43% claimed not to be well trained, 20% took a neutral position, and only 37% were positive (mean = 3.83 and SD = 2.05).

Intention to adopt PGx in the clinical practice

As shown in Figure 7 , participants were willing to incorporate PGx testing into their future clinical practice. There was an interest in expanding their knowledge and expertise in the topic, either by pursuing a postgraduate program or by attending a seminar. Admittedly, almost 60% of the students (mean = 5.54 and SD = 1.53) would like to attend a workshop or PGx training in the future, while 41% were prone to pursue a PGx-related postgraduate program (mean = 4.02 and SD = 1.95). The majority of students (almost 70%) were willing to undergo a PGx test themselves (mean = 5.21 and SD = 1.67), and 67% were positive about recommending it to a relative or a friend (mean = 5.20 and SD = 1.50). Respondents also showed a positive tendency to apply PGx testing in their clinical routine: 67% and 70%, respectively, answered that they would implement it in their professional setting in the future and would recommend it to a patient (mean = 5.11 and SD = 1.51; mean = 5.32 and SD = 1.56).

Discussion

PGx is an emerging field of personalized medicine that can offer a series of advantages in drug management. Despite its many proven clinical benefits, the adoption rate of PGx applications in clinical practice remains low across the globe. The main way to boost PGx implementation is to invest in adequately training future generations of healthcare professionals. For this reason, we aimed to investigate the attitudes, beliefs, and level of knowledge of health science undergraduate students at UoS.

According to our findings, the proposed model had good internal consistency, since four out of five independent factors had a Cronbach’s alpha over 0.8. The model also had a good fit (CMIN/DF was almost three, while RMSEA was found at 0.65), which is notable since high consistency scores are uncommon in social studies. In addition, the proposed model managed to fit the multinational and heterogeneous character of the sample, which is also significant. Based on the regression analysis, attitude was the factor with the greatest impact on students’ intentions to adopt PGx, and it also correlated with barriers and level of knowledge. The level of knowledge did not fit well in the model and had a negligible impact on students’ intentions. Self-confidence and barriers were shown to contribute to students’ willingness to adopt PGx, while barriers were positively correlated with attitudes as well.

Moreover, UoS students had a positive attitude and a moderate level of knowledge, along with a good level of self-confidence. Most of the respondents confirmed that PGx is relevant to their profession, and, as indicated, a great percentage of students were aware of PGx clinical applications and their benefits. There was a slight difference in students’ attitudes based on gender. Students considered the lack of reimbursement, the high cost of PGx testing, the shortage of trained personnel, and data privacy concerns to be the main obstacles slowing PGx adoption. The majority of students were willing to broaden their knowledge in the field via a postgraduate course or a seminar. In addition, they intend to apply PGx testing in their clinical routine in the future, and most of them would recommend a relevant test to a patient.

Based on the literature, undergraduate students who attend health-related courses in medicine, pharmacy, or nursing share a positive attitude toward PGx applications. In the study by Wen and coworkers (2022), the vast majority of first-year pharmacy students in the United States considered PGx a useful tool; 57% agreed that it is relevant to their profession, while 22% totally agreed that PGx will be relevant to their clinical practice ( Wen et al., 2022 ). Siamoglou and coworkers (2021) reached similar conclusions: students from Malaysia and Greece shared a positive attitude towards genetic testing and were aware of the benefits and relative advantages of preemptive testing ( Siamoglou et al., 2021a ). Finally, according to Shah and coworkers (2022), female pharmacy students in Pakistan demonstrated a better attitude towards PGx testing than their male counterparts, an observation that agrees with our results ( Shah et al., 2022 ).

Most of the available publications indicate that undergraduate students show a weak or moderate level of knowledge ( Siamoglou et al., 2021b ; Koufaki et al., 2022 ; Makrygianni et al., 2023 ). According to Makrygianni and coworkers (2023), this factor did not exert any impact on students’ intentions ( Makrygianni et al., 2023 ). Koufaki and coworkers (2022) concluded that Malaysian and Greek pharmacy students had a rather low level of knowledge ( Koufaki et al., 2022 ). Furthermore, Arafah and coworkers (2022) mentioned that the overall level of knowledge of Saudi Arabian pharmacy students was low and that students lacked practical skills, an observation also made by Makrygianni and coworkers (2023) ( Arafah et al., 2022 ; Makrygianni et al., 2023 ). Makrygianni and coworkers (2023) also reported that students were reluctant to answer advanced technical questions, which had an impact on their self-confidence ( Makrygianni et al., 2023 ). Finally, graduate pharmacy students expressed that gaining in-depth knowledge was key to their future career advancement, according to Koufaki and coworkers (2023) ( Koufaki et al., 2023 ).

Moreover, the level of knowledge was positively correlated with attitudes, an observation that agrees with the existing literature ( Makrygianni et al., 2023 ). However, the moderate level of knowledge did not negatively affect students’ attitudes or self-confidence. Students had a high level of self-confidence about implementing PGx in the future, and only items related to training readiness received less positive feedback, an observation that might be due to the level of knowledge. This is congruent with other studies ( Mehtar et al., 2022 ; Domnic et al., 2022 ). For instance, according to Mehtar and coworkers (2022), in Lebanon approximately 73% of all pharmacy students stated that they should be able to identify patients who might benefit from any type of genetic testing and that they could use PGx in their future practice ( Mehtar et al., 2022 ). Regarding students’ reluctance to adjust or alter a patient’s treatment following PGx testing, the study by Domnic and coworkers (2022) pointed out that 36% of medical students were confident about using PGx results to stratify a patient’s treatment, whereas 40% agreed that they needed better knowledge ( Domnic et al., 2022 ). This result is in line with our findings.

Furthermore, barriers and concerns were indicated to be an influential factor that may determine students’ intentions to adopt PGx in their future clinical profession, especially those related to PGx testing cost, lack of reimbursement, and data privacy issues. In the literature, data privacy and results’ confidentiality are the most cited issues; some studies pinpointed these as the main concerns, while others focused more on PGx logistics, including costs, lack of trained personnel, and lack of complete clinical guidelines ( Bank et al., 2018 ; Cheung et al., 2021 ). In the Netherlands, Bank and coworkers (2018) showed that 72% of the participants expressed concern about data use and the chance of data being provided to unauthorized individuals, while 88% believed that PGx testing results could provoke psychological distress in patients ( Bank et al., 2018 ).

A study conducted by Cheung and coworkers in 2021 came up with comparable results in Hong Kong ( Cheung et al., 2021 ). Nonetheless, Koufaki and coworkers (2022) stated that Greek students worried more about PGx cost and the lack of complete clinical guidelines, while their Malaysian counterparts were concerned about data privacy ( Koufaki et al., 2022 ). In the aforementioned study, it was implied that the difference between the two student cohorts lay in the cultural context, because local legislation and directives had affected students’ perceptions. In the present analysis, though, we did not notice extreme differences even though we investigated a highly diverse and multinational environment. The UAE is a cosmopolitan country with diverse ethnicities from almost all over the world, working and studying together in an inclusive environment with high standards of understanding and tolerance.

Finally, as far as respondents’ willingness to implement PGx in their professional lives is concerned, our findings are in line with the literature. In the study by Arafah and coworkers (2022), 61.2% of pharmacy students were interested in a PGx-related course or seminar ( Arafah et al., 2022 ). The vast majority expressed an interest in participating in genetic research, and they were willing to undergo PGx testing, too ( Arafah et al., 2022 ). Jarrar and coworkers (2019) reached similar conclusions: around 93% of pharmacy students were willing to learn more about PGx testing, whereas 31% opted to pursue a postgraduate program in the field ( Jarrar et al., 2019 ). Finally, in a study conducted among professional pharmacists and pharmacy students in Lebanon, 62% of participants were interested in learning more about PGx, while in Croatia the majority of students (dental, medical, pharmacy) were willing to undergo a PGx test, a finding close to our results ( Bukic et al., 2022 ; Mehtar et al., 2022 ).

Limitations

This study has a few limitations. The survey was conducted among undergraduate students of the University of Sharjah and did not include any other UAE university. To mitigate this limitation, surveys were distributed to four different colleges to broaden our research sample. Questionnaires were distributed online rather than via direct contact, which might introduce response bias, although no such effect was evident in our analysis. Furthermore, the response rate was estimated at 30%; this rate was low, but the total sample was sufficient to yield results with adequate statistical power. Finally, students came from different academic backgrounds, and these differences were not examined within the scope of this analysis.

Conclusion

PGx is a hot topic in personalized medicine with great clinical applications, yet its implementation in healthcare systems remains a major challenge, and its adoption rate is quite low worldwide as well as in the UAE. A key factor for expanding PGx application in clinical practice is the involvement of healthcare professionals. The future generation of UAE healthcare professionals surveyed in this study were shown to be aware of PGx and had a good level of knowledge. Their attitude towards PGx was positive, and a great percentage of them planned to incorporate PGx testing into their clinical routine, while they were more than willing to undergo a relevant test themselves. Moreover, the respondents expressed their opinions and concerns about the most commonly shared barriers and challenges related to PGx testing. The cost of PGx testing, the lack of specialized personnel, and data confidentiality were found to be the most important challenges for PGx clinical implementation. In future research, the impact of demographics, including gender, academic background, and year of study, on students’ intentions to adopt PGx in clinical practice will be investigated further.

Data availability statement

The original contributions presented in the study are included in the article/ Supplementary Material , further inquiries can be directed to the corresponding author.

Ethics statement

The studies involving humans were approved by the University of Sharjah Research and Ethics Committee. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.

Author contributions

HA-S: Data curation, Methodology, Writing–original draft. MOA: Data curation, Methodology, Writing–original draft. MR: Data curation, Methodology, Writing–original draft. TM: Data curation, Methodology, Writing–original draft. M-IK: Conceptualization, Methodology, Data curation, Formal Analysis, Validation, Visualisation, Writing–review and editing. IK: Software, Formal Analysis, Validation, Writing–original draft. FM: Software, Formal Analysis, Validation, Writing–original draft. GPP: Conceptualization, Supervision, Writing–review and editing. MS-A: Conceptualization, Methodology, Funding acquisition, Resources, Supervision, Writing–review and editing.

Funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. MS-A is funded by a collaborative grant provided by the University of Sharjah (Project No. 2001090279).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The author(s) declared that they were an editorial board member of Frontiers, at the time of submission. This had no impact on the peer review process and the final decision.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fphar.2024.1377420/full#supplementary-material

Abbreviations

ADR: Adverse Drug Reaction; AGFI: Adjusted Goodness of Fit Index; CFA: Confirmatory Factor Analysis; CFI: Comparative Fit Index; CMIN/DF: Chi-square statistic/degrees of freedom; GFI: Goodness of Fit Index; NFI: Normed Fit Index; PGx: Pharmacogenomics; PGFI: Parsimony Goodness of Fit Index; RFI: Relative Fit Index; RMSEA: Root Mean Square Error of Approximation; SD: Standard Deviation; TLI: Tucker–Lewis Index; TPB: Theory of Planned Behavior; UAE: United Arab Emirates; UoS: University of Sharjah.


Keywords: ADR, adverse drug reaction, pharmacogenomics, undergraduate students, questionnaire, attitudes, intentions to adopt

Citation: Al-Suhail H, Omar M, Rubaeih M, Mubarak T, Koufaki M-I, Kanaris I, Mounaged F, Patrinos GP and Saber-Ayad M (2024) Do future healthcare professionals advocate for pharmacogenomics? A study on medical and health sciences undergraduate students. Front. Pharmacol. 15:1377420. doi: 10.3389/fphar.2024.1377420

Received: 27 January 2024; Accepted: 26 March 2024; Published: 11 April 2024.

Copyright © 2024 Al-Suhail, Omar, Rubaeih, Mubarak, Koufaki, Kanaris, Mounaged, Patrinos and Saber-Ayad. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Maha Saber-Ayad, [email protected]


Developing questionnaires for educational research: AMEE Guide No. 87

Anthony R. Artino, Jr.

1 Uniformed Services University of the Health Sciences, USA

Jeffrey S. La Rochelle

Kent J. DeZee, Hunter Gehlbach

2 Harvard Graduate School of Education, USA

In this AMEE Guide, we consider the design and development of self-administered surveys, commonly called questionnaires. Questionnaires are widely employed in medical education research. Unfortunately, the processes used to develop such questionnaires vary in quality and lack consistent, rigorous standards. Consequently, the quality of the questionnaires used in medical education research is highly variable. To address this problem, this AMEE Guide presents a systematic, seven-step process for designing high-quality questionnaires, with particular emphasis on developing survey scales. These seven steps do not address all aspects of survey design, nor do they represent the only way to develop a high-quality questionnaire. Instead, these steps synthesize multiple survey design techniques and organize them into a cohesive process for questionnaire developers of all levels. Addressing each of these steps systematically will improve the probability that survey designers will accurately measure what they intend to measure.

Introduction: Questionnaires in medical education research

Surveys are used throughout medical education. Examples include the ubiquitous student evaluation of medical school courses and clerkships, as well as patient satisfaction and student self-assessment surveys. In addition, survey instruments are widely employed in medical education research. In our recent review of original research articles published in Medical Teacher in 2011 and 2012, we found that 37 articles (24%) included surveys as part of the study design. Similarly, surveys are commonly used in graduate medical education research. Across the same two-year period (2011–2012), 75% of the research articles published in the Journal of Graduate Medical Education used surveys.

Despite the widespread use of surveys in medical education, the medical education literature provides limited guidance on the best way to design a survey (Gehlbach et al. 2010 ). Consequently, many surveys fail to use rigorous methodologies or “best practices” in survey design. As a result, the reliability of the scores that emerge from surveys is often inadequate, as is the validity of the scores’ intended interpretation and use. Stated another way, when surveys are poorly designed, they may fail to capture the essence of what the survey developer is attempting to measure due to different types of measurement error. For example, poor question wording, confusing question layout and inadequate response options can all affect the reliability and validity of the data from surveys, making it extremely difficult to draw useful conclusions (Sullivan 2011 ). With these problems as a backdrop, our purpose in this AMEE Guide is to describe a systematic process for developing and collecting reliability and validity evidence for survey instruments used in medical education and medical education research. In doing so, we hope to provide medical educators with a practical guide for improving the quality of the surveys they design for evaluation and research purposes.

A systematic, seven-step process for survey scale design

The term “survey” is quite broad and could include the questions used in a phone interview, the set of items employed in a focus group and the questions on a self-administered patient survey (Dillman et al. 2009 ). Although the processes described in this AMEE Guide can be used to improve all of the above, we focus primarily on self-administered surveys, which are often referred to as questionnaires. For most questionnaires, the overarching goals are to develop a set of items that every respondent will interpret the same way, respond to accurately and be willing and motivated to answer. The seven steps depicted in Table 1 , and described below, do not address all aspects of survey design nor do they represent the only way to develop a high-quality questionnaire. Rather, these steps consolidate and organize the plethora of survey design techniques that exist in the social sciences and guide questionnaire developers through a cohesive process. Addressing each step systematically will optimize the quality of medical education questionnaires and improve the chances of collecting high-quality survey data.

A seven-step, survey scale design process for medical education researchers.

Adapted with permission from Lippincott Williams and Wilkins/Wolters Kluwer Health: Gehlbach et al. ( 2010 ). AM last page: Survey development guidance for medical education researchers. Acad Med 85:925.

Questionnaires are good for gathering data about abstract ideas or concepts that are otherwise difficult to quantify, such as opinions, attitudes and beliefs. In addition, questionnaires can be useful for collecting information about behaviors that are not directly observable (e.g. studying at home), assuming respondents are willing and able to report on those behaviors. Before creating a questionnaire, however, it is imperative to first decide if a survey is the best method to address the research question or construct of interest. A construct is the model, idea or theory that the researcher is attempting to assess. In medical education, many constructs of interest are not directly observable – student satisfaction with a new curriculum, patients’ ratings of their physical discomfort, etc. Because documenting these phenomena requires measuring people’s perceptions, questionnaires are often the most pragmatic approach to assessing these constructs.

In medical education, many constructs are well suited for assessment using questionnaires. However, because psychological, non-observable constructs such as teacher motivation, physician confidence and student satisfaction do not have a commonly agreed upon metric, they are difficult to measure with a single item on a questionnaire. In other words, for some constructs such as weight or distance, most everyone agrees upon the units and the approach to measurement, and so a single measurement may be adequate. However, for non-observable, psychological constructs, a survey scale is often required for more accurate measurement. Survey scales are groups of similar items on a questionnaire designed to assess the same underlying construct (DeVellis 2003). Although scales are more difficult to develop and take longer to complete, they offer researchers many advantages. In particular, scales more completely, precisely and consistently assess the underlying construct (McIver & Carmines 1981 ). Thus, scales are commonly used in many fields, including medical education, psychology and political science. As an example, consider a medical education researcher interested in assessing medical student satisfaction. One approach would be to simply ask one question about satisfaction (e.g. How satisfied were you with medical school?). A better approach, however, would be to ask a series of questions designed to capture the different facets of this satisfaction construct (e.g. How satisfied were you with the teaching facilities? How effective were your instructors? and How easy was the scheduling process?). Using this approach, a mean score of all the items within a particular scale can be calculated and used in the research study.
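To make the scale-scoring idea concrete, the satisfaction example above can be sketched in a few lines of Python. The three item names and the ratings below are invented for illustration; they are not taken from the Guide.

```python
# Hypothetical 3-item satisfaction scale, each item rated 1-5.
# Keys are items; each list holds one rating per respondent.
satisfaction_items = {
    "facilities": [4, 5, 3],
    "instructors": [5, 4, 4],
    "scheduling": [3, 4, 2],
}

def scale_scores(items):
    """Return the mean across all scale items for each respondent."""
    respondents = zip(*items.values())  # regroup ratings by respondent
    return [sum(r) / len(r) for r in respondents]

print(scale_scores(satisfaction_items))  # one mean scale score per respondent
```

Averaging across items, rather than using any single item, is what lets the scale capture the different facets of the satisfaction construct.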

Because of the benefits of assessing these types of psychological constructs through scales, the survey design process that we now turn to will focus particularly on the development of scales.

Step 1: Conduct a literature review

The first step to developing a questionnaire is to perform a literature review. There are two primary purposes for the literature review: (1) to clearly define the construct and (2) to determine if measures of the construct (or related constructs) already exist. A review of the literature helps to ensure the construct definition aligns with related theory and research in the field, while at the same time helping the researcher identify survey scales or items that could be used or adapted for the current purpose (Gehlbach et al. 2010 ).

Formulating a clear definition of the construct is an indispensable first step in any validity study (Cook & Beckman 2006 ). A good definition will clarify how the construct is positioned within the existing literature, how it relates to other constructs and how it is different from related constructs (Gehlbach & Brinkworth 2011 ). A well-formulated definition also helps to determine the level of abstraction at which to measure a given construct (the so-called “grain size”, as defined by Gehlbach & Brinkworth 2011 ). For example, to examine medical trainees’ confidence to perform essential clinical skills, one could develop scales to assess their confidence to auscultate the heart (at the small-grain end of the spectrum), to conduct a physical exam (at the medium-grain end of the spectrum) or to perform the clinical skills essential to a given medical specialty (at the large-grain end of the spectrum).

Although many medical education researchers prefer to develop their own surveys independently, it may be more efficient to adapt an existing questionnaire – particularly if the authors of the existing questionnaire have collected validity evidence in previous work – than it is to start from scratch. When this is the case, a request to the authors to adapt their questionnaire will usually suffice. It is important to note, however, that the term “previously validated survey” is a misnomer. The validity of the scores that emerge from a given questionnaire or survey scale is sensitive to the survey’s target population, the local context and the intended use of the scale scores, among other factors. Thus, survey developers collect reliability and validity evidence for their survey scales in a specified context, with a particular sample, and for a particular purpose.

As described in the Standards for Educational and Psychological Testing , validity refers to the degree to which evidence and theory support a measure’s intended use (AERA, APA, & NCME 1999 ). The process of validation is the most fundamental consideration in developing and evaluating a measurement tool, and the process involves the accumulation of evidence across time, settings and samples to build a scientifically sound validity argument. Thus, establishing validity is an ongoing process of gathering evidence (Kane 2006 ). Furthermore, it is important to acknowledge that reliability and validity are not properties of the survey instrument, per se , but of the survey’s scores and their interpretations (AERA, APA, & NCME 1999 ). For example, a survey of trainee satisfaction might be appropriate for assessing aspects of student well-being, but such a survey would be inappropriate for selecting the most knowledgeable medical students. In this example, the survey did not change, only the score interpretation changed (Cook & Beckman 2006 ).
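As one concrete illustration of gathering reliability evidence for a particular sample, the internal consistency of a survey scale is often summarized with Cronbach's alpha. The Guide does not prescribe this statistic; the sketch below, with an invented response matrix (rows = respondents, columns = scale items), shows one common way to compute it.

```python
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(rows[0])                                  # number of items
    item_var = sum(variance(col) for col in zip(*rows))
    total_var = variance([sum(r) for r in rows])      # total-score variance
    return (k / (k - 1)) * (1 - item_var / total_var)

# Made-up responses: 4 respondents answering a 3-item scale.
responses = [
    [4, 5, 4],
    [2, 3, 3],
    [5, 5, 4],
    [3, 3, 2],
]
print(round(cronbach_alpha(responses), 2))  # → 0.93
```

Note that, consistent with the point above, such a coefficient describes the scores from this sample and context, not a fixed property of the instrument itself.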

Many good reasons exist to use, or slightly adapt, an existing questionnaire. By way of analogy, we can compare this practice to a physician who needs to decide on the best medical treatment. The vast majority of clinicians do not perform their own comparative research trials to determine the best treatments to use for their patients. Rather, they rely on the published research, as it would obviously be impractical for clinicians to perform such studies to address every disease process. Similarly, medical educators cannot develop their own questionnaires for every research question or educational intervention. Just like clinical trials, questionnaire development requires time, knowledge, skill and a fair amount of resources to accomplish correctly. Thus, an existing, well-designed questionnaire can often permit medical educators to put their limited resources elsewhere.

Continuing with the clinical research analogy, when clinicians identify a research report that is relevant to their clinical question, they must decide if it applies to their patient. Typically, this includes determining if the relationships identified in the study are causal (internal validity) and if the results apply to the clinician’s patient population (external validity). In a similar way, questionnaires identified in a literature search must be reviewed critically for validity evidence and then analyzed to determine if the questionnaire could be applied to the educator’s target audience. If survey designers find scales that closely match their construct, context and proposed use, such scales might be useable with only minor modification. In some cases, the items themselves might not be well written, but the content of the items might be helpful in writing new items (Gehlbach & Brinkworth 2011 ). Making such determinations will be easier the more the survey designer knows about the construct (through the literature review) and the best practices in item writing (as described in Step 4).

Step 2: Conduct interviews and/or focus groups


Once the literature review has shown that it is necessary to develop a new questionnaire, and helped to define the construct, the next step is to ascertain whether the conceptualization of the construct matches how prospective respondents think about it (Gehlbach & Brinkworth 2011 ). In other words, do respondents include and exclude the same features of the construct as those described in the literature? What language do respondents use when describing the construct? To answer these questions and ensure the construct is defined from multiple perspectives, researchers will usually want to collect data directly from individuals who closely resemble their population of interest.

To illustrate this step, another clinical analogy might be helpful. Many clinicians have had the experience of spending considerable time developing a medically appropriate treatment regimen but have poor patient compliance with that treatment (e.g. too expensive). The clinician and patient then must develop a new plan that is acceptable to both. Had the patient’s perspective been considered earlier, the original plan would likely have been more effective. Many clinicians have also experienced difficulty treating a patient, only to have a peer reframe the problem, which subsequently results in a better approach to treatment. A construct is no different. To this point, the researcher developing the questionnaire, like the clinician treating the patient, has given a great deal of thought to defining the construct. However, the researcher unavoidably brings his/her perspectives and biases to this definition, and the language used in the literature may be technical and difficult to understand. Thus, other perspectives are needed. Most importantly, how does the target population (the patient from the previous example) conceptualize and understand the construct? Just like the patient example, these perspectives are sometimes critical to the success of the project. For example, in reviewing the literature on student satisfaction with medical school instruction, a researcher may find no mention of the instructional practice of providing students with video or audio recordings of lectures (as these practices are fairly new). However, in talking with students, the researcher may find that today’s students are accustomed to such practices and consider them when forming their opinions about medical school instruction.

In order to accomplish Step 2 of the design process, the survey designer will need input from prospective respondents. Interviews and/or focus groups provide a sensible way to get this input. Irrespective of the approach taken, this step should be guided by two main objectives. First, researchers need to hear how participants talk about the construct in their own words, with little to no prompting from the researcher. Following the collection of unprompted information from participants, the survey designers can then ask more focused questions to evaluate if respondents agree with the way the construct has been characterized in the literature. This procedure should be repeated until saturation is reached; this occurs when the researcher is no longer hearing new information about how potential respondents conceptualize the construct (Gehlbach & Brinkworth 2011 ). The end result of these interviews and/or focus groups should be a detailed description of how potential respondents conceptualize and understand the construct. These data will then be used in Steps 3 and 4.

Step 3: Synthesize the literature review and interviews/focus groups

At this point, the definition of the construct has been shaped by the medical educator developing the questionnaire, the literature and the target audience. Step 3 seeks to reconcile these definitions. Because the construct definition directs all subsequent steps (e.g. development of items), the survey designer must take care to perform this step properly.

One suitable way to conduct Step 3 is to develop a comprehensive list of indicators for the construct by merging the results of the literature review and interviews/focus groups (Gehlbach & Brinkworth 2011 ). When these data sources produce similar lists, the process is uncomplicated. When these data are similar conceptually, but the literature and potential respondents describe the construct using different terminology, it makes sense to use the vocabulary of the potential respondents. For example, when assessing teacher confidence (sometimes referred to as teacher self-efficacy), it is probably more appropriate to ask teachers about their “confidence in trying out new teaching techniques” than to ask them about their “efficaciousness in experimenting with novel pedagogies” (Gehlbach et al. 2010 ). Finally, if an indicator is included from one source but not the other, most questionnaire designers will want to keep the item, at least initially. In later steps, designers will have opportunities to determine, through expert reviews (Step 5) and cognitive interviews (Step 6), if these items are still appropriate to the construct. Whatever the technique used to consolidate the data from Steps 1 and 2, the final definition and list of indicators should be comprehensive, reflecting both the literature and the opinions of the target audience.

It is worth noting that scholars may have good reasons to settle on a final construct definition that differs from what is found in the literature. However, when this occurs, it should be clear exactly how and why the construct definition is different. For example, is the target audiences’ perception different from previous work? Does a new educational theory apply? Whatever the reason, this justification will be needed for publication of the questionnaire. Having an explicit definition of the construct, with an explanation of how it is different from other versions of the construct, will help peers and researchers alike decide how to best use the questionnaire both in comparison with previous studies and with the development of new areas of research.

Step 4: Develop items

The goal of this step is to write survey items that adequately represent the construct of interest in a language that respondents can easily understand. One important design consideration is the number of items needed to adequately assess the construct. There is no easy answer to this question. The ideal number of items depends on several factors, including the complexity of the construct and the level at which one intends to assess it (i.e. the grain size). In general, it is good practice to develop more items than will ultimately be needed in the final scale (e.g. developing 15 potential items in the hopes of ultimately creating an eight-item scale), because some items will likely be deleted or revised later in the design process (Gehlbach & Brinkworth 2011 ). Ultimately, deciding on the number of items is a matter of professional judgment, but for most narrowly defined constructs, scales containing from 6 to 10 items will usually suffice in reliably capturing the essence of the phenomenon in question.

The next challenge is to write a set of clear, unambiguous items using the vocabulary of the target population. Although some aspects of item-writing remain an art form, an increasingly robust science and an accumulation of best practices should guide this process. For example, writing questions rather than statements, avoiding negatively worded items and biased language, matching the item stem to the response anchors and using response anchors that emphasize the construct being measured rather than employing general agreement response anchors (Artino et al. 2011 ) are all well-documented best practices. Although some medical education researchers may see these principles as “common sense”, experience tells us that these best practices are often violated.

Reviewing all the guidelines for how best to write items, construct response anchors and visually design individual survey items and entire questionnaires is beyond the scope of this AMEE Guide. As noted above, however, there are many excellent resources on the topic (e.g. DeVellis 2003; Dillman et al. 2009; Fowler 2009). To assist readers in grasping some of the more important and frequently ignored best practices, Table 2 presents several item-writing pitfalls and offers solutions.

Table 2. Item-writing “best practices” based on scientific evidence from questionnaire design research.

Adapted with permission from Lippincott Williams and Wilkins/Wolters Kluwer Health: Artino et al. 2011 . AM last page: Avoiding five common pitfalls in survey design. Acad Med 86:1327.

Another important part of the questionnaire design process is selecting the response options that will be used for each item. Closed-ended survey items can have unordered (nominal) response options, which have no natural order, or ordered (ordinal) response options. Moreover, survey items can ask respondents to complete a ranking task (e.g. “rank the following items, where 1 = best and 6 = worst”) or a rating task that asks them to select an answer on a Likert-type response scale. Although it is outside the scope of this AMEE Guide to review all of the response options available, questionnaire designers are encouraged to tailor these options to the construct(s) they are attempting to assess (and to consult one of the many outstanding resources on the topic; e.g. Dillman et al. 2009; McCoach et al. 2013). To help readers understand some frequently ignored best practices, Table 2 and Figure 1 present several common mistakes designers commit when writing and formatting their response options. In addition, because Likert-type response scales are by far the most popular way of collecting survey responses – due, in large part, to their ease of use and adaptability for measuring many different constructs (McCoach et al. 2013) – Table 3 provides several examples of five- and seven-point response scales that can be used when developing Likert-scaled survey instruments.

Figure 1. Visual-design “best practices” based on scientific evidence from questionnaire design research.

Table 3. Examples of various Likert-type response options.

Once survey designers finish drafting their items and selecting their response anchors, there are various sources of evidence that might be used to evaluate the validity of the questionnaire and its intended use. These sources of validity have been described in the Standards for Educational and Psychological Testing as evidence based on the following: (1) content, (2) response process, (3) internal structure, (4) relationships with other variables and (5) consequences (AERA, APA & NCME 1999 ). The next three steps of the design process fit nicely into this taxonomy and are described below.

Step 5: Conduct expert validation

Once the construct has been defined and draft items have been written, an important step in the development of a new questionnaire is to begin collecting validity evidence based on the survey’s content (so-called content validity ) (AERA, APA & NCME 1999 ). This step involves collecting data from content experts to establish that individual survey items are relevant to the construct being measured and that key items or indicators have not been omitted (Polit & Beck 2004 ; Waltz et al. 2005 ). Using experts to systematically review the survey’s content can substantially improve the overall quality and representativeness of the scale items (Polit & Beck 2006 ).

Steps for establishing content validity for a new survey instrument can be found throughout the literature (e.g. McKenzie et al. 1999; Rubio et al. 2003). Below, we summarize several of the more important steps. First, before selecting a panel of experts to evaluate the content of a new questionnaire, specific criteria should be developed to determine who qualifies as an expert. These criteria are often based on experience or knowledge of the construct being measured, but, practically speaking, they also depend on the willingness and availability of the individuals being asked to participate (McKenzie et al. 1999). One useful approach to finding experts is to identify authors from the reference lists of the articles reviewed during the literature search. There is no consensus in the literature regarding the number of experts that should be used for content validation; however, many of the quantitative techniques used to analyze expert input will be affected by the number of experts employed. Rubio et al. (2003) recommend using 6–10 experts, while acknowledging that more experts (up to 20) may generate a clearer consensus about the construct being assessed, as well as the quality and relevance of the proposed scale items.

In general, the key domains to assess through an expert validation process are representativeness, clarity, relevance and distribution. Representativeness is defined as how completely the items (as a whole) encompass the construct, clarity is how clearly the items are worded and relevance refers to the extent each item actually relates to specific aspects of the construct. The distribution of an item is not always measured during expert validation as it refers to the more subtle aspect of how “difficult” it would be for a respondent to select a high score on a particular item. In other words, an average medical student may find it very difficult to endorse the self-confidence item, “How confident are you that you can get a 100% on your anatomy exam”, but that same student may find it easier to strongly endorse the item, “How confident are you that you can pass the anatomy exam”. In general, survey developers should attempt to have a range of items of varying difficulty (Tourangeau et al. 2000 ).

Once a panel of experts has been identified, a content validation form can be created that defines the construct and gives experts the opportunity to provide feedback on any or all of the aforementioned topics. Each survey designer’s priorities for a content validation may differ; as such, designers are encouraged to customize their content validation forms to reflect those priorities.

There are a variety of methods for analyzing the quantitative data collected on an expert validation form, but regardless of the method used, criteria for the acceptability of an item or scale should be determined in advance (Beck & Gable 2001). Common metrics used to make inclusion and exclusion decisions for individual items are the content validity ratio, the content validity index and the factorial validity index. For details on how to calculate and interpret these indices, see McKenzie et al. (1999) and Rubio et al. (2003). For a sample content validation form, see Gehlbach & Brinkworth (2011).
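To make the arithmetic behind the content validity index concrete, it can be sketched in a few lines of code. This is an illustrative example only, not drawn from the Guide: it assumes the common convention (described by Polit & Beck 2006) of having each expert rate an item's relevance on a 4-point scale and counting ratings of 3 or 4 as "relevant"; the function names and ratings are hypothetical.

```python
import numpy as np

def item_cvi(ratings):
    """I-CVI: proportion of experts rating the item 3 or 4
    on a 4-point relevance scale (1 = not relevant ... 4 = highly relevant)."""
    return float(np.mean(np.asarray(ratings) >= 3))

def scale_cvi_ave(ratings_matrix):
    """S-CVI/Ave: mean of the item-level CVIs across all items.
    ratings_matrix: rows = items, columns = experts."""
    return float(np.mean([item_cvi(row) for row in np.asarray(ratings_matrix)]))

# Six hypothetical experts rate three draft items for relevance
ratings = [
    [4, 4, 3, 4, 3, 4],   # item 1: all six experts rate it relevant -> I-CVI = 1.00
    [4, 3, 2, 4, 3, 3],   # item 2: five of six -> I-CVI = 0.83
    [2, 1, 3, 2, 2, 1],   # item 3: one of six -> I-CVI = 0.17, a candidate for deletion
]
for i, r in enumerate(ratings, 1):
    print(f"Item {i}: I-CVI = {item_cvi(r):.2f}")
print(f"S-CVI/Ave = {scale_cvi_ave(ratings):.2f}")
```

The cut-off used to retain or delete an item (often an I-CVI of 0.78 or higher with six or more experts) is exactly the kind of criterion that should be fixed before the data are collected.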

In addition to collecting quantitative data, questionnaire designers should give their experts an opportunity to provide free-text comments. This approach can be particularly effective for learning what indicators or aspects of the construct are not well-represented by the existing items. The data gathered from the free-text comments and subsequent qualitative analysis often reveal information not identified by the quantitative data and may lead to meaningful additions (or subtractions) to items and scales (McKenzie et al. 1999).

There are many ways to analyze the content validity of a new survey through the use of expert validation. The best approach examines the domains of greatest concern to the researchers (relevance, clarity, etc.) for each individual item and for each scale. Combining the quantitative data with qualitative input from experts improves the content validity of the new questionnaire or survey scale and, ultimately, the overall functioning of the survey instrument.

Step 6: Conduct cognitive interviews

After the experts have helped refine the scale items, it is important to collect evidence of response process validity to assess how prospective participants interpret your items and response anchors (AERA, APA & NCME 1999). One means of collecting such evidence is cognitive interviewing, also known as cognitive pre-testing (Willis 2005). Just as experts are used to determine the content validity of a new survey, it is equally important to determine how potential respondents interpret the items and whether their interpretation matches what the survey designer has in mind (Willis 2005; Karabenick et al. 2007). Results from cognitive interviews can be helpful in identifying mistakes respondents make in their interpretation of the item or response options (Napoles-Springer et al. 2006; Karabenick et al. 2007). As a qualitative technique, analysis does not rely on statistical tests of numeric data but rather on coding and interpretation of written notes from the interview. Thus, the sample sizes used for cognitive interviewing are normally small and may involve just 10–30 participants (Willis & Artino 2013). For small-scale medical education research projects, as few as five to six participants may suffice, as long as the survey designer is sensitive to the potential for bias in very small samples (Willis & Artino 2013).

Cognitive interviewing employs techniques from psychology and has traditionally assumed that respondents go through a series of cognitive processes when responding to a survey. These steps include comprehension of an item stem and answer choices, retrieval of appropriate information from long-term memory, judgment based on comprehension of the item and their memory and, finally, selection of a response (Tourangeau et al. 2000). Because respondents can have difficulty at any stage, a cognitive interview should be designed and scripted to address any and all of these potential problems. An important first step in the cognitive interview process is to create coding criteria that reflect the survey creator’s intended meaning for each item (Karabenick et al. 2007), which can then be used to help interpret the responses gathered during the cognitive interview.

The two major techniques for conducting a cognitive interview are the think-aloud technique and verbal probing. The think-aloud technique requires respondents to verbalize every thought that they have while answering each item. Here, the interviewer simply supports this activity by encouraging the respondent to keep talking and records what is said for later analysis (Willis & Artino 2013). This technique can provide valuable information, but it tends to be unnatural and difficult for most respondents, and it can result in reams of free-response data that the survey designer must then sift through.

A complementary procedure, verbal probing, is a more active form of data collection in which the interviewer administers a series of probe questions designed to elicit specific information (Willis & Artino 2013; see Table 4 for a list of commonly used verbal probes). Verbal probing is classically divided into concurrent and retrospective probing. In concurrent probing, the interviewer asks the respondent specific questions about their thought processes as the respondent answers each question. Although disruptive, concurrent probing has the advantage of allowing participants to respond to questions while their thoughts are recent. Retrospective probing, on the other hand, occurs after the participant has completed the entire survey (or a section of the survey) and is generally less disruptive than concurrent probing. The downside of retrospective probing is the risk of recall bias and hindsight effects (Drennan 2003). A modification of these two techniques, immediate retrospective probing, allows the interviewer to probe at natural break points in the survey rather than after every item (Watt et al. 2008). This approach has the potential benefit of reducing recall bias and hindsight effects while limiting interviewer interruptions and decreasing the artificiality of the process. In practice, many cognitive interviews will actually use a mixture of think-aloud and verbal probing techniques to better identify potential errors.

Table 4. Examples of commonly used verbal probes.

Adapted with permission from the Journal of Graduate Medical Education : Willis & Artino 2013 . What do our respondents think we’re asking? Using cognitive interviewing to improve medical education surveys. J Grad Med Educ 5:353–356.

Once a cognitive interview has been completed, there are several methods for analyzing the qualitative data obtained. One way to quantitatively analyze results from a cognitive interview is through coding. With this method, pre-determined codes are established for common respondent errors (e.g. respondent requests clarification), and the frequency of each type of error is tabulated for each item (Napoles-Springer et al. 2006 ). In addition, codes may be ranked according to the pre-determined severity of the error. Although the quantitative results of this analysis are often easily interpretable, this method may miss errors not readily predicted and may not fully explain why the error is occurring (Napoles-Springer et al. 2006 ). As such, a qualitative approach to the cognitive interview can also be employed through an interaction analysis. Typically, an interaction analysis attempts to describe and explain the ways in which people interpret and interact during a conversation, and this method can be applied during the administration of a cognitive interview to determine the meaning of responses (Napoles-Springer et al. 2006 ). Studies have demonstrated that the combination of coding and interaction analysis can be quite effective, providing more information about the “cognitive validity” of a new questionnaire (Napoles-Springer et al. 2006 ).
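The coding method described above amounts to tallying pre-determined error codes per item. The interview data and codes in the sketch below are entirely hypothetical, meant only to show the bookkeeping:

```python
from collections import Counter

# Hypothetical coded observations from five cognitive interviews:
# each record is (item number, pre-determined error code)
observations = [
    (1, "requested clarification"),
    (1, "requested clarification"),
    (2, "misread response anchor"),
    (3, "requested clarification"),
    (3, "answered different question"),
    (3, "answered different question"),
]

# Tabulate the frequency of each error type for each item
error_counts = Counter(observations)
for (item, code), n in sorted(error_counts.items()):
    print(f"Item {item}: {code} x{n}")
```

An item that repeatedly draws a severe code (e.g. "answered different question") is a prime candidate for rewording before pilot testing.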

The importance of respondents understanding each item in a similar fashion is inherently related to the overall reliability of the scores from any new questionnaire. In addition, the necessity for respondents to understand each item in the way it was intended by the survey creator is integrally related to the validity of the survey and the inferences that can be made with the resulting data. Taken together, these two factors are critically important to creating a high-quality questionnaire, and each factor can be addressed through the use of a well-designed cognitive interview. Ultimately, regardless of the methods used to conduct the cognitive interviews and analyze the data, the information gathered should be used to modify and improve the overall questionnaire and individual survey items.

Step 7: Conduct pilot testing

Despite the best efforts of medical education researchers during the aforementioned survey design process, some survey items may still be problematic (Gehlbach & Brinkworth 2011). Thus, the next step is to pilot test the questionnaire and continue collecting validity evidence. Two of the most common approaches are based on internal structure and relationships with other variables (AERA, APA & NCME 1999). During pilot testing, members of the target population complete the survey in the planned delivery mode (e.g. web-based or paper-based format). The data obtained from the pilot test are then reviewed to evaluate item range and variance, assess score reliability of the whole scale and review item and composite score correlations. During this step, survey designers should also review descriptive statistics (e.g. means and standard deviations) and histograms, which show the distribution of responses by item. This analysis can aid in identifying items that may not be functioning in the way the designer intended.
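This review step can be sketched with made-up pilot data: per-item means, standard deviations and response distributions are computed directly, and an item whose responses pile up at one end of the scale, like item 3 below, would be flagged for revision.

```python
import numpy as np

# Hypothetical pilot responses: 8 respondents x 3 items on a 1-5 Likert-type scale
responses = np.array([
    [4, 5, 2],
    [3, 4, 1],
    [5, 5, 1],
    [4, 4, 2],
    [2, 3, 1],
    [4, 5, 1],
    [3, 4, 2],
    [5, 5, 1],
])

for j in range(responses.shape[1]):
    col = responses[:, j]
    counts = np.bincount(col, minlength=6)[1:]  # frequency of each response 1..5
    print(f"Item {j+1}: mean={col.mean():.2f}, SD={col.std(ddof=1):.2f}, "
          f"distribution 1-5 = {counts.tolist()}")
# Item 3 shows little variance (responses cluster at the floor of the scale),
# so it contributes little information and may need rewording.
```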

To ascertain the internal structure of the questionnaire and to evaluate the extent to which items within a particular scale measure a single underlying construct (i.e. the scale’s uni-dimensionality), survey designers should consider using advanced statistical techniques such as factor analysis. Factor analysis is a statistical procedure designed to evaluate “the number of distinct constructs needed to account for the pattern of correlations among a set of measures” (Fabrigar & Wegener 2012, p. 3). To assess the dimensionality of a survey scale that has been deliberately constructed to assess a single construct (e.g. using the processes described in this Guide), we recommend using confirmatory factor analysis techniques; that said, other scholars have argued that exploratory factor analysis is more appropriate when analyzing new scales (McCoach et al. 2013). Regardless of the specific analysis employed, researchers should know that factor analysis techniques are often poorly understood and poorly implemented; fortunately, the literature is replete with many helpful guides (see, for example, Pett et al. 2003; McCoach et al. 2013).

Conducting a reliability analysis is another critical step in the pilot testing phase. The most common means of assessing scale reliability is by calculating a Cronbach’s alpha coefficient. Cronbach’s alpha is a measure of the internal consistency of the item scores (i.e. the extent to which the scores for the items on a scale correlate with one another). It is a function of the inter-item correlations and the total number of items on a particular scale. It is important to note that Cronbach’s alpha is not a good measure of a scale’s uni-dimensionality (measuring a single concept) as is often assumed (Schmitt 1996 ). Thus, in most cases, survey designers should first run a factor analysis, to assess the scale’s uni-dimensionality and then proceed with a reliability analysis, to assess the internal consistency of the item scores on the scale (Schmitt 1996 ). Because Cronbach’s alpha is sensitive to scale length, all other things being equal, a longer scale will generally have a higher Cronbach’s alpha. Of course, scale length and the associated increase in internal consistency reliability must be balanced with over-burdening respondents and the concomitant response errors that can occur when questionnaires become too long and respondents become fatigued. Finally, it is critical to recognize that reliability is a necessary but insufficient condition for validity (AERA, APA & NCME 1999 ). That is, to be considered valid, survey scores must first be reliable. However, scores that are reliable are not necessarily valid for a given purpose.
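The Cronbach's alpha computation itself is straightforward. A minimal sketch with hypothetical pilot data follows; the function and values are illustrative, not drawn from the Guide.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item's scores
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pilot data: 6 respondents x 4 items on a 1-5 scale
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")  # -> alpha = 0.93
```

Because these fictitious respondents answer the four items consistently, the inter-item correlations (and hence alpha) are high; real pilot data with six respondents would, of course, be far too small a sample to trust.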

Once a scale’s uni-dimensionality and internal consistency have been assessed, survey designers often create composite scores for each scale. Depending on the research question being addressed, these composite scores can then be used as independent or dependent variables. When attempting to assess hard-to-measure educational constructs such as motivation, confidence and satisfaction, it usually makes more sense to create a composite score for each survey scale than to use individual survey items as variables (Sullivan & Artino 2013). A composite score is simply a mean score (either weighted or unweighted) of all the items within a particular scale. Using mean scores has several distinct advantages over summing the items within a particular scale or subscale. First, mean scores are usually reported using the same response scale as the individual items; this approach facilitates more direct interpretation of the mean scores in terms of the response anchors. Second, the use of mean scores makes it clear how big (or small) measured differences really are when comparing individuals or groups. As Colliver et al. (2010) warned, “the sums of ratings reflect both the ratings and the number of items, which magnifies differences between scores and makes differences appear more important than they are” (p. 591).
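A two-line sketch with hypothetical responses on a 1–5 scale makes Colliver et al.'s point concrete: mean composites stay on the original response metric, while sum composites magnify the same difference by the number of items.

```python
import numpy as np

# Hypothetical responses from two trainees to a five-item, 1-5 Likert-type scale
a = np.array([4, 4, 5, 4, 4])
b = np.array([3, 4, 4, 3, 4])

print("mean composites:", a.mean(), b.mean())  # 4.2 vs 3.6: a 0.6-point difference
print("sum composites: ", a.sum(), b.sum())    # 21 vs 18: the same gap magnified to 3
```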

After composite scores have been created for each survey scale, the resulting variables can be examined to determine their relations to other variables that have been collected. The goal in this step is to determine if these associations are consistent with theory and previous research. So, for example, one might expect the composite scores from a scale designed to assess trainee confidence for suturing to be positively correlated with the number of successful suture procedures performed (since practice builds confidence) and negatively correlated with procedure-related anxiety (as more confident trainees also tend to be less anxious). In this way, survey designers are assessing the validity of the scales they have created in terms of their relationships to other variables (AERA, APA & NCME 1999 ). It is worth noting that in the aforementioned example, the survey designer is evaluating the correlations between the newly developed scale scores and both an objective measure (number of procedures) and a subjective measure (scores on an anxiety scale). Both of these are reasonable approaches to assessing a new scale’s relationships with other variables.
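The suturing example can be sketched with simulated data. The relationships below are built in by construction, purely to illustrate the direction-of-correlation check; all variable names and coefficients are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100

# Hypothetical pilot variables for n trainees
procedures = rng.poisson(20, size=n).astype(float)           # suturing procedures performed
confidence = 0.15 * procedures + rng.normal(0, 0.7, size=n)  # composite confidence score
anxiety = -0.5 * confidence + rng.normal(0, 0.6, size=n)     # composite anxiety score

r_conf_proc = np.corrcoef(confidence, procedures)[0, 1]
r_conf_anx = np.corrcoef(confidence, anxiety)[0, 1]
print(f"confidence vs. procedures: r = {r_conf_proc:.2f}")   # theory predicts: positive
print(f"confidence vs. anxiety:    r = {r_conf_anx:.2f}")    # theory predicts: negative
```

If the observed correlations had the wrong sign or were near zero, the designer would need to revisit either the scale or the theory linking the constructs.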

Concluding thoughts

In this AMEE Guide, we described a systematic, seven-step design process for developing survey scales. It should be noted that many important topics related to survey implementation and administration fall outside our focus on scale design and thus were not discussed in this guide. These topics include, but are not limited to, ethical approval for research questionnaires, administration format (paper vs. electronic), sampling techniques, obtaining high response rates, providing incentives and data management. These topics, and many more, are reviewed in detail elsewhere (e.g. Dillman et al. 2009 ). We also acknowledge that the survey design methodology presented here is not the only way to design and develop a high-quality questionnaire. In reading this Guide, however, we hope medical education researchers will come to appreciate the importance of following a systematic, evidence-based approach to questionnaire design. Doing so not only improves the questionnaires used in medical education but it also has the potential to positively impact the overall quality of medical education research, a large proportion of which employs questionnaires.

Closed-ended question – A survey question with a finite number of response categories from which the respondent can choose.

Cognitive interviewing (or cognitive pre-testing) – An evidence-based qualitative method specifically designed to investigate whether a survey question satisfies its intended purpose.

Concurrent probing – A verbal probing technique wherein the interviewer administers the probe question immediately after the respondent has read aloud and answered each survey item.

Construct – A hypothesized concept or characteristic (something “constructed”) that a survey or test is designed to measure. Historically, the term “construct” has been reserved for characteristics that are not directly observable. Recently, however, the term has been more broadly defined.

Content validity – Evidence obtained from an analysis of the relationship between a survey instrument’s content and the construct it is intended to measure.

Factor analysis – A set of statistical procedures designed to evaluate the number of distinct constructs needed to account for the pattern of correlations among a set of measures.

Open-ended question – A survey question that asks respondents to provide an answer in an open space (e.g. a number, a list or a longer, in-depth answer).

Reliability – The extent to which the scores produced by a particular measurement procedure or instrument (e.g. a survey) are consistent and reproducible. Reliability is a necessary but insufficient condition for validity.

Response anchors – The named points along a set of answer options (e.g. not at all important, slightly important, moderately important, quite important and extremely important ).

Response process validity – Evidence of validity obtained from an analysis of how respondents interpret the meaning of a survey scale’s specific survey items.

Retrospective probing – A verbal probing technique wherein the interviewer administers the probe questions after the respondent has completed the entire survey (or a portion of the survey).

Scale – Two or more items intended to measure a construct.

Think-aloud interviewing – A cognitive interviewing technique wherein survey respondents are asked to actively verbalize their thoughts as they attempt to answer the evaluated survey items.

Validity – The degree to which evidence and theory support the proposed interpretations of an instrument’s scores.

Validity argument – The process of accumulating evidence to provide a sound scientific basis for the proposed uses of an instrument’s scores.

Verbal probing – A cognitive interviewing technique wherein the interviewer administers a series of probe questions specifically designed to elicit detailed information beyond that normally provided by respondents.

Notes on contributors

ANTHONY R. ARTINO, Jr., PhD, is an Associate Professor of Preventive Medicine and Biometrics. He is the Principal Investigator on several funded research projects and co-directs the Long-Term Career Outcome Study (LTCOS) of Uniformed Services University (USU) trainees. His research focuses on understanding the role of academic motivation, emotion and self-regulation in a variety of settings. He earned his PhD in educational psychology from the University of Connecticut.

JEFFREY S. LA ROCHELLE, MD, MPH, is an Associate Program Director for the Internal Medicine residency at Walter Reed National Military Medical Center and is the Director of Integrated Clinical Skills at USU where he is an Associate Professor of Medicine. His research focuses on the application of theory-based educational methods and assessments and the development of observed structured clinical examinations (OSCE). He earned his MD and MPH from USU.

KENT J. DEZEE, MD, MPH, is the General Medicine Fellowship Director and an Associate Professor of Medicine at USU. His research focuses on understanding the predictors of medical student success in medical school, residency training and beyond. He earned his MD from The Ohio State University and his MPH from USU.

HUNTER GEHLBACH, PhD, is an Associate Professor at Harvard’s Graduate School of Education. He teaches a course on the construction of survey scales, and his research includes experimental work on how to design better scales as well as scale development projects to develop better measures of parents’ and students’ perceptions of schools. In addition, he has a substantive interest in bringing social psychological principles to bear on educational problems. He earned his PhD from Stanford’s Psychological Studies in Education program.

Declaration of interest : Several of the authors are military service members. Title 17 U.S.C. 105 provides that “Copyright protection under this title is not available for any work of the United States Government”. Title 17 U.S.C. 101 defines a United States Government work as a work prepared by a military service member or employee of the United States Government as part of that person’s official duties.

The views expressed in this article are those of the authors and do not necessarily reflect the official views of the Uniformed Services University of the Health Sciences, the U.S. Navy, the U.S. Army, the U.S. Air Force, or the Department of Defense.

Portions of this AMEE Guide were previously published in the Journal of Graduate Medical Education and Academic Medicine and are used with the express permission of the publishers (Gehlbach et al. 2010 ; Artino et al. 2011 ; Artino & Gehlbach 2012 ; Rickards et al. 2012 ; Magee et al. 2013; Willis & Artino 2013 ).

  • American Educational Research Association (AERA), American Psychological Association (APA) & National Council on Measurement in Education (NCME). Standards for educational and psychological testing. Washington, DC: American Educational Research Association; 1999.
  • Artino AR, Gehlbach H, Durning SJ. AM last page: Avoiding five common pitfalls of survey design. Acad Med. 2011;86:1327.
  • Artino AR, Gehlbach H. AM last page: Avoiding four visual-design pitfalls in survey development. Acad Med. 2012;87:1452.
  • Beck CT, Gable RK. Ensuring content validity: An illustration of the process. J Nurs Meas. 2001;9:201–215.
  • Christian LM, Parsons NL, Dillman DA. Designing scalar questions for web surveys. Sociol Method Res. 2009;37:393–425.
  • Colliver JA, Conlee MJ, Verhulst SJ, Dorsey JK. Reports of the decline of empathy during medical education are greatly exaggerated: A reexamination of the research. Acad Med. 2010;85:588–593.
  • Cook DA, Beckman TJ. Current concepts in validity and reliability for psychometric instruments: Theory and application. Am J Med. 2006;119:166.e7–166.e16.
  • DeVellis RF. Scale development: Theory and applications. 2nd ed. Newbury Park, CA: Sage; 2003.
  • Dillman D, Smyth J, Christian L. Internet, mail, and mixed-mode surveys: The tailored design method. 3rd ed. Hoboken, NJ: Wiley; 2009.
  • Drennan J. Cognitive interviewing: Verbal data in the design and pretesting of questionnaires. J Adv Nurs. 2003;42(1):57–63.
  • Fabrigar LR, Wegener DT. Exploratory factor analysis. New York: Oxford University Press; 2012.
  • Fowler FJ. Survey research methods. 4th ed. Thousand Oaks, CA: Sage; 2009.
  • Gehlbach H, Artino AR, Durning S. AM last page: Survey development guidance for medical education researchers. Acad Med. 2010;85:925.
  • Gehlbach H, Brinkworth ME. Measure twice, cut down error: A process for enhancing the validity of survey scales. Rev Gen Psychol. 2011;15:380–387.
  • Kane MT. Validation in educational measurement. 4th ed. Westport, CT: American Council on Education/Praeger; 2006.
  • Karabenick SA, Woolley ME, Friedel JM, Ammon BV, Blazevski J, Bonney CR, De Groot E, Gilbert MC, Musu L, Kempler TM, Kelly KL. Cognitive processing of self-report items in educational research: Do they think what we mean? Educ Psychol. 2007;42(3):139–151.
  • Krosnick JA. Survey research. Annu Rev Psychol. 1999;50:537–567.
  • Magee C, Byars L, Rickards G, Artino AR. Tracing the steps of survey design: A graduate medical education research example. J Grad Med Educ. 2013;5(1):1–5.
  • McCoach DB, Gable RK, Madura JP. Instrument development in the affective domain: School and corporate applications. 3rd ed. New York: Springer; 2013.
  • McIver JP, Carmines EG. Unidimensional scaling. Beverly Hills, CA: Sage; 1981.
  • McKenzie JF, Wood ML, Kotecki JE, Clark JK, Brey RA. Establishing content validity: Using qualitative and quantitative steps. Am J Health Behav. 1999;23(4):311–318.
  • Napoles-Springer AM, Olsson-Santoyo J, O’Brien H, Stewart AL. Using cognitive interviews to develop surveys in diverse populations. Med Care. 2006;44(11):s21–s30.
  • Pett MA, Lackey NR, Sullivan JJ. Making sense of factor analysis: The use of factor analysis for instrument development in health care research. Thousand Oaks, CA: Sage; 2003.
  • Polit DF, Beck CT. Nursing research: Principles and methods. 7th ed. Philadelphia: Lippincott, Williams, & Wilkins; 2004.
  • Polit DF, Beck CT. The content validity index: Are you sure you know what’s being reported? Critique and recommendations. Res Nurs Health. 2006;29:489–497.
  • Rickards G, Magee C, Artino AR. You can’t fix by analysis what you’ve spoiled by design: Developing survey instruments and collecting validity evidence. J Grad Med Educ. 2012;4(4):407–410.
  • Rubio DM, Berg-Weger M, Tebb SS, Lee ES, Rauch S. Objectifying content validity: Conducting a content validity study in social work research. Soc Work Res. 2003;27(2):94–104.
  • Schmitt N. Uses and abuses of coefficient alpha. Psychol Assess. 1996;8:350–353.
  • Schwarz N. Self-reports: How the questions shape the answers. Am Psychol. 1999;54:93–105.
  • Sullivan G. A primer on the validity of assessment instruments. J Grad Med Educ. 2011;3(2):119–120.
  • Sullivan GM, Artino AR. Analyzing and interpreting data from Likert-type scales. J Grad Med Educ. 2013;5(4):541–542.
  • Tourangeau R, Rips LJ, Rasinski KA. The psychology of survey response. New York: Cambridge University Press; 2000.
  • Waltz CF, Strickland OL, Lenz ER. Measurement in nursing and health research. 3rd. New York: Springer Publishing Co; 2005. [ Google Scholar ]
  • Watt T, Rasmussen AK, Groenvold M, Bjorner JB, Watt SH, Bonnema SJ, Hegedus L, Feldt-Rasmussen U. Improving a newly developed patient-reported outcome for thyroid patients, using cognitive interviewing. Quality of Life Research. 2008; 17 :1009–1017. [ PubMed ] [ Google Scholar ]
  • Weng LJ. 2004. Impact of the number of response categories and anchor labels on coefficient alpha and test-retest reliability. Educ Psychol Meas 64:956–972. [ Google Scholar ]
  • Willis GB, Artino AR. What do our respondents think we’re asking? Using cognitive interviewing to improve medical education surveys. J Grad Med Educ. 2013; 5 (3):353–356. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Willis GB. Cognitive interviewing: A tool for improving questionnaire design. Thousand Oaks, CA: Sage Publications; 2005. [ Google Scholar ]


Published on 10.4.2024 in Vol 26 (2024)

Effectiveness of a Web-Based Individual Coping and Alcohol Intervention Program for Children of Parents With Alcohol Use Problems: Randomized Controlled Trial

Authors of this article:


Original Paper

  • Håkan Wall 1 , PhD   ; 
  • Helena Hansson 2 , PhD   ; 
  • Ulla Zetterlind 3 , PhD   ; 
  • Pia Kvillemo 1 , PhD   ; 
  • Tobias H Elgán 1 , PhD  

1 Stockholm Prevents Alcohol and Drug Problems, Centre for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet, & Stockholm Health Care Services, Stockholm, Sweden

2 School of Social Work, Faculty of Social Sciences, Lund University, Lund, Sweden

3 Clinical Health Promotion Centre, Department of Health Sciences, Lund University, Lund, Sweden

Corresponding Author:

Tobias H Elgán, PhD

Stockholm Prevents Alcohol and Drug Problems, Centre for Psychiatry Research

Department of Clinical Neuroscience

Karolinska Institutet, & Stockholm Health Care Services

Norra Stationsgatan 69

Stockholm, 11364

Phone: 46 700011003

Email: [email protected]

Background: Children whose parents have alcohol use problems are at an increased risk of several negative consequences, such as poor school performance, an earlier onset of substance use, and poor mental health. Many would benefit from support programs, but the figures reveal that only a small proportion is reached by existing support. Digital interventions can provide readily accessible support and potentially reach a large number of children. Research on digital interventions aimed at this target group is scarce. We have developed a novel digital therapist-assisted self-management intervention targeting adolescents whose parents had alcohol use problems. This program aims to strengthen coping behaviors, improve mental health, and decrease alcohol consumption in adolescents.

Objective: This study aims to examine the effectiveness of a novel web-based therapist-assisted self-management intervention for adolescents whose parents have alcohol use problems.

Methods: Participants were recruited on the internet from social media and websites containing health-related information about adolescents. Possible participants were screened using the short version of the Children of Alcoholics Screening Test-6. Eligible participants were randomly allocated to either the intervention group (n=101) or the waitlist control group (n=103), and they were unblinded to the condition. The assessments, all self-assessed, consisted of a baseline and 2 follow-ups after 2 and 6 months. The primary outcome was the Coping With Parents Abuse Questionnaire (CPAQ), and secondary outcomes were the Center for Epidemiological Studies Depression Scale, Alcohol Use Disorders Identification Test (AUDIT-C), and Ladder of Life (LoL).

Results: For the primary outcome, CPAQ, a small but inconclusive treatment effect was observed (Cohen d =–0.05 at both follow-up time points). The intervention group scored 38% and 46% lower than the control group on the continuous part of the AUDIT-C at the 2- and 6-month follow-up, respectively. All other between-group comparisons were inconclusive at either follow-up time point. Adherence was low, as only 24% (24/101) of the participants in the intervention group completed the intervention.

Conclusions: The findings were inconclusive for the primary outcome but demonstrate that a digital therapist-assisted self-management intervention may contribute to a reduction in alcohol consumption. These results highlight the potential for digital interventions to reach a vulnerable, hard-to-reach group of adolescents but underscore the need to develop more engaging support interventions to increase adherence.

Trial Registration: ISRCTN Registry ISRCTN41545712; https://www.isrctn.com/ISRCTN41545712?q=ISRCTN41545712

International Registered Report Identifier (IRRID): RR2-10.1186/1471-2458-12-35

Introduction

Children who grow up with parents who have substance use problems or disorders face extraordinary challenges. Approximately 20% of all children have parents with alcohol problems [ 1 - 5 ], while approximately 5% have parents with alcohol use disorders [ 4 , 6 , 7 ]. Children growing up with parental substance abuse are at an increased risk of several negative outcomes, such as psychiatric morbidity [ 8 - 12 ]; poor intellectual, cognitive, and academic achievement [ 13 - 15 ]; domestic physical abuse [ 16 ]; and early drinking onset and the development of substance use problems [ 9 , 17 , 18 ]. Thus, children exposed to parental substance abuse comprise a target group for selective interventions and prevention strategies [ 19 - 22 ].

In Sweden, municipalities account for most of the support offered to these children. An annual survey by the junior association of the Swedish branch of Movendi International (ie, an international temperance movement) reported that 97% of all municipalities provided support resources [ 23 ]. However, estimates from the same survey showed that approximately 2% of the children in the target group received support. Hence, an overwhelming majority never receives support, mainly because of difficulties in identifying and attracting them to intervention programs [ 22 , 24 ].

The internet has become an appealing way to reach and support a large number of people [ 25 , 26 ]. Web-based interventions seem particularly attractive to adolescents, who generally use digital technology and social media. Furthermore, research has shown that adolescents regard the internet as inviting because it is a readily accessible, anonymous way of seeking help [ 27 ]. Web-based interventions can reduce the stigma associated with face-to-face consultations in health care settings [ 28 ], and young people appreciate the flexibility of completing web-based sessions to fit their own schedules [ 29 ]. Positive effects of web-based interventions have been detected across a broad range of conditions. A recent review by Hedman-Lagerlöf et al [ 30 ] concluded that therapist-supported internet-based cognitive behavioral therapy for adults yielded effects similar to those of face-to-face therapy. To date, most web-based interventions have been designed for adults. Although the number of web-based interventions targeting children or adolescents is increasing [ 25 , 31 - 33 ], digital interventions aimed at children of substance-abusing parents remain scarce [ 22 , 34 - 38 ]. Those described in the literature, however, are all quite extensive, running over several weeks, and a brief digital intervention could complement these more extended programs. For instance, our research group initiated a study on a web-based group chat for 15- to 25-year-old individuals who have parents with mental illness or substance use problems [ 35 ]. That program runs for 8 weeks and is a translated version of a program from the Netherlands [ 34 ] that has shown inconclusive treatment effects [ 39 ]. In Sweden, 2 other programs targeting significant others and their children have been tested, also with inconclusive treatment effects [ 37 , 38 ]. 
Finally, a digital intervention developed in Australia for 18- to 25-year-old individuals with parents with mental illness or substance use disorder [ 36 ] was tested in a pilot study demonstrating positive findings [ 40 ].

To meet the need for a brief, web-based intervention that targets adolescents having parents with alcohol problems and build on the evidence base of digital interventions targeting this vulnerable group, we developed a novel internet-delivered therapist-assisted self-management intervention called “Alcohol and Coping.” Our program originated from a manual-based face-to-face intervention called the “Individual Coping and Alcohol Intervention Program” (ICAIP) [ 41 , 42 ]. Previous studies on both the ICAIP, which targeted college students having parents with alcohol problems, and a coping skills intervention program, which targeted spouses of partners with alcohol dependency [ 43 ], have demonstrated positive effects regarding decreased alcohol consumption and improved mental health and coping behaviors [ 41 - 44 ]. Furthermore, the results from these studies underscore the importance of improving coping skills [ 42 , 44 ]. Among college students, those who received a combination of coping skills and an alcohol intervention program had better long-term outcomes [ 42 ].

The aim of this study was to test the effectiveness of Alcohol and Coping among a sample of adolescents aged 15-19 years with at least 1 parent with alcohol use problems. We hypothesized that the intervention group would be superior to the control group in improving coping skills. Secondary research questions concerned the participants’ improvement in (1) depression, (2) alcohol consumption, and (3) quality of life.

Methods

This study was a parallel-group randomized controlled trial in which participants were randomized to either the intervention or waitlist control group in a 1:1 allocation ratio. The trial design is illustrated in Figure 1 .

Figure 1. Trial design.

Recruitment and Screening

The participants were recruited from August 2012 to December 2013 through advertisements on social media (Facebook). The advertisements targeted individuals aged 15-19 years with Facebook accounts. Participants were also recruited through advertisements on websites containing health-related information for adolescents. The advertisements included the text, “Do your parents drink too much? Participate in a study.” The advertisements contained an invitation to complete a web-based, self-assessed screening procedure. In addition to questions about age and sex, participants were screened for having parents with alcohol problems using the short version of the Children of Alcoholics Screening Test-6 (CAST-6), developed from a 30-item original version [ 45 ]. The CAST-6 is a 6-item true-false measure designed to assess whether participants perceive their parents’ alcohol consumption to be problematic. The CAST-6 has demonstrated high internal consistency ( r =0.92-0.94), test-retest reliability ( r =0.94), and high validity as compared to the 30-item version ( r =0.93) using the recommended threshold score of 3 or higher [ 45 , 46 ]. We previously translated the CAST-6 into Swedish and validated the translated version among 1450 adolescents, showing good internal consistency (α=.88), excellent test-retest reliability (intraclass correlation coefficient=0.93), and loading into 1 latent factor [ 47 ]. Additional inclusion criteria were having access to a computer and the internet and being sufficiently fluent in Swedish. Participants were excluded from the study and were referred to appropriate care if there were indications of either suicidal or self-inflicted harmful behaviors. Individuals eligible for inclusion received further information about the study and were asked to provide consent to participate by providing an email address.
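As a rough sketch, the screening rules described above (a CAST-6 score of 3 or higher out of 6, an age of 15-19 years, and no indications of suicidal or self-harming behavior) can be expressed as follows. This is an illustrative reconstruction, not the study's actual screening code, and the risk flag is a simplification of the referral procedure:

```python
def cast6_score(answers):
    """Score the CAST-6: six true/false items, each 'true' scoring 1 point."""
    return sum(1 for answer in answers if answer)

def is_eligible(age, cast6_answers, risk_indication):
    """Illustrative eligibility check: aged 15-19, CAST-6 score of 3 or
    higher, and no indication of suicidal or self-harming behavior."""
    return 15 <= age <= 19 and cast6_score(cast6_answers) >= 3 and not risk_indication

# A 16-year-old answering 'true' to 4 of the 6 items, with no risk flags:
print(is_eligible(16, [True, True, False, True, True, False], False))  # True
```

Participants flagged for risk would instead be referred to appropriate care, as described above.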

Data Collection and Measures

All assessments were administered through email invitations containing a hyperlink to the web-based self-reported assessments. Up to 3 reminders were sent through email at 5, 10, and 15 days after the first invitation. A baseline assessment (t 0 ) was collected before randomization, and follow-up assessments were conducted at 2 and 6 months (t 1 and t 2 , respectively) after the initial assessment.

Participants were asked for age, sex, whether they lived with a parent (mother and father, mother or father, mother or father and stepparent, or alternate between mother and father), where their parents were born (Sweden or a Nordic country excluding Sweden or outside of the Nordic countries), parental status (employed, student, on parental leave, or unemployed), and any previous or present participation in support activities for children having parents with alcohol use problems. The primary outcome was coping, measured using the Coping With Parents Abuse Questionnaire (CPAQ) based on the Coping Behavior Scale developed by Orford et al [ 48 ]. Secondary outcomes were the Center for Epidemiological Studies Depression Scale (CES-DC) [ 49 ], the 3-question Alcohol Use Disorders Identification Test (AUDIT-C) [ 50 ], and the Ladder of Life (LoL), which measures the overall quality of life by asking about the participants’ past, present, and future ratings of their overall life satisfaction [ 50 ]. CPAQ has been shown to be reliable [ 41 , 42 ]. For this study, this scale was factor-analyzed to reduce the number of questions from 37 to 20. The resulting scale measures 6 coping typologies (discord, emotion, control, relationship, avoidance, and taking specific action) using a 4-point Likert scale, with a threshold score above 50 points (out of 80) indicating dysfunctional coping behavior. The CES-DC measures depressive symptoms during the past week using a 4-point Likert scale, where a higher total score indicates more depressive symptoms [ 49 ]. A cutoff score of ≥16 indicates symptoms of moderate depression, while a score of ≥30 indicates symptoms of severe depression [ 51 , 52 ]. The scale measures 4 dimensions of depression: depressed mood, tiredness, inability to concentrate, and feelings of being outside and lonely, and has positively stated items [ 52 ]. 
Additionally, this scale is a general measure of childhood psychopathology [ 53 ] and has been demonstrated to be reliable and valid among Swedish adolescents [ 52 ]. Alcohol consumption was measured using a modified AUDIT-C, which assesses the frequency of drinking, quantity consumed on a typical occasion, and frequency of heavy episodic drinking (ie, binge drinking) [ 50 ] using a 30-day perspective (as opposed to the original 12-month perspective). These questions have previously been translated into Swedish [ 54 ], and a score of ≥4 and ≥5 points for women and men, respectively, was used as a cutoff for risky drinking. This scale has been demonstrated to be reliable and valid for Swedish adolescents [ 55 ]. Furthermore, 2 questions were added concerning whether the participants had ever consumed alcohol to the point of intoxication and their age at the onset of drinking and intoxication. The original version of the LoL was designed for adults and asked the respondents to reflect on their past, present, and future life status from a 5-year perspective on a 10-point Visual Analogue Scale representing life status from “worst” to “best” possible life imaginable [ 56 ]. A modified version for children, using a time frame of 1 year, has been used previously in Sweden [ 57 ] and was used in this study.
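The thresholds reported in this section can be summarized in a short, illustrative sketch (these are the cutoffs as described above, not code from the study):

```python
def risky_drinking(audit_c_score, sex):
    """Sex-specific AUDIT-C cutoffs used in the study: >=4 (women), >=5 (men)."""
    cutoff = 4 if sex == "female" else 5
    return audit_c_score >= cutoff

def depression_category(ces_dc_score):
    """CES-DC cutoffs: >=30 severe, >=16 moderate, else below threshold."""
    if ces_dc_score >= 30:
        return "severe"
    if ces_dc_score >= 16:
        return "moderate"
    return "below threshold"

def dysfunctional_coping(cpaq_score):
    """CPAQ: a total above 50 (of 80 possible) indicates dysfunctional coping."""
    return cpaq_score > 50
```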

Randomization

After completing the baseline assessment, each participant was allocated to either the intervention or the control group. An external researcher generated an unrestricted random allocation sequence using random allocation software [ 58 ]. Neither the participants nor the researchers involved in the study were blinded to group allocation.

Based on the order in which participants were included in the study, they were allocated to 1 of the 2 study groups and informed of their allocation by email. Additionally, those who were randomized to the intervention group received a hyperlink to the Alcohol and Coping program, whereas the control group participants received information that they would gain access to Alcohol and Coping after the last follow-up assessment (ie, the waitlist control group). All participants were informed about other information and support available through web pages, notably drugsmart [ 59 ], which contains general information and facts about alcohol and drugs, in addition to more specific information about having substance-abusing parents. Telephone numbers and contact information for other organizations and primary health care facilities were also provided.

The Intervention

As noted previously, Alcohol and Coping is derived from the aforementioned manual-based face-to-face ICAIP intervention program [ 41 , 42 ]. The ICAIP consists of a combination of an alcohol intervention program, which is based on the short version of the Brief Alcohol Screening and Intervention for College Students program [ 60 ], and a coping intervention program developed for the purpose of the ICAIP [ 41 , 42 ]. Like the original ICAIP intervention, Alcohol and Coping builds on psychoeducational principles and includes components such as film-based lectures, various exercises, and both automated and therapist-assisted feedback. Briefly, once the participants logged into the Alcohol and Coping platform, they were introduced to the program, which followed the pattern of a board game ( Figure 2 ). Following the introduction, participants took part in 3 film-based lectures (between 8 and 15 minutes each, Figure 3 ) concerning alcohol problems within the family. The respective lectures included information about (1) dependency in general as well as the genetic and environmental risks for developing dependency, (2) family patterns and how the family adapts to the one having alcohol problems, and (3) attitudes toward alcohol and how they influence drinking and the physiological effects of alcohol. After completing the lectures, the participants were asked to answer 2 questions about their own alcohol consumption (ie, how often they drink and how often they drink to intoxication), followed by an automatic feedback message that depended on their answers. It was then suggested that the participants log out of the intervention for a 1- to 2-day break. The reason for this break was to give the participants a chance to digest all information and impressions. When they logged back into the intervention, they were asked to answer 20 questions about their coping strategies, which were also followed by automatic feedback. 
The automatic feedback was drawn from a library of prewritten messages, each tailored to the participants’ specific answers. The participants then watched a 5-minute film-based lecture on emotion- and problem-focused coping in relation to family alcohol problems ( Figure 3 ). This was followed by 4 exercises in which the participants read vignette-like stories from 4 fictional persons describing their everyday lives in relation to coping and alcohol problems in the family. The stories were introduced by film-based clips, each 1-2 minutes long. Participants were then asked to respond to each story by describing how the fictive person could have coped with their situation. As a final exercise, participants were asked to reflect on their own family situation and how they cope with it. The participants then had to take a break for a few days.

During the break, a therapist composed individual feedback that reflected on and confirmed the participant’s exercises and answers, and included suggestions on well-suited coping strategies. Additionally, the therapist encouraged the participants to talk to others in their surroundings, such as friends, teachers, or coaches, and seek further support elsewhere, such as from municipal social services, youth health care centers, or other organizations. Finally, the therapist reflected on the participants’ alcohol consumption patterns and reminded them of increased genetic and environmental risks. Those who revealed patterns of risky alcohol use were encouraged to look at 2 additional film-based lectures with more information about alcohol and intoxication (4 minutes) and alcohol use and dependency (5 minutes). Participants received this feedback once they logged back into the program, but they also had the opportunity to receive feedback through email. The total estimated effective time for completing the program was about 1 hour, but as described above, there was 1 required break when the individualized feedback was written. To track the dose each participant received, each of the 15 components in the program ( Figure 1 ) was counted as completing 6.7% (1/15) of the program.

Figures 2 and 3. Screenshots from the Alcohol and Coping program.

Sample Size

The trial was designed to detect a medium or large effect size corresponding to a standardized mean difference (Cohen d >0.5) [ 61 ]. An a priori calculation of the estimated sample size, using the software G*Power (G*Power Team) [ 62 ], revealed that a total of 128 participants (64 in each group) were required to enroll in the trial (power=0.80; α=.05; 2-tailed). However, to account for an estimated attrition rate of approximately 30% [ 34 ], it was necessary to enroll a minimum of 128/(1 – 0.3) = 183 participants in the trial. After a total of 204 individuals had been recruited and randomized into 2 study arms, recruitment was ended.
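The attrition adjustment described above is a simple inflation of the required sample size; as a sketch (the 128-participant requirement and 30% attrition rate are taken from the text):

```python
import math

def inflate_for_attrition(n_required, attrition_rate):
    """Inflate a required sample size to offset anticipated dropout."""
    return math.ceil(n_required / (1 - attrition_rate))

# 64 per group (Cohen d = 0.5, power = 0.80, alpha = .05) gives 128 in
# total; allowing for an estimated 30% attrition:
print(inflate_for_attrition(128, 0.30))  # 183
```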

Statistical Analysis

Data were analyzed according to the intention-to-treat (ITT) principle, and all randomized participants were included, irrespective of whether they participated in the trial. The 4 research variables were depression (CES-DC), coping (CPAQ), alcohol use (AUDIT-C), and life status (LoL).

Data analysis consisted of comparing outcome measurements at t 1 and t 2 . The baseline measurement t 0 value was added as an adjustment variable in all models. The resulting data from CPAQ, CES-DC, and LoL were normally distributed and analyzed using linear mixed models. The resulting AUDIT-C scores were nonnormally distributed, with an excess of 0 values, and were analyzed using a 2-part model for longitudinal data. This model is sufficiently flexible to account for numerous 0 reports. This was achieved by combining a logistic generalized linear mixed model (GLMM) for the 0 parts and a skewed continuous GLMM for the non-0 alcohol consumption parts. R-package brms (Bayesian regression models using Stan; R Foundation for Statistical Computing) [ 63 ], a higher-level interface for the probabilistic programming language Stan [ 64 ], and a custom brms family for a marginalized 2-part lognormal distribution were used to fit the model [ 65 ]. The logistic part of the model represents the subject-specific effects on the odds of reporting no drinking. The continuous part was modeled using a gamma GLMM with a log link. The exponentiated treatment effect represents the subject-specific ratio of the total AUDIT-C scores between the treatment and waitlist control groups for those who reported drinking during the specific follow-up period.
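The model itself was fit with brms in R, but interpreting the continuous part's treatment effect is simple arithmetic: with a log link, the exponentiated coefficient is the treatment-to-control ratio of expected scores among those who drank. A sketch of that conversion (the coefficient value below is illustrative, not an estimate from the study):

```python
import math

def percent_reduction(log_ratio):
    """Convert a log-link treatment coefficient into a percent reduction:
    exp(beta) is the treatment/control ratio of expected scores, so the
    reduction is (1 - exp(beta)) * 100."""
    return (1 - math.exp(log_ratio)) * 100

# A hypothetical coefficient of -0.478 corresponds to a ratio of about
# 0.62, ie, roughly a 38% reduction:
print(round(percent_reduction(-0.478)))  # 38
```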

Handling of Missing Data

GLMMs include all available data and provide unbiased ITT estimates under the assumption that data are missing at random, meaning that the missing data can be explained by existing data. However, it is impossible to determine whether the data are missing at random or whether the missing data are due to unobserved factors [ 66 ]. Therefore, we also assumed that data were not missing at random, and subsequent sensitivity analyses were performed [ 66 ]. We used the pattern mixture method, which assumes not missing at random, to compare those who completed the follow-up at 6 months (t 2 ) with those who did not (but completed the 2-month follow-up). The overall effect of this model is a combination of the effects of each subgroup. We also tested the robustness of the results by performing ANCOVAs at the 2-month follow-up, both using complete cases and with missing values imputed using multilevel multiple imputation.

The effect of the program was estimated using Cohen d , where a value of approximately 0.2 indicates a small effect size and values of approximately 0.5 and 0.8 indicate medium and large effect sizes, respectively [ 61 ].
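For two groups, Cohen d is commonly computed as the difference in means divided by a pooled standard deviation. A minimal sketch follows (one common formulation; the authors' exact computation may differ):

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference between treatment and control groups,
    using a pooled standard deviation."""
    pooled_var = ((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2) / (n_t + n_c - 2)
    return (mean_t - mean_c) / math.sqrt(pooled_var)

# Equal SDs and group sizes; a 2-point difference against SD 4:
print(cohens_d(10, 12, 4, 4, 50, 50))  # -0.5
```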

Ethical Considerations

All procedures were performed in accordance with the ethical standards of the institutional or national research committees, the 1964 Helsinki Declaration and its later amendments, and comparable ethical standards. Informed consent was obtained from all the participants included in the study. This study was approved by the Swedish Ethical Review Authority (formerly the Regional Ethical Review Board in Stockholm, No. 2011/1648-31/5).

To enhance the response rates, participants received a cinema gift certificate corresponding to approximately EUR 11 (US $12) as compensation for completing each assessment. If a participant completed all assessments, an additional gift certificate was provided. The participants could subsequently receive 4 cinema gift certificates totaling EUR 44 (US $48).

Results

The trial profile is depicted in Figure 1 and reveals that 2722 individuals who were aged between 15 and 19 years performed the screening procedure. A total of 1448 individuals did not fulfill the inclusion criteria and were excluded, leaving 1274 eligible participants. Another 1070 individuals were excluded because they did not provide informed consent or complete the baseline assessment, leaving 204 participants who were allocated to 1 of the 2 study groups. A total of 140 (69%) and 131 (64%) participants completed t 1 and t 2 assessments, respectively. Of the participants in the intervention group (n=101), 63% (n=64) registered an account on the Alcohol and Coping website, 35% (n=35) completed the alcohol intervention section, and 24% (n=24) completed both the alcohol and coping intervention sections.

Sample Characteristics

The mean age of the sample was 17.0 (SD 1.23) years, and the vast majority were female, with both parents born in Sweden and currently working ( Table 1 ). Approximately one-third of the participants reported living with both parents. The mean score on the CAST-6 was 5.33 (SD 0.87) out of a total of 6, and the majority of the sample (147/204, 72.1%) perceived their father to have alcohol problems. Approximately 12% (25/204) had never consumed alcohol, whereas approximately 70% (144/204) had consumed alcohol at a level of intoxication. The mean age at onset was 13.7 (SD 2.07) years and the age at first intoxication was 14.8 (SD 1.56) years. The proportion of participants with symptoms of at least moderate depression was 77.5% (158/204), of whom 55.1% (87/158) had symptoms of severe depression and 42.6% (87/204) had symptoms of dysfunctional coping behaviors. The percentage of participants who consumed alcohol at a risky level was 39.7% (81/204). Table 1 provides complete information regarding the study sample.

a Significance levels calculated by Pearson chi-square statistics for categorical variables and 2-tailed t tests for continuous variables.

Treatment Effects

For the primary outcome, coping behavior (CPAQ), we found a small but inconclusive treatment effect in favor of treatment at both 2 (t 1 ) and 6 (t 2 ) months (Cohen d =–0.05 at both t 1 and t 2 ). For the secondary outcome, alcohol use (AUDIT-C), we found a treatment effect in that the intervention group scored 38% less than the control group on the continuous part (ie, drinking when it occurred) at t 1 and 46% less at t 2 . Regarding depression (CES-DC) and life status (LoL), all between-group comparisons of treatment effects were inconclusive at both follow-up time points ( Table 2 ).

a CPAQ: Coping With Parents Abuse Questionnaire.

b CES-DC: Center for Epidemiological Studies Depression Scale.

c LoL: Ladder of Life.

d AUDIT-C: Alcohol Use Disorders Identification Test.

e N/A: not applicable.

Missing Data

In contrast to the ITT analyses, the sensitivity analyses showed that the treatment group, averaged over the levels of dropout, scored higher (ie, a negative effect) on the main outcome, coping behavior (CPAQ), at t 1 (2.44; P =.20). However, the results remain inconclusive.

Dose-Response Effects

We did not find any evidence for greater involvement in the program being linked to improved outcomes with regard to coping behavior.

Discussion

We did not find any support for the primary hypothesis: the intervention was not superior to the control condition with regard to coping behavior. Inconclusive results with small effect sizes were observed at both follow-up time points. However, for the secondary outcomes, we found that those in the intervention group who drank alcohol drank approximately 40%-50% less than those in the control group at both follow-ups. These results corroborate previous findings on the precursor face-to-face ICAIP intervention program, demonstrating that participants who received a combined alcohol and coping intervention reported superior alcohol-related outcomes compared to participants in the other 2 study arms, who received only a coping or an alcohol intervention [ 41 , 42 ]. In contrast to this study, Hansson et al [ 42 ] found that all groups improved their coping skills, although the between-group comparisons were inconclusive; the improvements were maintained over time. These differences could be explained by the different settings in which the precursor program was provided (ie, face-to-face to young adults in a university setting), whereas this study targeted young people (15-19 years of age) through a web-based digital intervention. Additionally, the poor adherence in this study may explain the absence of primary results favoring the intervention group. In a recent study, parents without alcohol problems were recruited to participate in a randomized trial evaluating the web-based SPARE (Supportive Parenting and Reinforcement) program to improve children’s mental health and reduce coparents’ alcohol use. In line with our study, the authors did not find the primary outcome of the SPARE program to be superior to that of the active control group (which received written psychoeducation); however, both groups reported decreased coparental alcohol consumption [ 38 ].

Considering that approximately 3600 children participated in various forms of support provided by Swedish municipalities in 2022 [ 23 ], our recruitment activities reached a large number of eligible individuals, pointing to the potential of reaching these children through these platforms. Levels of depression among the participants in this study were unexpectedly high. Although the intervention did not target depressive symptoms per se, there was a trend for the intervention group to show decreased depression levels compared to the control group. A large proportion of participants had symptoms of severe depression, which may have limited their capacity for improvement at follow-up [ 28 , 67 ]. Targeting dysfunctional coping patterns could affect an individual’s perceived mental health: studies have shown that healthy coping strategies positively affect depression and anxiety [ 68 ], whereas dysfunctional coping strategies, such as negative self-talk and alcohol consumption, can lead to depressive symptoms [ 69 ]. Targeting these symptoms in the context of healthy and unhealthy coping strategies may be a viable route to fostering coping strategies that work in the long run. Given that the young people reached by the intervention in this study displayed high levels of depression, future interventions for this group should include programs targeting depressive symptoms.

Almost 37% (37/101) of the intervention group did not log into the intervention at all, and only 24% (24/101) completed all parts of the program. The fact that a high proportion of the participants had symptoms of severe depression could explain the low adherence. Another reason could be that the initial film-based lectures, which ranged from 8 to 15 minutes, were too long to maintain the participants’ attention. A final reason could be that we had a 1- to 2-day break built into the intervention, and for unknown reasons, some participants did not log back in. However, we did not find a dose-response relationship indicating favorable outcomes for those who completed more of the program content. High levels of attrition are not uncommon in self-directed programs such as this one; for example, in a study of a smoking cessation intervention, 37% of the participants never logged into the platform [ 70 ], and in a self-directed intervention for problem gamblers, a majority dropped out after 1 week and none completed the entire program [ 71 ]. Increasing intervention adherence is a priority when developing new digital interventions, particularly for young people. One method is to use more persuasive technologies, such as primary tasks, dialogue, and social support [ 72 ]. Concerning children whose parents have mental disorders, Grové and Reupert [ 73 ] suggested that digital interventions should include components such as information about parental mental illness, access to health care, genetic risk, and suggestions for how children might initiate conversations with the parent who has the illness. These suggestions should be considered in future studies of interventions for youths whose parents have substance use problems.
Representatives of the target group and other relevant stakeholders should also be involved in coproducing new interventions to increase the probability of developing more engaging programs [ 74 , 75 ]. Moreover, one cannot expect participants to return to the program more than once; for the sake of adherence, briefer interventions should not encourage participants to log out for a break. To keep adherence at an acceptable level, similar future interventions for this target group should also consider excluding individuals with symptoms of severe depression [ 28 , 67 ] and referring them to appropriate health care. Other factors identified as improving adherence to digital interventions are making the content relatable, useful, and more interactive [ 76 ]. Finally, it would probably be beneficial to develop psychoeducative film-based lectures shorter than ours, which lasted up to 15 minutes. Future self-directed digital interventions targeting this population should therefore be very brief and focused, with a theory-based potential to foster healthy coping behaviors that can lead to increased quality of life and improved mental health for this group of young people.

Another consideration for future projects would be to use a data-driven approach during the program development phase, where A/B testing can be used to compare different setups of the program and identify which works best. Another aspect that must be considered is the fast-changing world of technology, in which young people are exposed to a vast number of apps competing for their attention, which also calls for interventions to be short and to the point. Furthermore, if the program is to spread and become generally available, one must consider that keeping it alive for a longer period will require funding and staffing for both product management and technical support.
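As a minimal sketch of the A/B testing idea, completion rates from two hypothetical program setups could be compared with a two-proportion z-test; all counts below are invented for illustration, and only the Python standard library is used:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two completion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error of the difference
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))            # two-sided p value
    return z, p_value

# Hypothetical pilot: setup A (long lectures) vs setup B (short lectures)
z, p = two_proportion_ztest(success_a=30, n_a=100, success_b=45, n_b=100)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A p value below the chosen significance level would favor rolling out the setup with the higher completion rate; with realistic trial sizes, sequential or Bayesian monitoring may be preferable to a single fixed-sample test.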

Strengths and Limitations

This study had several strengths. First, Alcohol and Coping is a web-based intervention program, and the internet appears to be a particularly promising way to provide support to adolescents growing up with parents with alcohol problems because it offers an anonymous means of communication and makes intervention programs readily accessible [ 25 ]. Our recruitment strategies reached a considerable number of interested and eligible individuals, demonstrating the potential for recruiting through social media and other web platforms. Additionally, this program is one of the first brief web-based interventions aimed at adolescents whose parents have alcohol-related problems. We used the CAST-6, which has been validated among Swedish adolescents [ 47 ], to screen eligible participants. Another strength is that the intervention program involved personalized, tailored feedback in the form of prewritten automatic messages and therapist-written personalized feedback, both of which have proven to be important components of web-based interventions aimed at adolescents [ 77 , 78 ]. Finally, this study evaluated the effectiveness of the Alcohol and Coping program using a randomized controlled trial design, which is considered the strongest experimental design with regard to allocation bias.

This study had some limitations. First, the design, with a passive waitlist control group and an active intervention group, both unblinded to study allocation, may have resulted in biased estimates of treatment effects. Intervention adherence was low, and most of the study participants had symptoms of depression; 55% (87/158) had symptoms of severe depression. This may have contributed to the small and overall inconclusive effects on the primary outcomes. Many digital interventions have problems with low adherence, and in a review by Välimäki et al [ 79 ], some studies reported adherence rates as low as 10%. The vast majority of the study participants were women, making the findings difficult to generalize to men. Another limitation concerns selection bias and external validity. We recruited study participants through social media and other relevant websites containing health-related information, including information about parents with alcohol-related problems. It is, therefore, possible that the study population can be classified as “information-seeking” adolescents, who may have different personality traits relative to other adolescents in the same home situation. Additionally, as an inclusion criterion was ready access to computers and the internet, participants from lower socioeconomic backgrounds may have been underrepresented. It should also be noted that the data presented here were collected approximately 10 years ago. However, we believe our findings make an important contribution to the field because, like our intervention, many recent web-based interventions use strategies of psychoeducation, films, exercises, questions, and feedback; moreover, the number of web-based interventions for this target group remains scarce in the literature, which underscores the need for future research. Finally, the study was powered to detect a medium effect size; given the small effect sizes detected, it is plausible that too few participants were recruited to detect differences between the groups.
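The power issue can be made concrete with the standard normal-approximation formula for a two-group comparison, n per group ≈ 2(z₁₋α/₂ + z₁₋β)²/d². The sketch below is a back-of-the-envelope illustration using Cohen's conventional benchmarks for medium (d = 0.5) and small (d = 0.2) effects, not the exact calculation performed for this study:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sided two-sample comparison,
    using the normal approximation: n = 2 * ((z_alpha + z_beta) / d) ** 2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = z.inv_cdf(power)           # quantile corresponding to desired power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

print(n_per_group(0.5))  # medium effect: 63 per group
print(n_per_group(0.2))  # small effect: 393 per group
```

The jump from roughly 63 to roughly 393 participants per group illustrates why a trial powered for a medium effect can miss the small effects actually observed.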

Implications for Practice

Although growing up with parents who have alcohol problems is not per se sufficient for developing psychosocial disorders, many children need support to manage their situation. However, it is difficult to recruit children to such support groups: in Sweden, fewer than 2% of all children growing up with parental alcohol problems attend face-to-face support groups provided by municipalities.

Offering support through web-based intervention programs seems particularly attractive to adolescents whose parents have alcohol-related problems. To date, evidence for such programs is scarce, and there is an urgent need to develop and evaluate digital interventions targeting this group of adolescents. This study makes important contributions to this novel field of research. The results provide insight into effective strategies for delivering intervention programs to children of parents with substance use problems, highlighting the potential for digital interventions to reach a vulnerable, hard-to-reach group of adolescents. Our findings underscore the need to develop more engaging interventions in coproduction with the target group.

Conclusions

We found that a digital therapist-assisted self-management intervention for adolescents whose parents have alcohol use problems contributed to a reduction in the adolescents’ own alcohol consumption. This result highlights the potential for digital interventions to reach a large, vulnerable, and hard-to-reach group of adolescents with support efforts. Findings were inconclusive for all other outcomes, which may be attributable to low adherence. This points to the need for future research on developing more engaging digital interventions to increase adherence among adolescents.

Acknowledgments

This work was undertaken on behalf of the Swedish Council for Information on Alcohol and Other Drugs (CAN) and was supported by grants from the Swedish National Institute of Public Health and the Swedish Council for Working Life and Social Research.

Conflicts of Interest

HH and UZ developed the study interventions but did not derive direct financial income from them. HW, PK, and THE declare no conflicts of interest.

CONSORT-eHEALTH checklist (V 1.6.1).

  • Haugland SH, Elgán TH. Prevalence of parental alcohol problems among a general population sample of 28,047 Norwegian adults: evidence for a socioeconomic gradient. Int J Environ Res Public Health. 2021;18(10):5412. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Elgán TH, Leifman H. Prevalence of adolescents who perceive their parents to have alcohol problems: a Swedish national survey using a web panel. Scand J Public Health. 2013;41(7):680-683. [ CrossRef ] [ Medline ]
  • Laslett AM, Ferris J, Dietze P, Room R. Social demography of alcohol-related harm to children in Australia. Addiction. 2012;107(6):1082-1089. [ CrossRef ] [ Medline ]
  • Manning V, Best DW, Faulkner N, Titherington E. New estimates of the number of children living with substance misusing parents: results from UK national household surveys. BMC Public Health. 2009;9:377. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Grant BF. Estimates of US children exposed to alcohol abuse and dependence in the family. Am J Public Health. 2000;90(1):112-115. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Raninen J, Elgán TH, Sundin E, Ramstedt M. Prevalence of children whose parents have a substance use disorder: findings from a Swedish general population survey. Scand J Public Health. 2016;44(1):14-17. [ CrossRef ] [ Medline ]
  • Christoffersen MN, Soothill K. The long-term consequences of parental alcohol abuse: a cohort study of children in Denmark. J Subst Abuse Treat. 2003;25(2):107-116. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Martikainen PN, Korhonen K, Moustgaard H, Aaltonen M, Remes H. Substance abuse in parents and subsequent risk of offspring psychiatric morbidity in late adolescence and early adulthood: a longitudinal analysis of siblings and their parents. Soc Sci Med. 2018;217:106-111. [ CrossRef ] [ Medline ]
  • Jääskeläinen M, Holmila M, Notkola IL, Raitasalo K. Mental disorders and harmful substance use in children of substance abusing parents: a longitudinal register-based study on a complete birth cohort born in 1991. Drug Alcohol Rev. 2016;35(6):728-740. [ CrossRef ] [ Medline ]
  • Velleman R, Templeton LJ. Impact of parents' substance misuse on children: an update. BJPsych Adv. Apr 11, 2018;22(2):108-117. [ FREE Full text ] [ CrossRef ]
  • Ohannessian CM, Hesselbrock VM, Kramer J, Bucholz KK, Schuckit MA, Kuperman S, et al. Parental substance use consequences and adolescent psychopathology. J Stud Alcohol. 2004;65(6):725-730. [ CrossRef ] [ Medline ]
  • Johnson JL, Leff M. Children of substance abusers: overview of research findings. Pediatrics. 1999;103(5 Pt 2):1085-1099. [ CrossRef ] [ Medline ]
  • Berg L, Bäck K, Vinnerljung B, Hjern A. Parental alcohol-related disorders and school performance in 16-year-olds-a Swedish national cohort study. Addiction. 2016;111(10):1795-1803. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Casas-Gil MJ, Navarro-Guzman JI. School characteristics among children of alcoholic parents. Psychol Rep. 2002;90(1):341-348. [ CrossRef ] [ Medline ]
  • McGrath CE, Watson AL, Chassin L. Academic achievement in adolescent children of alcoholics. J Stud Alcohol. 1999;60(1):18-26. [ CrossRef ] [ Medline ]
  • Velleman R, Templeton L, Reuber D, Klein M, Moesgen D. Domestic abuse experienced by young people living in families with alcohol problems: results from a cross‐european study. Child Abuse Rev. Nov 24, 2008;17(6):387-409. [ CrossRef ]
  • Rothman EF, Edwards EM, Heeren T, Hingson RW. Adverse childhood experiences predict earlier age of drinking onset: results from a representative US sample of current or former drinkers. Pediatrics. 2008;122(2):e298-e304. [ CrossRef ] [ Medline ]
  • Anda RF, Whitfield CL, Felitti VJ, Chapman D, Edwards VJ, Dube SR, et al. Adverse childhood experiences, alcoholic parents, and later risk of alcoholism and depression. Psychiatr Serv. 2002;53(8):1001-1009. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Straussner SLA, Fewell CH. A review of recent literature on the impact of parental substance use disorders on children and the provision of effective services. Curr Opin Psychiatry. 2018;31(4):363-367. [ CrossRef ] [ Medline ]
  • Calhoun S, Conner E, Miller M, Messina N. Improving the outcomes of children affected by parental substance abuse: a review of randomized controlled trials. Subst Abuse Rehabil. 2015;6:15-24. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Emshoff JG, Price AW. Prevention and intervention strategies with children of alcoholics. Pediatrics. 1999;103(5 Pt 2):1112-1121. [ CrossRef ] [ Medline ]
  • Cuijpers P. Prevention programmes for children of problem drinkers: a review. Drugs Educ Prev Policy. 2009;12(6):465-475. [ CrossRef ]
  • Wannberg H. Plats för barnen—Om kommunernas stöd till barn som växer upp med missbrukande föräldrar [Make room for the children—municipalities and their support for children who grow up with parents with substance abuse]. Stockholm, Sweden. Junis, IOGT-NTO's ungdomsförbund; 2023.
  • Elgán TH, Leifman H. Children of substance abusing parents: a national survey on policy and practice in Swedish schools. Health Policy. 2011;101(1):29-36. [ CrossRef ] [ Medline ]
  • de Sousa D, Fogel A, Azevedo J, Padrão P. The effectiveness of web-based interventions to promote health behaviour change in adolescents: a systematic review. Nutrients. 2022;14(6):1258. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Andersson G. Internet-delivered psychological treatments. Annu Rev Clin Psychol. 2016;12:157-179. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • King R, Bambling M, Lloyd C, Gomurra R, Smith S, Reid W, et al. Online counselling: the motives and experiences of young people who choose the internet instead of face to face or telephone counselling. Couns Psychother Res. 2007;6(3):169-174. [ CrossRef ]
  • Borghouts J, Eikey E, Mark G, De Leon C, Schueller SM, Schneider M, et al. Barriers to and facilitators of user engagement with digital mental health interventions: systematic review. J Med Internet Res. 2021;23(3):e24387. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Boggs JM, Beck A, Felder JN, Dimidjian S, Metcalf CA, Segal ZV. Web-based intervention in mindfulness meditation for reducing residual depressive symptoms and relapse prophylaxis: a qualitative study. J Med Internet Res. 2014;16(3):e87. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hedman-Lagerlöf E, Carlbring P, Svärdman F, Riper H, Cuijpers P, Andersson G. Therapist-supported internet-based cognitive behaviour therapy yields similar effects as face-to-face therapy for psychiatric and somatic disorders: an updated systematic review and meta-analysis. World Psychiatry. 2023;22(2):305-314. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Lehtimaki S, Martic J, Wahl B, Foster KT, Schwalbe N. Evidence on digital mental health interventions for adolescents and young people: systematic overview. JMIR Ment Health. 2021;8(4):e25847. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Li J, Theng YL, Foo S. Game-based digital interventions for depression therapy: a systematic review and meta-analysis. Cyberpsychol Behav Soc Netw. 2014;17(8):519-527. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Pennant ME, Loucas CE, Whittington C, Creswell C, Fonagy P, Fuggle P, et al. Computerised therapies for anxiety and depression in children and young people: a systematic review and meta-analysis. Behav Res Ther. 2015;67:1-18. [ CrossRef ] [ Medline ]
  • Woolderink M, Smit F, van der Zanden R, Beecham J, Knapp M, Paulus A, et al. Design of an internet-based health economic evaluation of a preventive group-intervention for children of parents with mental illness or substance use disorders. BMC Public Health. 2010;10:470. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Elgán TH, Kartengren N, Strandberg AK, Ingemarson M, Hansson H, Zetterlind U, et al. A web-based group course intervention for 15-25-year-olds whose parents have substance use problems or mental illness: study protocol for a randomized controlled trial. BMC Public Health. 2016;16(1):1011. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Maybery D, Reupert A, Bartholomew C, Cuff R, Duncan Z, Foster K, et al. A web-based intervention for young adults whose parents have a mental illness or substance use concern: protocol for a randomized controlled trial. JMIR Res Protoc. 2020;9(6):e15626. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Eék N, Romberg K, Siljeholm O, Johansson M, Andreasson S, Lundgren T, et al. Efficacy of an internet-based community reinforcement and family training program to increase treatment engagement for AUD and to improve psychiatric health for CSOs: a randomized controlled trial. Alcohol Alcohol. 2020;55(2):187-195. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Siljeholm O, Lindner P, Johansson M, Hammarberg A. An online self-directed program combining Community Reinforcement Approach and Family Training and parenting training for concerned significant others sharing a child with a person with problematic alcohol consumption: a randomized controlled trial. Addict Sci Clin Pract. 2022;17(1):49. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Woolderink M. Mind the gap: evaluation of an online preventive programme for adolescents with mentally ill or addicted parents. Maastricht University. 2016. URL: https://cris.maastrichtuniversity.nl/ws/portalfiles/portal/7281149/c5531.pdf [accessed 2024-03-14]
  • Maybery D, Reupert A, Bartholomew C, Cuff R, Duncan Z, McAuliffe C, et al. An online intervention for 18-25-year-old youth whose parents have a mental illness and/or substance use disorder: a pilot randomized controlled trial. Early Interv Psychiatry. 2022;16(11):1249-1258. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hansson H, Rundberg J, Zetterlind U, Johnsson KO, Berglund M. An intervention program for university students who have parents with alcohol problems: a randomized controlled trial. Alcohol Alcohol. 2006;41(6):655-663. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hansson H, Rundberg J, Zetterlind U, Johnsson KO, Berglund M. Two-year outcome of an intervention program for university students who have parents with alcohol problems: a randomized controlled trial. Alcohol Clin Exp Res. 2007;31(11):1927-1933. [ CrossRef ] [ Medline ]
  • Zetterlind U, Hansson H, Aberg-Orbeck K, Berglund M. Effects of coping skills training, group support, and information for spouses of alcoholics: a controlled randomized study. Nord J Psychiatry. 2001;55(4):257-262. [ CrossRef ] [ Medline ]
  • Hansson H, Zetterlind U, Aberg-Orbeck K, Berglund M. Two-year outcome of coping skills training, group support and information for spouses of alcoholics: a randomized controlled trial. Alcohol Alcohol. 2004;39(2):135-140. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hodgins DC, Maticka-Tyndale E, El-Guebaly N, West M. The CAST-6: development of a short-form of the Children of Alcoholics Screening Test. Addict Behav. 1993;18(3):337-345. [ CrossRef ] [ Medline ]
  • Hodgins DC, Shimp L. Identifying adult children of alcoholics: methodological review and a comparison of the CAST-6 with other methods. Addiction. 1995;90(2):255-267. [ Medline ]
  • Elgán TH, Berman AH, Jayaram-Lindström N, Hammarberg A, Jalling C, Källmén H. Psychometric properties of the short version of the children of alcoholics screening test (CAST-6) among Swedish adolescents. Nord J Psychiatry. 2021;75(2):155-158. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Orford J, Guthrie S, Nicholls P, Oppenheimer E, Egert S, Hensman C. Self-reported coping behavior of wives of alcoholics and its association with drinking outcome. J Stud Alcohol. 1975;36(9):1254-1267. [ CrossRef ] [ Medline ]
  • Schoenbach VJ, Kaplan BH, Grimson RC, Wagner EH. Use of a symptom scale to study the prevalence of a depressive syndrome in young adolescents. Am J Epidemiol. 1982;116(5):791-800. [ CrossRef ] [ Medline ]
  • Bush K, Kivlahan DR, McDonell MB, Fihn SD, Bradley KA. The AUDIT alcohol consumption questions (AUDIT-C): an effective brief screening test for problem drinking. Ambulatory Care Quality Improvement Project (ACQUIP). Alcohol Use Disorders Identification Test. Arch Intern Med. 1998;158(16):1789-1795. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Myers JK, Weissman MM. Use of a self-report symptom scale to detect depression in a community sample. Am J Psychiatry. 1980;137(9):1081-1084. [ CrossRef ] [ Medline ]
  • Olsson G, von Knorring AL. Depression among Swedish adolescents measured by the self-rating scale Center for Epidemiology Studies-Depression Child (CES-DC). Eur Child Adolesc Psychiatry. 1997;6(2):81-87. [ CrossRef ] [ Medline ]
  • Fendrich M, Weissman MM, Warner V. Screening for depressive disorder in children and adolescents: validating the Center for Epidemiologic Studies Depression Scale for Children. Am J Epidemiol. 1990;131(3):538-551. [ CrossRef ] [ Medline ]
  • Bergman H, Källmen H, Rydberg U, Sandahl C. Tio frågor om alkohol identifierar beroendeproblem. Psykometrisk prövning på psykiatrisk akutmottagning [Ten questions about alcohol as identifier of addiction problems. Psychometric tests at an emergency psychiatric department]. Läkartidningen. 1998;95(43):4731-4735. [ FREE Full text ]
  • Källmén H, Berman AH, Jayaram-Lindström N, Hammarberg A, Elgán TH. Psychometric properties of the AUDIT, AUDIT-C, CRAFFT and ASSIST-Y among Swedish adolescents. Eur Addict Res. 2019;25(2):68-77. [ CrossRef ] [ Medline ]
  • Andrews FM, Withey SB. Developing measures of perceived life quality: results from several national surveys. Soc Indic Res. 1974;1(1):1-26. [ CrossRef ]
  • Nagy E. Barns känsla av sammanhang—En valideringsstudie av BarnKASAM i årskurserna 1-6 (ålder 7-12 år) [Children's sense of coherence—a study validating SOC for children in grades 1-6 (7-12 years old)]. Lunds Universitet [Lund University]. 2004. URL: https://lup.lub.lu.se/luur/download?func=downloadFile&recordOId=1358959&fileOId=1358960 [accessed 2024-03-14]
  • Saghaei M. Random allocation software for parallel group randomized trials. BMC Med Res Methodol. 2004;4:26. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • drugsmart. URL: https://www.drugsmart.se/ [accessed 2024-03-19]
  • Dimeff LA, Baer JS, Kivlahan DR, Marlatt GA. Brief Alcohol Screening and Intervention for College Students: A Harm Reduction Approach. New York. The Guilford Press; 1999.
  • Cohen J. A power primer. Psychol Bull. 1992;112(1):155-159. [ CrossRef ] [ Medline ]
  • Faul F, Erdfelder E, Lang A, Buchner A. G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav Res Methods. 2007;39(2):175-191. [ CrossRef ] [ Medline ]
  • Bürkner PC. brms: an R package for bayesian multilevel models using stan. J Stat Soft. 2017;80(1):1-28. [ FREE Full text ] [ CrossRef ]
  • Carpenter B, Gelman A, Hoffman MD, Lee D, Goodrich B, Betancourt M, et al. Stan: a probabilistic programming language. J Stat Softw. 2017;76(1):1-32. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Magnusson K, Nilsson A, Carlbring P. Modeling longitudinal gambling data: challenges and opportunities. PsyArxiv. Preprint posted online on September 12, 2019. [ FREE Full text ] [ CrossRef ]
  • Coertjens L, Donche V, De Maeyer S, Vanthournout G, Van Petegem P. To what degree does the missing-data technique influence the estimated growth in learning strategies over time? A tutorial example of sensitivity analysis for longitudinal data. PLoS One. 2017;12(9):e0182615. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Kunas SL, Lautenbacher LM, Lueken PU, Hilbert K. Psychological predictors of cognitive-behavioral therapy outcomes for anxiety and depressive disorders in children and adolescents: a systematic review and meta-analysis. J Affect Disord. 2021;278:614-626. [ CrossRef ] [ Medline ]
  • Stallman HM, Lipson SK, Zhou S, Eisenberg D. How do university students cope? An exploration of the health theory of coping in a US sample. J Am Coll Health. 2022;70(4):1179-1185. [ CrossRef ] [ Medline ]
  • Stallman HM, Beaudequin D, Hermens DF, Eisenberg D. Modelling the relationship between healthy and unhealthy coping strategies to understand overwhelming distress: a Bayesian network approach. J Affect Disord Rep. 2021;3:100054. [ FREE Full text ] [ CrossRef ]
  • McClure JB, Shortreed SM, Bogart A, Derry H, Riggs K, St John J, et al. The effect of program design on engagement with an internet-based smoking intervention: randomized factorial trial. J Med Internet Res. 2013;15(3):e69. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Humphrey G, Bullen C. Smartphone-based problem gambling evaluation and technology testing initiative ('SPGeTTI'): final report reference 354913/00 for the Ministry of Health. National Institute for Health Innovation (NIHI), Auckland UniServices Ltd, The University of Auckland. 2019. URL: https://www.health.govt.nz/system/files/documents/publications/20190424-spgetti-354913-00-final-report.pdf [accessed 2024-03-14]
  • Kelders SM, Kok RN, Ossebaard HC, Van Gemert-Pijnen JEWC. Persuasive system design does matter: a systematic review of adherence to web-based interventions. J Med Internet Res. 2012;14(6):e152. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Grové C, Reupert A. Moving the field forward: developing online interventions for children of parents with a mental illness. Child Youth Serv Rev. 2017;82:354-358. [ CrossRef ]
  • Moffat BM, Haines-Saah RJ, Johnson JL. From didactic to dialogue: assessing the use of an innovative classroom resource to support decision-making about cannabis use. Drugs Educ Prev Policy. 2016;24(1):85-95. [ FREE Full text ] [ CrossRef ]
  • Bevan Jones R, Stallard P, Agha SS, Rice S, Werner-Seidler A, Stasiak K, et al. Practitioner review: co-design of digital mental health technologies with children and young people. J Child Psychol Psychiatry. 2020;61(8):928-940. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Garrido S, Millington C, Cheers D, Boydell K, Schubert E, Meade T, et al. What works and what doesn't work? A systematic review of digital mental health interventions for depression and anxiety in young people. Front Psychiatry. 2019;10:759. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Crutzen R, de Nooijer J, Brouwer W, Oenema A, Brug J, de Vries NK. Strategies to facilitate exposure to internet-delivered health behavior change interventions aimed at adolescents or young adults: a systematic review. Health Educ Behav. 2011;38(1):49-62. [ CrossRef ] [ Medline ]
  • Milward J, Drummond C, Fincham-Campbell S, Deluca P. What makes online substance-use interventions engaging? A systematic review and narrative synthesis. Digit Health. 2018;4:2055207617743354. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Välimäki M, Anttila K, Anttila M, Lahti M. Web-based interventions supporting adolescents and young people with depressive symptoms: systematic review and meta-analysis. JMIR Mhealth Uhealth. 2017;5(12):e180. [ FREE Full text ] [ CrossRef ] [ Medline ]

Abbreviations

Edited by YH Lin; submitted 24.08.23; peer-reviewed by X Zhang, C Asuzu, D Liu; comments to author 28.01.24; revised version received 08.02.24; accepted 27.02.24; published 10.04.24.

©Håkan Wall, Helena Hansson, Ulla Zetterlind, Pia Kvillemo, Tobias H Elgán. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 10.04.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.

Read our research on: Gun Policy | International Conflict | Election 2024

Regions & Countries

9 facts about americans and marijuana.

People smell a cannabis plant on April 20, 2023, at Washington Square Park in New York City. (Leonardo Munoz/VIEWpress)

The use and possession of marijuana is illegal under U.S. federal law, but about three-quarters of states have legalized the drug for medical or recreational purposes. The changing legal landscape has coincided with a decades-long rise in public support for legalization, which a majority of Americans now favor.

Here are nine facts about Americans’ views of and experiences with marijuana, based on Pew Research Center surveys and other sources.

As more states legalize marijuana, Pew Research Center looked at Americans’ opinions on legalization and how these views have changed over time.

Data comes from surveys by the Center,  Gallup , and the  2022 National Survey on Drug Use and Health  from the U.S. Substance Abuse and Mental Health Services Administration. Information about the jurisdictions where marijuana is legal at the state level comes from the  National Organization for the Reform of Marijuana Laws .

More information about the Center surveys cited in the analysis, including the questions asked and their methodologies, can be found at the links in the text.

Around nine-in-ten Americans say marijuana should be legal for medical or recreational use, according to a January 2024 Pew Research Center survey. An overwhelming majority of U.S. adults (88%) say either that marijuana should be legal for medical use only (32%) or that it should be legal for medical and recreational use (57%). Just 11% say the drug should not be legal in any form. These views have held relatively steady over the past five years.

A pie chart showing that only about 1 in 10 U.S. adults say marijuana should not be legal at all.

Views on marijuana legalization differ widely by age, political party, and race and ethnicity, the January survey shows.

A horizontal stacked bar chart showing that views about legalizing marijuana differ by race and ethnicity, age and partisanship.

While small shares across demographic groups say marijuana should not be legal at all, those least likely to favor it for both medical and recreational use include:

  • Older adults: 31% of adults ages 75 and older support marijuana legalization for medical and recreational purposes, compared with half of those ages 65 to 74, the next youngest age category. By contrast, 71% of adults under 30 support legalization for both uses.
  • Republicans and GOP-leaning independents: 42% of Republicans favor legalizing marijuana for both uses, compared with 72% of Democrats and Democratic leaners. Ideological differences exist as well: Within both parties, those who are more conservative are less likely to support legalization.
  • Hispanic and Asian Americans: 45% in each group support legalizing the drug for medical and recreational use. Larger shares of Black (65%) and White (59%) adults hold this view.

Support for marijuana legalization has increased dramatically over the last two decades. In addition to asking specifically about medical and recreational use of the drug, both the Center and Gallup have asked Americans about legalizing marijuana use in a general way. Gallup asked this question most recently, in 2023. That year, 70% of adults expressed support for legalization, more than double the share who said they favored it in 2000.

A line chart showing U.S. public opinion on legalizing marijuana, 1969-2023.

Half of U.S. adults (50.3%) say they have ever used marijuana, according to the 2022 National Survey on Drug Use and Health. That is a smaller share than the 84.1% who say they have ever consumed alcohol and the 64.8% who have ever used tobacco products or vaped nicotine.

While many Americans say they have used marijuana in their lifetime, far fewer are current users, according to the same survey. In 2022, 23.0% of adults said they had used the drug in the past year, while 15.9% said they had used it in the past month.

While many Americans say legalizing recreational marijuana has economic and criminal justice benefits, views on these and other impacts vary, the Center’s January survey shows.

  • Economic benefits: About half of adults (52%) say that legalizing recreational marijuana is good for local economies, while 17% say it is bad. Another 29% say it has no impact.

A horizontal stacked bar chart showing how Americans view the effects of legalizing recreational marijuana.

  • Criminal justice system fairness: 42% of Americans say legalizing marijuana for recreational use makes the criminal justice system fairer, compared with 18% who say it makes the system less fair. About four-in-ten (38%) say it has no impact.
  • Use of other drugs: 27% say this policy decreases the use of other drugs like heroin, fentanyl and cocaine, and 29% say it increases it. But the largest share (42%) say it has no effect on other drug use.
  • Community safety: 21% say recreational legalization makes communities safer and 34% say it makes them less safe. Another 44% say it doesn’t impact safety.

Democrats and adults under 50 are more likely than Republicans and those in older age groups to say legalizing marijuana has positive impacts in each of these areas.

Most Americans support easing penalties for people with marijuana convictions, an October 2021 Center survey found. Two-thirds of adults say they favor releasing people from prison who are being held for marijuana-related offenses only, including 41% who strongly favor this. And 61% support removing or expunging marijuana-related offenses from people’s criminal records.

Younger adults, Democrats and Black Americans are especially likely to support these changes. For instance, 74% of Black adults favor releasing people from prison who are being held only for marijuana-related offenses, and just as many favor removing or expunging marijuana-related offenses from criminal records.

Twenty-four states and the District of Columbia have legalized small amounts of marijuana for both medical and recreational use as of March 2024, according to the National Organization for the Reform of Marijuana Laws (NORML), an advocacy group that tracks state-level legislation on the issue. Another 14 states have legalized the drug for medical use only.

A map of the U.S. showing that nearly half of states have legalized the recreational use of marijuana.

Of the remaining 12 states, all allow limited access to products such as CBD oil that contain little to no THC – the main psychoactive substance in cannabis. And 26 states overall have at least partially decriminalized recreational marijuana use, as has the District of Columbia.

In addition to 24 states and D.C., the U.S. Virgin Islands, Guam and the Northern Mariana Islands have legalized marijuana for medical and recreational use.

More than half of Americans (54%) live in a state where both recreational and medical marijuana are legal, and 74% live in a state where it’s legal either for both purposes or medical use only, according to a February Center analysis of data from the Census Bureau and other outside sources. This analysis looked at state-level legislation in all 50 states and the District of Columbia.

In 2012, Colorado and Washington became the first states to pass legislation legalizing recreational marijuana.

About eight-in-ten Americans (79%) live in a county with at least one cannabis dispensary, according to the February analysis. There are nearly 15,000 marijuana dispensaries nationwide, and 76% are in states (including D.C.) where recreational use is legal. Another 23% are in medical marijuana-only states, and 1% are in states that have made legal allowances for low-percentage THC or CBD-only products.

The states with the largest number of dispensaries include California, Oklahoma, Florida, Colorado and Michigan.

A map of the U.S. showing that cannabis dispensaries are common along the coasts and in a few specific states.

Note: This is an update of a post originally published April 26, 2021, and updated April 13, 2023.


About Pew Research Center: Pew Research Center is a nonpartisan fact tank that informs the public about the issues, attitudes and trends shaping the world. It conducts public opinion polling, demographic research, media content analysis and other empirical social science research. Pew Research Center does not take policy positions. It is a subsidiary of The Pew Charitable Trusts.

Prestigious cancer research institute has retracted 7 studies amid controversy over errors

Dana-Farber Cancer Institute

Seven studies from researchers at the prestigious Dana-Farber Cancer Institute have been retracted over the last two months after a scientist blogger alleged that images used in them had been manipulated or duplicated.

The retractions are the latest development in a monthslong controversy around research at the Boston-based institute, which is a teaching affiliate of Harvard Medical School. 

The issue came to light after Sholto David, a microbiologist and volunteer science sleuth based in Wales, published a scathing post on his blog in January, alleging errors and manipulations of images across dozens of papers produced primarily by Dana-Farber researchers. The institute acknowledged errors and subsequently announced that it had asked for six studies to be retracted and requested corrections in 31 more papers. Dana-Farber also said, however, that a review process for errors had been underway before David’s post.

Now, at least one more study has been retracted than Dana-Farber initially indicated, and David said he has discovered an additional 30 studies from authors affiliated with the institute that he believes contain errors or image manipulations and therefore deserve scrutiny.

The episode has imperiled the reputation of a major cancer research institute and raised questions about one high-profile researcher there, Kenneth Anderson, who is a senior author on six of the seven retracted studies. 

Anderson is a professor of medicine at Harvard Medical School and the director of the Jerome Lipper Multiple Myeloma Center at Dana-Farber. He did not respond to multiple emails or voicemails requesting comment. 

The retractions and new allegations add to a larger, ongoing debate in science about how to protect scientific integrity and reduce the incentives that could lead to misconduct or unintentional mistakes in research. 

The Dana-Farber Cancer Institute has moved relatively swiftly to seek retractions and corrections. 

“Dana-Farber is deeply committed to a culture of accountability and integrity, and as an academic research and clinical care organization we also prioritize transparency,” Dr. Barrett Rollins, the institute’s integrity research officer, said in a statement. “However, we are bound by federal regulations that apply to all academic medical centers funded by the National Institutes of Health among other federal agencies. Therefore, we cannot share details of internal review processes and will not comment on personnel issues.”

The retracted studies were originally published in two journals: One in the Journal of Immunology and six in Cancer Research. Six of the seven focused on multiple myeloma, a form of cancer that develops in plasma cells. Retraction notices indicate that Anderson agreed to the retractions of the papers he authored.

Elisabeth Bik, a microbiologist and longtime image sleuth, reviewed several of the papers’ retraction statements and scientific images for NBC News and said the errors were serious. 

“The ones I’m looking at all have duplicated elements in the photos, where the photo itself has been manipulated,” she said, adding that these elements were “signs of misconduct.” 

Dr. John Chute, who directs the division of hematology and cellular therapy at Cedars-Sinai Medical Center and has contributed to studies about multiple myeloma, said the papers were produced by pioneers in the field, including Anderson.

“These are people I admire and respect,” he said. “Those were all high-impact papers, meaning they’re highly read and highly cited. By definition, they have had a broad impact on the field.” 

Chute said he did not know the authors personally but had followed their work for a long time.

“Those investigators are some of the leading people in the field of myeloma research and they have paved the way in terms of understanding our biology of the disease,” he said. “The papers they publish lead to all kinds of additional work in that direction. People follow those leads and industry pays attention to that stuff and drug development follows.”

The retractions offer additional evidence for what some science sleuths have been saying for years: The more you look for errors or image manipulation, the more you might find, even at the top levels of science. 

Scientific images in papers are typically used to present evidence of an experiment’s results. Commonly, they show cells or mice; other types of images show key findings like western blots — a laboratory method that identifies proteins — or bands of separated DNA molecules in gels. 

Science sleuths sometimes examine these images for irregular patterns that could indicate errors, duplications or manipulations. Some artificial intelligence companies are training computers to spot these kinds of problems, as well. 

Duplicated images could be a sign of sloppy lab work or data practices. Manipulated images — in which a researcher has modified an image heavily with photo editing tools — could indicate that images have been exaggerated, enhanced or altered in an unethical way that could change how other scientists interpret a study’s findings or scientific meaning. 

Top scientists at big research institutions often run sprawling laboratories with lots of junior scientists. Critics of science research and publishing systems allege that a lack of opportunities for young scientists, limited oversight and pressure to publish splashy papers that can advance careers could incentivize misconduct. 

These critics, along with many science sleuths, allege that errors or sloppiness are too common, that research organizations and authors often ignore concerns when they’re identified, and that the path from complaint to correction is sluggish.

“When you look at the amount of retractions and poor peer review in research today, the question is, what has happened to the quality standards we used to think existed in research?” said Nick Steneck, an emeritus professor at the University of Michigan and an expert on science integrity.

David told NBC News that he had shared some, but not all, of his concerns about additional image issues with Dana-Farber. He added that he had not identified any problems in four of the seven studies that have been retracted. 

“It’s good they’ve picked up stuff that wasn’t in the list,” he said. 

NBC News requested an updated tally of retractions and corrections, but Ellen Berlin, a spokeswoman for Dana-Farber, declined to provide a new list. She said that the numbers could shift and that the institute did not have control over the form, format or timing of corrections. 

“Any tally we give you today might be different tomorrow and will likely be different a week from now or a month from now,” Berlin said. “The point of sharing numbers with the public weeks ago was to make clear to the public that Dana-Farber had taken swift and decisive action with regard to the articles for which a Dana-Farber faculty member was primary author.” 

She added that Dana-Farber was encouraging journals to correct the scientific record as promptly as possible. 

Bik said it was unusual to see a highly regarded U.S. institution have multiple papers retracted. 

“I don’t think I’ve seen many of those,” she said. “In this case, there was a lot of public attention to it and it seems like they’re responding very quickly. It’s unusual, but how it should be.”

Evan Bush is a science reporter for NBC News. He can be reached at [email protected].
