InterviewPrep

30 Design Researcher Interview Questions and Answers

Common Design Researcher interview questions, how to answer them, and example answers from a certified career coach.


In the intricate world of design, a Design Researcher plays a critical role in understanding user needs and translating these into innovative solutions. You’ve honed your skills, built an impressive portfolio, and now you’re ready to take on new challenges at your dream job. But before you can start influencing design strategies, you first have to navigate through the interview process.

Interviews for design research positions often go beyond generic questions; they delve deep into your thought processes, creativity, analytical skills, and ability to empathize with users. In this article, we’ll explore common interview questions that you may encounter during your quest for a Design Researcher position. We will also provide insightful tips and sample answers to help you confidently demonstrate your unique approach to design research.

1. Can you describe a project where your research significantly influenced the design direction?

Designers are the architects of user experience, and research plays a key role in ensuring that the design is user-centric. When interviewers ask this question, they are looking to evaluate your ability to effectively translate research findings into actionable design insights. They want to understand how your research methodology, findings, and recommendations have shaped the design process and contributed to the success of a project.

Example: “One significant project was a mobile app redesign for an e-commerce platform. My research involved user interviews, surveys, and usability testing of the existing app.

The findings revealed that users were frustrated with the complex checkout process and lack of personalized recommendations. This insight led to a design overhaul focused on simplifying the checkout process and incorporating AI-driven product suggestions.

This shift in design direction resulted in increased conversions and improved user satisfaction scores post-launch. The success of this project reinforced the value of thorough user research in guiding design decisions.”

2. How do you ensure your research findings are effectively communicated to the design team?

Communication is the key that unlocks the potential of design research. Without effective communication, even the most insightful research findings can become lost or misunderstood. Therefore, hiring managers are eager to understand your communication skills and strategies. They want to ensure that you can not only gather important data but also present it in a way that inspires and informs the design process.

Example: “Effective communication with the design team is crucial. I ensure this by using clear, jargon-free language and visual aids like infographics or flowcharts to present research findings.

I also use collaborative tools for real-time sharing of information and feedback. Regular meetings are scheduled to discuss progress and address any queries.

Lastly, I believe in tailoring my communication style to suit the needs of the design team. This ensures everyone understands the research insights and can apply them effectively in their work.”

3. What is your approach to conducting ethnographic research for design purposes?

Ethnographic research is a critical tool in a design researcher’s toolkit. It provides the insights needed to truly understand user needs, behaviors, and motivations, which in turn informs the design process. By asking this question, hiring managers are aiming to gauge your familiarity with this method, as well as your ability to practically apply it in a design context.

Example: “In conducting ethnographic research for design purposes, my primary focus is on understanding the user’s context and needs. I start by identifying the target population and defining the scope of the study.

I then engage in participant observation where I immerse myself in their environment to gain a deeper understanding of their behavior, motivations, and challenges. This often involves interviews, surveys, or even shadowing users as they interact with existing designs.

The data collected is analyzed qualitatively to identify patterns and insights that can inform the design process. The goal is to create solutions that are not only functional but also culturally relevant and meaningful to the end-users.”

4. Can you discuss a time when the results of your research were not what you expected? How did you handle it?

This question is rooted in a desire to understand your problem-solving abilities and resilience in the face of unexpected results. Research is not always predictable, especially in design, and the ability to adapt and pivot based on surprising outcomes is a key skill. Employers want to know that you can handle surprises and use them to learn and grow, rather than getting stuck or discouraged.

Example: “In a recent project, I was researching user interaction with a new app interface. The data suggested users were struggling with navigation – contrary to my hypothesis that the design would be intuitive.

I didn’t let this deter me. Instead, I saw it as an opportunity to delve deeper into understanding our users’ needs and behaviors.

I conducted further research through interviews and usability testing. This helped me uncover the root cause of the problem and propose solutions. It was a valuable lesson in not making assumptions about user behavior and always validating ideas with solid research.”

5. How do you incorporate quantitative data into your design research process?

Because design research often involves understanding and interpreting human behavior, it’s important to balance the qualitative (anecdotal, observational) data with quantitative (measurable, statistical) data. Interviewers want to know if you can combine these two types of data to create a comprehensive understanding of user needs and behaviors. This can help inform and validate design decisions, ensuring they are effective and user-centric.

Example: “Incorporating quantitative data into design research is crucial for making informed decisions. I usually start by defining clear, measurable objectives to guide the data collection process.

For instance, if we’re designing a new website layout, we might track metrics like click-through rates or time spent on each page. This provides concrete evidence of user behavior and preferences.

I also use A/B testing to compare different design elements and their impact on user engagement. The results from these tests provide valuable data that can directly influence our design choices.

Quantitative data gives us an objective basis for evaluating design performance and helps in refining designs based on user response. It ensures our design decisions are rooted in fact rather than assumption.”
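To make those metrics concrete, here is a minimal sketch of computing a click-through rate from a raw event log. The schema (user_id, page, event) and the numbers are illustrative assumptions, not something the answer above specifies; pandas is used purely for brevity.

```python
# Minimal sketch: click-through rate (CTR) per page from an event log.
# The columns and values below are hypothetical, for illustration only.
import pandas as pd

events = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 3, 3, 3],
    "page":    ["home", "home", "home", "checkout", "home", "checkout", "checkout"],
    "event":   ["view", "click", "view", "view", "view", "view", "click"],
})

views = events[events["event"] == "view"].groupby("page").size()
clicks = events[events["event"] == "click"].groupby("page").size()

# CTR = clicks / views; pages with views but no clicks get a CTR of 0.
ctr = (clicks / views).fillna(0)
print(ctr)
```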

6. Describe a project where you had to pivot your research strategy midway. What prompted the change?

This question is posed to assess your adaptability and problem-solving skills. In a field like design research, it’s common for initial plans to evolve as new insights and information emerge. Employers want to know that you can navigate these changes effectively, making necessary adjustments without losing sight of the overarching goals.

Example: “In a recent project, we were designing an app for elderly users. Our initial strategy was to focus on simplicity and ease of use. However, after conducting user interviews, we realized that our target audience valued personalization more than simplicity.

This prompted us to pivot our research towards understanding how personalization could be incorporated without compromising usability. We conducted further interviews and surveys, focusing on preferences and needs related to customization.

The change in strategy led to valuable insights that significantly influenced the final design. It reinforced the importance of continuous user feedback and flexibility in research strategies.”

7. How do you ensure the user’s voice is central to your research and the subsequent design process?

In the realm of design research, user-centricity is the golden rule. It’s not just about creating beautiful and functional products, but about creating solutions that truly meet the needs and wants of the users. By asking this question, hiring managers want to see that you have a user-centric mindset, that you value their input, and that you have strategies in place to ensure their feedback is incorporated effectively into the design process.

Example: “To ensure the user’s voice is central to my research and design process, I employ several strategies.

I start with a thorough understanding of the users through methods like surveys, interviews, and observations. This helps in identifying their needs, preferences, and pain points.

Next, I use these insights to create user personas and journey maps which serve as constant reminders of who we are designing for throughout the project.

I also advocate for regular usability testing at various stages of the design process. This allows us to validate our designs against actual user feedback and make necessary adjustments.

Lastly, I believe in maintaining an open line of communication with the users even post-launch. Their ongoing feedback can provide valuable insights for future improvements.”

8. Can you explain your process for conducting competitive analysis in a design context?

The heart of this question lies in your ability to gather, analyze, and apply information about competitors in a way that benefits your company’s design strategies. A comprehensive competitive analysis allows you to understand the strengths and weaknesses of rival products, anticipate market trends, and provide insights that can help shape the direction of your own design work. It’s a critical skill for a design researcher, which is why interviewers want to hear about your process.

Example: “In conducting competitive analysis in a design context, I start by identifying key competitors and their products. This involves understanding their strengths, weaknesses, opportunities and threats (SWOT).

Next, I analyze the user interface and user experience of these products, focusing on aspects like usability, functionality, aesthetics, and overall user satisfaction.

Then, I gather data through methods such as surveys, interviews, or focus groups to gain insights into users’ perceptions and experiences with these products.

Finally, I synthesize all this information into a comprehensive report that highlights areas where our product can differentiate and excel. The goal is not just to imitate what others are doing but to understand gaps and opportunities for innovation.”

9. How do you handle bias in your research process to ensure objective results?

As a design researcher, your role is to provide reliable, objective data that can guide the design process and ensure the final product effectively meets user needs. However, bias—both your own and that of your participants—can creep into the research process and skew your results. Hiring managers want to know that you’re aware of these potential pitfalls and have strategies in place to avoid them.

Example: “To handle bias in my research process, I use a few key strategies.

I ensure diversity in sample selection to represent various perspectives. This helps prevent skewed data and results.

Triangulation is another method I employ. By using multiple sources of data, any bias inherent in one source can be mitigated by the others.

Blind testing is also useful. It involves withholding information that might lead to bias from those interpreting the data.

Lastly, peer reviews offer an external check on potential biases. Other researchers can spot biases that may have been overlooked.

These methods help maintain objectivity throughout the research process.”

10. How have you used participatory design methods in your research?

Inviting users into the design process helps ensure the final product is user-friendly and meets their needs. Participatory design methods are a great way to gain insights into user needs, expectations, and behaviors. Therefore, hiring managers want to know if you have experience with these methods and how effectively you can incorporate them into your research.

Example: “In my research, I’ve used participatory design methods to ensure that the end product is user-centric. For example, in a project aimed at redesigning an e-commerce website’s checkout process, we included real users from the very beginning.

We conducted workshops where users could map out their ideal checkout process. We then used these maps as a guide for our initial prototypes. This method allowed us to gain valuable insights into what users wanted and needed, which greatly informed our design decisions.

This approach not only increased user satisfaction with the final design but also saved time and resources by reducing the need for extensive revisions later on.”

11. Discuss an instance where your research findings were at odds with the design team’s ideas. How did you resolve this?

This inquiry is designed to gauge your ability to handle conflict and manage differing perspectives within a team setting. As a design researcher, your role is pivotal in providing insights that guide the design process. However, there might be times when your findings clash with the design team’s vision or ideas. The interviewer wants to understand how you navigate such situations, ensuring that the final output is not compromised and that team harmony is maintained.

Example: “In a recent project, my research suggested that users preferred a more minimalist design approach. However, the design team was keen on incorporating numerous features and visuals. This discrepancy led to some initial disagreements.

To resolve this, I presented my findings in detail, explaining how user experience could be impacted negatively by an overly complex design. We then had a brainstorming session where we discussed ways to incorporate essential features without compromising simplicity.

Ultimately, we found a balance between functionality and aesthetics, resulting in a product that was well-received by users. It was a valuable lesson in effective communication and collaboration.”

12. How do you balance the need for quick results with the need for thorough research in the design process?

Design research involves a constant tug-of-war between the desire for comprehensive data and the constraints of project timelines. Employers want to know that you can navigate this tension and deliver valuable insights without holding up the design process. They’re eager to see if you can strike the right balance, prioritizing the most critical research tasks, while also moving swiftly to keep projects on track.

Example: “Balancing quick results with thorough research in design is a matter of prioritizing and strategizing. Understanding the project’s scope, goals, and timeline helps to identify critical areas requiring immediate attention.

For rapid results, I employ lean methodologies such as iterative prototyping and user feedback sessions. This allows for early detection and rectification of design flaws.

However, for comprehensive research, I ensure deep dives into user behavior, market trends, and competitor analysis. While this takes time, it guarantees informed decisions that enhance product longevity and relevance.

The key is integrating both approaches: using quick methods to guide initial stages while allowing findings from detailed research to refine the final design. It’s about working smart, not just hard.”

13. Can you describe a project where you used a unique or innovative research method to inform design?

In the world of design, being innovative and thinking outside the box is more than just a desirable trait—it’s a necessity. Hiring managers want to know if you can put theory into practice, and more importantly, if you can come up with unique solutions to problems. Demonstrating your ability to use innovative research methods to inform design decisions is a way to showcase your critical thinking skills, creativity, and your ability to drive a project in new and exciting directions.

Example: “In a recent project, I used a method called ‘Cultural Probes’ to gather insights. This involved creating packages with tasks for participants to complete over time, such as taking photos or keeping journals.

This qualitative approach allowed us to gain deep, personal insights into the users’ lives and experiences that we wouldn’t have obtained through traditional surveys or interviews. The data collected was invaluable in informing our design process, leading to more empathetic and user-centered solutions.”

14. How do you approach research for designing inclusive and accessible products?

Inclusivity and accessibility are extremely important in modern product design. A design researcher is responsible for making products that are usable by as many people as possible, regardless of their abilities or backgrounds. By asking this question, hiring managers want to know if you understand the value of inclusive design and if you have strategies for conducting research that will help make products more accessible and inclusive.

Example: “In designing inclusive and accessible products, I start with understanding the diverse needs of users. This involves conducting in-depth user research that includes people from different demographics, abilities, and backgrounds.

I also utilize guidelines such as WCAG for accessibility standards to ensure our designs meet these criteria. Collaboration is key; working closely with developers, UX writers, and other stakeholders helps us address potential issues early on.

Finally, usability testing is crucial. It allows us to validate our design decisions, ensuring they work well for all users. Inclusivity and accessibility are not afterthoughts, but integral parts of my design process.”
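One way to make “utilize guidelines such as WCAG” concrete is to automate contrast checks. The sketch below implements the WCAG 2.x relative-luminance and contrast-ratio formulas (4.5:1 is the AA threshold for normal body text); the hex colors are illustrative.

```python
# Minimal sketch: text/background contrast ratio per WCAG 2.x.
def relative_luminance(hex_color: str) -> float:
    """Relative luminance as defined by WCAG 2.x."""
    channels = [int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4)]
    # Linearize each sRGB channel before weighting.
    linear = [c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
              for c in channels]
    r, g, b = linear
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: str, bg: str) -> float:
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)),
                             reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

ratio = contrast_ratio("#767676", "#ffffff")  # grey body text on white
print(f"{ratio:.2f}:1 -> {'passes' if ratio >= 4.5 else 'fails'} WCAG AA body text")
```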

15. What strategies do you use to recruit participants for your research studies?

Recruiting participants for research studies is often a key aspect of a design researcher’s role, and the strategies you use can greatly impact the quality and relevance of your findings. Interviewers want to ensure you’re adept at identifying and reaching out to potential participants in a way that’s effective, ethical, and aligned with the study’s objectives.

Example: “To recruit participants for research studies, I employ a multi-pronged approach.

I use social media platforms and online forums related to the study’s subject matter as they are effective tools in reaching out to potential participants. I also leverage existing networks of contacts within relevant industries or communities.

Incentives can be an effective strategy too; these could range from financial compensation to exclusive access to the results of the study.

However, it’s crucial to ensure that the recruitment process is ethical and unbiased. Therefore, I always strive to maintain transparency about the purpose of the study and what participation involves.”

16. Describe a situation where you had to defend your research findings to stakeholders. How did you handle it?

Stakeholders might not always agree with your findings, and in some cases, they might have their own set of data that they believe contradicts yours. Because of this, they might need you to explain your methodology, findings, and reasoning. Hiring managers want to know you can handle these situations professionally, tactfully, and effectively, ensuring your research is understood and applied correctly.

Example: “In one project, our research suggested a new user interface design that was contrary to the stakeholders’ initial preference. They were skeptical about the change.

I handled it by presenting the data and methodology we used in reaching our conclusion. I explained how this approach would improve user experience based on behavioral patterns observed during testing.

To address their concerns further, I proposed A/B testing for both designs. This allowed us to gather real-time feedback from users, which ultimately validated our recommendation. It demonstrated my commitment to evidence-based decision making and reassured them of the robustness of our research process.”

17. Can you discuss a project where you used data visualization to communicate your research findings?

Being able to communicate complex data in a visually appealing and easy-to-understand manner is a key skill for design researchers. This question is asked to gain insight into your ability to translate research into actionable insights that can be easily digested by a wide range of stakeholders, from designers and developers to executives. Your answer will reveal your proficiency in data visualization tools, your creative thinking, and your ability to make data-driven decisions.

Example: “One project involved analyzing customer feedback for a retail company. I used sentiment analysis to categorize comments into positive, negative, and neutral categories.

I then created an interactive dashboard using Tableau that displayed these sentiments over time. This allowed stakeholders to easily identify trends and patterns in customer satisfaction.

The visualization was instrumental in driving decisions about product improvements and customer service strategies. It helped the team understand complex data sets and make informed decisions based on customers’ needs and preferences.”
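As a concrete illustration of that kind of sentiment categorization, the sketch below uses NLTK’s VADER analyzer, one common off-the-shelf option. The answer above names no specific sentiment tool, and the comments and the ±0.05 thresholds here are illustrative.

```python
# Minimal sketch: categorize feedback as positive / negative / neutral
# using NLTK's VADER sentiment analyzer. Feedback strings are made up.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

feedback = [
    "Checkout was fast and painless.",
    "The store layout is confusing and the staff were unhelpful.",
    "Delivery arrived on the promised date.",
]

for comment in feedback:
    score = sia.polarity_scores(comment)["compound"]  # ranges from -1 to +1
    label = ("positive" if score > 0.05
             else "negative" if score < -0.05
             else "neutral")
    print(f"{label:8} {score:+.2f}  {comment}")
```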

18. How do you ensure your research methods are ethical, especially when dealing with sensitive user data?

Hiring managers want to know that you understand the importance of ethics in research. As a Design Researcher, you will often handle sensitive user data, and it’s critical that this information is treated with the utmost respect and confidentiality. The way you gather, use, and store data should adhere to the highest ethical standards to maintain users’ trust and comply with regulations.

Example: “Ensuring ethical research methods, particularly when dealing with sensitive user data, requires a multi-faceted approach.

One key aspect is obtaining informed consent from users before collecting or using their data. This involves clearly explaining the purpose of the research, what data will be collected, how it will be used and stored, and any potential risks involved.

Another crucial element is maintaining confidentiality and privacy. This can be achieved by anonymizing data, storing it securely, and only sharing it on a need-to-know basis.

Finally, I believe in conducting regular audits to ensure compliance with these practices and staying updated on evolving ethical standards and regulations related to data handling and user research.”
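To illustrate the anonymization tactic mentioned above, here is a minimal standard-library sketch that replaces a direct identifier with a keyed pseudonym before analysis or sharing. The field names are hypothetical, and in practice the secret key would come from a secrets manager, not source code.

```python
# Minimal sketch: deterministic, non-reversible pseudonyms via HMAC-SHA256.
import hashlib
import hmac

SECRET_KEY = b"load-me-from-a-secrets-manager"  # illustrative placeholder

def pseudonymize(identifier: str) -> str:
    """Map an identifier to a stable pseudonym without storing the original."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:12]

record = {"user_id": "jane.doe@example.com", "task_time_s": 84}
safe_record = {**record, "user_id": pseudonymize(record["user_id"])}
print(safe_record)  # the same participant always maps to the same pseudonym
```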

19. How do you approach the challenge of translating abstract research data into concrete design solutions?

The crux of a design researcher’s role is to utilize their research findings to implement effective, user-friendly design solutions. Interviewers ask this question to understand your ability to interpret and apply research data, demonstrating your analytical skills, creativity, and practical problem-solving abilities. They want to see that you can bridge the gap between data and design, bringing valuable insights to life in a tangible way.

Example: “Translating abstract research data into design solutions requires a systematic approach. I start by thoroughly understanding the data, identifying key insights and trends.

Next, I map these findings to user needs and business goals, ensuring alignment. This often involves creating personas or journey maps to visualize the information.

Then, through brainstorming sessions and iterative prototyping, I translate these insights into tangible design concepts. These are tested and refined based on feedback until they effectively address the identified needs.

Throughout this process, clear communication with stakeholders is crucial for aligning expectations and facilitating effective decision-making.”

20. What is your experience with remote user testing and research?

In today’s digital-first landscape, the ability to conduct user testing and research remotely is a critical skill for a design researcher. This question is asked to gauge your experience and comfort level with various remote research methodologies, tools, and platforms. It also helps determine whether you can adapt to a remote work environment, which is becoming increasingly prevalent in many industries.

Example: “I have extensive experience with remote user testing and research. In my past projects, I’ve used tools like UserZoom and Lookback to conduct usability tests, interviews, and surveys.

One key aspect of remote research is clear communication. To ensure this, I always prepare a detailed test plan and script. This helps in setting expectations for participants and minimizing potential confusion during the session.

Another important element is being adaptable. Remote sessions can be unpredictable due to technical issues or participant availability. Therefore, having contingency plans and being flexible with scheduling are crucial.

Through these experiences, I’ve learned how to effectively gather and analyze data remotely, which has been invaluable in informing design decisions.”

21. Can you share an example of a project where your research led to a significant design breakthrough?

This question is rooted in understanding your ability to transform research findings into actionable design insights. It’s all about seeing how you apply your knowledge and skills in the real world. Interviewers want to know if you can take complex information and distill it into clear, usable design principles that lead to tangible improvements in the user experience. Your answer will demonstrate your analytical skills, creativity, and capacity to contribute to innovative design solutions.

Example: “In a recent project, we were tasked with redesigning an e-commerce website to increase user engagement. My research involved studying user behavior patterns and conducting surveys.

The data revealed that users found the checkout process too complicated, leading to cart abandonment. I proposed simplifying the process by reducing the number of steps and introducing a progress bar for better visibility.

Post-implementation, there was a 30% decrease in cart abandonment rates and a significant boost in conversions. This example underscores how research can directly impact design decisions and improve user experience.”

22. How do you manage and prioritize multiple research projects with overlapping deadlines?

Design research is a field where juggling multiple projects at once is often the norm rather than the exception. Hiring managers want to see that you can handle this kind of environment without missing deadlines or sacrificing the quality of your work. Demonstrating your project management skills in your answer can reassure them that you’re up to the task.

Example: “Managing and prioritizing multiple research projects requires a strategic approach. I use project management tools to track tasks, deadlines, and progress. This allows me to visualize the workload and allocate resources effectively.

Prioritization is crucial in managing overlapping deadlines. I prioritize based on urgency, importance, and the project’s impact. Regular communication with stakeholders helps align expectations and keep everyone informed about the project status.

In addition, flexibility is key. Unexpected issues can arise, so it’s essential to be adaptable and ready to adjust plans when necessary. By maintaining an organized workflow and staying proactive, I ensure all projects are completed efficiently without compromising quality.”

23. What is your process for developing user personas based on your research?

User personas are a critical tool in design research for understanding and empathizing with the end users. Therefore, hiring managers want to see if you have a systematic approach to creating these personas. They’re interested in knowing if you can identify user needs, behaviors, and motivations, and translate those insights into actionable tools for the design team. This question also reveals your ability to communicate complex user data in understandable, relatable ways.

Example: “Creating user personas starts with comprehensive research. I gather qualitative and quantitative data through methods like surveys, interviews, and observation to understand user behavior, needs, and motivations.

From the collected data, I identify common patterns and group similar behaviors to form a rough persona grouping. Each group represents a unique user type that shares common characteristics.

Then, I refine these groups into well-defined personas by adding specific details such as demographic information, goals, pain points, etc., making them relatable and realistic.

Finally, I validate these personas with real users to ensure their accuracy and make any necessary adjustments.

This process is iterative and requires regular updates as we learn more about our users or when business objectives change.”
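For the “group similar behaviors” step, clustering is one common way to form rough persona groupings from quantitative data. The sketch below uses k-means from scikit-learn; the survey features, values, and choice of three clusters are illustrative assumptions, not part of the answer above.

```python
# Minimal sketch: cluster survey respondents into rough persona groups.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-respondent features: sessions/week, avg. session minutes,
# and self-reported tech comfort on a 1-5 scale.
X = np.array([
    [14, 6, 5], [12, 8, 5], [2, 25, 2], [3, 30, 1],
    [7, 15, 3], [6, 12, 4], [13, 5, 4], [2, 28, 2],
])

X_scaled = StandardScaler().fit_transform(X)  # put features on a common scale
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_scaled)

for cluster in range(3):
    members = np.where(labels == cluster)[0]
    print(f"persona group {cluster}: respondents {members}")
```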

24. How have you used A/B testing in your research to inform design decisions?

A/B testing is a valuable tool in a design researcher’s arsenal. It’s a way to compare two variants of a design to see which performs better. By asking this question, hiring managers are looking for evidence of your ability to use A/B testing effectively. This includes setting up the test, interpreting the results, and applying those insights to improve the design. They also want to ensure you can make data-informed decisions that lead to a better user experience.

Example: “In my research, A/B testing has been a critical tool for making data-driven design decisions. For instance, in one project aimed at improving user engagement, I created two versions of a landing page with different layouts.

After collecting and analyzing the data, it was clear that one layout outperformed the other significantly in terms of click-through rates and time spent on the page. This informed our decision to adopt that particular layout across the site.

A/B testing not only helps validate design choices but also provides insights into user behavior and preferences, which are invaluable when creating user-centric designs.”
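When reading A/B results like those, it helps to check that the observed difference is statistically meaningful rather than noise. Below is a minimal two-sided, two-proportion z-test using only the standard library; the traffic and conversion counts are invented for illustration.

```python
# Minimal sketch: two-proportion z-test for an A/B test on click-through.
import math

def ab_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return p_a, p_b, z, p_value

# Hypothetical counts: layout A vs. layout B.
p_a, p_b, z, p = ab_test(conv_a=210, n_a=4800, conv_b=264, n_b=4750)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p:.4f}")
```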

25. Can you discuss a time when you had to adjust your research methods due to budget constraints?

The real-world research process is often constrained by budget, timeline, or other resources. Employers want to know that you can still deliver valuable insights even when you can’t use the most ideal methods. It’s about demonstrating your flexibility, creativity, and problem-solving skills in finding alternative ways to conduct research.

Example: “In one project, we were researching user interaction with a new app. Initially, we planned to conduct extensive in-person usability testing. However, due to budget cuts, we had to rethink our approach.

We decided to leverage digital tools for remote usability testing which was more cost-effective. We also shifted from individual interviews to focus groups to gather more data at once.

Despite the constraints, the adjustments led to rich insights and ultimately contributed to a successful product launch. It taught me that creativity can often lead to even better outcomes when faced with limitations.”

26. How do you stay updated with the latest design research methods and tools?

This question is a way for potential employers to gauge your commitment to ongoing professional development and your ability to stay current with evolving industry trends. The world of design research is dynamic and rapidly changing, and employers want to ensure you’re someone who takes the initiative to keep their skills and knowledge fresh. This not only benefits you as an individual but also contributes to the overall competitiveness and success of the team and organization.

Example: “I stay updated with the latest design research methods and tools by regularly attending industry-specific webinars, workshops, and conferences. I also subscribe to several professional newsletters and journals such as UX Design Weekly and the Journal of Design Research.

Moreover, I actively participate in online communities like Behance and Dribbble where designers share their work and discuss new trends and techniques.

Lastly, I take advantage of online learning platforms like Coursera and Udemy to enhance my skills and knowledge about emerging tools and methodologies in design research.”

27. Can you describe a project where you had to collaborate with cross-functional teams during the research process?

Collaboration is a key skill for a design researcher, as the role often involves working with teams from different departments such as product, marketing, and user experience. The interviewer wants to understand how you navigate these relationships and facilitate a collaborative atmosphere. Your answer will reveal your ability to communicate, cooperate, and integrate insights from various perspectives, which is crucial for a holistic design approach.

Example: “In a recent project, I was part of a team developing an innovative kitchen appliance. The goal was to create a user-friendly product that met market demands and regulatory requirements.

My role involved collaborating with the engineering, marketing, and legal teams. With engineers, we conducted usability tests to ensure design functionality. We worked with marketing to understand consumer needs and trends. Legal collaboration ensured our design complied with safety regulations.

This cross-functional collaboration was key in creating a successful product that not only met user needs but also adhered to market and legal standards. It demonstrated the importance of diverse perspectives in the research process.”

28. How do you handle negative feedback or criticism of your research findings?

As a design researcher, you’re in a field that thrives on feedback and iteration. Negative feedback or criticism is not a sign of failure, but an opportunity to refine and improve your work. Interviewers want to see if you can accept criticism positively, constructively, and use it as a catalyst for growth and improvement. They are interested in how you respond to challenges and setbacks, as well as your ability to work collaboratively with others who may not always agree with your findings.

Example: “Negative feedback or criticism is a crucial part of the research process. I view it as an opportunity to improve and refine my work. When faced with such situations, I ensure that I understand the critique fully by asking for clarification if needed.

I then objectively analyze the feedback against my findings, considering its validity and how it can enhance the results. If the criticism is valid, I revise my approach accordingly. However, if I believe in my methodology, I am prepared to defend it respectfully while providing supporting evidence.

Remember, research is about learning and growth, and constructive criticism plays a vital role in this journey.”

29. Can you discuss a project where you had to use research to design for a complex user journey?

This question is designed to test your practical application of research principles in the design process. Designing for complex user journeys requires an in-depth understanding of user needs, pain points, and behavior. By asking for a specific example, hiring managers can assess your ability to conduct effective research, synthesize findings, and apply these insights to create a thoughtful and effective design solution.

Example: “One project that stands out is when I was tasked with designing a digital platform for an international non-profit organization. The user journey was complex due to the diverse audience which included donors, volunteers, and beneficiaries across different countries and cultures.

I started by conducting extensive research using methods like surveys, interviews, and usability testing to understand each user group’s needs, motivations, and pain points. This helped me create detailed personas and map out their unique journeys.

The design process involved creating wireframes and prototypes that were iteratively tested and refined based on user feedback. The end result was a user-centric platform that effectively catered to the distinct needs of each user group, improving engagement and satisfaction rates significantly.”

30. How do you measure the success or impact of your research on the final design?

The key to effective design research is not just conducting the research, but also ensuring it informs the design process and leads to a successful final product. This question is asked to evaluate if you can effectively translate data into actionable design insights, and if you understand how to measure the impact of your research on the final product. It helps interviewers assess your analytical skills and your ability to think critically about the design process.

Example: “Measuring the success of research in design involves both qualitative and quantitative methods. User feedback is crucial; it provides insights into how well the design meets user needs and expectations.

Quantitative data such as usage statistics, conversion rates, and time spent on specific tasks can also indicate the effectiveness of a design.

A successful design not only meets its functional objectives but also enhances the overall user experience. If users find value in the product and continue to use it over time, that’s a strong indicator of impactful research.”


Expert Interviews: The Windmill Guide to Design Thinking



Every designer understands that research is a critical component of successful product design, and expert interviews are an integral element of that research. Talking to experts at the early stages of a project’s life cycle is a more efficient and concentrated method of data collection than, say, participatory observation or systematic quantitative surveys.

They are an excellent source of ‘inside’ information for startups, their products, and their target markets. Gathering information from experts with extensive experience in your field of interest gives your products a competitive advantage that is critical to their success.

What are Expert Interviews in Design Thinking?

In Design Thinking, the first step in the design process is Empathize, which is the gathering of information to understand your user.

This data-gathering is crucial to problem-solving and a human-centered design process that enables designers to discard assumptions and gain real insight into users and their needs.

Interviewing subject matter experts (SMEs) is one of the most credible ways to gain valuable perspective in data collection. An expert interview is usually a one-on-one conversation with someone who has extensive experience and knowledge in a specific field or subject matter.

Expert interviews typically involve consulting specialists to find out more about the area(s) of concern and conduct observations to aid in engaging and empathizing with your users.

Why should a company conduct Expert Interviews?

There are several important reasons why your company should carry out expert interviews:

  • Expert interviews give designers an authoritative source of data, bringing real-world perspectives and technical advice that are hard to find elsewhere.
  • Speaking with experts in the field can help you better understand the problems you are facing, especially if they are complex, differentiated, and unique to your field.
  • Expert interviews can shorten time-consuming data collection processes, especially if the experts are rich sources of practical insider knowledge and are interviewed as representatives of a larger circle of users.
  • They help you recognize and dispel false assumptions and assist in forming ideas.
  • Knowledge gathered through an expert’s years of experience makes you a mini-expert in that field.
  • Expert interviews are useful in situations where gaining access to a particular social field may be difficult or impossible, for instance with sensitive topics.
  • They give timely and unproblematic access to objective data.

When should Expert Interviews be conducted?

Expert Interviews should be used when working on complex issues that require context knowledge. The expert interview should take place either at the beginning of your research process, or at the end, to summarize and corroborate research findings.


How to use the template

You’ve already downloaded the template, but before you start, let’s bring some clarity to how it’s used.

The expert interview template is a Design Thinking tool modeled to help you better understand the right problems to solve during your interview session with the stakeholder(s). It marks the beginning of your product and user empathy journey. But how do we put the template to use?

Getting ready to use the template and conduct an Expert Interview

The template is a Design Thinking tool that can be used by up to four stakeholders, because multiple expert interviews provide more diverse and reliable data. Before getting to the questions, make sure you and your interview subjects are prepared.

  • Provide the experts with context around the why and what of the interview session and how long they can expect it to last.
  • Be focused—listen carefully, check you’ve understood their meaning, ask follow-up questions, and take notes. Ideally, the session will be recorded, but ask for permission first.

The template includes five questions that will get to the heart of the task.

1. What is your product?

This question seeks to uncover what the product is and what it offers. Stakeholders should shed light on the product portfolio, including definitions and classifications. You can also try to determine which products are the most and least popular.

2. What problem is your product trying to solve?

With this question, you’re attempting to elicit the basic, underlying characteristics that shape the product/service from the stakeholder.

What issue were they trying to solve by launching the product? What problem are they attempting to solve with their product? What were the success metrics for measuring how their products addressed the problem?

3. What are the main challenges each customer segment faces? What are the core values our product provides to aid them?

This question focuses on your product or service’s target customer segments. The goal is to identify market segments. Is the value you’re providing in line with the criteria that customers look for in products like yours? Who are the target market’s most common customers? (Age, gender, characteristics, and so on.) What method do you use to keep track of your customers?

What are the key customer pain points for each segment? Which do they prioritize (based on the solution they could provide through their product)? Which of the customer’s pain points are addressed by the products? How much does a typical customer spend on your product?

4. Who is using the product currently? Who would we like to use the product?

The stakeholder is addressing market size and market growth trends. Find out which segment of society is using the product. Are they part of the target group? If not, why are they attracted to the product? Is the target group for the product using the product? Who are the primary and secondary target groups who should be using the product? How do we expand our target market? What are the underlying drivers and inhibitors of growth for the target market?

5. Who are your direct/indirect competitors?

This question should address major competitors. Who else is providing the same value to our target customers that we are? Where else can your target market find the value that your products provide? Where do they get their products? Are they your direct or indirect rivals? How much of an impact do they have on our market share? 

It’s critical to prepare ahead of time when conducting expert interviews, which is why we’ve created a template to make things easier. However, even if you have prepared questions, don’t pass up the opportunity to ask follow-up questions if they provide more clarity and new insight. That being said, keep a close eye on the time.

Remember to pay close attention so you don’t miss anything important.

If possible, arrange for a follow-up if you have additional questions for the expert.

After the interview, sit down and compile your feedback. Consider what stood out for you. What did you discover that you were previously unaware of? How can you apply the findings to the problem(s) at hand? Do you need to conduct additional tests and research? These questions will allow you to take the next step in your design process.

We’ve tried to make it as easy as possible for your team to benefit from the expert interview template. But nothing beats the guidance that an experienced design team can offer, so get in touch today to find out how Windmill can help your business clarify its data-gathering, expert interviews, and other key strategic pillars.



InVisionApp, Inc.

Inside Design

7 ways to prepare for a design research interview

By Emily Esposito • Jun 14, 2018

Whether you’re a trained researcher or a designer leading your own user study, it takes time to brainstorm questions, identify the goals, and write a discussion guide for design research—not to mention actually finding the right participants.

Just like with a gourmet meal, the prep work might be the most time-intensive, but you can bet it pays off in the end (yum!).


Here are seven ways to prepare for conducting design research:

1. Figure out what you’re going to test

What is the actual thing you want to research? What will users see and interact with? It can be tempting to get caught up in the research part of this process, but don’t forget your prototype should really be the star.


Will you be testing a low-fidelity or high-fidelity prototype ? Do you want users to be able to interact with the UX or do you want to present a paper mockup to get feedback on general concepts? Depending on your goals for the research, you may have to make changes to your prototype.

2. Agree on goals

Decide what you want to learn before you sit down with anyone. If you’re testing a prototype, your goal may be to evaluate how well specific flows and interactions work. On the other hand, if you’re in the early stages of ideation, you may simply want to understand the problems your users face in their everyday lives, helping to inform the product.

3. Articulate your assumptions

We all bring our own assumptions to the table, and while it’s almost impossible to instantly get rid of those biases, it helps to write them out before testing. This will help you ask better questions and can also provide a useful comparison point.

By listing your assumptions and hypotheses before the research, you can go back and compare the actual findings post-study.


4. Select the interview method

Will you be conducting in-person or remote research? Do you want to record the sessions and do you have the proper software to do so?

Don’t forget to think about the research method itself. Primary research is the most common in design research, but you’re usually choosing between gathering two types of primary information: exploratory (general, open-ended research) or specific (research used to solve a problem identified during the exploratory phase).

You may also want to do evaluative research, looking at a specific problem to evaluate usability and interaction. This usually involves having people use your product or service and think out loud as they interact with it.


5. Prepare a discussion guide

A discussion guide lists the questions and topics you plan to cover in each session. While we’re on the topic of preparing questions, remember that there are good questions and bad questions. Make sure you ask open-ended, unbiased questions. For example:

  • Bad: Would adding XYZ functionality be helpful to you?
  • Good: How would you feel about XYZ functionality being added?
  • Bad: How did you like the onboarding process?
  • Good: What did you think about the onboarding process?


6. Find interviewees

You can’t have a study without users, so think through where you will find your participants. Do you want to invite current customers or people completely new to your product? Are you looking for a certain age range or job function?


Depending on who you want to invite, you may be able to find participants yourself or you can hire a recruiting firm to find people for you.

7. Remember your role

The most important thing you can do during design research is to listen. Don’t get wrapped up in the discussion guide you created or your own hypothesis of how things should work.

The prep is worth it

Design research takes time, resources, and preparation, but the results are worth it. The user insights you uncover allow you to design based on facts and not assumptions, help with focus and prioritization—and it all results in happier customers.



Grad Coach

Qualitative Research 101: Interviewing

5 Common Mistakes To Avoid When Undertaking Interviews

By: David Phair (PhD) and Kerryn Warren (PhD) | March 2022

Undertaking interviews is potentially the most important step in the qualitative research process. If you don’t collect useful, usable data in your interviews, you’ll struggle through the rest of your dissertation or thesis. Having helped numerous students with their research over the years, we’ve noticed some common interviewing mistakes that first-time researchers make. In this post, we’ll discuss five costly interview-related mistakes and outline useful strategies to avoid making these.

Overview: 5 Interviewing Mistakes

  • Not having a clear interview strategy/plan
  • Not having good interview techniques/skills
  • Not securing a suitable location and equipment
  • Not having a basic risk management plan
  • Not keeping your “golden thread” front of mind

1. Not having a clear interview strategy

The first common mistake that we’ll look at is that of starting the interviewing process without having first come up with a clear interview strategy or plan of action. While it’s natural to be keen to get started engaging with your interviewees, a lack of planning can result in a mess of data and inconsistency between interviews.

There are several design choices to decide on and plan for before you start interviewing anyone. Some of the most important questions you need to ask yourself before conducting interviews include:

  • What are the guiding research aims and research questions of my study?
  • Will I use a structured, semi-structured or unstructured interview approach?
  • How will I record the interviews (audio or video)?
  • Who will be interviewed and by whom?
  • What ethics and data law considerations do I need to adhere to?
  • How will I analyze my data? 

Let’s take a quick look at some of these.

The core objective of the interviewing process is to generate useful data that will help you address your overall research aims. Therefore, your interviews need to be conducted in a way that directly links to your research aims, objectives and research questions (i.e. your “golden thread”). This means that you need to carefully consider the questions you’ll ask to ensure that they align with and feed into your golden thread. If any question doesn’t align with this, you may want to consider scrapping it.

Another important design choice is whether you’ll use an unstructured, semi-structured or structured interview approach . For semi-structured interviews, you will have a list of questions that you plan to ask and these questions will be open-ended in nature. You’ll also allow the discussion to digress from the core question set if something interesting comes up. This means that the type of information generated might differ a fair amount between interviews.

In contrast, a structured approach to interviews is more rigid: a specific set of closed questions is developed and asked of each interviewee in exactly the same order. Closed questions have a limited set of answers, often single-word answers. Therefore, you need to think about what you’re trying to achieve with your research project (i.e. your research aims) and decide which approach would be best suited in your case.

It is also important to plan ahead with regard to who will be interviewed and how. You need to think about how you will approach the possible interviewees to get their cooperation, who will conduct the interviews, when to conduct the interviews and how to record them. For each of these decisions, it’s also essential to make sure that all ethical considerations and data protection laws are taken into account.

Finally, you should think through how you plan to analyze the data (i.e., your qualitative analysis method) generated by the interviews. Different types of analysis rely on different types of data, so you need to ensure you’re asking the right types of questions and correctly guiding your respondents.

Simply put, you need to have a plan of action regarding the specifics of your interview approach before you start collecting data. If not, you’ll end up drifting in your approach from interview to interview, which will result in inconsistent, unusable data.


2. Not having good interview technique

While you’re generally not expected to be an expert interviewer for a dissertation or thesis, it is important to practice good interview technique and develop basic interviewing skills.

Let’s go through some basics that will help the process along.

Firstly, before the interview, make sure you know your interview questions well and have a clear idea of what you want from the interview. Naturally, the specificity of your questions will depend on whether you’re taking a structured, semi-structured or unstructured approach, but you still need a consistent starting point. Ideally, you should develop an interview guide beforehand (more on this later) that details your core questions and links these to the research aims, objectives and research questions.

Before you undertake any interviews, it’s a good idea to do a few mock interviews with friends or family members. This will help you get comfortable with the interviewer role, prepare for potentially unexpected answers and give you a good idea of how long the interview will take to conduct. In the interviewing process, you’re likely to encounter two kinds of challenging interviewees: the two-word respondent and the respondent who meanders and babbles. Therefore, you should prepare yourself for both and come up with a plan to respond to each in a way that will allow the interview to continue productively.

To begin the formal interview, provide the person you are interviewing with an overview of your research. This will help to calm their nerves (and yours) and contextualize the interaction. Ultimately, you want the interviewee to feel comfortable and be willing to be open and honest with you, so it's useful to start in a more casual, relaxed fashion and allow them to ask any questions they may have. From there, you can ease them into the rest of the questions.

As the interview progresses, avoid asking leading questions (i.e., questions that assume something about the interviewee or their response). Make sure that you speak clearly and slowly, using plain language and being ready to paraphrase questions if the person you are interviewing misunderstands. Be particularly careful when interviewing speakers of English as a second language to ensure that you're both on the same page.

Engage with the interviewee by listening to them carefully and acknowledging that you are listening to them by smiling or nodding. Show them that you’re interested in what they’re saying and thank them for their openness as appropriate. This will also encourage your interviewee to respond openly.


3. Not securing a suitable location and quality equipment

Where you conduct your interviews and the equipment you use to record them both play an important role in how the process unfolds. Therefore, you need to think carefully about each of these variables before you start interviewing.

Poor location: A bad location can result in your interviews being compromised, interrupted, or cancelled. If you are conducting physical interviews, you'll need a location that is quiet, safe, and welcoming. It's very important that your location of choice is not prone to interruptions (the workplace office is generally problematic, for example) and has suitable facilities (such as water, a bathroom, and snacks).

If you are conducting online interviews, you need to consider a few other factors. Importantly, you need to make sure that both you and your respondent have access to a stable internet connection and electricity. Always check ahead of time that both of you know how to use the relevant software and that it's accessible (meeting platforms are sometimes blocked by workplace policies or firewalls). It's also good to have alternatives in place (such as WhatsApp, Zoom, or Teams) to cater for these types of issues.

Poor equipment: Using poor-quality recording equipment or using equipment incorrectly means that you will have trouble transcribing, coding, and analyzing your interviews. This can be a major issue , as some of your interview data may go completely to waste if not recorded well. So, make sure that you use good-quality recording equipment and that you know how to use it correctly.

To avoid issues, you should always conduct test recordings before every interview to ensure that you can use the relevant equipment properly. It’s also a good idea to spot check each recording afterwards, just to make sure it was recorded as planned. If your equipment uses batteries, be sure to always carry a spare set.

Where you conduct your interviews and the equipment you use to record them play an important role in how the process unfolds.

4. Not having a basic risk management plan

Many possible issues can arise during the interview process. Not planning for these issues can mean that you are left with compromised data that might not be useful to you. Therefore, it’s important to map out some sort of risk management plan ahead of time, considering the potential risks, how you’ll minimize their probability and how you’ll manage them if they materialize.

Common potential issues related to interview logistics include cancellations (people pulling out), delays (such as getting stuck in traffic), language and accent differences (which poor internet connections can exacerbate), and problems with internet connections and power supply. Other issues can occur during the interview itself: for example, the interviewee could drift off-topic, or you might encounter an interviewee who says very little at all.

You can prepare for these potential issues by considering possible worst-case scenarios and preparing a response for each scenario. For instance, it is important to plan a backup date just in case your interviewee cannot make it to the first meeting you scheduled with them. It’s also a good idea to factor in a 30-minute gap between your interviews for the instances where someone might be late, or an interview runs overtime for other reasons. Make sure that you also plan backup questions that could be used to bring a respondent back on topic if they start rambling, or questions to encourage those who are saying too little.

In general, it's best practice to plan to conduct more interviews than you think you need (this is called oversampling). Doing so will allow you some room for error if there are interviews that don't go as planned, or if some interviewees withdraw. If you need 10 interviews, it's a good idea to plan for 15, as a few will likely cancel, delay, or not produce useful data.
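If it helps to make the arithmetic concrete, here's a minimal sketch (in Python, with an assumed attrition rate of roughly a third, mirroring the 10-to-15 example above) of how you might size your recruitment pool:

```python
import math

def recruitment_target(interviews_needed: int, attrition_rate: float = 0.33) -> int:
    """How many interviews to schedule so that enough usable interviews
    remain after expected cancellations, delays, and unusable data.
    The default attrition rate is an assumed figure, not a rule."""
    return math.ceil(interviews_needed / (1 - attrition_rate))

print(recruitment_target(10))  # 10 / (1 - 0.33) = 14.93 -> plan for 15
```

The right buffer depends on your own context; the point is simply to decide on one before recruitment begins rather than scrambling when interviews fall through.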

You should consider all the potential risks, how you’ll reduce their probability and how you'll respond if they do indeed materialize.

5. Not keeping your golden thread front of mind

We touched on this a little earlier, but it is a key point that should be central to your entire research process. You don't want to end up with pages and pages of data after conducting your interviews and realize that it is not useful to your research aims. Your research aims, objectives and research questions – i.e., your golden thread – should influence every design decision and should guide the interview process at all times.

A useful way to avoid this mistake is by developing an interview guide before you begin interviewing your respondents. An interview guide is a document that contains all of your questions with notes on how each of the interview questions is linked to the research question(s) of your study. You can also include your research aims and objectives here for a more comprehensive linkage. 

You can easily create an interview guide by drawing up a table with one column containing your core interview questions. Then add another column with your research questions, another with expectations that you may have in light of the relevant literature, and another with backup or follow-up questions. As mentioned, you can also bring in your research aims and objectives to help you connect them all together. If you'd like, you can download a copy of our free interview guide here.
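As a purely illustrative aside, a guide like this can live in a simple spreadsheet or CSV file. The short Python sketch below shows one possible layout, using made-up questions and column names rather than any prescribed format:

```python
import csv

# Illustrative only: each row links one core interview question back to
# the golden thread. All questions and mappings below are placeholders.
guide = [
    {
        "core_question": "Can you walk me through a typical working day?",
        "research_question": "RQ1: What does the daily workflow look like?",
        "expectation_from_literature": "Frequent context switching expected",
        "backup_or_follow_up": "Which part of the day feels most pressured?",
    },
    {
        "core_question": "What challenges, if any, do you face with X?",
        "research_question": "RQ2: What barriers do users experience?",
        "expectation_from_literature": "Training gaps noted in prior studies",
        "backup_or_follow_up": "Can you recall a specific recent example?",
    },
]

with open("interview_guide.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(guide[0].keys()))
    writer.writeheader()
    writer.writerows(guide)
```

Whether you keep the guide in a document, a spreadsheet, or a file like this, what matters is that every question visibly maps back to a research question.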

Recap: Qualitative Interview Mistakes

In this post, we’ve discussed 5 common costly mistakes that are easy to make in the process of planning and conducting qualitative interviews.

To recap, these include:

1. Not having a clear plan of action for your interviews
2. Not having good interview technique
3. Not securing a suitable location and quality equipment
4. Not having a basic risk management plan
5. Not keeping your golden thread front of mind

If you have any questions about these interviewing mistakes, drop a comment below. Alternatively, if you're interested in getting 1-on-1 help with your thesis or dissertation, check out our dissertation coaching service or book a free initial consultation with one of our friendly Grad Coaches.


Expert Interview

Gain valuable insights and expert knowledge through a structured interview with a professional in the field.

An Expert Interview involves conducting in-depth interviews with subject matter experts to gather specialized knowledge, insights, and opinions. By engaging with industry leaders, academics, or professionals, researchers can access expertise that might not be readily available through other means. Expert Interviews are valuable in market research, academic studies, and product development, where they provide authoritative insights, validate assumptions, and guide strategic decisions.

Suitable for

  • ✓ Finding hard-to-reach expert information;
  • ✓ Obtaining professional advice;
  • ✓ Acquiring a comprehensive view of the issue.

Deliverables

Interview recruitment plan

A structured plan for identifying and recruiting industry experts who align with the research goals and subjects.

Expert profiles

A set of profiles for each expert, including their credentials, qualifications, and relevant experiences.

Interview guide

A structured, pre-defined set of questions to ask during the interviews, focused on obtaining relevant insights related to the research objectives.

Informed consent form

A document that informs the experts about the purpose, process, and confidentiality terms of the interview. The experts have to sign this form before participating in the interview.

Audio/Video recordings

Recordings of each interview, either in audio or video format, to help with transcription, analysis, and further review.

Transcripts

Typed, detailed text records of each interview, enabling easy reference and analysis.

Analysis and synthesis

A comprehensive analysis of the findings from the transcripts, identifying key themes, insights, and trends, followed by synthesized outcomes to inform design recommendations.

Recommendations report

A detailed report outlining the recommendations based on the expert interviews, aimed at improving the user experience and addressing the research objectives.

Presentation

A visually appealing and concise presentation summarizing the findings, insights, and recommendations derived from the expert interviews, tailored for key stakeholders.

Identify the Research Objectives

Before conducting an expert interview, it is crucial to outline the research objectives. Clearly define what information you are seeking and determine which aspects of the user experience the experts can provide insights about.

Compile a List of Potential Experts

Create a list of potential experts with relevant knowledge and experience in the field of interest. Look for professionals with diverse backgrounds to get a comprehensive understanding of the research topic.

Develop Selection Criteria

Establish specific criteria for selecting experts to interview. Factors such as expertise level, years of experience, and relevance to the research objectives should be considered. This will help ensure that the selected participants can provide valuable insights.

Recruit the Selected Experts

Reach out to the potential experts on your list and invite them to participate in your research. Clearly explain the objectives and expectations of the interview, as well as any benefits they may receive, such as financial compensation or research credit.

Prepare the Interview Questions

Develop a set of open-ended, insightful questions to ask during the interview. Focus on questions that cannot be answered through existing literature, allowing the experts to elaborate on their unique perspectives and experiences.

Schedule and Conduct the Interviews

Schedule the interviews with the selected experts, ensuring that adequate time has been allotted for each session. During the interview, ask open-ended questions and encourage the experts to provide detailed, nuanced answers. Take thorough notes or record the conversation with the participant's consent.

Analyze and Synthesize Findings

After completing the interviews, analyze the responses and identify recurring themes, insights, and patterns. Synthesize these findings by comparing and contrasting the perspectives of the different experts, and use this information to generate insights relevant to the research objectives.

Report the Results

Summarize the key insights and conclusions drawn from the expert interviews in a well-organized report. Share this report with stakeholders, such as project managers or designers, in order to inform the design process and improve the overall user experience.

Follow-up and Maintain Relationships

Establish and maintain relationships with the experts who participated in the interviews. Keep them informed about the progress of the project and how their insights influenced the outcome. Consider inviting them for future research or collaboration when appropriate.

Prerequisites

  • 1 hour or more
  • Stationery, dictaphone
  • 1 researcher, 1 or more experts

Further reading:

  • How to get the most out of an interview with a subject matter expert (uxdesign.cc)
  • UX in the wild: expert interview (linkedin.com)
  • User interviews: how, when, and why to conduct them (nngroup.com)
  • My expert guide to user interviews: techniques & tips with my interview cards, by Stéphanie Walter (stephaniewalter.design)
  • Expert interviews: the Windmill guide to design thinking (windmill.digital)

Design Research Methods: In-Depth Interviews

In our new three-part blog series, we introduce our favourite qualitative research methods and strategies that you can immediately start applying to your human-centered design projects.

We cover the following design research methods:

In-Depth Interviews (in this post)

Contextual Observations , and

Diary Studies

Do you want to conduct better interviews? 

In this post, we'll help you navigate in-depth interviews with your users and customers. We'll explore how to plan and execute a stellar interview, and we'll outline our Top 7 Tips for In-Depth Interviewers.

What are in-depth interviews?

In-depth interviews are one of the most common qualitative research methods used in design thinking and human-centered design processes. They allow you to gather a lot of information at once, with relative logistical ease. In-depth interviews are a form of ethnographic research, where researchers observe participants in their real-life environment. They are most effective when conducted in a one-on-one setting.

How and when can you use interviews?

In-depth interviews are best deployed during the Discovery phase of the Human-Centered Design (HCD) process . They are an opportunity to explore and a chance to uncover your user’s needs and challenges. Do you want to find out where they are struggling the most with your service? Now is the time to ask.


Logistics for In-Depth Interviews

Here are our top tips for planning out the logistics for your interviews:

Recruiting: Properly recruiting for interviews is a crucial step, and it can sometimes be the most challenging part of the process. Recruitment can be handled by the client, in-house, or by an external recruiting firm. As a first step, you'll identify the demographics and characteristics of your different user groups (e.g. gender, age, occupation), and then you'll ideally find 4-6 interview participants who match your recruiting criteria (see the sketch at the end of this section).

Scheduling: Outwitly uses a scheduling tool called Calendly to schedule all of our interviews. This handy platform syncs directly with our internal calendars, and it will even hook up to our web-conferencing tool to send call information directly to the participant.

Format: Interviews can be conducted in person or remotely over the phone, or as a combination of the two. The advantage of in-person interviews is that they allow for easier rapport-building, and you're able to more fully understand the context in which your participant interacts with the product, service, or organization, as well as gain a holistic picture of their life. The advantage of remote interviews is that they are easier to schedule and recruit for, and they can be conducted from anywhere with a cell signal or a WiFi connection. Ideally, you'll do a mix of both interview types, or use remote interviewing in conjunction with another research method, like observations.

Duration: The sweet spot for in-depth interview length is between 45 and 90 minutes, depending on how many research themes and questions you have and, of course, your participant's schedule. Anything over 90 minutes can be very draining for both you and the participant.

Note-Taking: When possible (and with the participant’s consent), it’s best to audio record interviews. This way you are not scrambling to keep up with your hand-written notes, and you are able to fully engage with the participant and listen closely. At Outwitly, we use manual audio recorders, but the iPhone Voice Record Pro app is also an option for in-person interviews. For remote interviewing, you might opt to use call recording software; we like to use the built-in recording feature of GoToMeeting , which is our preferred web-conference platform. Once audio recordings have been collected, we typically get the recordings transcribed using services like Rev.com . This saves a lot of time during the data analysis phase.

Interview Protocol: Before running a set of interviews, it's important to prepare an 'interview protocol.' A protocol is the combination of two things:

1) An introductory script about the research and what the participant can expect from the interview. This is also the time to ask consent for recording and to assure participants that their names and everything they say will be kept confidential.

2) The full list of interview questions and sub-questions (including probing and backup questions) that you plan to work through.
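Coming back to recruiting for a moment: if you capture screener responses in a spreadsheet, a few lines of code can shortlist candidates against your criteria. The sketch below is purely hypothetical (the field names and criteria are invented, not Outwitly's actual process):

```python
# Hypothetical screener data -- names, fields, and criteria are
# invented for illustration and not part of any real workflow.
candidates = [
    {"name": "A", "age": 34, "occupation": "nurse", "uses_service": True},
    {"name": "B", "age": 52, "occupation": "teacher", "uses_service": False},
    {"name": "C", "age": 29, "occupation": "nurse", "uses_service": True},
]

def matches_criteria(candidate: dict) -> bool:
    """Check one screener response against example recruiting criteria
    (an age range, an occupation, and current use of the service)."""
    return (
        25 <= candidate["age"] <= 45
        and candidate["occupation"] == "nurse"
        and candidate["uses_service"]
    )

shortlist = [c for c in candidates if matches_criteria(c)]
print([c["name"] for c in shortlist])  # -> ['A', 'C']
```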

7 Tips for In-Depth Interviewers

Interviewing is an art form, and it requires a high level of emotional intelligence. You need to be in tune with how comfortable your interviewee/research participant feels, and enable them to open up to you, a complete stranger, about their challenges. Research can sometimes involve particularly sensitive subjects like weight management, divorce, personal finances, and more, so rapport-building (Tip #4) is especially crucial for successful interviewing. Here are our Top 7 best practices for interviewers.

Active Listening: The best skill an interviewer can foster is their listening ability. In a strong interview, the interviewer is not interrupting, bringing up their own anecdotes, or asking too many questions. While some of these "what-not-to-do's" can occasionally help make the participant feel comfortable, too many can derail the interview and lead the participant to certain answers (as discussed in Tip #3). The interview should flow naturally, and you should mostly allow the participant to lead the conversation. Listen closely and, when appropriate, repeat key points back to them to reiterate that you are actively listening. Asking a question like "I heard you say your biggest challenges are XYZ. Is there anything else?" shows the participant that you are interested in what they are saying, and it encourages them to keep sharing.

Probing: 'Probing' in the context of in-depth interviews refers to digging deeper into a particular response or topic. Typically, you will have prepared your interview protocol with a list of questions and sub-questions; the latter are your probing questions. For example, you might begin with an open-ended, general question, and as your participant replies, ask subsequent questions that encourage them to keep exploring the subject. A good interviewer also knows when to continue probing on a subject and when to move on.

Non-Leading: Learn not to ask leading questions. A leading question is one in which you make an assumption in the way the question is phrased, which can influence how your participant answers. For example, if you ask a participant "What challenges do you have with XYZ?", you are assuming there are challenges, which may skew the participant's response. They may not have any challenges to begin with, but they might report challenges anyway to fit the question. A better way to ask would be: "What challenges, if any, have you had with XYZ?" When prepping the interview protocol, be careful not to draft leading questions. And if you go off-script in the heat of the moment, think about how you're phrasing your questions.

Building Rapport: Learning to build rapport is one of the most important skills to cultivate as an interviewer. When your participants feel comfortable, they are much more likely to open up to you. Remember to always be friendly and courteous in your communication prior to conducting the interview (e.g. in emails you send regarding scheduling). In the interview, use a tone of voice that is soft, inquisitive, and understanding. Introduce yourself as the researcher and explain the research to the participant. Emphasize that you are there to learn about them, and to understand their needs and how the product, service, or organization they are interacting with could be improved to suit them. During the interview, if you hear in their tone of voice that something in their experience was very frustrating, acknowledge that by saying "It sounds like that was very frustrating" or "I understand" to let them know that you are on their side. Also, reassure them throughout the interview that their feedback is useful and helpful by saying things like "Thank you - that's very interesting," or "I've heard that before from others; you are not the only one!"

Agility & Go-with-the-Flow Attitude: You can prepare, rehearse, and write your interview protocol, but in every interview you will have to be agile. For example, if you’ve separated your interview questions into sections, and the participant naturally starts talking about a topic that you have written down for a later portion of the interview, you should freely move down to those questions and jump back to where you were afterwards. This way, the interview will feel more organic and conversational, and less robotic. Flexibility is also critical because some participants just do not have a lot to say. In these cases, you’ll be required to think of more “off the cuff” questions, or you’ll need to reconsider whether the interview is still a valuable use of your time and theirs. Knowing when to cut an interview short is also an important skill. For the most part, let the participant lead the conversation, feel comfortable jumping around a little in your protocol, and listen to them to know what other questions you could ask that might not be in the protocol. Also, know when to skip a question if you’ve already gotten a response elsewhere in the interview.

Facilitate & Guide: Sometimes interviews will be easy and they’ll naturally follow the flow of your interview protocol. And sometimes they’ll be more challenging, especially if an interviewee is particularly passionate about one topic. In this case, you’ll need to guide your participants as much as possible, so that you can move through more of your questions. This is a delicate balance of listening, finding a time to cut in, and using transitional phrases like “That’s very helpful. I’m mindful of the time, and I would like to ask you some questions about XYZ.”

Comfort with Discomfort: It can sometimes be difficult for participants to answer a question quickly. They might need to think before responding, or there might be things in the back of their mind related to the question that take a minute to recall. It's important to allow interviewees that space to think. From a human perspective, leaving open silence can feel awkward, but it creates space for the participant to remember anything else that might be important. So while you might be sitting there thinking "wow, this is awkward," they are actually just thinking about their answer. On the flip side, you also don't want to leave too much space in case there is nothing else to add; this can make participants feel insecure that they have not said enough. Perfecting this balance comes with experience, so for now, try counting to 10, or mention that you need a few seconds to catch up on your note-taking; this gives them space to think longer without feeling too much time pressure. Of course, if nothing more comes up, feel free to move on.


Next in our Research Methods blog series, we walk you through best practices for conducting observations and shadowing as part of your research and design process.

Resources we like…

Calendly for Scheduling

GotoMeeting for Remote Interviewing

iPhone Voice Record Pro app for Audio Recording

Rev for Audio Transcription


Design Research

What is design research?

Design research is the practice of gaining insights by observing users and understanding industry and market shifts. For example, in service design it involves designers using ethnography (an area of anthropology) to access study participants, gain the best insights, and so be able to design popular services.

“We think we listen, but very rarely do we listen with real understanding, true empathy. Yet listening, of this very special kind, is one of the most potent forces for change that I know.” — Carl Rogers, Psychologist and founding father of the humanistic approach & psychotherapy research

Service design expert and Senior Director of User Research at Twitch Kendra Shimmell explains what goes into good design research in this video.


Get Powerful Insights with Proper Design Research

When you do user research well, you can fuel your design process with rich insights into how your target users interact, or might interact, with products and services in context as they work toward their goals. That's why it's essential to choose the right research methods and execute them properly. Done well, the participants who agree to be test users/customers will be comfortable enough to give you accurate, truthful insights about their needs, desires, pain points and much more. As service design can involve highly intricate user journeys, things can be far more complex than in "regular" user experience (UX) design. That's where design research comes in, with its two main ingredients:

Qualitative research – to understand core human behaviors, habits and tasks/goals

Industry and Market research – to understand shifts in technology and in business models and design-relevant signs

An ideal situation, where you have enough resources and input from experts, is to combine the above to obtain the clearest view of the target customers of your proposed (or improved) service and get the most accurate barometer reading of what your market wants and why. In any case, ethnography is essential. It's your key to decoding this very human economy of habits, motivations, pain points, values and other hard-to-spot factors that influence what people think, feel, say and do on their user journeys. It's your pathway to creating personas: fictitious distillations that prove you empathize with your target users as customers. To gain the best insights, carefully consider how to access these people on their level. When you do ethnographic field studies, you strive for accurate observations of your users/customers in the context of using a service.


How to Leverage Ethnography to Do Proper Design Research

Whatever your method or combination of methods (e.g., semi-structured interviews and video ethnography), the “golden rules” are:

Build rapport – Your “test users” will only open up in trusting, relaxed, informal, natural settings. Simple courtesies such as thanking them and not pressuring them to answer will go a long way. Remember, human users want a human touch, and as customers they will have the final say on a design’s success.

Hide/Forget your own bias – This is a skill that will show in how you ask questions, which can subtly tell users what you might want to hear. Instead of asking (e.g.) “The last time you used a pay app on your phone, what was your worst security concern?”, try “Can you tell me about the last time you used an app on your phone to pay for something?”. Questions that betray how you might view things can make people distort their answers.

Embrace the not-knowing mindset and a blank-slate approach – to help you find users’ deep motivations and why they’ve created workarounds. Trying to forget—temporarily—everything you’ve learned about one or more things can be challenging. However, it can pay big dividends if you can ignore the assumptions that naturally creep into our understanding of our world.

Accept ambiguity – Try to avoid imposing a rigid binary (black-and-white/“yes”-or-“no”) scientific framework over your users’ human world.

Don't jump to conclusions – Try to stay objective. The patterns we tend to establish to help us make sense of our world can work against you as an observer if you let them. It's perfectly human to rely on these patterns so we can think on our feet, but your users/customers will already be doing this with what they encounter. If you add your own subjectivity, you'll distort things.

Keep an open mind to absorb the users' world as they present it – hence why it's vital to get some proper grounding in user research. It takes a skilled eye, ear and mouth to zero in on everything there is to observe, without losing sight of anything by catering to your own agendas.

Gentle encouragement helps; silence is golden – a big part of keeping a naturalistic setting means letting your users stay comfortable at their own pace (within reason). Your "Mm-mmhs" of encouragement and appropriate silent stretches can keep your research safe from users suddenly putting politeness ahead of honesty if they feel uncomfortable (or sense that you are).

Overall, remember that two people can see the same thing very differently, and it takes an open-minded, inquisitive, informal approach to find truly valuable insights to understand users’ real problems.

Learn More about Design Research

Take our Service Design course, featuring many helpful templates: Service Design: How to Design Integrated Service Experiences

This Smashing Magazine piece nicely explores the human dimensions of design research: How To Get To Know Your Users

Let InVision expand your understanding of design research's value here: 4 types of research methods all designers should know.


Learn more about Design Research

Take a deep dive into Design Research with our course Service Design: How to Design Integrated Service Experiences .

Services are everywhere! When you get a new passport, order a pizza or make a reservation on AirBnB, you're engaging with services. How those services are designed is crucial to whether they provide a pleasant experience or an exasperating one. The experience of a service is essential to its success or failure, whether your goal is to gain and retain customers for your app or to design an efficient waiting system for a doctor's office.

In a service design process, you use an in-depth understanding of the business and its customers to ensure that all the touchpoints of your service are perfect and, just as importantly, that your organization can deliver a great service experience every time. It's not just about designing the customer interactions; you also need to design the entire ecosystem surrounding those interactions.

In this course, you'll learn how to go through a robust service design process and which methods to use at each step along the way. You'll also learn how to create a service design culture in your organization and set up a service design team. We'll provide you with lots of case studies to learn from as well as interviews with top designers in the field. For each practical method, you'll get downloadable templates that guide you on how to use the methods in your own work.

This course contains a series of practical exercises that build on one another to create a complete service design project. The exercises are optional, but you'll get invaluable hands-on experience with the methods you encounter in this course if you complete them, because they will teach you to take your first steps as a service designer. What's equally important is that you can use your work as a case study for your portfolio to showcase your abilities to future employers! A portfolio is essential if you want to step into or move ahead in a career in service design.

Your primary instructor in the course is Frank Spillers. Frank is CXO of award-winning design agency Experience Dynamics and a service design expert who has consulted with companies all over the world. Much of the written learning material also comes from John Zimmerman and Jodi Forlizzi, both Professors in Human-Computer Interaction at Carnegie Mellon University and highly influential in establishing design research as we know it today.

You’ll earn a verifiable and industry-trusted Course Certificate once you complete the course. You can highlight it on your resume, CV, LinkedIn profile or on your website.




Qualitative Research: Semi-structured Expert Interview

  • First Online: 23 November 2016


  • Patric Finkbeiner


In Chap. 5, the motivational factors were ranked according to the relevance observed (Table 5.6). In the second phase of this qualitative methodology, the researcher begins with the explanation of different interview types and the justification for choosing semi-structured expert interviews for testing or adding to the factors obtained by PO.




About this chapter

Finkbeiner, P. (2017). Qualitative Research: Semi-structured Expert Interview. In: Social Media for Knowledge Sharing in Automotive Repair. Springer, Cham. https://doi.org/10.1007/978-3-319-48544-7_6

  • Open access
  • Published: 10 May 2024

Community-based participatory-research through co-design: supporting collaboration from all sides of disability

  • Cloe Benz   ORCID: orcid.org/0000-0001-6950-8855 1 ,
  • Will Scott-Jeffs 2 ,
  • K. A. McKercher   ORCID: orcid.org/0000-0003-4417-585X 3 ,
  • Mai Welsh   ORCID: orcid.org/0000-0002-7818-0115 2 , 4 ,
  • Richard Norman   ORCID: orcid.org/0000-0002-3112-3893 1 ,
  • Delia Hendrie   ORCID: orcid.org/0000-0001-5022-5281 1 ,
  • Matthew Locantro 2 &
  • Suzanne Robinson   ORCID: orcid.org/0000-0001-5703-6475 1 , 5  

Research Involvement and Engagement volume  10 , Article number:  47 ( 2024 ) Cite this article


Background

As co-design and community-based participatory research gain traction in health and disability, the challenges and benefits of collaboratively conducting research need to be considered. Current literature supports using co-design to improve service quality and create more satisfactory services. However, while the ‘why’ of using co-design is well understood, there is limited literature on ‘how’ to co-design. We aimed to describe the application of co-design from start to finish within a specific case study and to reflect on the challenges and benefits created by specific process design choices.

Methods

A telepractice re-design project served as a case study example of co-design. The co-design was co-facilitated by an embedded researcher and a peer researcher with lived experience of disability. Embedded in a Western Australian disability organisation, the co-design process included five workshops and a reflection session with a team of 10 lived experience and staff participants (referred to as co-designers) to produce a prototype telepractice model for testing.

Results

The findings are divided into two components. The first describes the process design choices made throughout the co-design implementation case study. This is followed by a reflection on the benefits and challenges resulting from specific process design choices. The reflective process describes the co-designers’ perspective as well as the researchers’ and the organisation’s experiences. Reflections of the co-designers include balancing idealism and realism, the value of small groups, ensuring accessibility and choice, and learning new skills and gaining new insights. The organisational and research-focused reflections included challenges between the time needed for building relationships and the schedules of academic and organisational decision-making, the messiness of co-design juxtaposed with the processes of ethics applications, and the need for inclusive dissemination of findings.

Conclusions

The authors advocate that co-design is a useful and outcome-generating methodology that proactively enables the inclusion of people with disability and service providers through community-based participatory research and action. Through our experiences, we recommend community-based participatory research, specifically co-design, to generate creative thinking and service design.

Plain language summary

Making better services with communities (called co-design) and doing research with communities (e.g. community-based participatory research) are ways to include people with lived experience in developing and improving the services they use. Academic evidence shows why co-design is valuable, and co-design is increasing in popularity. However, there needs to be more information on how to do co-design. This article describes the process of doing co-design to make telepractice better with a group of lived experience experts and staff at a disability organisation. The co-design process was co-facilitated by two researchers – one with a health background and one with lived experience of disability. Telepractice provides clinical services (such as physiotherapy or nursing) using video calls and other digital technology. The co-design team did five workshops and then reflected on the success of those workshops. Based on the groups’ feedback, the article describes what worked and what was hard according to the co-designers and from the perspective of the researchers and the disability organisation. Topics discussed include the challenge of balancing ideas with realistic expectations, the value of small groups, accessibility and choice opportunities and learning new skills and insights. The research and organisational topics include the need to take time and how that doesn’t fit neatly with academic and business schedules, how the messiness of co-design can clash with approval processes, and different ways of telling people about the project that are more inclusive than traditional research. The authors conclude that co-design and community-based participatory research go well together in including people with lived experience in re-designing services they use.


Introduction

Co-design has the potential to positively impact co-designers and their community, researchers, and organisations. Co-design is defined as designing with, not for, people [ 1 ] and can reinvigorate business-as-usual processes, leading to new ideas in industry, community and academia. As co-design and community-based participatory research gain traction, the challenges and benefits of collaborative research between people with lived experience and organisations must be considered [ 2 ].

Disability and healthcare providers previously made decisions for individuals as passive targets of an intervention [ 3 ]. By contrast, the involvement of consumers in their care [ 4 ] has been included as part of accreditation processes [ 4 ] and shown to improve outcomes and satisfaction. For research to sufficiently translate into practice, consumers and providers should be involved actively, not passively [ 4 , 5 ].

Approaches such as community-based participatory research promote “a collaborative approach that equitably involves community members, organisational representatives and researchers in all aspects of the research process” [ 6 ] (page 1). This approach originated in public health research and claims to empower all participants to have a stake in project success, facilitating a more active integration of research into practice and decreasing the knowledge-to-practice gap [ 6 ]. Patient and public involvement (PPI) increases the probability that research focus, community priorities and clinical problems align, which is increasingly demanded by research funders and health systems [ 7 ].

As community-based participatory research is an overarching approach to conducting research, it requires a complementary method, such as co-production, to achieve its aims. Co-production has been attributed to the work of Ostrom et al. [ 8 ], with the term co-design falling under the co-production umbrella. However, co-design can be traced back to the participatory design movement [ 9 ]. The term co-production in the context of this article includes co-planning, co-discovery, co-design, co-delivery, and co-evaluation [ 10 ]. Within this framework, the concept of co-design delineates the collaborative process of discovery, creating, ideating and prototyping to design or redesign an output [ 11 ]. The four principles of co-design, as per McKercher [ 1 ], are sharing power, prioritising relationships, using participatory means and building capacity [ 1 ]. This specific method of co-design [ 1 ] has been used across multiple social and healthcare publications [ 10 , 12 , 13 , 14 ].

A systematic review by Ramos et al. [ 15 ] describes the benefits of co-design in a community-based participatory-research approach, including improved quality and more satisfactory services. However, as identified by Rahman et al. [ 16 ], the ‘why’ is well known, but there is limited knowledge of ‘how’ to co-design. Multiple articles provide high-level descriptions of workshops or briefly mention the co-design process [ 13 , 17 , 18 , 19 ]. Pearce et al. [ 5 ] include an in-depth table of activities across an entire co-creation process; however, within each part (i.e., co-design), only limited descriptions were included. A recent publication by Marwaa et al. [ 20 ] provides an in-depth description of two workshops focused on product development, and Tariq et al. [ 21 ] provides details of the process of co-designing a research agenda. Davis et al. [ 11 ] discuss co-design workshop delivery strategies summarised across multiple studies without articulating the process from start to finish. Finally, Abimbola et al. [ 22 ] provided the most comprehensive description of a co-design process, including a timeline of events and activities; however, this project only involved clinical staff and did not include community-based participation.

As “We know the why, but we need to know the how-to” [ 16 ] (page 2), of co-design, our primary aim was to describe the application of co-design from start to finish within a specific case study. Our secondary aim was to reflect on the challenges and benefits created by specific process design choices and to provide recommendations for future applications of co-design.

Overview of telepractice project

The case study, a telepractice redesign project, was based at Rocky Bay, a disability support service provider in Perth, Australia [ 23 ]. The project aimed to understand the strengths and pain points of telepractice within Rocky Bay. We expanded this to include telepractice in the wider Australian disability sector. The project also aimed to establish potential improvements to increase the uptake and sustainability of Rocky Bay’s telepractice service into the future. Rocky Bay predominantly serves people under the Australian National Disability Insurance Scheme (NDIS) [ 24 ] by providing a variety of services, including allied health (e.g. physiotherapy, dietetics, speech pathology, etc.), nursing care (including continence and wound care), behaviour support and support coordination [ 23 ]. Rocky Bay services metropolitan Perth and regional Western Australia [ 23 ].

The first author, CB, predominantly conducted this research through an embedded researcher model [ 25 ] between Curtin University and Rocky Bay. An embedded researcher has been defined as “those who work inside host organisations as members of staff while also maintaining an affiliation with an academic institution” [ 25 ] (page 1). They had some prior contextual understanding, which stemmed from being a physiotherapist who had previously delivered telehealth in an acute health setting. A peer researcher, WSJ, with lived experience of disability, worked alongside CB. They had no previous experience in research or co-design; this was their first paid employment, and they had an interest in digital technology. Peer researcher is a broad term describing the inclusion of a priority group or social network member as part of the research team to enhance the depth of understanding of the communities to which they belong [ 26 ]. Including a peer researcher in the team promoted equity, collective ownership, and better framing of the research findings to assist with connecting with people with lived experience. These outcomes align with key components of community-based participatory research and co-design [ 27 , 28 , 29 , 30 ].

Person-first language was used as the preference of experts with lived experience who contributed to this research to respect and affirm their identity. However, we respect the right to choose and the potential for others to prefer identity-first language [ 31 ].

A summary of the phases completed before the co-design workshops is represented in Fig. 1 below. Ethical approval for the project was received iteratively before each phase on the timeline (Fig. 1) from the Curtin Human Research Ethics Committee (HRE2021-0731). The reporting of this article has been completed in line with the Guidance for Reporting Involvement of Patients and the Public (GRIPP2) checklist [ 7 ].

Fig. 1: Summary of telepractice co-design project structure [ 1 ]

Here, we present an outline of the chosen research methods with descriptions of each process design choice and supporting reasons and examples specific to the study. The format is in chronological order, with further details of each step provided in Appendix 1 (Supplementary Material 1).

Methods and results

Process of co-production and preparation for co-design

Co-production was chosen as the planning method for the study, as the inclusion of community members (Rocky Bay Lived experience experts and Staff) in each step of the research process would increase buy-in and make the research more likely to meet their needs [ 5 ]. An example of co-planning (part of co-production) includes the study steering committee, with a lived experience expert, clinician and project sponsor representatives collaborating on the selection of study aim, methods and recruitment processes. Another example of co-planning, co-design, and co-delivery was recruiting a peer researcher with disability, who worked with the embedded researcher throughout the study design and delivery.

The second process design choice was to attempt to build safe enough conditions for community participation, as people who feel unsafe or unwelcome are less likely to be able to participate fully in the research [ 1 ]. Building conditions for safety was applied by repeatedly acknowledging power imbalances, holding space for community input, and anticipating and offering accessibility adjustments without judgment.

Getting started

Understanding and synthesising what is already known about telepractice experiences and learning from lived experience was prioritised as the first step in the process. We paired a scoping review of the literature with scoping the lived experiences of the community [ 32 ]. Our reasoning was to understand whether the findings aligned and, secondly, to learn what had already been done and to ask what was next, rather than starting from the beginning [ 1 ]. Examples of strategies used in this step included interviewing clinicians and service provider Managers across Australia to establish how they implemented telepractice during the pandemic and understand their views of what worked and what did not. The second learning process occurred onsite at Rocky Bay, with people with lived experience, clinicians and other support staff, whom the embedded researcher and peer researcher interviewed to understand experiences of telepractice at Rocky Bay.

The authors presented the interview findings during focus groups with Rocky Bay participants to share the learnings and confirm we had understood them correctly. The groups were divided into staff and lived experience cohorts, allowing for peer discussions and sharing of common experiences. This helped build relationships and a sense of familiarity moving into the workshop series.

Co-design workshops

This section outlines specific components of the co-design workshop preparation before describing each of the five workshops and the final reflection session.

Staff and community co-designers

Two process design choices were implemented to form the co-design group. The first was to prioritise lived experience input, as there are generally fewer opportunities for lived experience leadership in service design [ 16 ], and because the disability community have demanded to be included wherever the focus impacts them [ 33 ]. To acknowledge the asymmetry of power between people with lived experience of disability and professionals, we ensured the co-design group had at least as many lived experience experts as staff.

The second priority for the co-design group was to include people for whom involvement can be difficult to access (e.g. people who are isolated for health reasons and cannot attend in-person sessions, people who live in supported accommodation, part-time staff, and people navigating the dual-role of staff member while disclosing lived experience). It was important to learn from perspectives not commonly heard from and support equity of access for participants [ 4 ].

Workshop series structure

When structuring the workshop series, lived experience co-designers nominated meeting times outside standard work hours to reduce the impact of participation on work commitments and to avoid loss of income while participating. The workshops were designed to be delivered as a hybrid of in-person and online attendance, giving co-designers a choice in how they interacted. The series comprised five sequential 90-minute workshops; co-designers voted for the first workshop to be predominantly in-person and the remainder online. Some co-designers chose to attend the initial session in person to build rapport; however, the virtual option remained available. The subsequent online sessions reduced the travel burden on co-designers, which the co-designers prioritised over further face-to-face meetings.

Workshop facilitators

To maintain familiarity and ensure predictability for co-designers, the workshops were co-facilitated by the embedded researcher and peer researcher. The co-facilitators built on relationships formed through previous interactions (interviews and focus groups), and each facilitator represented part of the co-designer group as a clinician or a person with disability. An extra support person was tasked with supporting the co-designers with disability to break down tasks and increase the accessibility of activities. This person was selected because they could contribute their skills as a school teacher to support the communication and completion of activities, and because they had no previous experience with disability services that might influence the co-designers' opinions. This role was adapted from the provocateur role described by McKercher [ 1 ].

Pre-workshop preparations

To prepare for the workshops, each co-designer was asked to complete a brief survey so the co-facilitators understood their preferences and needs ahead of the session, enabling preparation and accommodations. The survey included pronouns, accessibility needs and refreshment preferences. Following the survey, the co-facilitators distributed a welcome video in which the peer researcher, a familiar person, explained what to expect, what not to expect and the expected behaviours for the group to support a safe environment [ 1 ]. This process design choice was made to alleviate potential anxieties arising from a lack of information and to increase predictability.
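As a loose illustration of this preparation step, the sketch below shows how such survey responses might be captured and compiled into per-person preparation notes. The field names and example entries are hypothetical, not the study's actual instrument; only the survey topics (pronouns, accessibility needs, refreshment preferences) come from the text.

```python
from dataclasses import dataclass, field

@dataclass
class PreWorkshopSurvey:
    """One co-designer's survey response; field names are illustrative."""
    name: str
    pronouns: str
    accessibility_needs: list = field(default_factory=list)
    refreshments: list = field(default_factory=list)

def resource_pack_notes(responses):
    """Compile per-person notes used to prepare individualised resource packs."""
    return {
        r.name: {
            "name_tag": f"{r.name} ({r.pronouns})",
            "adjustments": list(r.accessibility_needs),
            "refreshments": list(r.refreshments),
        }
        for r in responses
    }

# Hypothetical example with two co-designers.
packs = resource_pack_notes([
    PreWorkshopSurvey("Alex", "she/her", ["easy-apply name tag"], ["decaf tea"]),
    PreWorkshopSurvey("Sam", "he/him", ["regular breaks"], ["soft foods"]),
])
```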

Workshop resources and supports

As the first workshop was in-person, specific process choices were made to ensure co-designers felt welcome and to uphold the dignity of co-designers with lived experience [ 34 ]. Examples of process design choices include facilitating transport and parking requests, providing easy access to the building and room, making a sensory breakout room available and having the peer researcher waiting at the entrance to welcome and guide people to the workshop room.

After reaching the workshop room, all co-designers received an individualised resource pack to equalise access to workshop materials, aiming again to balance power in a non-discriminatory way [ 11 ]. The resource pack included name tags with pronouns, individualised refreshments, a fidget toy [ 35 ], whiteboard markers and a human bingo activity described in a later section. An easy-to-apply name tag design was selected after consulting a co-designer with an upper limb difference. Further details on the resource packs are included in Appendix 1 (Supplementary Material 1).

Enabling different kinds of participation

We provided non-verbal response cards to each co-designer, as communication preferences vary significantly within the disability community. The cards were intended to benefit any co-designer who found it difficult to use the response buttons on MS Teams. The co-facilitators co-created the Yes, No and In-the-middle response cards (Fig.  2 ), guided by recommendations from Schwartz and Kramer [ 29 ], who found that people with intellectual disability were more likely to respond “yes” if the negative option included a frowning face or red-coloured imagery, as choosing these types of alternatives was perceived as negative or likely to cause offence [ 29 ].

Fig. 2 Non-verbal response cards

A summary of the structure and purpose of each of the five workshops is shown in Fig.  3 , followed by a more in-depth discussion of the strategies employed in each workshop.

Fig. 3 Outline of workshop and group structures

Workshop 1: the beginning

Human Bingo was the first workshop activity, as it aimed to support relationship building in an inclusive way for both in-person and online attendees. The activity asked each co-designer to write, in each box of a worksheet, the name of someone who fit the characteristic described in that square (for example, someone who likes cooking). To include the two online attendees, laptops were set up with individual video-call streams and noise-cancelling headphones, enabling the online co-designers to interact one-on-one with others during the activities.

The second activity used The Real Deal cards by Peak Learning [ 36 ], asking co-designers to sort the cards and prioritise the top five experiences and feelings they would want in a future version of telepractice. This activity aimed to set initial priorities for the redesign of telepractice [ 1 ]. Small groups with a mix of lived experience experts and staff negotiated and collaborated to produce their top five desired experiences and feelings for future service success.

A follow-up email was sent after the session to thank co-designers, provide closure, invite feedback and let co-designers know what to expect from the next session.

Workshop 2: mapping the journey

In the second workshop, held online, the co-facilitators explained the journey-mapping process and showed a draft of how the visual representation would likely look (Fig.  4 ). As the first step, co-designers completed a series of activities to analyse lived experience interview data on the current experience of telepractice for lived experience experts. Small mixed groups were created, prioritising the needs of the lived experience experts by allocating staff who would best support them to work through the task [ 1 ]. The small groups were allocated interview quotes corresponding to the steps of a customer journey through telepractice and asked to identify strengths, challenges and emotions associated with the current telepractice service journey at Rocky Bay [ 1 ]. Further details on the journey map analysis are described in Appendix 1 (Supplementary Material 1) and in a published article co-authored by the co-designers (Benz et al. [ 37 ]).

Fig. 4 Draft journey map visualisation

After workshop two, the embedded researcher drafted a journey map by compiling the co-designer group's responses to the analysis activity, which was then circulated for feedback and confirmation. The completed journey map is published, with further details on the process, in an article co-authored with the co-designers (Benz et al. [ 37 ]).
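As a loose illustration of this compilation step, the sketch below aggregates tagged quote records into a draft journey map. The journey-step names and records are invented for the example; only the strengths, challenges and emotions categories come from the study.

```python
from collections import defaultdict

# Hypothetical records: each tags one interview quote with a journey step
# and the strengths, challenges and emotions a small group identified.
tagged_quotes = [
    {"step": "Booking a session", "strengths": ["flexible times"],
     "challenges": ["confusing emails"], "emotions": ["frustrated"]},
    {"step": "During the session", "strengths": ["own environment"],
     "challenges": ["audio dropouts"], "emotions": ["relaxed"]},
]

def draft_journey_map(records):
    """Aggregate tagged quotes by journey step for the draft map."""
    steps = defaultdict(lambda: {"strengths": [], "challenges": [], "emotions": []})
    for record in records:
        for category in ("strengths", "challenges", "emotions"):
            steps[record["step"]][category].extend(record[category])
    return dict(steps)

journey_map = draft_journey_map(tagged_quotes)
```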

Workshop 3: ideas for addressing pain points

For the third workshop, the co-facilitators selected activities to be completed separately by lived experience and staff co-designers. The lived experience expert activity involved exploring preferences for improving pain points identified through the journey map. It was facilitated by the peer researcher and the support person and included questions such as “How would it be best to learn how to use telepractice?”. Visual prompt cards were shared to support idea creation, and lived experience expert co-designers could choose any option or suggest an alternative (Fig.  5 ).

Fig. 5 Option cards for the lived experience expert co-designer workshop activity

Simultaneously, the staff co-designers completed a parallel activity to address pain points from a service delivery point of view. These pain points were identified in the clinical and non-clinical staff interviews and from the journey map summary of lived experience expert interviews (analysed in Workshop 2). Staff co-designers completed a mind map based on service blueprinting guidelines by Flowers and Miller [ 38 ]. The activity used service blueprinting to identify a list of opportunities for improvement, with four prompts for co-designers to commence planning the actions required to implement these improvements. The foci of the four prompts were roles, policies, technology and value proposition [ 38 ] (described further in Appendix 1 (Supplementary Material 1)). Each of the four prompts was completed for each of the ten proposed opportunities for improvement, to draft plans for future telepractice service delivery, as sketched below.
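A minimal sketch of the resulting prompts-by-opportunities structure follows. The four prompts are named in the text; the opportunity labels are placeholders, as the study's ten opportunities are detailed in the supplementary material and not reproduced here.

```python
# The four blueprint prompts come from the text; the opportunity labels
# below are placeholders, not the study's actual list of ten.
PROMPTS = ("roles", "policies", "technology", "value proposition")
opportunities = ["Opportunity 1", "Opportunity 2"]  # ...ten in the study

# One empty action list per (opportunity, prompt) cell.
blueprint = {opp: {prompt: [] for prompt in PROMPTS} for opp in opportunities}

# Example of one drafted action for one cell.
blueprint["Opportunity 1"]["roles"].append(
    "Nominate a telepractice support contact for customers"
)
```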

Workshop 4: storytelling and generation of future-state solutions

In the fourth workshop, we introduced the concept of prototyping [ 39 ] as a designerly way to test co-designers’ ideas for improving telepractice according to desirability, feasibility and viability with a wider audience of lived experience experts and staff. The co-designers helped to plan the prototyping, and accessibility was a key consideration in selecting a prototype, as the group were conscious of the target audience.

Creating the prototype was collaborative, allowing co-designers to produce an output representing their ideas. They selected a video storyboard prototype, with a staff and a customer version, formatted similarly to a children’s book. It included cartoon animations created in PowerPoint, voiceover narration, closed captioning and an introductory explanation from two co-designers.

The co-designers collaborated on the customer and staff prototypes during the two weeks between workshops four and five, with support and input from the facilitators. The prototype files were co-produced, with different co-designers working on the visual aspects, the script for the main audio narration and the introductory explanation.

Workshop 5: finishing the story

The co-design group reviewed the draft prototypes in the final workshop, with specific attention paid to the story’s cohesiveness.

The feedback questionnaire was then created to be completed by viewers outside of the co-design group after engaging with either the staff or the customer prototype. The survey allowed Rocky Bay customers and staff to contribute ideas. Following thoughtful discussions, consensus was reached by all co-designers on the final survey questions (Appendix 2 (Supplementary Material 1)).

A reflection activity concluded the final workshop, allowing co-designers to provide feedback on the co-design process, elements for improvement and aspects of participating in the project they valued. Their reflections on the benefits and challenges of co-design in this study are included in the section ‘Co-designer’s perspectives of the workshop series’, with the reflection questions included in Appendix 3 (Supplementary Material 1).

Post prototype reflection session

The prototype feedback responses were reviewed with co-designers in a final reflection session. The group then discussed adaptations to the implementation plan for proposal to Rocky Bay. Following the survey discussion, co-designers reviewed proposed service principles for the new telepractice implementation recommendations. These principles aim to align any future decisions in the implementation and service provision stages of the telepractice project with the intentions of the co-designers. An additional reflection activity was completed, specific to the telepractice proposal the co-designers had produced and the prototyping process. Feedback relevant to subsequent discussions of the challenges and benefits of co-design is included in the following section, ‘Co-designer’s perspectives of the workshop series’, with the reflection prompts in Appendix 3 (Supplementary Material 1).

Benefits and challenges

Learnings derived from completing a study of this kind are complex. However, it is necessary to reflect on which strategies used in the project were beneficial and which created challenges, both anticipated and unexpected. These reflections are discussed in two sections: the first covers the challenges and benefits reflected upon by co-designers; the second relates to organisational and research project-level benefits and challenges from the perspective of the clinical department managers and researchers involved in the project.

Co-designer’s perspectives of the workshop series

Co-designers were positive overall about the workshop series. Responses to a prompt for one-word descriptors of their experience included “captivating, innovative, fulfilling, exciting, insightful, helpful, eye-opening and informative”.

Co-designing as a team

A foundational strategy implemented in this project was the intentional collaboration of lived experience experts with staff, linked to the co-design principle of prioritising relationships and sharing power. Multiple reflections commented on feeling like a team and on the benefit of having diverse perspectives across the group.

It was especially interesting to hear the perspective of clinicians (for us, the other side of Telepractice). [Lived experience expert Co-designer]

Additionally, the combination of facilitators, including an embedded researcher with an allied health clinical background, a peer researcher with lived experience and a support person with strengths in breaking down tasks, provided different facets of support and task modelling to the co-designers throughout the process.

Balancing idealism and realism

There is an inherent challenge in collaboration between lived experience experts and service providers, whereby co-designers formulate ideas for service improvement and then, in good faith, propose the changes required to implement them. In this project, we implemented strategies to support imagination and idealism while being honest about the constraints of what could be delivered. This was essential to reinforce to co-designers that their contributions and ideas were valid, while tempering their hopes with the truth that organisational change is challenging and funding for change is limited. Co-designers were encouraged to be cognisant of which ideas would require high investment (cost and time) and which faced fewer barriers to implementation. This strategy did not prevent co-designers from ideating changes and prioritising what mattered most to them, and they felt it beneficially added a level of consideration about which investments they deemed necessary versus nice to have. For example, having a person to call for help was viewed as necessary, while more advanced technological features were a nice-to-have.

I feel that the prototype is useful; however, I worry that nothing will be carried over to the Rocky Bay Service. I feel like more customers will want to access telepractice, and Rocky Bay now needs to start the implementation process to ensure that telepractice is utilised, including processes, education and training. [Clinician Co-designer]

The value of small groups

Working in small groups was another beneficial strategy, aiming to create a more hospitable environment for co-designers to voice their thoughts. The small groups varied across activities and workshops, with facilitators intentionally forming groups that would best support the lived experience expert co-designers in completing activities. As described in the workshop sections, some activities suited mixed groups, whereas others suited lived experience expert-specific and staff-specific groups. Two reflective comments demonstrated the benefit of the small groups, the first from a clinician who reflected on supporting a fellow co-designer:

I found that in our group, all of us had a say; however, [Lived Experience Co-designer name] was a bit overwhelmed at times, so I tried to support her with that. [Clinician Co-designer]

And a lived experience expert co-designer additionally reflected:

The breakout rooms were a very good idea. It can be quite intimidating speaking in front of the main group. I found it much easier to participate in the smaller groups. [Lived experience expert Co-designer]

The second session included an unplanned whole-group activity, which challenged co-designers. Co-designers’ reflections on this experience demonstrate the benefits of smaller groups:

I did feel that at the end when the whole group did the task, there wasn’t as much collaboration as there were quite a few more assertive participants, so the quieter ones just sat back. [Clinician Co-designer]

Accessibility and choice

A challenge navigated throughout the workshop series with a diverse group of co-designers was meeting their varying individual health and other needs. This required responding in sensitive, non-judgemental and supportive ways to encourage co-designers to engage fully. Examples of support include the presence of a support person and the adaptation of resource packs for co-designers who have difficulty swallowing (regarding refreshments), as well as the previously mentioned non-verbal response cards and accessible name tags.

Accessibility supports were also provided for the peer researcher during facilitation activities, including pre-written scripts to provide clarity when explaining tasks to the co-design group, written reminders and regular check-ins. A lived experience expert co-designer reflected that they could tell the peer researcher was nervous but appreciated that he was brave; it made them feel they did not need to be perfect if the peer researcher was willing to give it a go.

When facilitating the sessions, the embedded researcher and peer researcher identified that the workshops were long and, at times, mentally strenuous. One co-designer requested “more breaks during each session”. Breaks were offered frequently; however, upon reflection, we would schedule regular breaks to remove the need for co-designers to request a break in front of the group. The instructions for each activity were visual, verbal and written, and given at the start of a task; however, once the co-designers were allocated to breakout rooms, they could no longer review the instructions. Many co-designers suggested that having the instructions in each breakout room’s chat window would have been a valuable visual reminder.

One thing I think might of helped a little is having the instructions in the chat as I know I that I listened but couldn’t recall some of the instructions for the group task. [Lived experience expert Co-designer]

Learning new skills and gaining new insight

The co-designers considered that the benefits of working together included learning new skills and widening their understanding of research, the services they provide or use, and the differences between the priorities of lived experience experts and staff. Two lived experience experts commented that the opportunity to learn collaboration skills and to create cartoons using PowerPoint gave them valuable skills to use in the future. One clinician reflected that the process of co-design had improved their clinical practice and increased their use of telepractice:

My practice is 100% better. I am more confident in using telepractice and more confident that, as a process, it doesn’t reduce the impact of the service- in some ways, it has enhanced it when customers are more relaxed in their own environments. I have not seen my stats, but my use of telepractice has increased significantly, too. [Clinician Co-designer]

The management co-designer acknowledged that although ideas across the group may be similar, prioritisation of their importance can vary dramatically:

Whilst all the feedback and potential improvements were very similar, some things that I viewed as not an issue, was very different to a customer’s perspective. [Management Co-designer]

Overall, the workshop series challenged co-designers. However, the provision of a supportive and accessible environment resulted in mutual benefits for the research, the organisation and the co-designers themselves. The strategy for facilitating the workshops was to pose challenges, support the co-designers in rising to meet them, and trust in their capability to do so when provided with the right opportunity. A lived experience expert co-designer summarised the effectiveness of this strategy:

I found the activities to be challenging without being too difficult. Each activity provided enough guidance and structure to encourage interesting group discussions and make collaboration easy. [Lived experience expert Co-designer]

Research and organisational reflections on the benefits and challenges of co-design

A significant challenge in completing this project was that building foundational relationships and trust takes time. While the authors view this trust as the foundation on which community-based participatory research and co-design are built, they note the direct tension between the time needed to develop these relationships and the timeline expectations of academic and organisational decision-making. The flexibility required to deliver a person-centred research experience resulted in regular timeline extensions that prioritised co-designer needs over efficiency. The resulting timeline was significantly longer than expected, which sometimes created a disconnect between the flexibility of co-design and the rigidity of traditional academic and organisational processes.

The impacts of a longer-than-expected timeline for completing the co-design process included financial, project scope and sponsorship challenges. The project’s initial scope included a co-implementation and co-evaluation phase; however, due to the three-year time constraint, this was modified to conclude following the prototyping process. While the three-year period set expectations for project sponsors and other collaborators from Rocky Bay, the wider context for the project varied significantly and rapidly over this period, including two changes in the Rocky Bay supervisor and one change in the Rocky Bay project sponsor; additionally, one of the academic supervisors left Curtin. This challenge indicates that the project would have benefited from key role succession planning.

The peer researcher role was beneficial in providing an opportunity for a person with lived experience to join the study in a strength-based role and experience academic and business processes. However, challenges arose with the timeline extensions, which required this part-time, casual role to be extended by seven months. While the contract extension posed budgetary challenges, the role was viewed as vital to the completion of the project.

While ethical approval is an essential component of research, particularly research involving vulnerable populations, it proved challenging due to the non-traditional research methods involved in co-design. It was evident to the authors that, while the ethics committee staff adhered to their processes, they were bound by a system without adequate flexibility to accommodate newer research methods such as co-design. Multiple methods in this study were heavily integrated into the community, including embedded research, peer research and co-design.

The ethics process provided a comprehensive review focused on planned interactions within research sessions (e.g. interviews and workshops). Unfortunately, it failed to account for a wider view, including the initial co-production prior to the ethics application and the incidental interactions that occurred regularly in the organic co-design process. In addition to the repeated submissions required to approve the sequential study format, these interactions created a significant workload for the research team and ethics office. These challenges were compounded by the need to navigate Rocky Bay’s organisational processes and changing business needs within ethical approval commitments.

In the authors’ opinion, prioritising the inclusion of lived experience experts in co-creating outputs to disseminate findings was beneficial. The co-creation enabled an authentic representation of the study to audiences regarding the implementation of community-based participatory research and co-design methods. For example, at a conference panel discussion, the peer researcher prerecorded his responses to questions as his preferred method of participation. All posters presented by the project were formatted to be accessible to lay consumers and were collaboratively produced, with the additional benefit that the posters were displayed across Rocky Bay hubs for customers and staff to gain study insights.

Due to the co-design method’s dynamic nature, some budgetary uncertainty was challenging to navigate. However, financial and non-financial remuneration for all non-staff participants in the project was prioritised. As previously discussed, the position of peer researcher was a paid role; additionally, all lived experience expert participants were remunerated at a rate of AUD 30/hour in the form of gift cards. The carer representative on the steering committee recommended using gift cards to avoid affecting the income declaration requirements of any government benefits participants may receive. Non-financial remuneration for the valuable time and contribution of the co-designer group included co-authorship of an article about the journey map they produced (Benz et al. [ 37 ]) and acknowledgement in other appropriate outputs. The implementation proposal provided to Rocky Bay included recommendations for the continued inclusion and remuneration of co-designers.

Setting a new bar for inclusion

Another benefit to reflect upon, which may be the most significant legacy of the project, was setting a precedent for the inclusion of people with disability in decision-making roles in future projects and research conducted by the University and Rocky Bay. Since this project commenced, other Rocky Bay clinical projects have similarly elevated the voices of lived experience in planning and conducting subsequent quality improvement initiatives.

I’m lucky enough to have been part of a lot of projects. But I guess I probably haven’t been a part of continuous workshops, pulling in all perspectives of the organisation perfectly… So, collaboration and getting insight from others I haven’t usually was a very unique experience, and I definitely found value if this were to continue in other projects. [Manager Co-designer]

In summary, using a co-design method for the telepractice research study produced a series of benefits and presented the researchers with multiple challenges. The findings also addressed a literature gap by presenting in-depth descriptive methods that demonstrate how co-design can be applied to a specific case.

Drawing on these findings, the authors identified six main points, which form the basis of this discussion: (1) the time and resources required to complete the co-design process were underestimated at the outset; (2) there is a need to support the health, well-being and dignity of lived experience expert participants; (3) academic ethics processes have yet to adapt to more participatory and integrated research methods; (4) strategies used to foster strong collaborative relationships across a diverse group were valued by all participants; (5) better delineation between terminologies such as co-design and community-based participatory research or patient and public involvement would improve the clarity of research methods and author intent; and (6) the broader non-traditional impacts that participatory research can create should be better quantified and valued in the context of research impact. Each point is discussed in further detail below.

In underestimating the time and resources required to complete the telepractice study, a scope reduction was required, removing the study’s originally planned co-implementation and co-evaluation phases. While Harrison et al. [ 40 ] and Bodden and Elliott [ 41 ] advocate for more frequent and comprehensive evaluation of co-designed initiatives, the authors acknowledge that this was no longer feasible within the study constraints. A growing body of literature indicates expected timelines for completed co-production projects from co-planning to co-evaluation: Pearce et al. [ 5 ] indicated that a timeline of five years was reasonable, whereas Tindall et al. [ 13 ] completed a more limited co-design process on a shorter timeline. Although neither of these articles was published when this study commenced, they are complementary in building an evidence base that future research can use to anticipate an adequate timeline.

While co-design and other co-production processes are resource- and time-intensive, the investment is essential to prioritise the health and other needs of potentially vulnerable population groups in the context of an imbalance of power [ 42 ]. In exploring the concept of dignity for people with disability, Chapman et al. [ 34 ] indicated that recognising the right to make decisions and proactively eliminating or minimising barriers to inclusion are key to protecting dignity. Community participation in decision-making processes such as this study can produce messy and unpredictable outcomes; however, the onus must be placed on policymakers, organisations and academia to acknowledge this sufficiently rather than demand conformity [ 15 ].

The authors posit that the study would have benefited from an alternative ethics pathway that provides additional flexibility while upholding the rigour of the ethical review process. The increasing frequency of participatory research studies indicates that the challenges experienced by the authors of this study are unlikely to be isolated. Lloyd [ 43 ] described challenges regarding information gathered in-between, before and after structured research sessions, reflecting that they relied on personal judgement of the intent to consent for research use. Similarly, Rowley [ 44 ] reflected on the ethical complexities of interacting with families and respecting their confidentiality while integrated within an organisation. While these studies concerned co-production in child protection and education, the ethical challenges they reflect on parallel those experienced in the telepractice study. The risk posed by inadequate ethical support in these contexts is that poor ethical outcomes become more likely, especially in the in-between times of co-design. Therefore, an ethics pathway involving more frequent brief liaisons with a designated ethics representative, to update project progress and troubleshoot ethical considerations, may better support researchers to safeguard study participants.

We believe the decision to complete a sequential workshop series with a consistent group of diverse co-designers, led by co-facilitators, was a strength of the co-design process implemented in the telepractice re-design project. The group worked together across a series of workshops, which enabled them to build solid working relationships. Pearce et al. [ 5 ], Rahman et al. [ 16 ] and Tindall et al. [ 13 ] also demonstrated a collaborative whole-team approach to co-design. By contrast, studies that involved separate workshops with different cohorts, or multiple instances of the same workshop, did not demonstrate strong collaboration between co-designers [ 18 , 19 , 20 ]. Nesbitt et al. [ 19 ] explicitly highlighted that they would improve their method by completing sequential workshops with a continuous cohort. Stephens et al. [ 45 ] found that small mixed groups were not sufficient to support the participation of people with disability, indicating that our choice to intentionally balance groups to meet the lived experience expert co-designers’ needs may have been a factor in our success.

A lack of clarity in the terminology used in co-design and community-based participatory practice was identified during the completion of this study. We found that co-design frequently meant either a collaborative design process or good participatory practice [ 46 ]. In the structure of the telepractice re-design project, the overarching research approach was community-based participatory research, and the method was co-design [ 9 ]. The delineation between the overarching approach and the method clarifies the misappropriation of the term co-design to mean public participation [ 46 ] rather than the joint process of creative thinking and doing to design an output [ 11 ]. The use of the two-level structure appears more prominent in the United Kingdom: Fox et al.’s [ 47 ] systematic review assessing recognition of public or patient participants identified that 60% of studies originated from the United Kingdom, compared with 16% from Canada (the next highest) and 4% each from Australia and the United States. To improve clarity and reduce confusion about terminology, the authors advocate for greater awareness and implementation of the delineation between a community-based participatory research/patient or public involvement approach and the co-design method.

An example of co-design being used where alternative terms such as community-based participatory processes (or research) may be more apt is the most recent amendment to the act governing the NDIS, under which this project resided [ 48 ]. There, the term co-design could be interpreted as an intent to collaborate with people with disability for equitable involvement in all aspects of the NDIS [ 48 ]. Differentiating these terms would assist in clarifying intent and discourage inaccurate expectations of community involvement or design processes.

Implementing community-based participatory research has demonstrated the potential to create an impact that extends beyond the original aim of the study. The skills learned by co-designers, the research team’s learning about collaboration with people with disability, the engagement and skill-building of a peer researcher with lived experience, the organisations that engaged in the co-design process, and the academic and lay people who engaged with research outputs all carry a piece of the impact of the co-design process. Rahman et al. [ 16 ] contend that co-design processes positively impact communities. In the context of this study, the peer researcher was included in the National Disability Insurance Agency’s quarterly report as an example of strength-based employment opportunities, which significantly improved his career prospects [ 49 ]. This project provided people with disability with skills they value and improved the clinical practice of clinician co-designers, echoing the conclusions of Ramos et al. [ 15 ], who described participants feeling valued and experiencing improved self-esteem. The authors additionally intend to positively impact disability providers and academia, to advocate for greater collaboration, and to provide open-access publications that strengthen the evidence base for co-design in clinical practice and service delivery.

Strengths and limitations

The study provides reflective evidence documenting the challenges and benefits experienced during its implementation. However, a limitation of the project’s design was the exclusion of outcome measures to directly assess the impact of process design choices. Stephens et al. [ 45 ] completed targeted outcome measures correlating to accessibility adaptations in co-design and conceded that the variability of findings and individual needs reduced the usefulness of these measures.

The reduction of project scope enabled the completion of the study within its budget and timeline restrictions. Although the scope of the project had some flexibility, there were limits to how far it could be extended, as resources were not infinite and staffing changes meant that organisational priorities changed. Including implementation and evaluation would have improved the study’s rigour; however, Rocky Bay now has the opportunity to implement internally without potential research delays and restrictions.

The blended and flexible approach to the co-design process was a strength of the study, as it met the co-designers’ needs and maximised the project’s potential inclusivity. This strength has the potential to positively impact other studies, which can modify some of the process design choices to suit their context and increase inclusivity [ 11 ]. We believe the messiness of co-design is important in meeting the needs and context of each individual study; therefore, no two co-design processes should look the same.

The authors concede that the inclusion of a cohort of people with disability and clinical staff does not represent the entirety of their communities, and their proposed changes may cause some parts of the disability community to experience increased barriers [ 50 ]. It is important to note that while the co-designers who participated in this project provided initial design developments, future opportunities remain to iterate the proposed telepractice service and continue to advocate for equitable access for all.

Recommendations for future studies

Recommendations from this study fall into two categories: recommendations for those intending to use the described methods and recommendations for future avenues of research inquiry. For those intending to implement the methods, the primary recommendations are to build ample time buffers into the project schedule, implement key role succession planning, set remuneration agreements at the outset, and work together as partners with the mindset that all contributors are creative [ 51 ], holding important expertise and invaluable insights if supported appropriately.

Regarding avenues for future inquiry, we recommend investigating a more dynamic and flexible ethics process that may use more frequent short consultations to respond to ethical considerations as they emerge during co-design and participatory research.

In the authors’ opinion, supported by co-designers’ experiences, co-design is a useful and outcome-generating methodology that can proactively enable the inclusion of people with disability and service providers in a community-based participatory research approach. The process is both time- and resource-intensive; however, in our opinion, the investment is justified by the delivery of direct research benefits and indirect wider community benefits. We advocate for using community-based participatory research and processes paired with co-design to generate creative thinking within service design. Through co-design processes, we recommend collaborating with a single diverse group of co-designers who have the time and space to build trusting working relationships that enable outputs representative of the group consensus.

Data availability

The dataset supporting the conclusions of this article is predominantly included within the article (and its additional files). However, due to the small number of co-designers reflecting upon the research, there is a reasonable assumption of identification despite deidentification; therefore, the supporting data from the reflection activity responses are not available.

Abbreviations

AUD: Australian Dollar
GRIPP2: Guidance for Reporting Involvement of Patients and the Public 2 Checklist
HREC: Human Research Ethics Committee
PhD: Doctor of Philosophy
PPI: Patient and Public Involvement
MS Teams: Microsoft Teams
NDIS: National Disability Insurance Scheme

References

1. McKercher KA. Beyond Sticky Notes: doing co-design for real: mindsets, methods, and movements. 1st ed. Sydney, NSW: Beyond Sticky Notes; 2020.
2. Mullins RM, Kelly BE, Chiappalone PS, Lewis VJ. ‘No-one has listened to anything I’ve got to say before’: co-design with people who are sleeping rough. Health Expect. 2021;24(3):930–9. https://doi.org/10.1111/hex.13235
3. Ekman I, Swedberg K, Taft C, Lindseth A, Norberg A, Brink E, et al. Person-centered care — ready for prime time. Eur J Cardiovasc Nurs. 2011;10(4):248–51. https://doi.org/10.1016/j.ejcnurse.2011.06.008
4. National Commission on Safety and Quality in Healthcare. Partnering with Consumers Standard. Australia: National Commission on Safety and Quality in Healthcare; 2021. https://www.safetyandquality.gov.au/standards/nsqhs-standards/partnering-consumers-standard
5. Pearce T, Maple M, McKay K, Shakeshaft A, Wayland S. Co-creation of new knowledge: good fortune or good management? Res Involv Engagem. 2022;8(1):1–13. https://doi.org/10.1186/s40900-022-00394-2
6. Bordeaux BC, Wiley C, Tandon SD, Horowitz CR, Brown PB, Bass EB. Guidelines for writing manuscripts about community-based participatory research for peer-reviewed journals. Prog Community Health Partnersh. 2007;1(3):281–8. https://doi.org/10.1353/cpr.2007.0018
7. Staniszewska S, Brett J, Simera I, Seers K, Mockford C, Goodlad S, et al. GRIPP2 reporting checklists: tools to improve reporting of patient and public involvement in research. Res Involv Engagem. 2017;3(1):1–11. https://doi.org/10.1186/s40900-017-0062-2
8. Ostrom E, Baugh W, Guarasci R, Parks R, Whitaker G. Community organization and the provision of police services. Sage; 1973.
9. Masterson D, Areskoug Josefsson K, Robert G, Nylander E, Kjellström S. Mapping definitions of co-production and co-design in health and social care: a systematic scoping review providing lessons for the future. Health Expect. 2022;25(3):902–13. https://doi.org/10.1111/hex.13470
10. Bibb J. Embedding lived experience in music therapy practice: towards a future of co-designed, co-produced and co-delivered music therapy programs in Australia. Aust J Music Ther. 2022;33(2):25–36. https://doi.org/10.3316/informit.829441047529429
11. Davis A, Gwilt I, Wallace N, Langley J. Low-contact co-design: considering more flexible spatiotemporal models for the co-design workshop. Strategic Des Res J. 2021;14(1):124–37. https://doi.org/10.4013/sdrj.2021.141.11
12. Claborn KR, Creech S, Whittfield Q, Parra-Cardona R, Daugherty A, Benzer J. Ethical by design: engaging the community to co-design a digital health ecosystem to improve overdose prevention efforts among highly vulnerable people who use drugs. Front Digit Health. 2022;4:1–13. https://doi.org/10.3389/fdgth.2022.880849
13. Tindall RM, Ferris M, Townsend M, Boschert G, Moylan S. A first-hand experience of co-design in mental health service design: opportunities, challenges, and lessons. Int J Ment Health Nurs. 2021;30(6):1693–702. https://doi.org/10.1111/inm.12925
14. Wahlin DW, Blomkamp DE. Making global local: global methods, local planning, and the importance of genuine community engagement in Australia. Policy Des Pract. 2022;5(4):483–503. https://doi.org/10.1080/25741292.2022.2141489
15. Ramos M, Forcellini FA, Ferreira MGG. Patient-centered healthcare service development: a literature review. Strategic Des Res J. 2021;14(2):423–37. https://doi.org/10.4013/sdrj.2021.142.04
16. Rahman A, Nawaz S, Khan E, Islam S. Nothing about us, without us: is for us. Res Involv Engagem. 2022;8(1):1–10. https://doi.org/10.1186/s40900-022-00372-8
17. Harrison R, Manias E, Ellis L, Mimmo L, Walpola R, Roxas-Harris B, et al. Evaluating clinician experience in value-based health care: the development and validation of the Clinician Experience Measure (CEM). BMC Health Serv Res. 2022;22:1–11. https://doi.org/10.1186/s12913-022-08900-8
18. Kerr JAS, Whelan M, Zelenko O, Harper-Hill K, Villalba C. Integrated co-design: a model for co-designing with multiple stakeholder groups from the ‘fuzzy’ front-end to beyond project delivery. Int J Des. 2022;16(2):1–17. https://doi.org/10.57698/v16i2.06
19. Nesbitt K, Beleigoli A, Du H, Tirimacco R, Clark RA. User experience (UX) design as a co-design methodology: lessons learned during the development of a web-based portal for cardiac rehabilitation. Eur J Cardiovasc Nurs. 2022;21(2):178–83. https://doi.org/10.1093/eurjcn/zvab127
20. Marwaa MN, Guidetti S, Ytterberg C, Kristensen HK. Using experience-based co-design to develop mobile/tablet applications to support a person-centred and empowering stroke rehabilitation. Res Involv Engagem. 2023;9(1):1–17. https://doi.org/10.1186/s40900-023-00472-z
21. Tariq S, Grewal EK, Booth R, Nat B, Ka-Caleni T, Larsen M, et al. Lessons learned from a virtual community-based participatory research project: prioritizing needs of people who have diabetes and experiences of homelessness to co-design a participatory action project. Res Involv Engagem. 2023;9(1):1–11. https://doi.org/10.1186/s40900-023-00456-z
22. Abimbola S, Li C, Mitchell M, Everett M, Casburn K, Crooks P, et al. On the same page: co-designing the logic model of a telehealth service for children in rural and remote Australia. Digit Health. 2019;5:2055207619826468. https://doi.org/10.1177/2055207619826468
23. Rocky Bay. Rocky Bay Annual Report FY 2021–2022. Perth; 2022. https://www.rockybay.org.au/wp-content/uploads/2022/12/Rocky-Bay-Annual-Report-21-22.pdf
24. National Disability Insurance Agency. What is the NDIS? 2021. https://www.ndis.gov.au/understanding/what-ndis
25. Reen G, Page B, Oikonomou E. Working as an embedded researcher in a healthcare setting: a practical guide for current or prospective embedded researchers. J Eval Clin Pract. 2022;28(1):93–8. https://doi.org/10.1111/jep.13593
26. Bell S, Aggleton P, Gibson A. Peer research in health and social development. 1st ed. London: Routledge; 2021.
27. Curran T, Jones M, Ferguson S, Reed M, Lawrence A, Cull N, et al. Disabled young people’s hopes and dreams in a rapidly changing society: a co-production peer research study. Disabil Soc. 2021;36(4):561–78. https://doi.org/10.1080/09687599.2020.1755234
28. Kelly B, Friel S, McShane T, Pinkerton J, Gilligan E. ‘I haven’t read it, I’ve lived it!’ The benefits and challenges of peer research with young people leaving care. Qual Soc Work. 2020;19(1):108–24. https://doi.org/10.1177/1473325018800370
29. Schwartz AE, Kramer JM. Inclusive approaches to developing content valid patient-reported outcome measure response scales for youth with intellectual/developmental disabilities. Br J Learn Disabil. 2021;49(1):100–10. https://doi.org/10.1111/bld.12346
30. Webb P, Falls D, Keenan F, Norris B, Owens A, Davidson G, et al. Peer researchers’ experiences of a co-produced research project on supported decision-making. Res Involv Engagem. 2022;8(1):1–10. https://doi.org/10.1186/s40900-022-00406-1
31. People with Disability Australia. PWDA Language Guide: a guide to language about disability. Sydney, Australia; 2021. https://pwd.org.au/wp-content/uploads/2021/12/PWDA-Language-Guide-v2-2021.pdf
32. Peters MDJ, Godfrey C, McInerney P, Munn Z, Tricco AC, Khalil H. Chapter 11: Scoping reviews (2020 version). In: Aromataris E, Munn Z, editors. JBI Manual for Evidence Synthesis. JBI; 2020.
33. Australian Broadcasting Commission. ‘My purpose is changing perceptions’: Australian of the Year Dylan Alcott’s speech in full. 2022. https://www.abc.net.au/news/2022-01-26/dylan-alcott-australian-of-the-year-speech-in-full/100783308
34. Chapman K, Dixon A, Ehrlich C, Kendall E. Dignity and the importance of acknowledgement of personhood for people with disability. Qual Health Res. 2024;34(1–2):141–53. https://doi.org/10.1177/10497323231204562
35. Flattery S. Stim Joy: using multi-sensory design to foster better understanding of the autistic experience. ProQuest Dissertations Publishing; 2023.
36. Peak Learning. The Real Deal. 2023. https://www.peaklearning.com/trd/
37. Benz C, Scott-Jeffs W, Revitt J, Brabon C, Fermanis C, Hawkes M, et al. Co-designing a telepractice journey map with disability customers and clinicians: partnering with users to understand challenges from their perspective. Health Expect. 2023;1–11. https://doi.org/10.1111/hex.13919
38. Flowers E, Miller ME. Your guide to blueprinting the practical way. 1st ed. USA: Practical by Design; 2022.
39. Blomkvist J. Benefits of service level prototyping. Des J. 2016;19(4):545–64. https://doi.org/10.1080/14606925.2016.1177292
40. Harrison R, Ní Shé É, Debono D, Chauhan A, Newman B. Creating space for theory when codesigning healthcare interventions. J Eval Clin Pract. 2023;29(4):572–5. https://doi.org/10.1111/jep.13720
41. Bodden S, Elliott J. Finding space for shared futures. Edinb Archit Res. 2022;37:90–104.
42. Page K. Ethics and the co-production of knowledge. Public Health Res Pract. 2022;32(2):1–5. https://www.phrp.com.au/issues/june-2022-volume-32-issue-2/ethics-and-co-production/
43. Lloyd J. Life in a lanyard: developing an ethics of embedded research methods in children’s social care. J Child Serv. 2021;16(4):318–31. https://doi.org/10.1108/JCS-12-2019-0047
44. Rowley H. Going beyond procedure: engaging with the ethical complexities of being an embedded researcher. Manag Educ. 2014;28(1):19–24. https://doi.org/10.1177/0892020613510119
45. Stephens L, Smith H, Epstein I, Baljko M, McIntosh I, Dadashi N, et al. Accessibility and participatory design: time, power, and facilitation. CoDesign. 2023;1–17. https://doi.org/10.1080/15710882.2023.2214145
46. Gardner G, McKercher KA. But is it co-design? And if it is, so what? 2021. https://healthvoices.org.au/issues/nov-2021/but-is-it-co-design-and-if-it-is-so-what
47. Fox G, Lalu MM, Sabloff T, Nicholls SG, Smith M, Stacey D, et al. Recognizing patient partner contributions to health research: a systematic review of reported practices. Res Involv Engagem. 2023;9(1):1–30. https://doi.org/10.1186/s40900-023-00488-5
48. National Disability Insurance Agency. 2022 NDIS legislation amendments. Australia; 2022. https://www.ndis.gov.au/news/7975-2022-ndis-legislation-amendments-july-update
49. National Disability Insurance Agency. Report to disability ministers for Q4 of Y10: Summary Part A. Australia; 2023. https://www.ndis.gov.au/about-us/publications/quarterly-reports
50. Lid IM. Universal design and disability: an interdisciplinary perspective. Disabil Rehabil. 2014;36(16):1344–9. https://doi.org/10.3109/09638288.2014.931472
51. Sanders E, Stappers PJ. Co-creation and the new landscapes of design. CoDesign. 2008;4:5–18. https://doi.org/10.1080/15710880701875068


Acknowledgements

The authors acknowledge the contribution of Rocky Bay as the industry partner of this project and would like to thank the Co-designers of this project, without whom none of this was possible. The research team would also like to thank Katie Harris for her time and support throughout the workshop series, which were invaluable to the completion of the project and the formation of the published study.

This article forms part of a PhD project funded by the first author’s (CB’s) Australian Government Research Training Program (RTP) scholarship.

Author information

Authors and affiliations

School of Population Health, Curtin University, Bentley, Australia

Cloe Benz, Richard Norman, Delia Hendrie & Suzanne Robinson

Rocky Bay, Mosman Park, WA, Australia

Will Scott-Jeffs, Mai Welsh & Matthew Locantro

Beyond Sticky Notes, Sydney, Australia

K. A. McKercher

Therapy Focus, Bentley, Australia

Deakin Health Economics, Institute for Health Transformation, Deakin University, Melbourne, Australia

Suzanne Robinson


Contributions

CB and MW liaised with the steering committee and conceived the study and structure. SR, DH and RN guided the protocol development and ethics approval. KAM provided methodological support and subject matter expertise. CB and WSJ completed participant recruitment, facilitation of workshops and data collection. KAM and CB ideated the format and content of the article. CB completed the data analysis and wrote the first draft of the manuscript. All authors reviewed and edited the manuscript and approved the final version.

Corresponding author

Correspondence to Cloe Benz.

Ethics declarations

Ethical approval and consent

The study was approved by the Curtin University Human Research Ethics Committee (ID# HRE2021-0731), and all participants provided written informed consent before engaging in any research activity.

Consent for publication

Not applicable.

Competing interests

Cloe Benz, Richard Norman, Delia Hendrie and Suzanne Robinson do not have any competing interests to declare. Will Scott-Jeffs, Matthew Locantro and Mai Welsh were, for all or part of the study period, employed by Rocky Bay, a not-for-profit disability service provider that functions as the industry partner for the project. K.A. McKercher is the author of a co-design method book referenced in the article and runs a business that helps people co-design.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1:

Appendix 1–3

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Benz, C., Scott-Jeffs, W., McKercher, K.A. et al. Community-based participatory-research through co-design: supporting collaboration from all sides of disability. Res Involv Engagem 10 , 47 (2024). https://doi.org/10.1186/s40900-024-00573-3

Received : 13 November 2023

Accepted : 12 April 2024

Published : 10 May 2024

DOI : https://doi.org/10.1186/s40900-024-00573-3

Keywords

  • Community-based participatory-research
  • Telepractice
  • Lived experience
  • Embedded researcher
  • Digital health
  • Patient and public involvement

Research Involvement and Engagement

ISSN: 2056-7529

design research expert interview

  • Research Note
  • Open access
  • Published: 15 May 2024

Concepts of lines of therapy in cancer treatment: findings from an expert interview-based study

  • Lisa Falchetto 1,
  • Bernd Bender 1,2,
  • Ian Erhard 1,2,
  • Kim N. Zeiner 3,
  • Jan A. Stratmann 11,
  • Florestan J. Koll 4,
  • Sebastian Wagner 11,
  • Marcel Reiser 5,
  • Khayal Gasimli 6,
  • Angelika Stehle 7,
  • Martin Voss 8,
  • Olivier Ballo 11,
  • Jörg Janne Vehreschild 1,9,10 &
  • Daniel Maier 1,2

BMC Research Notes volume 17, Article number: 137 (2024)

The concept of lines of therapy (LOT) in cancer treatment is often considered for decision making in tumor boards and clinical management, but lacks a common definition across medical specialties. The complexity and heterogeneity of malignancies and treatment modalities contribute to an inconsistent understanding of LOT among physicians. This study assesses the heterogeneity of understandings of the LOT concept, its major dimensions, and criteria from the perspective of physicians of different specialties with an oncological focus in Germany. Semi-structured expert interviews with nine physicians were conducted and evaluated using qualitative content analysis.

Most interviewees agreed that there is no single definition for LOT and found it difficult to explicate their understanding. A majority of experts stated that they had already encountered misunderstandings with colleagues regarding LOT and that they had problems with deciphering LOT from the medical records of their patients. Disagreement emerged about the roles of the following within the LOT concept: maintenance therapy, treatment intention, different therapy modalities, changing pharmaceutical agents, and therapy breaks. Respondents predominantly considered the same criteria as decisive for the definition of LOT as for a change in LOT (e.g., the occurrence of a progression event or tumor recurrence).

Introduction

While clinical oncology considers line of therapy (LOT) essential information for therapy planning, the field lacks a homogeneous understanding of the concept, as well as clear and consistent criteria for its classification [ 1 ]. Especially in real-world data-based research, it is often unclear whether a certain therapy is still part of an LOT, and conflicting interpretations often lead to misunderstandings in information exchange about therapy progression [ 1 ]. Existing approaches for standardizing the classification of LOT either focus on patterns proposed by guidelines (e.g., drug administration period, first-line termination) or on drug administration sequences [ 2 , 3 , 4 , 5 , 6 ]. However, other issues related to the LOT concept remain largely unclear. For example, the roles of maintenance therapies and local therapy modalities have not yet been discussed [ 1 ].

This expert-interview study aims to provide a better conceptual understanding of the defining criteria of LOT for solid and non-solid cancers. Therefore, it may contribute to identifying unclear aspects of the LOT concept and avoiding misunderstandings in communication about LOTs, especially between physicians of different medical disciplines. Concerning the rapidly developing field of real-world cancer research, data augmentation strategies and feature engineering require empirically validated concepts to obtain reliable evidence from observational data. More specifically, investigating the conceptual understanding of LOTs will help us build a rule-based framework for LOT classification within the Clinical Communication Platform of the German Cancer Consortium (DKTK).
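To make the idea of a rule-based LOT framework concrete, the sketch below encodes two rules of the kind reported in the findings further on: progression or recurrence starts a new line, and replacing a drug with another of the same class does not. This is a minimal illustration under stated assumptions, not the DKTK framework itself; the TherapyEvent structure, the equivalence table and the intent rule are hypothetical simplifications.

```python
from dataclasses import dataclass

# Hypothetical equivalence classes: replacing one agent with another of the
# same class (e.g., cisplatin -> carboplatin) was not considered a change of
# LOT by any interviewee.
EQUIVALENT_CLASS = {"cisplatin": "platinum", "carboplatin": "platinum"}

@dataclass
class TherapyEvent:
    drugs: frozenset            # agents given in this treatment block
    progression_before: bool    # progression/recurrence observed before this block
    intent: str                 # "curative" or "palliative"

def normalize(drugs):
    """Map each agent to its equivalence class where one is defined."""
    return {EQUIVALENT_CLASS.get(d, d) for d in drugs}

def count_lines(events):
    """Count LOTs using two rules from the interviews: progression or
    recurrence always starts a new line (unanimous view), and new agents
    start a new line only if the treatment intention changed as well
    (majority view)."""
    if not events:
        return 0
    lines = 1
    for prev, curr in zip(events, events[1:]):
        new_regimen = normalize(curr.drugs) != normalize(prev.drugs)
        if curr.progression_before or (new_regimen and curr.intent != prev.intent):
            lines += 1
    return lines

# A same-class platinum swap without progression stays one line of therapy;
# the progression before the third block opens a second line.
course = [
    TherapyEvent(frozenset({"cisplatin", "etoposide"}), False, "palliative"),
    TherapyEvent(frozenset({"carboplatin", "etoposide"}), False, "palliative"),
    TherapyEvent(frozenset({"topotecan"}), True, "palliative"),
]
print(count_lines(course))  # 2
```

A production-grade framework would also need rules for maintenance therapy, therapy breaks and local modalities, which is precisely where the interviewees in this study disagreed.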

The study’s target group was physicians from various specialties with an oncological focus, working in either university hospitals or private practice. Physicians from the University Hospital Frankfurt and private practices were contacted by e-mail. In total, nine were interviewed. Their varied specialties included neuro-oncology, pulmonology, hematology and medical oncology, urology, dermatology, and gynecological oncology, as well as one resident specialist in internal medicine with a focus on hematology and oncology. The interviewees’ professional experience ranged from 3.5 to 29 years and most had experience in treating both solid and non-solid malignancies.

Qualitative expert interviews [ 7 , 8 ] were conducted by posing open questions within a semi-structured framework [ 9 ]. An interview manual delineated this framework and was developed based on existing literature about oncological LOTs and associated concepts (see Additional File 1 ). Before the interviews, the interview manual was pre-tested with an experienced oncologist and adjusted accordingly. Each participant declared their consent before the interview. Confidentiality and anonymity of participants’ responses and information were assured. The first part of the interview manual asked about the interviewee’s underlying understanding of LOTs and the relevant criteria for their definition. Subsequently, questions concerning misunderstandings in interactions with colleagues were posed to determine whether there are frequent uncertainties in the use of the LOT concept and, if so, what reasons may underlie this situation. Next, the interviewer asked about how specific criteria, picked out of the literature, related to the definition of LOT. These included the influence of treatment intention, the role of maintenance therapy, and local therapies. Another focus of the interviews was how the interviewees judged the relationship of both changes in drug regimen and therapy breaks to the definition of LOT.

Data collection/conduct of interviews

The expert interviews were conducted between June 1 and July 17, 2022 via video conference and in German. They lasted between 10 and 25 minutes with an average duration of approximately 18 minutes. The interviews were recorded and transcribed using the ExpressScribe Pro software (Version 10.17).

Data analysis

The interviews were analyzed using methods of qualitative content analysis as described in Mayring [ 10 ] and the software MaxQDA Analytics Pro 2022 (release 22.2.0). A system for coding the interview material was developed based on literature research conducted before the interviews.

Since the interviews were conducted in German, we provide an English translation of selected quotes. Table  1 contains the main topics and sub-topics of the interview, as well as exemplary quotes from the interviewees.

LOT definition and misunderstandings

Most interviewees confirmed that there was no common understanding of LOT and that they had difficulties explicating their own understanding of the concept. Furthermore, four of the interviewees reported misunderstandings with colleagues regarding LOTs and seven reported that they experienced uncertainties in their clinical practice when defining an LOT. For instance, if care for a patient was delivered by multiple centers, misunderstandings concerning LOT progression frequently occurred because the persons involved lacked a common understanding:

“[…] when it comes to categorizing it somehow so that it is standardized and applicable across multiple centers, yes there existed discrepancies in the particular considerations.” (Expert interview (E)05).

Treatment intention

Six interviewees said that treatment intention (curative vs. palliative) is important in the choice of therapy. Consequently, treatment intention is also relevant to LOT planning. Three experts expressed that LOT is especially relevant and established in the palliative setting:

“With a curative therapy option, […] you shouldn’t have any progression under therapy, after all. So that’s why the definition [of the line of therapy] does differ somewhat – palliative versus curative.” (E03).

Maintenance therapy

Starting a maintenance therapy to control a tumor after chemotherapy was predominantly not considered an indicator for a change in LOT, since usually only part of the medication regimen is discontinued for maintenance, while the rest remains the same. However, interviewees also said that maintenance therapy can include an entirely new pharmaceutical agent, which would, in turn, complicate the delineation between LOTs:

“Yes, that’s difficult, too. I would probably count maintenance therapy as part of that – if it’s sort of quasi-logically linked to the therapy that was administered before it. But if it’s a completely different type of substance now, then it becomes more difficult again.” (E03).

Local therapies vs. systemic therapies

Six of the physicians interviewed opined that an LOT can contain both local and systemic therapies. However, some participants stated that beginning a new local therapy would not lead to a change of LOT, in contrast to beginning a new systemic therapy. Three physicians, by contrast, emphasized that only systemic therapies can constitute an LOT:

“In my opinion, the therapy line is primarily defined by the systemic therapies. The local therapies are rather something supplementary that is carried out additionally, or – as the case may be – primarily in addition to symptom relief. Local therapies can also be used to achieve a response, but are not usually mentioned as a line of therapy.” (E06).

Change of LOT

All interviewees said that the LOT must be changed if tumor progression or disease relapse occurs or if therapy response fails. Six interviewees considered the occurrence of adverse effects (e.g., severe toxicity) a significant criterion for the decision to change an LOT. Only three interviewees saw the addition of a new pharmaceutical agent as resulting in a change of LOT:

“Dropping an active substance, I would always see as being due to toxicity or at the patient’s request – so actually owed to toxicity. That is, I would never call that a new line of therapy, whereas the addition of a new agent – strictly speaking, it would have to be considered a new line of therapy, although it is also difficult in terms of definition.” (E09).

The other seven interviewees only considered the introduction of new pharmaceutical agents a change in LOT if the treatment intention changed as well, or if a recurrence or progression occurred. Only the replacement of one drug with another of the same class (e.g., cisplatin with carboplatin) was not considered a change of LOT by anyone.

Therapy breaks

Opinions also diverged regarding the role of breaks in therapy for the classification of LOT. On the one hand, the length of the break was considered decisive; on the other hand, it was said that the therapy following the break was more important. Additionally, some viewed breaks in therapy as important for the classification of LOT in the event of a relapse or progression:

“[…] In principle, if no recurrence has occurred and it is perhaps even the same substance […] then I would consider it one line of therapy, regardless of how long the break was.” (E01).

If the break was unplanned, it was considered a significantly more important criterion for a change in LOT than if it was part of the therapy concept.

The expert interviews in this study largely confirmed that there is no common understanding of the LOT concept or its defining criteria. The interview material suggests that individual backgrounds in differing medical disciplines may influence views on and understandings of LOT. This potential context dependency of the LOT concept also appears consistent with heterogeneous working definitions of LOT in different real-world studies of distinct cancer entities [ 1 , 11 , 12 ].

However, it appeared that an LOT was considered a therapeutic concept with start- and endpoints that is focused on systemic therapies, although it may also contain additional treatment modalities. If included in the LOT, such non-systemic modalities would be selected based on individual patient and disease characteristics, and terminated if certain events (e.g., tumor progression) occurred.

There was evident uncertainty about the role of adjuvant and maintenance therapy and whether they should be regarded as an LOT together with the preceding (systemic) therapy. Also, no prevailing opinion could be identified on the questions of whether treatment intention (curative vs. palliative) and therapy breaks were integral to defining LOTs. Furthermore, experts held differing opinions on which changes in the administered drug regimen would initiate a change in LOT.

In the literature, however, individual approaches for standardizing the criteria for a change in LOT exist in the following cases: the termination of a LOT is indicated in the event of treatment discontinuation, addition of a new, non-equivalent agent, interruption of treatment, clinical progression of the disease, or death of the patient [ 2 , 3 ]. The interviewees were also nearly unanimous on these criteria: all considered tumor progression and recurrence decisive for a change in LOT; six experts highlighted the occurrence of side effects or relevant toxicity; three mentioned the scheduled end of therapy; and one cited patients’ wishes. Only some of the interviewees considered a change in pharmaceutical regimen a factor in identifying a change in LOT, while replacement of one drug with another from the same class was not viewed as altering the LOT.

The interviews both identified tumor recurrence and progression as LOT-relevant events and raised questions about the nature of their role. Recurrence and progression during therapy breaks, as well as the length of the break and the treatment thereafter, were considered relevant factors for a change in LOT. In two interviews, although the participants initially identified recurrence and progression as indicators for a change in LOT, their further comments appeared to contradict this standpoint. This apparent inconsistency should be investigated in future research.

Seven interviewees considered treatment intention relevant to LOT. Predominantly, interviewees considered the adoption of maintenance therapy as a continuation of an ongoing LOT. However, it remains unclear whether changes in the dosage or interval of drug administration during maintenance therapy imply a change in LOT. Six interviewees said that both local and systemic therapy modalities should be included in characterizations of LOT, although previous research excluded local modalities [ 1 , 13 , 14 , 15 ].

While similar approaches to standardizing the duration of an LOT [ 2 ] and first-line therapy [ 2 , 3 ] exist, it is not clear whether the definition of LOT can be standardized across disciplines as well as tumor entities. Nevertheless, a cross-disciplinary standard definition of the LOT concept should be pursued.

Limitations

This study exhibits the following limitations:

Qualitative expert interviews were only feasible for a small sample ( n  = 9) of oncological experts, most of whom were located at a single center (eight out of nine). While the study delivers highly granular insights, this approach precludes generalization of the findings. Therefore, subsequent research must evaluate the qualitative insights gleaned from this study in larger and more representative samples.

The interviewees had varying degrees of professional experience and different specialties, making direct comparisons of experience and assessments regarding oncological LOT difficult. However, this was intentional to obtain the widest possible range of assessments regarding the broad topic under investigation.

No triangulation in the form of using multiple and diverse data sources, perspectives, locations, or theories took place in conducting the study. Such methods can help to mitigate subjective bias resulting from the explicit focus on one’s own data [ 16 ].

Data availability

Details on the data and materials related to the study may be available upon reasonable request from Bernd Bender ([email protected]).

Abbreviations

DKTK: German Cancer Consortium

E: Expert interview

LOT: Line of therapy

References

1. Saini KS, Twelves C. Determining lines of therapy in patients with solid cancers: a proposed new systematic and comprehensive framework. Br J Cancer. 2021;125(2):155–63.

2. OPTUM. Determining lines of therapy (LOT) in oncology in claims databases. 2018. https://www.optum.com/content/dam/optum3/optum/en/resources/white-papers/guidelines-for-determining-lines-of-therapy-whitepaper.pdf . Accessed 28 March 2023.

3. Hess LM, Li X, Wu Y, Goodloe RJ, Cui ZL. Defining treatment regimens and lines of therapy using real-world data in oncology. Future Oncol. 2021;17(15):1865–77.

4. Rajkumar SV, Richardson P, San Miguel JF. Guidelines for determination of the number of prior lines of therapy in multiple myeloma. Blood. 2015;126(7):921–2.

5. Carroll NM, Burniece KM, Holzman J, McQuillan DB, Plata A, Ritzwoller DP. Algorithm to identify systemic cancer therapy treatment using structured electronic data. JCO Clin Cancer Inform. 2017;1:1–9.

6. Weymann D, Costa S, Regier DA. Validation of a cyclic algorithm to proxy number of lines of systemic cancer therapy using administrative data. JCO Clin Cancer Inform. 2019;3:1–10.

7. Döringer S. "The problem-centred expert interview": combining qualitative interviewing approaches for investigating implicit expert knowledge. Int J Soc Res Methodol. 2021;24(3):265–78.

8. Cooke NM, McDonald JE. A formal methodology for acquiring and representing expert knowledge. Proc IEEE Inst Electr Electron Eng. 1986;74(10):1422–30.

9. Kallio H, Pietilä AM, Johnson M, Kangasniemi M. Systematic methodological review: developing a framework for a qualitative semi-structured interview guide. J Adv Nurs. 2016;72(12):2954–65.

10. Mayring P. Qualitative content analysis: demarcation, varieties, developments. Forum Qual Soc Res. 2019;20(3):1–26.

11. Abrams T, Hess LM, Zhu YE, Schelman W, Liepa AM, Fuchs C. Predictors of heterogeneity in the first-line treatment of patients with advanced/metastatic gastric cancer in the US. Gastric Cancer. 2018;21:738–44.

12. Meng W, Ou W, Chandwani S, Chen X, Black W, Cai Z. Temporal phenotyping by mining healthcare data to derive lines of therapy for cancer. J Biomed Inform. 2019;100:103335.

13. Shah S, Raskin L, Cohan D, Freeman M, Hamid O. Treatment patterns of malignant melanoma in the United States from 2011 to 2016: a retrospective cohort study. Curr Med Res Opin. 2020;36:63–72.

14. Hess GP, Wang PF, Quach D, Barber B, Zhao Z. Systemic therapy for metastatic colorectal cancer: patterns of chemotherapy and biologic therapy use in US medical oncology practice. J Oncol Pract. 2010;6(6):301–7.

15. Fonseca R, Usmani SZ, Mehra M, Slavcev M, He J, Cote S, Lam A, Ukropec J, Maiese EM, Nair S, Potluri R, Voorhees PM. Frontline treatment patterns and attrition rates by subsequent lines of therapy in patients with newly diagnosed multiple myeloma. BMC Cancer. 2020;20:1087.

16. Cho J, Lee EH. Reducing confusion about grounded theory and qualitative content analysis: similarities and differences. Qual Rep. 2014;19(64):1–20.

Acknowledgements

We would like to thank the expert physicians who participated in the interviews for their time and willingness to share their experiences and perspectives. Furthermore, we would like to thank the German Cancer Consortium’s Clinical Data Science Group for the support in realizing the study.

Funding

Open Access funding enabled and organized by Projekt DEAL. This research is partly funded by the German Cancer Consortium (DKTK).

Author information

Lisa Falchetto and Bernd Bender contributed equally to this work.

Authors and Affiliations

Institute for Digital Medicine and Clinical Data Science, Goethe University Frankfurt, Faculty of Medicine, Frankfurt, Germany

Lisa Falchetto, Bernd Bender, Ian Erhard, Jörg Janne Vehreschild & Daniel Maier

German Cancer Consortium (DKTK), partner site Frankfurt/Mainz and German Cancer Research Center (DKFZ), Heidelberg, Germany

Bernd Bender, Ian Erhard & Daniel Maier

Department for Dermatology, Venerology and Allergology, University Hospital Frankfurt, Frankfurt, Germany

Kim N. Zeiner

Department of Urology, University Hospital Frankfurt, Frankfurt, Germany

Florestan J. Koll

PIOH Praxis Internistischer Onkologie und Hämatologie, Cologne, Germany

Marcel Reiser

Clinic for Gynecology and Obstetrics, University Hospital Frankfurt, Frankfurt, Germany

Khayal Gasimli

Department for Internal Medicine 1, University Hospital Frankfurt, Frankfurt, Germany

Angelika Stehle

Department Neuro-Oncology, University Hospital Frankfurt, Frankfurt, Germany

Martin Voss

Department I of Internal Medicine, University Hospital of Cologne, Cologne, Germany

Jörg Janne Vehreschild

German Center for Infection Research (DZIF) partner site Bonn Cologne, Cologne, Germany

Medical Department 2 (Hematology/Oncology), Center for Internal Medicine, University Hospital Frankfurt, Goethe University Frankfurt, Frankfurt, Germany

Jan A. Stratmann, Sebastian Wagner & Olivier Ballo

Contributions

BB, LF, and DM contributed to the writing of this article. LF and DM created the interview manual. LF conducted the interviews with the oncological experts and analyzed the interview material collected. DM and JJV were substantially involved in the conception of the study and in the recruitment of the interviewed experts. JJV also supported the piloting of the interview manual. IE edited the manuscript. KNZ, JAS, FJK, SW, MR, KG, AS, MV and OB participated in the study and provided the substantive statements and findings.

Corresponding author

Correspondence to Bernd Bender.

Ethics declarations

Ethics approval and consent to participate

All subjects provided written informed consent to participate and this study was conducted according to all relevant ethical and regulatory guidelines. The project was approved by the ethics committee of the department of medicine of the Goethe University Frankfurt (ethical code number: 274/18).

Consent for publication

All interviewees permitted the use of the interview material and consented to publication.

Competing interests

Kim N. Zeiner (KNZ) received an honorarium for a presentation from Bristol-Myers Squibb. Jan A. Stratmann (JAS) reports personal fees from Boehringer Ingelheim, AstraZeneca, Roche, BMS, Amgen, LEO pharma, Novartis and Takeda. Florestan J. Koll (FJK) received grants from the German Cancer Aid and the German Cancer Consortium (DKTK). Marcel Reiser (MR) received consulting fees from Amgen, Abbvie, Stemline and Novartis, and honoraria from Roche. Jörg Janne Vehreschild (JJV) reports personal fees from Merck/MSD, Gilead, Pfizer, Astellas Pharma, Basilea, German Centre for Infection Research (DZIF), University Hospital Freiburg/Congress and Communication, Academy for Infectious Medicine, University Manchester, German Society for Infectious Diseases (DGI), Ärztekammer Nordrhein, University Hospital Aachen, Back Bay Strategies, German Society for Internal Medicine (DGIM), Shionogi, Molecular Health, Netzwerk Universitätsmedizin, Janssen, NordForsk, Biontech and APOGEPHA, and grants from Merck/MSD, Gilead, Pfizer, Astellas Pharma, Basilea, German Centre for Infection Research (DZIF), German Federal Ministry of Education and Research (BMBF), Deutsches Zentrum für Luft- und Raumfahrt (DLR), University of Bristol and Rigshospitalet Copenhagen. Daniel Maier (DM) received speaker honoraria from Free University Berlin and travel compensation from IQVIA. Lisa Falchetto (LF), Bernd Bender (BB), Ian Erhard (IE), Sebastian Wagner (SW), Khayal Gasimli (KG), Angelika Stehle (AS), Martin Voss (MV) and Olivier Ballo (OB) have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Additional file 1.

Interview manual with all instructions and questions.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Falchetto, L., Bender, B., Erhard, I. et al. Concepts of lines of therapy in cancer treatment: findings from an expert interview-based study. BMC Res Notes 17, 137 (2024). https://doi.org/10.1186/s13104-024-06789-6

Received : 28 July 2023

Accepted : 25 April 2024

Published : 15 May 2024

DOI : https://doi.org/10.1186/s13104-024-06789-6

Keywords

  • Lines of therapy
  • Cancer treatment
  • Therapy planning

BMC Research Notes

ISSN: 1756-0500

design research expert interview

Collaborative and life cycle-based project delivery for environmentally sustainable building construction: views of Finnish project professionals and building operation and maintenance experts

Smart and Sustainable Built Environment

ISSN : 2046-6099

Article publication date: 17 May 2024

Purpose

The energy performance gap (EPG) in building construction has been one of the major barriers to the realization of environmental and economic sustainability in the built environment. Although a few studies have addressed this issue, the project delivery process has received little attention as a lens for studying it. Hence, this study aims to address the EPG in building construction through the lens of collaborative and life cycle-based project delivery.

Design/methodology/approach

In order to realize the objective of this study, the development of a theoretical framework based on the literature review was followed by a qualitative study in which 21 semi-structured interviews were conducted with Finnish project professionals representing clients, design/planning experts, constructors and building operation/maintenance experts to explore their views on the topic under study.

Findings

The findings reveal the project delivery-related causes of the EPG in building construction. Moreover, the obtained results present a collaborative and life cycle-based delivery model that integrates the project and product (i.e. building) life cycles and is compatible with all types of contractual frameworks in building construction projects.

Research limitations/implications

Although the findings of this study significantly contribute to theory and practice in the field of collaborative and sustainable construction project delivery, it is acknowledged that these findings are based on Finnish professionals’ input, and expanding this research to other regions is a potential area for further studies. Moreover, the developed model, although validated in Finland, needs to be tested in a broader context as well to gain wider generalizability.

Originality/value

The obtained results reveal the significance and impact of collaborative and life cycle-based project development and delivery on the realization of environmentally sustainable building construction.

Keywords

  • Collaborative project delivery
  • Sustainable building construction
  • Life cycle-based project delivery
  • Construction management

Moradi, S. , Hirvonen, J. and Sormunen, P. (2024), "Collaborative and life cycle-based project delivery for environmentally sustainable building construction: views of Finnish project professionals and building operation and maintenance experts", Smart and Sustainable Built Environment , Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/SASBE-01-2024-0004

Emerald Publishing Limited

Copyright © 2024, Sina Moradi, Janne Hirvonen and Piia Sormunen

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode

Introduction

Buildings, in general, consume a striking amount of energy, accounting for almost 40% of total energy consumption worldwide ( Laconte and Gossop, 2016 ). This huge consumption makes buildings one of the main areas under focus for further research and development. Consequently, the sustainable development goals (SDGs) outlined by the United Nations apply to a high extent to the construction industry and, in particular, to building construction projects. This high-level recognition has resulted in extensive research on the energy efficiency of building construction and renovation. Regarding building construction, there have been significant advancements (e.g. building information modeling, geothermal energy systems), and the design expertise and capability enhanced over the past decade have aimed for high-efficiency or even net-zero energy buildings, in which the amounts of consumed and produced energy (i.e. electricity) are even. Although there have been some successes in the construction of highly energy-efficient or net-zero-energy buildings in some developed countries (e.g. the USA) ( Kibert, 2016 ), many newly constructed buildings still struggle to achieve the energy efficiency targets developed in the design phase. This phenomenon is called the energy performance gap (EPG) ( Laconte and Gossop, 2016 ).

The energy performance gap has been one of the major barriers to the realization of environmental and economic sustainability in the built environment. Looking at the definition of the EPG, it basically refers to one or more factors in the project life cycle, and possibly in the commissioning phase of the constructed building, which hinder the efficient performance of the building in terms of energy consumption. In this regard, studies addressing the barriers and enablers of the EPG in building construction (e.g. Häkkinen and Belloni, 2011 ; Li and Yao, 2012 ; Moradi et al. , 2023 ; Qian et al. , 2015 ) have found that factors such as collaboration between parties, early involvement of key participants, the designer’s competence and integrated project delivery contribute toward solving the performance gap issue. These findings imply the project delivery model’s prominent role in filling the EPG, because the delivery model accounts for the successful accomplishment of building construction projects. In this regard, there have been very few, if any, studies employing collaborative project delivery as a theoretical lens for looking into the EPG issue. An important point to note is that the project delivery model’s impact is not limited to the project life cycle; it also considerably affects the completed building’s operational life cycle and the realization of energy efficiency goals. Thus, the construction project delivery model needs to be collaborative and inclusive in terms of covering both the project and product (i.e. building) life cycles.

However, the existing construction project delivery models mostly address the project life cycle and largely ignore the completed building’s operation period. This is not a surprise, as the terminology highlights the focus of the delivery model on the project only and not on the product (i.e. the constructed building). Consequently, the project parties are not usually held accountable for the performance of the constructed building. This is particularly important for three reasons. First, research shows that a completed building’s operating costs over its operational life cycle can be even higher than its construction costs ( Mike et al. , 2015 ). Second, realizing a sustainable built environment is highly dependent on the actual performance of buildings in terms of energy efficiency, not on design intentions. And third, the actual performance of the building can be seen only in the operation phase. Hence, project and product life cycles and their management are interconnected and need to be integrated in the context of building construction. Thus, further developments and improvements are needed.

In this regard, it is necessary to acknowledge that construction project delivery models have evolved significantly over the past 30 years. In the big picture, the mainstream typology of project delivery models divides them into three categories: traditional, collaborative and hybrid ( Moradi et al. , 2022 ). Traditional delivery models in construction projects refer to design-bid-build, design-build and different types of construction management (e.g. Construction Management (CM) and CM at Risk) ( Forbes and Ahmed, 2010 ). In other words, the terminology associated with traditional delivery models comes from the name of the contract type used in those delivery models. The same logic somewhat applies to the collaborative delivery models, which include alliance, partnering, lean project delivery (LPD) and integrated project delivery (IPD) ( Engebø et al. , 2020 ; Lähdenpera, 2012 ; Mesa et al. , 2019 ). The hybrid category refers to those project delivery models which employ a traditional contract but also take advantage of collaborative working practices like co-location of the project participants ( Darrington, 2011 ; Moradi et al. , 2021a ). Traditional delivery models are usually characterized by adversarial relationships, mistrust, unfair sharing of risk and reward, working in silos and the dominance of low-price criteria for contractor selection. Conversely, collaborative delivery models feature early involvement of key participants; joint design, planning, control and decision making; open-book cost management; aligned interests of stakeholders; continuous learning; fair sharing of risk and reward; open communication; and trust-based relationships ( Moradi et al. , 2021b ).

Against this background, this study addresses the following research questions:

What are the project delivery-related barriers and solutions which affect the realization of energy efficiency targets in the operation phase of the constructed buildings?

What kind of model can enable collaborative and life cycle-based delivery of building construction projects for filling the EPG?

The remainder of the article is structured in six sections: introduction, theoretical background, methodology, results, discussion and conclusions.

Theoretical background

Energy performance gap

When the measured (or actual) energy consumption of a building differs from the expected energy consumption, the building is said to have an EPG ( Zou et al. , 2018 ). This can mean the difference between simulated and measured energy performance, or the difference between targets set by specifications or standards and the measured performance. The EPG may be observed in existing buildings as well as in retrofitting and new construction projects ( Mahdavi et al. , 2021 ). In Europe, building energy efficiency is typically measured through the energy efficiency classification from A to G. While the EPG is typically mentioned in the context of higher-than-expected energy consumption, the gap may exist in either direction. For example, in the Swiss residential building stock, buildings of low energy classification generally consume significantly less energy than assumed, while buildings of a higher energy efficiency class tend to consume slightly more energy than expected ( Cozza et al. , 2020 ). However, Laconte and Gossop (2016) refer to cases where buildings consume as much as two or three times the designed energy. The EPG is also related to other types of building performance gaps, like issues with operations and indoor conditions ( Rasmussen and Jensen, 2020 ). Frei et al. (2017) noted that the EPG can arise in three life cycle phases of the building: (1) design and planning (poor early design decisions, uncertainty in energy modeling, oversizing of systems), (2) construction and commissioning (economy over design, poor commissioning) and (3) operation (equipment issues, user interaction and change of building purpose). Boge et al. (2018) especially highlighted the role of early-phase planning. Saving money by not investing enough in the early stage may result in costly remedies in the operational stage and sometimes even permanent problems that cannot be fixed.
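As a minimal numeric illustration of this definition (the kWh figures below are invented for the example):

```python
def energy_performance_gap(measured_kwh: float, expected_kwh: float) -> float:
    """Relative EPG: positive when the building uses more energy than designed."""
    return (measured_kwh - expected_kwh) / expected_kwh

# Hypothetical example: designed for 80 kWh/m2/a, measured 104 kWh/m2/a.
gap = energy_performance_gap(104.0, 80.0)
print(f"EPG: {gap:.0%}")  # EPG: 30%
```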

In the operational phase of the building, facility managers have a significant role. Borgstein et al. (2018) found that energy performance issues relate to poor management and improper operation of systems. Insufficient energy performance guidelines and poor documentation can result in a lack of proper setpoints or high night-time loads. Floor plans with too large control zones for equipment also prevent correct operation of building automation systems. Facility managers from the USA report that the main reasons for the EPG are (1) higher than expected use of energy by the occupants, (2) there being more than the designed number of occupants and (3) technology failures ( Liang et al. , 2019 ). Facility managers are in principle expected to continually improve energy efficiency in buildings, but are not actually required or incentivized to do so. In fact, some facility managers actively avoid trying to fix issues so as not to be held responsible for a possible worsening of the gap, referring to unavoidable differences between theory and practice ( Willan et al. , 2020 ). Fears of causing disturbances in building operations and unfamiliarity with data-driven tools prevent the use of data-based recommendations ( Markus et al. , 2022 ). However, it can be argued that continual energy performance improvement should be a key role for facility managers. This role should be started early on, while planning and construction are still taking place ( Boge et al. , 2018 ). While the complexity of modern building services technology can be a cause of the EPG, new technologies may also offer a solution. For example, machine learning can be used to predict the EPG based on risk data. This allows the project participants to react to potential energy performance issues early on, before final decisions are made ( Yılmaz et al. , 2023 ).
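The sketch below shows, with synthetic data, the general shape of such a supervised prediction task; the feature names and data are invented for illustration, and this is not the specific method of Yılmaz et al.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic "risk" features standing in for project risk data
# (e.g., commissioning quality, design margin, occupancy risk).
rng = np.random.default_rng(0)
X = rng.random((200, 3))
# Synthetic EPG target: a noisy function of the first two risk features.
y = 0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * rng.standard_normal(200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"R^2 on held-out projects: {model.score(X_test, y_test):.2f}")
```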

Definition of project delivery model

Building construction projects go through different phases, which include definition, design, planning, construction, closure and handover. This process is usually called the project delivery model, which is also known as the project delivery method or project delivery system. In this article, the term project delivery model is utilized. The project delivery model, according to Mesa et al. (2019) , has three defining elements: project organization, operational system and contractual relationships. Although the elements mentioned by Mesa et al. (2019) are inclusive, they seem to be missing an important piece, namely the delivery process, referring to the steps and activities encompassing the project and/or building life cycle and the people involved in each phase. If the delivery process is added to this collection, a new framework can be developed for defining the project delivery model. This framework is shown in Figure 1 . This theoretical foundation is of prime importance, as the authors have often observed in the literature and practice that a certain contract type, operational system or project organization is called a project delivery model, whereas all the defining elements shown in Figure 1 need to be in place to have a construction project delivery model.

According to Moradi et al. (2022) , “Collaborative delivery model is one of the umbrella terms which has been utilized by different scholars in reference to alliance, partnering, integrated project delivery, and lean project delivery.” In terms of typology, according to Engebø et al. (2020) , Mesa et al. (2019) and Lähdenpera (2012) , it can be argued that partnering, alliance, IPD and LPD are the purely collaborative project delivery models. However, this study aims to provide an in-depth conceptualization of the collaborative project delivery model in construction. To do so, if the framework shown in Figure 1 is combined with the features of collaborative delivery models (mentioned earlier in the introduction), the result is Figure 2 , which provides a new framework for defining/distinguishing the collaborative project delivery model in construction. The framework shown in Figure 2 consists of two main elements. The first element is the set of defining factors of construction project delivery, which include project organization, operational system, contractual framework and delivery process. The second element is the set of features of collaborative project delivery relevant to those defining factors. For instance, the collaborative features related to project organization are trust-based relationships and joint decision making.

Previous research on collaborative project delivery models

Collaborative project delivery models have been extensively discussed in recent review studies (e.g. Engebø et al. , 2020 ; Moradi et al. , 2022 ); this study neither aims to repeat those discussions in different words, nor would doing so fit its scope. Instead, an abstract-level analysis of the major studied themes is presented in Figure 3 .

As can be seen, success factors and barriers is the only theme common to the conducted research on alliance, partnering, LPD and IPD. The study conducted by Moradi and Kähkönen (2022) identified commonalities between the success factors of collaborative delivery models. Among the research themes shown in Figure 3 , success factors, trust and working relationships, and team integration are the most relevant topics to the scope of this article. Hence, the findings of the studies representing those themes have been summarized and are shown in Table 1 .

Research gap and theoretical framework

Collaborative project delivery models emerged mainly as a response to five common challenges in traditional construction projects: accident-free construction, reliability of planning, constructability of design, adversarial working relationships and the dominance of low price in selecting the contractor ( Forbes and Ahmed, 2010 ; Oakland and Marosszeky, 2017 ). The research shows that collaborative delivery models have had promising results in overcoming those challenges (e.g. Ibrahim et al. , 2020 ).

However, while the building code sets requirements for building energy consumption and developers set their own energy performance targets, a pitfall in both traditional and collaborative delivery models has been lackluster enforcement of these targets over the building’s operational life cycle. Malfunctioning or inadequately calibrated systems resulting from deficient construction processes can often lead to higher-than-expected energy consumption – an EPG. This gap is of prime importance for realizing sustainability goals, in particular environmental sustainability (energy efficiency and emissions), as buildings account for almost 40% of global energy consumption ( Laconte and Gossop, 2016 ).

This becomes even more important when it is noted that there is usually a considerable EPG in building construction projects in terms of the discrepancy between design intentions and the actual energy consumption of the building ( Laconte and Gossop, 2016 ), and the project delivery model seems to play a big role in this EPG. Thus, studying sustainable and collaborative project delivery for building construction with a life cycle perspective is a major research gap which needs to be addressed. Hence, this study aims to do so by developing a conceptual framework (see Figure 4 ), identifying the causes of the EPG associated with project delivery, and validating the developed framework, resulting in a state-of-the-art delivery model which can enable productive building construction and a sustainable built environment in practice.

Methodology

Research design

The research design was guided by the study’s two research questions:

What are the challenges/barriers of achieving energy efficiency in building construction projects which are related to the project delivery process?

What kind of project delivery model could contribute toward filling the EPG in building construction projects?

Given the adequacy of the literature on the addressed topic, the deductive approach was adopted ( Saunders et al. , 2019 ). Accordingly, a literature study and semi-structured interviews were selected as the data collection methods, and thematic as well as content analysis as the data analysis methods. These choices were justified with regard to the exploratory purpose of the research ( Saunders et al. , 2019 ). The next step in the research design was to determine the context of the study and make a choice about the sampling method. To do so, building construction and renovation projects were selected as the focus of the study. In terms of building type (construction category), residential buildings, institutional buildings (i.e. schools and hospitals) and commercial buildings (i.e. shopping malls and office buildings) were included in the scope of the study.

Concerning the sampling method, a combination of quota sampling and purposive sampling ( Saunders et al. , 2019 ) was utilized in this study, through which four groups of interviewees were specified by the research team. These interviewee groups included (1) client project managers, (2) contractor project managers, (3) design managers and (4) property management (i.e. building operation and maintenance) experts. The research team targeted at least five interviewees in each group, with a provision to conduct more interviews in each group if data saturation was not achieved ( Saunders et al. , 2019 ). Then, the research team filled each quota by intentionally choosing relevant individuals (i.e. interviewees) in possession of knowledge and experience relevant to the quota and the research topic. The defined interviewee groups in this study provided a basis for a life cycle-based and inclusive study of the performance gap through the lens of the project delivery process, based on input from key project participants in different phases of the project life cycle. The life cycle perspective in data collection was imperative due to the diversity of disciplines involved in the design, construction and operation of a building.

Data collection

Data collection started with formulating the protocol and questions of the semi-structured interviews. The developed questions aimed to explore the project delivery-related causes behind the EPG in building construction projects based on the viewpoints of key project participants involved in different phases of the project life cycle. The developed interview protocol and questions were piloted in the first four interviews (one interview in each interviewee group) to seek feedback from the interviewees. Since there was neither negative feedback nor any change to the interview protocol and questions, the first four interviews, which had been conducted for piloting purposes, were also considered valid for analysis in the data analysis stage.

In the next step, the research team conducted 21 semi-structured interviews in Finland with project professionals representing clients, design/planning experts, contractors and building operation/maintenance experts. Since data saturation was achieved in each interviewee group, there was no need to conduct additional interviews ( Saunders et al. , 2019 ). The conducted interviews were audio recorded based on the consent obtained from the interviewees. They were then transcribed and translated into English by the native Finnish-speaking member of the research team. Table 2 shows the interviewees’ discipline, role and their latest project’s type, budget and duration. In addition, Figure 5 shows the demographic information of the interviewees.

Data analysis and validation

The analysis process started with thematic analysis, which was performed by inductively coding the research data extracted from the interview transcripts. The labels of the codes were data-derived by the researcher ( Saunders et al. , 2019 ). Validating the generated codes was accomplished by reviewing them three times (each time by one member of the research team) and making the required corrections. The validated codes representing project delivery formed a theme titled “project delivery.” The establishment of the themes was done based on the sameness or similarity of the codes in terms of meaning and/or title. Then, a content analysis was performed through which the challenges/barriers and solutions/enablers in the established themes were listed and synthesized based on the similarity or sameness of title and/or meaning. Finally, cross-validation was carried out by showing the results of the thematic and content analysis to the interviewees to ensure the interpretations made in the analysis process were valid. All the interviewees approved the results of the thematic and content analysis.
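The theme-building step described above can be sketched mechanically as grouping codes by shared meaning; the code labels below are invented stand-ins for the inductively derived codes, not the study's actual coding system.

```python
from collections import defaultdict

# (code, theme) pairs as they might emerge from inductive coding;
# all labels here are hypothetical examples.
coded_segments = [
    ("maintenance experts not consulted in design", "project delivery"),
    ("handover loses building performance data", "project delivery"),
    ("lowest bid wins regardless of capacity", "procurement"),
]

themes = defaultdict(list)
for code, theme in coded_segments:
    themes[theme].append(code)  # group codes by sameness/similarity of meaning

for theme, codes in sorted(themes.items()):
    print(f"{theme}: {codes}")
```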

Model development

Following the cross-validation, the identified barriers and enablers together provided a basis for modifying the conceptual framework developed in the literature study ( Figure 4 ) and developing a collaborative and life cycle-based delivery model for sustainable building construction. The developed model was validated in two steps. The first step included two case studies in which the modified model was shown to the project managers of one successful and one unsuccessful building construction project (in terms of energy efficiency and on-time, on-budget completion) to seek their feedback. In the second step, the feedback obtained from the case projects was applied, and the developed delivery model was thereby validated.

Project delivery related challenges and solutions of energy performance gap in building construction

Analyzing the conducted interviews resulted in the identification of several barriers and solutions for achieving energy efficiency in building construction projects (see Appendix ). Among them, some were frequently mentioned by the interviewees; these are shown in Table 3 . As the barriers and enablers imply, the existing delivery models (both collaborative and traditional) ignore the building’s performance in its operational life cycle and lack sufficient strength for involving building services and maintenance experts in the building design and construction phases. Moreover, limiting the contractor’s responsibility to the project life cycle causes fragmentation in the maintenance and optimization of building operation. The dominance of low-price criteria in tendering is another chronic problem, which results in the selection of low-capacity contractors who fail to deliver the project efficiently and are incapable of taking responsibility for the building’s performance in its operational life cycle. Thus, a collaborative and life cycle-based delivery model seems to be a viable solution for filling the EPG in building construction projects.

Collaborative and life cycle-based project delivery model (CLCPDM) for sustainable building construction

The literature study and obtained data provided a basis for the development of a collaborative and life cycle-based project delivery model (CLCPDM) for sustainable building construction. This model has two versions: (1) The abstract version, as can be seen in Figure 6 , shows the main steps in the delivery of the project and operation of the building and the main output in each step, and (2) the detailed version also includes descriptions of what happens in each step (see Figure 7 ).

This model has two key differences from the existing delivery models in the literature. First, the CLCPDM is inclusive and covers both the project life cycle and the operational life cycle of the constructed building. Second, it combines features of both traditional and collaborative construction projects, thereby increasing its compatibility with both contexts. This second feature also combines the strengths of both collaborative and traditional delivery models and covers their weaknesses. In other words, it is a new generation of construction project delivery model with the capability to realize productivity and sustainability in both the project and product life cycles.

In particular, the CLCPDM:

  • Fully realizes the significance of proper project definition, feasibility study and competency- as well as price-based contractor selection,
  • Involves the design team and contractor when they have the highest impact,
  • Features life cycle-based and collaborative project definition and design,
  • Treats essential design and planning as an iterative cycle to realize the required improvements,
  • Employs collaborative tools and working practices in the design and construction phases and
  • Prioritizes systematic and continuous documentation of project and building performance data.

Discussion

Project delivery has been a mechanism for the successful completion of construction projects. The traditional model of this mechanism has not yielded satisfactory results most of the time, particularly in complex projects, resulting in over-budget, wasteful, low-quality, accident-prone and delayed delivery of building construction projects (e.g. Forbes and Ahmed, 2010 ; Moradi and Sormunen, 2023 ). Collaborative project delivery emerged as an effective replacement, and it has had promising performance results (e.g. Hanna, 2016 ; Ibrahim et al. , 2020 ). In spite of this advancement, building construction projects, to a high extent, are still struggling to meet environmental sustainability goals; their actual energy consumption is considerably higher than expected (e.g. Laconte and Gossop, 2016 ). Of course, there are several factors behind this EPG phenomenon, one of which is the project development and delivery process ( Moradi and Sormunen, 2022 ). In fact, this factor happens to be a major cause of the EPG. In particular, the findings showed that inadequate and/or late involvement of key project participants (including building services people), together with a fragmented project delivery and maintenance process and the dominance of low-price criteria for contractor selection, are the key barriers to achieving energy efficiency in building construction ( Moradi et al. , 2023 ). The enablers identified in this study were relevant to the barriers, which can be seen as an indication of the reliability of the obtained results.

The involvement issue can be explained as the missing impact which the contractor as well as maintenance experts can have in the project definition and design stages. In other words, these people are a dynamic database of building performance data which can help the client and design team first to reasonably define the goals and then to provide input for ensuring the constructability of the design in the construction phase and the functionality of the building in its operation phase. The fragmented project delivery and maintenance issue reflects exactly the research gap discovered in this study and its purpose, addressing the fact that the project and product life cycles need to be integrated and that the key people involved in the project life cycle need to be involved and accountable in the product life cycle as well. Finally, the third issue, the dominance of low-price criteria, has long been a problem which results in the selection of low-capacity contractors that do not have the required resources and competence. Although collaborative delivery models (e.g. alliance, IPD) have removed this dominance and mostly consider competency as the selection criterion, they also miss an important point, which is a reasonable price offered by the contractor. Thus, it seems that a mature contractor selection mechanism needs to take into account both tendering (based on a reasonable price range specified in the project definition) and the contractor’s capacity (i.e. experience, knowledge, adequate financial resources, sufficient and competent workforce) as the competency criteria, as sketched below. The same selection logic must also be applied when employing the design team. The solutions mentioned in Table 3 concisely characterize the project delivery model developed in this study, which can overcome the related barriers to achieving energy efficiency in building construction.
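A toy illustration of such a two-criteria selection mechanism follows; the weight, the 0–1 competency score and the rejection rule are all hypothetical and not part of the study's model.

```python
def contractor_score(bid_price: float, price_range: tuple[float, float],
                     competency: float, weight_competency: float = 0.6) -> float:
    """Toy two-criteria selection score. Bids outside the reasonable price
    range set in the project definition are rejected outright; otherwise,
    cheaper bids and higher competency (0-1, covering experience, finances
    and workforce) both raise the score."""
    low, high = price_range
    if not (low <= bid_price <= high):
        return 0.0  # outside the reasonable range -> not considered
    price_score = (high - bid_price) / (high - low)  # 1.0 at the low end
    return weight_competency * competency + (1 - weight_competency) * price_score

# Hypothetical example: a mid-range bid from a highly competent contractor.
print(f"{contractor_score(9.5e6, (9e6, 12e6), competency=0.8):.2f}")  # 0.81
```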

The obtained results in this study contribute to the existing body of knowledge in two aspects. First, the findings fill the knowledge gap on the role of project development and delivery in the EPG in building construction. To the best of the knowledge of the authors, this is the first study looking into the EPG in building construction through the lens of project development and delivery process. The second contribution is the development of a novel delivery model which features collaboration and life cycle perspective as its building blocks, and it is yet compatible with the traditional contracts and tendering processes. In other words, CLCPDM is the new generation of construction project delivery model which contributes toward productivity and sustainability achievement in both project and product (i.e. constructed building) life cycle.

From practical perspective, the discussed challenges and solutions together with the developed delivery model informs project professionals and clients on the project delivery related causes of EPG and then provide a practical solution for collaborative and life cycle-based project development and delivery. In particular, the model provides a practical guidance for clients on how to develop their project with a life cycle perspective over the benefits and loss resulting from different decisions. It also reveals the best time for involving the design team and contractor to benefit from their impact.

Although the main focus of the developed model ( Figures 6 and 7 ) is on the delivery process, it still includes the application of relevant tools for measurement, simulation, monitoring and optimization purposes, but does not prescribe/recommend any specific tool. Such optimization could be performed with the help of a digital twin that allows the real-time comparison of actual energy performance to that predicted by simulations ( Spudys et al. , 2023 ). A digital twin might be created from building information modeling (BIM) data that are used in the design phase of the building. The combination of BIM and digital twins could also be used to expand the life cycle optimization to cover not only environmental, but also social and economic impacts ( Boje et al. , 2023 ). As more and more data from buildings becomes available, increasingly accurate prediction of building energy consumption can be made using machine learning methods ( Miller et al. , 2020 ). Artificial intelligence (AI)-based systems may be used to optimize various aspects of buildings, such as energy consumption, thermal comfort and lighting conditions, both in the design and operational phases of the building life cycle ( Mousavi et al. , 2023 ). Accordingly, there is a great potential for the continuous improvement of construction project delivery through the integration of dynamic digital tools like BIM and digital twins.

Conclusions

The project delivery model considerably accounts for the success or failure of the realization of energy efficiency in building construction projects.

Involvement of building services experts and maintenance people in the project definition and design seem to enhance the constructability of the building services design and functionality of the building’s Heating, Ventilation, and Air conditioning (HVAC) system in the operation phase.

Project delivery contract should expand the responsibilities (including risk and reward) of project parties into the constructed building’s operational life cycle.

Collaborative and life cycle-based delivery model combines strengths of both traditional and collaborative delivery models’ and covers their weaknesses. The developed model in this study fulfills this purpose.

The obtained results in this study considerably contribute toward existing body of knowledge in two areas of EPG in building construction and collaborative project delivery. However, it is acknowledged that the findings are based on Finnish professionals’ input and expanding this research to other regions is a potential area for further research. Moreover, the developed model, although validated in Finland, needs to be tested in a broader context as well to increase its generalizability. Furthermore, it is also acknowledged that in this study the interviews were conducted with certain groups of professionals involved in project delivery process and building operation as well as maintenance, and including building users as the fifth groups of interviewees could have been value adding. Hence, obtaining building users’ input is suggested to be considered in the future relevant studies.

design research expert interview

Framework for defining project delivery model

design research expert interview

Conceptualization of collaborative project delivery model in construction

design research expert interview

Major themes in the previous studies addressing collaborative project delivery models

design research expert interview

Conceptual model for collaborative and sustainable delivery of building construction projects

design research expert interview

Demographic information of the interviewees

design research expert interview

Abstract version of collaborative and life cycle-based project delivery model (CLCPDM) for sustainable building construction

design research expert interview

Detailed version of collaborative and life cycle-based project delivery model (CLCPDM) for sustainable building construction

Main findings of the previous studies addressing success factors, trust and relationship and team integration in the context of collaborative project delivery models

Source(s): Authors’ own work

Data availability statement: The authors confirm that the data supporting the findings of this study are available within the article.

Aaltonen , K. and Turkulainen , V. ( 2018 ), “ Creating relational capital through socialization in project alliances ”, International Journal of Operations and Production Management , Vol.  38 No.  6 , pp.  1387 - 1421 , doi: 10.1108/IJOPM-02-2017-0091 .

Bellini , A. , Aarseth , W. and Hosseini , A. ( 2016 ), “ Effective knowledge transfer in successful partnering projects ”, Energy Procedia , Vol.  96 , pp.  218 - 228 , doi: 10.1016/j.egypro.2016.09.127 .

Boge , K. , Salaj , A. , Bjørberg , S. and Larssen , A.K. ( 2018 ), “ Failing to plan – planning to fail: how early phase planning can improve buildings' lifetime value creation ”, Facilities , Vol.  36 Nos 1/2 , pp.  49 - 75 , doi: 10.1108/F-03-2017-0039 .

Boje , C. , Menacho , Á.J.H. , Marvuglia , A. , Benetto , E. , Kubicki , S. , Schaubroeck , T. and Gutiérrez , T.N. ( 2023 ), “ A framework using BIM and digital twins in facilitating LCSA for buildings ”, Journal of Building Engineering , Vol.  76 , 107232 , doi: 10.1016/j.jobe.2023.107232 .

Borgstein , E.H. , Lamberts , R. and Hensen , J.L.M. ( 2018 ), “ Mapping failures in energy and environmental performance of buildings ”, Energy and Buildings , Vol.  158 , pp.  476 - 485 , doi: 10.1016/j.enbuild.2017.10.038 .

Chan , A.P. , Chan , D.W. , Chiang , Y.H. , Tang , B.S. , Chan , E.H. and Ho , K.S. ( 2004a ), “ Exploring critical success factors for partnering in construction projects ”, Journal of Construction Engineering and Management , Vol.  130 No.  2 , pp.  188 - 198 , doi: 10.1061/(ASCE)0733-9364(2004)130:2(188) .

Chan , A.P. , Scott , D. and Chan , A.P. ( 2004b ), “ Factors affecting the success of a construction project ”, Journal of Construction Engineering and Management , Vol.  130 No.  1 , pp.  153 - 155 , doi: 10.1061/(ASCE)0733-9364(2004)130:1(153) .

Cheng , E.W. and Li , H. ( 2004 ), “ Development of a practical model of partnering for construction projects ”, Journal of Construction Engineering and Management , Vol.  130 No.  6 , pp.  790 - 798 , doi: 10.1061/(ASCE)0733-9364(2004)130:6(790) .

Cho , K. , Hyun , C. , Koo , K. and Hong , T. ( 2010 ), “ Partnering process model for public-sector fast-track design-build projects in Korea ”, Journal of Management in Engineering , Vol.  26 No.  1 , pp.  19 - 29 , doi: 10.1061/(ASCE)0742-597X(2010)26:1(19) .

Cozza , S. , Chambers , J. and Patel , M.K. ( 2020 ), “ Measuring the thermal energy performance gap of labelled residential buildings in Switzerland ”, Energy Policy , Vol.  137 , 111085 , doi: 10.1016/j.enpol.2019.111085 .

Darrington , J. ( 2011 ), “ Using a design-build contract for lean integrated project delivery ”, Lean Construction Journal , pp.  85 - 91 , available at: https://lean-construction-gcs.storage.googleapis.com/wp-content/uploads/2022/08/08161000/Design-build_contract_for_Lean_IPD.pdf

Drexler , J.A. , Jr. and Larson , E.W. ( 2000 ), “ Partnering: why project owner-contractor relationships change ”, Journal of Construction Engineering and Management , Vol.  126 No.  4 , pp.  293 - 297 , doi: 10.1061/(ASCE)0733-9364(2000)126:4(293) .

Engebø , A. , Lædre , O. , Young , B. , Larssen , P.F. , Lohne , J. and Klakegg , O.J. ( 2020 ), “ Collaborative project delivery methods: a scoping review ”, Journal of Civil Engineering and Management , Vol.  26 No.  3 , pp.  278 - 303 , doi: 10.3846/jcem.2020.12186 .

Forbes , L.H. and Ahmed , S.M. ( 2010 ), Modern Construction: Lean Project Delivery and Integrated Practices , CRC Press , Boca Raton, FL , ISBN: 9780429145278 .

Franz , B. , Leicht , R. , Molenaar , K. and Messner , J. ( 2017 ), “ Impact of team integration and group cohesion on project delivery performance ”, Journal of Construction Engineering and Management , Vol.  143 No.  1 , doi: 10.1061/(ASCE)CO.1943-7862.0001219 .

Frei , B. , Sagerschnig , C. and Gyalistras , D. ( 2017 ), “ Performance gaps in Swiss buildings: an analysis of conflicting objectives and mitigation strategies ”, Energy Procedia , Vol.  122 , pp.  421 - 426 , doi: 10.1016/j.egypro.2017.07.425 .

Häkkinen , T. and Belloni , K. ( 2011 ), “ Barriers and drivers for sustainable building ”, Building Research and Information , Vol.  39 No.  3 , pp.  239 - 255 , doi: 10.1080/09613218.2011.561948 .

Hanna , A.S. ( 2016 ), “ Benchmark performance metrics for integrated project delivery ”, Journal of Construction Engineering and Management , Vol.  142 No.  9 , doi: 10.1061/(ASCE)CO.1943-7862.0001151 .

Heidemann , A. and Gehbauer , F. ( 2010 ), “ Cooperative project delivery in an environment of strict design-bid-build tender regulations ”, Proceedings of the 18th Annual Conference of the International Group for Lean Construction (IGLC-18) , Perth , 28-31 July , pp.  590 - 591 , available at: https://iglc.net/Papers/Details/674

Hietajärvi , A.M. , Aaltonen , K. and Haapasalo , H. ( 2017a ), “ What is project alliance capability? ”, International Journal of Managing Projects in Business , Vol.  10 No.  2 , pp.  404 - 422 , doi: 10.1108/IJMPB-07-2016-0056 .

Hietajärvi , A.M. , Aaltonen , K. and Haapasalo , H. ( 2017b ), “ Opportunity management in large projects: a case study of an infrastructure alliance project ”, Construction Innovation , Vol.  17 No.  3 , pp.  340 - 362 , doi: 10.1108/CI-10-2016-0051 .

Hietajärvi , A.M. , Aaltonen , K. and Haapasalo , H. ( 2017c ), “ Managing integration in infrastructure alliance projects: dynamics of integration mechanisms ”, International Journal of Managing Projects in Business , Vol.  10 No.  1 , pp.  5 - 31 , doi: 10.1108/IJMPB-02-2016-0009 .

Ibrahim , C.K.I.C. , Costello , S.B. and Wilkinson , S. ( 2015a ), “ Establishment of quantitative measures for team integration assessment in alliance projects ”, Journal of Management in Engineering , Vol.  31 No.  5 , doi: 10.1061/(ASCE)ME.1943-5479.0000318 .

Ibrahim , C.K.I.C. , Costello , S.B. and Wilkinson , S. ( 2015b ), “ Development of an assessment tool for team integration in alliance projects ”, International Journal of Managing Projects in Business , Vol.  8 No.  4 , pp.  813 - 827 , doi: 10.1108/IJMPB-02-2015-0019 .

Ibrahim , C.K.I.C. , Costello , S.B. and Wilkinson , S. ( 2016 ), “ Application of a team integration performance index in road infrastructure alliance projects ”, Benchmarking: An International Journal , Vol.  23 No.  5 , pp.  1341 - 1362 , doi: 10.1108/BIJ-06-2015-0058 .

Ibrahim , C.K.I.C. , Costello , S.B. and Wilkinson , S. ( 2018 ), “ Making sense of team integration practice through the ‘lived experience’ of alliance project teams ”, Engineering, Construction and Architectural Management , Vol.  25 No.  5 , pp.  598 - 622 , doi: 10.1108/ECAM-09-2016-0208 .

Ibrahim , M.W. , Hanna , A. and Kievet , D. ( 2020 ), “ Quantitative comparison of project performance between project delivery systems ”, Journal of Management in Engineering , Vol.  36 No.  6 , doi: 10.1061/(ASCE)ME.1943-5479.0000837 .

Kent , D.C. and Becerik-Gerber , B. ( 2010 ), “ Understanding construction industry experience and attitudes toward integrated project delivery ”, Journal of Construction Engineering and Management , Vol.  136 No.  8 , pp.  815 - 825 , doi: 10.1061/(ASCE)CO.1943-7862.0000188 .

Kibert , C.J. ( 2016 ), Sustainable Construction: Green Building Design and Delivery , John Wiley & Sons .

Laconte , P. and Gossop , C. ( 2016 ), Sustainable Cities: Assessing the Performance and Practice of Urban Environments , Bloomsbury Publishing , New York, NY , ISBN: 978-1784532321 .

Lahdenperä , P. ( 2012 ), “ Making sense of the multi-party contractual arrangements of project partnering, project alliancing and integrated project delivery ”, Construction Management and Economics , Vol.  30 No.  1 , pp.  57 - 79 , doi: 10.1080/01446193.2011.648947 .

Lee , H.W. , Tommelein , I.D. and Ballard , G. ( 2013 ), “ Energy-related risk management in integrated project delivery ”, Journal of Construction Engineering and Management , Vol.  139 No.  12 , A4013001 , doi: 10.1061/(ASCE)CO.1943-7862.0000753 .

Li , B. and Yao , R. ( 2012 ), “ Building energy efficiency for sustainable development in China: challenges and opportunities ”, Building Research and Information , Vol.  40 No.  4 , pp.  417 - 431 , doi: 10.1080/09613218.2012.682419 .

Liang , J. , Qiu , Y. and Hu , M. ( 2019 ), “ Mind the energy performance gap: evidence from green commercial buildings ”, Resources, Conservation and Recycling , Vol.  141 , pp.  364 - 377 , doi: 10.1016/j.resconrec.2018.10.021 .

Lichtig , W.A. ( 2005 ), “ Sutter health: developing a contracting model to support lean project delivery ”, Lean Construction Journal , Vol.  2 , pp.  105 - 112 , doi: 10.60164/70f9c0e7d .

Ling , F.Y. , Teo , P.X. , Li , S. , Zhang , Z. and Ma , Q. ( 2020 ), “ Adoption of integrated project delivery practices for superior project performance ”, Journal of Legal Affairs and Dispute Resolution in Engineering and Construction , Vol.  12 No.  4 , 05020014 , doi: 10.1061/(ASCE)LA.1943-4170.0000428 .

Lloyd , H.L. and Varey , R.J. ( 2003 ), “ Factors affecting internal communication in a strategic alliance project ”, Corporate Communications: An International Journal , Vol.  8 No.  3 , pp.  197 - 207 , doi: 10.1108/13563280310487658 .

Love , P.E. , Mistry , D. and Davis , P.R. ( 2010 ), “ Price competitive alliance projects: identification of success factors for public clients ”, Journal of Construction Engineering and Management , Vol.  136 No.  9 , pp.  947 - 956 , doi: 10.1061/(ASCE)CO.1943-7862.0000208 .

Mahdavi , A. , Berger , C. , Amin , H. , Ampatzi , E. , Andersen , R.K. , Azar , E. , Barthelmes , V.M. , Favero , M. , Hahn , J. , Khovalyg , D. and Knudsen , H.N. ( 2021 ), “ The role of occupants in buildings' energy performance gap: myth or reality? ”, Sustainability , Vol.  13 No.  6 , 3146 , doi: 10.3390/su13063146 .

Markus , A.A. , Hobson , B.W. , Gunay , H.B. and Bucking , S. ( 2022 ), “ Does a knowledge gap contribute to the performance gap? Interviews with building operators to identify how data-driven insights are interpreted ”, Energy and Buildings , Vol.  268 , 112238 , doi: 10.1016/j.enbuild.2022.112238 .

Mesa , H.A. , Molenaar , K.R. and Alarcón , L.F. ( 2019 ), “ Comparative analysis between integrated project delivery and lean project delivery ”, International Journal of Managing Projects in Business , Vol.  37 No.  3 , pp.  395 - 409 , doi: 10.1016/j.ijproman.2019.01.012 .

Mike , E.M. , Schmitz , A. and Netherton , L.M. ( 2015 ), Real Estate Development . Principles and Process , 5th ed. , ULI Urban Land Institute , Washington, DC , ISBN: 978-0874203431 .

Miller , C. , Arjunan , P. , Kathirgamanathan , A. , Fu , C. , Roth , J. , Park , J.Y. , Balbach , C. , Gowri , K. , Nagy , Z. , Fontanini , A.D. and Haberl , J. ( 2020 ), “ The ASHRAE great energy predictor III competition: overview and results ”, Science and Technology for the Built Environment , Vol.  26 No.  10 , pp.  1427 - 1447 , doi: 10.1080/23744731.2020.1795514 .

MohammadHasanzadeh , S. , Hosseinalipour , M. and Hafezi , M. ( 2014 ), “ Collaborative procurement in construction projects performance measures, case study: partnering in Iranian construction industry ”, Procedia - Social and Behavioral Sciences , Vol.  119 , pp.  811 - 818 , doi: 10.1016/j.sbspro.2014.03.091 .

Mollaoglu-Korkmaz , S. , Swarup , L. and Riley , D. ( 2013 ), “ Delivering sustainable, high-performance buildings: influence of project delivery methods on integration and project outcomes ”, Journal of Management in Engineering , Vol.  29 No.  1 , pp.  71 - 78 , doi: 10.1061/(ASCE)ME.1943-5479.0000114 .

Moradi , S. and Kähkönen , K. ( 2022 ), “ Success in collaborative construction through the lens of project delivery elements ”, Built Environment Project and Asset Management , Vol.  12 No.  6 , pp.  973 - 991 , doi: 10.1108/BEPAM-09-2021-0118 .

Moradi , S. and Sormunen , P. ( 2022 ), “ Lean and sustainable project delivery in building construction: development of a conceptual framework ”, Buildings , Vol.  12 No.  10 , 1757 , doi: 10.3390/buildings12101757 .

Moradi , S. and Sormunen , P. ( 2023 ), “ Revisiting the concept of waste and its causes in construction from analytical and conceptual perspectives ”, Construction Management and Economics , Vol.  41 No.  8 , pp.  621 - 633 , doi: 10.1080/01446193.2023.2189278 .

Moradi , S. , Kähkönen , K. , Klakegg , O.J. and Aaltonen , K. ( 2021a ), “ A competency model for the selection and performance improvement of project managers in collaborative construction projects: behavioral studies in Norway and Finland ”, Buildings , Vol.  11 No.  4 , p. 4 , doi: 10.3390/buildings11010004 .

Moradi , S. , Kähkönen , K. , Klakegg , O. and Aaltonen , K. ( 2021b ), “ Profile of project managers' competencies for collaborative construction projects ”, in Scott , L. and Neilson , C.J. (Eds), Proceedings of the 37th Annual ARCOM Conference , 6-7 September , Association of Researchers in Construction Management , pp.  350 - 359 , available at: https://www.arcom.ac.uk/-docs/archive/2021-Indexed-Papers.pdf

Moradi , S. , Kähkönen , K. and Sormunen , P. ( 2022 ), “ Analytical and conceptual perspectives toward behavioral elements of collaborative delivery models in construction projects ”, Buildings , Vol.  12 No.  3 , p. 316 , doi: 10.3390/buildings12030316 .

Moradi , S. , Hirvonen , J. and Sormunen , P. ( 2023 ), “ A qualitative and life cycle-based study of the energy performance gap in building construction: perspectives of Finnish project professionals and property maintenance experts ”, Building Research and Information , pp.  1 - 13 , doi: 10.1080/09613218.2023.2284986 .

Mousavi , S. , Marroquín , M.G.V. , Hajiaghaei-Keshteli , M. and Smith , N.R. ( 2023 ), “ Data-driven prediction and optimization toward net-zero and positive-energy buildings: a systematic review ”, Building and Environment , Vol.  242 , 110578 , doi: 10.1016/j.buildenv.2023.110578 .

Nevstad , K. , Børve , S. , Karlsen , A.T. and Aarseth , W. ( 2018 ), “ Understanding how to succeed with project partnering ”, International Journal of Managing Projects in Business , Vol.  11 No.  4 , pp.  1044 - 1065 , doi: 10.1108/IJMPB-07-2017-0085 .

Ng , S.T. , Rose , T.M. , Mak , M. and Chen , S.E. ( 2002 ), “ Problematic issues associated with project partnering—the contractor perspective ”, International Journal of Project Management , Vol.  20 No.  6 , pp.  437 - 449 , doi: 10.1016/S0263-7863(01)00025-4 .

Oakland , J.S. and Marosszeky , M. ( 2017 ), Total Construction Management: Lean Quality in Construction Project Delivery , Routledge , Abingdon , ISBN: 9781315694351 .

Qian , Q.K. , Chan , E.H. and Khalid , A.G. ( 2015 ), “ Challenges in delivering green building projects: unearthing the transaction costs (TCs) ”, Sustainability , Vol.  7 No.  4 , pp.  3615 - 3636 , doi: 10.3390/su7043615 .

Radziszewska-Zielina , E. and Szewczyk , B. ( 2016 ), “ Supporting partnering relation management in the implementation of construction projects using AHP and fuzzy AHP methods ”, Procedia Engineering , Vol.  161 , pp.  1096 - 1100 , doi: 10.1016/j.proeng.2016.08.854 .

Raslim , F.M. and Mustaffa , N.E. ( 2017 ), “ The success factors of relationship-based procurement (RBP) in Malaysia ”, International Journal of Civil Engineering and Technology , Vol.  8 , pp.  1616 - 1625 , available at: http://www.iaeme.com/ijciet/issues.asp?JType=IJCIET&VType=8&IType=8

Rasmussen , H.L. and Jensen , P.A. ( 2020 ), “ A facilities manager's typology of performance gaps in new buildings ”, Journal of Facilities Management , Vol.  18 No.  1 , pp.  71 - 87 , doi: 10.1108/JFM-06-2019-0024 .

Saunders , M.N.K. , Lewis , P. and Thornhill , A. ( 2019 ), Research Methods for Business Students , 8th ed. , Pearson Education , Harlow , ISBN: 9781292208787 .

Spudys , P. , Afxentiou , N. , Georgali , P.Z. , Klumbyte , E. , Jurelionis , A. and Fokaides , P. ( 2023 ), “ Classifying the operational energy performance of buildings with the use of digital twins ”, Energy and Buildings , Vol.  290 , 113106 , doi: 10.1016/j.enbuild.2023.113106 .

Sundquist , V. , Hulthén , K. and Gadde , L.E. ( 2018 ), “ From project partnering towards strategic supplier partnering ”, Engineering, Construction and Architectural Management , Vol.  25 No.  3 , pp.  358 - 373 , doi: 10.1108/ECAM-08-2016-0177 .

Whang , S.-W. , Park , K.S. and Kim , S. ( 2019 ), “ Critical success factors for implementing integrated construction project delivery ”, Engineering, Construction and Architectural Management , Vol.  26 No.  10 , pp.  2432 - 2446 , doi: 10.1108/ECAM-02-2019-0073 .

Willan , C. , Hitchings , R. , Ruyssevelt , P. and Shipworth , M. ( 2020 ), “ Talking about targets: how construction discourses of theory and reality represent the energy performance gap in the United Kingdom ”, Energy Research and Social Science , Vol.  64 , 101330 , doi: 10.1016/j.erss.2019.101330 .

Yılmaz , D. , Tanyer , A.M. and Toker , İ.D. ( 2023 ), “ A data-driven energy performance gap prediction model using machine learning ”, Renewable and Sustainable Energy Reviews , Vol.  181 , 113318 , doi: 10.1016/j.rser.2023.113318 .

Young , B. , Hosseini , A. and Lædre , O. ( 2016 ), “ The characteristics of Australian infrastructure alliance projects ”, Energy Procedia , Vol.  96 , pp.  833 - 844 , doi: 10.1016/j.egypro.2016.09.145 .

Zhang , X. and Kumaraswamy , M.M. ( 2001 ), “ Procurement protocols for public-private partnered projects ”, Journal of Construction Engineering and Management , Vol.  127 No.  5 , pp.  351 - 358 , doi: 10.1061/(ASCE)0733-9364(2001)127:5(351) .

Zhang , L. , Cheng , J. and Fan , W. ( 2016 ), “ Party selection for integrated project delivery based on interorganizational transactive memory system ”, Journal of Construction Engineering and Management , Vol.  142 No.  3 , doi: 10.1061/(ASCE)CO.1943-7862.0001068 .

Zou , P.X. , Xu , X. , Sanjayan , J. and Wang , J. ( 2018 ), “ Review of 10 years research on building energy performance gap: life-cycle and stakeholder perspectives ”, Energy and Buildings , Vol.  178 , pp.  165 - 181 , doi: 10.1016/j.enbuild.2018.08.040 .

Acknowledgements

This study was financially supported by the “Hiilineutraalit energiaratkaisut ja lämpöpumpputeknologia” research project (No. 3122801074) at Tampere University in Finland. The funders of this research project are Tampereen korkeakoulusäätiö sr, Tampereen teknillisen yliopiston tukisäätiö sr / Paavo V. Suomisen rahasto, Sähkötekniikan ja energiatehokkuuden edistämiskeskus STEK ry, Granlund Oy, HUS Tilakeskus, HUS Kiinteistöt Oy, Senaatti- kiinteistöt, and Ramboll Finland Oy.

Corresponding author

Related articles, we’re listening — tell us what you think, something didn’t work….

Report bugs here

All feedback is valuable

Please share your general feedback

Join us on our journey

Platform update page.

Visit emeraldpublishing.com/platformupdate to discover the latest news and updates

Questions & More Information

Answers to the most commonly asked questions here

  • Open access
  • Published: 17 May 2024

Co-designing Entrustable Professional Activities in General Practitioner’s training: a participatory research study

  • Vasiliki Andreou   ORCID: orcid.org/0000-0002-6679-0514 1 , 4 ,
  • Sanne Peters   ORCID: orcid.org/0000-0001-6235-1752 1 , 2 ,
  • Jan Eggermont   ORCID: orcid.org/0000-0002-8497-1159 3 &
  • Birgitte Schoenmakers   ORCID: orcid.org/0000-0003-1909-9613 1  

BMC Medical Education volume  24 , Article number:  549 ( 2024 ) Cite this article

Metrics details

In medical education, Entrustable Professional Activities (EPAs) have been gaining momentum for the last decade. Such novel educational interventions necessitate accommodating competing needs, those of curriculum designers, and those of users in practice, in order to be successfully implemented.

We employed a participatory research design, engaging diverse stakeholders in designing an EPA framework. This iterative approach allowed for continuous refinement, shaping a comprehensive blueprint comprising 60 EPAs. Our approach involved two iterative cycles. In the first cycle, we utilized a modified-Delphi methodology with clinical competence committee (CCC) members, asking them whether each EPA should be included. In the second cycle, we used semi-structured interviews with General Practitioner (GP) trainers and trainees to explore their perceptions about the framework and refine it accordingly.

During the first cycle, 14 CCC members agreed that all the 60 EPAs should be included in the framework. Regarding the formulation of each EPAs, 20 comments were given and 16 adaptations were made to enhance clarity. In the second cycle, the semi-structured interviews with trainers and trainees echoed the same findings, emphasizing the need of the EPA framework for improving workplace-based assessment, and its relevance to real-world clinical scenarios. However, trainees and trainers expressed concerns regarding implementation challenges, such as the large number of EPAs to be assessed, and perception of EPAs as potentially high-stakes.

Accommodating competing stakeholders’ needs during the design process can significantly enhance the EPA implementation. Recognizing users as experts in their own experiences empowers them, enabling a priori identification of implementation barriers and potential pitfalls. By embracing a collaborative approach, wherein diverse stakeholders contribute their unique viewpoints, we can only create effective educational interventions to complex assessment challenges.

Peer Review reports

Introduction

In recent years, the landscape of medical education has significantly transformed due to increasing demands of public accountability and changing patient needs. In response to these evolving demands, competency-based medical education (CBME) has emerged. CBME has been gaining popularity in medical education programs [ 1 ]. In a CBME paradigm, medical curricula are structured based on predefined competencies that physicians should have acquired upon completion of the program [ 2 , 3 ]. Despite the theoretical underpinnings of CBME, its implementation has encountered various obstacles [ 4 ]. Particularly, assessing competencies in real clinical environments has been a major barrier in the effective integration of CBME into medical education systems [ 5 ]. Recognizing this challenge, the concept of Entrustable Professional Activities (EPAs) has emerged.

EPAs are essentially tasks or activities that medical professionals should be able to perform competently and independently by the time they complete their training [ 6 , 7 ]. EPAs are used to assess a learner’s ability to integrate and apply the necessary competencies in real-world clinical practice. They necessitate evaluating a learner’s progress and readiness for independent practice by observing their performance in these key professional activities in clinical practice [ 8 ]. The term “entrustable” indicates that, upon graduation or completion of a specific training period, a supervising physician or mentor should be able to entrust a medical graduate with these activities without direct supervision, considering them proficient and safe for the patients to perform these tasks independently [ 9 , 10 ].

Considering the immense potential, integration and implementation of EPAs has gained rapid momentum, across various health professions and medical specialties [ 11 , 12 ]. Despite this progress, a significant gap notably persists, when it comes to accommodating competing needs of curriculum designers and those of users in practice, namely trainers and trainees [ 13 ]. While the promise of EPAs in facilitating CBME is promising, there is lack of comprehensive evidence incorporating users’ perceptions during the design phase [ 8 , 11 , 14 ]. Therefore, the aim of this study was to design an EPA framework for workplace-based assessment by actively involving clinical educators, trainees and trainers throughout the process.

Setting and participants

This study took place in the interuniversity postgraduate General Practitioner’s (GP) Training, Belgium. To standardize GP Training across Flanders, four Flemish universities (KU Leuven, Ghent University, University of Antwerp, and the Flemish Free University of Brussels) collaboratively developed a postgraduate training program. This training program consists of three different training-phases and rotations, spread through three years, two rotations are in a GP practice, while one takes place at a hospital setting.

The GP Training is overseen by the Interuniversity Centre for GP Training (ICGPT). The ICGPT plays a pivotal role in coordinating and managing various aspects of the curriculum. Among its key responsibilities, the ICGPT oversees the allocation of clinical internships, conducts examinations, facilitates regular meetings between trainees and trainers, and maintains trainees’ learning electronic (e-) portfolios.

In 2018, the ICGPT initiated a shift towards CBME. The rationale of CBME was introduced in the curriculum by integrating first the CanMEDS roles. To facilitate this transition, two clinical competence committees (CCCs), comprising medical doctors and clinical educators from the four universities were appointed. These CCCs were tasked with coordinating workplace-based learning, and curriculum and assessment, respectively.

To align the curriculum with the patient needs in primary care, the two CCCs designated and defined ten different care contexts characteristic of primary care (i.e. short-term care, chronic care, emergency care, palliative care, elderly care, care for children, mental healthcare, prevention, gender related care, and practice management). Subsequently, in 2022, we initiated the process of designing specific EPAs for the GP Training. The EPAs aimed to facilitate and improve workplace-based assessment. These two CCCs participated in the design process, while trainers and trainees were invited to share their opinion as well.

Designing the EPA framework

The design of the EPA framework was based on participatory research design to engage different stakeholders [ 15 ]. Participatory research design is a community-based methodology aiming to create solutions for and with the people who are involved [ 15 ]. This iterative research approach encompassed three fundamental design-stages in a circular relationship, namely design, evaluation and refinement (Fig.  1 ). We executed two distinct iterative cycles, each with a specific group of stakeholders (Fig.  2 ). In cycle 1, we focused on CCCs, fostering discussions and validating the framework. In cycle 2, we involved clinical trainers and trainees, ensuring cross-validation. In the following section, we describe each iterative cycle, indicated as cycle 1 and as cycle 2, respectively.

figure 1

Three design phases for designing the EPA framework

figure 2

Process for developing the EPA framework based on participatory design research

In cycle 1, after reviewing relevant literature, we developed a blueprint of 60 EPAs corresponding to the ten different care contexts, already integrated in the curriculum [ 9 , 10 ]. By doing so, we wanted to ensure practical applicability and relevance of our framework within the established educational environment. Afterwards, we linked all EPAs to the CanMEDS competency framework [ 16 ]. We defined competencies as broad statements that describe knowledge, skills and attitudes that GP trainees should achieve during the different training phases [ 17 ]. The CanMEDS framework identifies and describes different competencies for patient-centred care, and comprises seven different roles: medical expert, communicator, collaborator, leader, health advocate, scholar, and professional. By linking EPAs to CanMEDS, we constructed a matrix that served as a structured guide for integrating the EPAs in the workplace. Also, together with the CCCs we defined behavioural and cognitive criteria to anchor entrustment levels [ 9 ]. These criteria described required knowledge, skills, and attitudes in order for an EPA to be entrusted.

In cycle 2, we aimed at operationalising the EPAs, cross validating them by interviewing trainers and trainees, and deciding entrustment levels. Specifically, to operationalise the EPAs, we developed an assessment form, called Clinical Practice Feedback form (Fig.  3 ). We chose to link EPA assessments not only to direct and video observations, but also for case-based discussions. Additionally, we agreed upon entrustment levels and the entrustability scale. Entrustment was anchored on criteria that were defined along the EPAs. We decided to use the Ottawa Surgical Competency Operating Room Evaluation (O-SCORE) for validity and reliability reasons (Fig.  4 ) [ 18 ]. The Ottawa scale requires assessors to describe how much supervision they provided to trainees while performing a specific EPA. Concretely, the scale comprises five levels of performance ranging from trainers taking over the activity to trainees performing the activity without supervision (Fig.  3 ) [ 18 ].

figure 3

Example of Clinical Practice Feedback form available in the e-portfolio

figure 4

Five levels of entrustment based on the O-SCORE scale [ 19 ]

Data collection and analysis

In cycle 1, we evaluated the EPA blueprint by employing a modified Delphi methodology, with two rounds [ 19 ]. We invited members of the two CCCs ( N  = 14) to give feedback on the EPA blueprint via e-mail and during meetings, scheduled by the ICGPT. Members were asked whether they thought each EPA was necessary for workplace-based assessment and needed to be included in the framework. They were also encouraged to give feedback regarding the formulation of the EPAs. Once we gathered all the comments, we refined the blueprint and sent it back to the CCC members. In cycle 2, we interviewed two trainers and two trainees using semi-structured interviews and following the ‘think-aloud protocol’ [ 20 , 21 , 22 ], where we asked them whether each EPA was necessary and whether they were comprehensible for workplace-based assessment. Participants were required to articulate their thoughts while reading the EPA framework. This enabled us to gain insights into their thought processes and perspectives [ 22 ].

Data collection took place from February 2022 until September 2022. For quantitative data analysis we calculated descriptive statistics of consensus rates using SPSS 27 (IBM SPSS Statistics 27). We analysed qualitative data from CCCs members using content analysis on Microsoft Excel. For analysing data from the interviews with the trainers and trainees, we first verbatim transcribed the interviews, and, then, analysed the data using thematic analysis in NVivo (QSR International) [ 23 , 24 ]. Qualitative data were analysed by two researchers separately to achieve triangulation, while a third researcher was consulted, when discrepancies arose [ 25 ].

Reflexivity and research team

The research team was composed of members with different backgrounds. Two members had a background in education, while the other two members had a background in biomedical sciences and general practice. All authors had research training and experience in medical education research. Methodological and design decisions were in line with the available literature. We predefined methodological steps before commencing the study. To ensure adherence to our design stages, we maintained a detailed logbook to document systematically progression and modifications from our initial protocol. We regularly discussed the results to ensure that our interpretations were close to the data.

In cycle 1, fourteen members of the CCCs gave feedback on the list of 60 EPAs. In the first feedback round, all members agreed that all 60 EPAs were required in the framework. Twenty comments were given regarding the formulation of the EPAs and 16 adaptations were made based on the new suggestions. Comments regarding the formulation were about the use of certain words in order to make the framework understandable. In the second feedback round, consensus was reached on the formulation of the EPAs (Table  1 ).

In cycle 2, we interviewed two trainers and two trainees. CCC members, trainers, and trainees agreed that all EPAs should be included in the framework. From these interviews, we identified three themes. Table  2 presents these three themes alongside their subthemes. Necessity of EPAs was the first theme and included shared mindsets about necessity of EPAs in order to improve workplace-based assessment and difficulties with interpreting the CanMEDS roles.

“ The EPAs are better than the CanMEDS. My trainer and I often do not know what we have to assess…He (the trainer) sometimes gives the same feedback for multiple roles .” (trainee 1).

Second theme was about the relevance of EPAs to clinical practice. Users thought that the EPA framework could easily be linked to their clinical work, promoting assessment and feedback opportunities. They agreed that EPAs were understandable and formulated in intuitive language for clinical work.

“ I think that it (the EPA framework) is quite intuitive. I can see a lot of links between the EPAs and my daily practice .” (trainer 2).
I like the (EPA) framework. My trainer and I already discuss some of these (activities) during our weekly feedback session . (trainee 2)

Third theme included challenges in implementation of EPAs, regarding the large number of EPAs, perception of high-stakes assessment within an e-portfolio, and limitations inherent to the current e-portfolio. First, users expressed their concern regarding the large number of EPAs. They indicated that only a limited number might be feasible because of time constraints in the clinical workplace. Also, users thought that due to the large number of EPAs, trainees would “pick and choose” EPAs where they had performed well. Along with limited functionalities of the current e-portfolio, they indicated that EPAs might be used as showcasing performance instead for workplace-based assessment and feedback purposes. Mainly trainees expressed hesitation to document EPAs where they would need further improvement. They perceived the e-portfolio as a tool more suitable for high-stakes assessments rather than for feedback purposes.

“ The list (of EPAs) is quite extensive… I do want to have a nice portfolio, so for sure I will try to include as many as possible. In case something happens (in my curriculum), I want to show how well I have been performing .” (trainee 1).
“ I normally do not include patient cases that went wrong in my portfolio. Because different people have access to it (the e-portfolio).” (trainee 2).

The aim of this study was to design an EPA framework by actively engaging and collaborating with different stakeholders. To be established as a “good” assessment framework, EPAs should be acceptable by the different stakeholders involved in the assessment process, such as curriculum designers, trainees and trainers [ 26 , 27 ]. Incorporating their opinions and understanding their different needs must be integral to the design process. However, literature regarding EPAs design has mainly focused on experts’ opinion, neglecting users in practice [ 8 ].

From our findings, it becomes apparent that direct involvement and communication among diverse stakeholders are crucial for designing a useful for everyone EPA assessment framework. When various groups are involved in developing educational interventions, competing needs can be optimally addressed [ 28 ]. This optimization fosters a cohesive approach, ensuring high applicability rates and effectiveness, when the EPA framework is used in practice. The need for users’ involvement in the development process is currently demonstrated in the most recent EPA literature [ 29 , 30 ]. Users’ involvement promotes common language and expectations, enhancing the clarity and effectiveness of EPA interventions, and, most importantly, empowers the users themselves by acknowledging their perspectives [ 31 ]. Ultimately, trainees and trainers are the ones using the EPA assessment frameworks during daily clinical practice, and are potentially confronted with unforeseen obstacles.

Additionally, users’ involvement in the process can help to identify potential implementation challenges [ 32 , 33 ]. Our findings indicate differences in opinions regarding implementation of EPAs. In contrast to the CCC members, users expressed their concerns about the large number of EPAs included in the framework. They were particularly concerned about how to use sufficiently and adequately EPA assessments, while juggling clinical work. This concern echoes findings from other studies as well, related to the assessment burden [ 34 ]. In particular, when challenges in assessment processes arise in the clinical workplace, assessment is most probably not performed as intended [ 35 ].

Furthermore, our results illustrate tensions between assessment of learning and assessment for learning. Although the EPA assessments aim to better prepare trainees for clinical practice, users suggested that the purpose of the EPAs might not be explicit for everyone. Since EPAs are a form of assessment, they could potentially lead to strategic behaviours of documenting successful EPAs, and, therefore, creating a fragmented idea about trainees’ performance in clinical practice. Additionally, the use of the current e-portfolio for high-stakes assessments only adds to this tension. Especially, trainees were not comfortable with sharing performance evidence for improvement, because they perceived the stakes as high [ 36 ]. The dilemma between learning versus performing has been the Achilles point in workplace-based assessment [ 37 ]. The lines between assessment and feedback seem to be also blurred in EPAs [ 38 , 39 ].

Involving users during the design process can lead not only to early adaptations and refinement of EPAs, but also to better allocation of resources. In order to ensure successful implementation of EPAs, it is essential to recognize the central role of both trainers and trainees. Future research should focus on training programs designed to equip faculty, trainers, and trainees with a profound understanding of EPAs. Users in practice need rigorous training covering EPA principles, assessment techniques, and feedback strategies [ 40 ]. Moreover, fostering a culture of interdisciplinary collaboration among stakeholder groups is imperative. Encouraging review of assessment tools and facilitating the exchange of opinions during designprocesses can significantly enhance the overall quality of EPA frameworks, and, even more broadly, of workplace-based assessment practices.

Although EPAs are a valuable framework for assessing competencies in workplace settings, integrating other assessment tools is crucial to capture the full spectrum of skills needed to meet patient needs. Future research should focus on combining EPAs with other assessment methods, such as simulation-based assessments, either with standardized patients or with virtual reality, that would allow trainees to demonstrate their clinical and interpersonal skills within safe, controlled environments that closely replicate challenging patient scenarios [ 41 ]. Additionally, incorporating multisource feedback and continuous portfolio assessments could offer a comprehensive view of a trainee’s performance across various settings and interactions [ 42 , 43 ]. Together, these methods would enhance the EPA framework, ensuring a comprehensive assessment of all essential competencies that future physicians should acquire.

Limitations

We need to acknowledge several limitations in this study. First, in medical education research, users’ involvement prerequisites a degree of experience with a specific subject. In our study, we involved users in the early design process of the EPA framework. Although we are aware of this limitation, we intentionally and consciously chose a participatory research design. We believe that users are experts in their own experience, and that they hold the knowledge and capabilities to be involved as partners in the development process. Second, our study involved a low number of users due to difficulties in recruitment. This might have led to recruiting participants who are fully engaged in the educational practices of the GP Training. Nevertheless, our findings are rooted in two methodologies, namely a modified Delphi method and semi-structured interviews. Therefore, we used triangulation to verify our results [ 25 ]. Finally, although workshops are mostly commonly in co-design studies [ 44 ], our study coincided with the last COVID-19 lockdown, necessitating adjustments. To cope with these challenges and uncertainties, we opted for methods that were the most feasible for our participants at that moment. Despite these challenges, the contributions from all stakeholders were invaluable, particularly in exploring potential implementation and evaluation issues.

For EPAs to be successful, they need to be acceptable as an assessment framework by different stakeholders’ groups. Accommodation of competing stakeholders’ needs during the design process is crucial for enhancing acceptability and effectiveness during implementation. Our findings highlight the significance of collaborative efforts to design EPAs, emphasizing its potential to empower users, identify implementation barriers, and pinpoint unintended consequences. Through this collaborative approach, wherein diverse stakeholders contribute their perspectives, we can create effective educational solutions to complex assessment challenges.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

General Practitioner

competency-based medical education

Entrustable Professional Activity

Canadian Medical Education Directives for Specialists

Interuniversity Centre for GP Training

clinical competence committee

Frank JR, Snell LS, Cate OT, Holmboe ES, Carraccio C, Swing SR, et al. Competency-based medical education: theory to practice. Med Teach. 2010;32(8):638–45.

Article   Google Scholar  

Iobst WF, Sherbino J, Cate OT, Richardson DL, Dath D, Swing SR, et al. Competency-based medical education in postgraduate medical education. Med Teach. 2010;32(8):651–6.

Frank JR, Snell L, Englander R, Holmboe ES. Implementing competency-based medical education: moving forward. Med Teach. 2017;39(6):568–73.

Nousiainen MT, Caverzagie KJ, Ferguson PC, Frank JR. Implementing competency-based medical education: what changes in curricular structure and processes are needed? Med Teach. 2017;39(6):594–8.

Lockyer J, Carraccio C, Chan M-K, Hart D, Smee S, Touchie C, et al. Core principles of assessment in competency-based medical education. Med Teach. 2017;39(6):609–16.

Ten Cate O, Scheele F. Competency-based postgraduate training: can we bridge the gap between theory and clinical practice? Acad Med. 2007;82(6):542–7.

Carraccio C, Englander R, Gilhooly J, Mink R, Hofkosh D, Barone MA, et al. Building a framework of entrustable professional activities, supported by competencies and milestones, to bridge the educational continuum. Acad Med. 2017;92(3):324–30.

Ten Cate O, Chen HC, Hoff RG, Peters H, Bok H, van der Schaaf M. Curriculum development for the workplace using entrustable professional activities (EPAs): AMEE guide no. 99. Med Teach. 2015;37(11):983–1002.

Ten Cate O, Taylor DR. The recommended description of an entrustable professional activity: AMEE guide no. 140. Med Teach. 2021;43(10):1106–14.

Carraccio C, Martini A, Van Melle E, Schumacher DJ. Identifying core components of EPA implementation: a path to knowing if a complex intervention is being implemented as intended. Acad Med. 2021;96(9):1332–6.

de Graaf J, Bolk M, Dijkstra A, van der Horst M, Hoff RG, Ten Cate O. The implementation of entrustable professional activities in postgraduate medical education in the Netherlands: rationale, process, and current status. Acad Med. 2021;96(7s):S29-35.

Keeley MG, Bray MJ, Bradley EB, Peterson CM, Waggoner-Fountain LA, Gusic ME. Fidelity to best practices in EPA implementation: outcomes supporting use of the core components framework from the University of Virginia entrustable professional activity program. Acad Med. 2022;97(11):1637–42.

St-Onge C, Boileau E, Langevin S, Nguyen LHP, Drescher O, Bergeron L, et al. Stakeholders’ perception on the implementation of developmental progress assessment: using the theoretical domains framework to document behavioral determinants. Adv Health Sci Educ. 2022;27(3):735–59.

Taylor DR, Park YS, Egan R, Chan MK, Karpinski J, Touchie C, et al. EQual, a novel rubric to evaluate entrustable professional activities for quality and structure. Acad Med. 2017;92(11S):S110-117.

Wallerstein N, Duran B. Community-based participatory research contributions to intervention research: the intersection of science and practice to improve health equity. Am J Public Health. 2010;100 Suppl 1(Suppl 1):S40-46.

Frank JR, Snell L, Sherbino J. CanMEDS 2015 Physician competency framework. Ottawa: Royal College of Physicians & Surgeons of Canada; 2015.

Harden RM. Learning outcomes and instructional objectives: is there a difference? Med Teach. 2002;24(2):151–5.

Gofton WT, Dudek NL, Wood TJ, Balaa F, Hamstra SJ. The Ottawa surgical competency operating room evaluation (O-SCORE): a tool to assess surgical competence. Acad Med. 2012;87(10):1401–7.

de Villiers MR, de Villiers PJ, Kent AP. The Delphi technique in health sciences education research. Med Teach. 2005;27(7):639–43.

Patton MQ, Fund RECM. Qualitative research & evaluation methods. SAGE Publications; 2002.

Sargeant J. Qualitative research part II: participants, analysis, and quality assurance. J Graduate Med Educ. 2012;4(1):1–3.

Ericsson KA, Simon HA. How to study thinking in everyday life: contrasting think-aloud protocols with descriptions and explanations of thinking. Mind Cult Act. 1998;5(3):178–86.

Lumivero. NVivo (Version 14). 2023. www.lumivero.com .

Krippendorff K. Content analysis: an introduction to its methodology. Sage; 2018.

Carter N, Bryant-Lukosius D, DiCenso A, Blythe J, Neville AJ. The use of triangulation in qualitative research. Oncol Nurs Forum. 2014;41(5):545–7.

Norcini J, Anderson B, Bollela V, Burch V, Costa MJ, Duvivier R, et al. Criteria for good assessment: consensus statement and recommendations from the Ottawa 2010 conference. Med Teach. 2011;33(3):206–14.

Norcini J, Anderson MB, Bollela V, Burch V, Costa MJ, Duvivier R, et al. 2018 Consensus framework for good assessment. Med Teach. 2018;40(11):1102–9.

Göttgens I, Oertelt-Prigione S. The application of human-centered design approaches in health research and innovation: a narrative review of current practices. JMIR Mhealth Uhealth. 2021;9(12):e28102.

Bonnie LHA, Visser MRM, Bont J, Kramer AWM, van Dijk N. Trainers’ and trainees’ expectations of entrustable professional activities (EPAs) in a primary care training programme. Educ Prim Care. 2019;30(1):13–21.

van Loon KA, Bonnie LHA, van Dijk N, Scheele F. Benefits of EPAs at risk? The influence of the workplace environment on the uptake of EPAs in EPA-based curricula. Perspect Med Educ. 2021;10(4):200–6.

van Loon KA, Scheele F. Improving graduate medical education through faculty empowerment instead of detailed guidelines. Acad Med. 2021;96(2):173.

Peters S, Bussières A, Depreitere B, Vanholle S, Cristens J, Vermandere M, et al. Facilitating guideline implementation in primary health care practices. J Prim Care Community Health. 2020;11:2150132720916263.

Peters S, Sukumar K, Blanchard S, Ramasamy A, Malinowski J, Ginex P, et al. Trends in guideline implementation: an updated scoping review. Implement Sci. 2022;17(1):50.

Szulewski A, Braund H, Dagnone DJ, McEwen L, Dalgarno N, Schultz KW, et al. The assessment burden in competency-based medical education: how programs are adapting. Acad Med. 2023;98(11):1261–7.

Thaler RH. Nudge, not sludge. Science. 2018;361(6401):431.

Schut S, Driessen E, van Tartwijk J, van der Vleuten C, Heeneman S. Stakes in the eye of the beholder: an international study of learners’ perceptions within programmatic assessment. Med Educ. 2018;52(6):654–63.

Watling CJ, Ginsburg S. Assessment, feedback and the alchemy of learning. Med Educ. 2019;53(1):76–85.

Gaunt A, Patel A, Rusius V, Royle TJ, Markham DH, Pawlikowska T. “Playing the game”: how do surgical trainees seek feedback using workplace-based assessment? Med Educ. 2017;51(9):953–62.

Martin L, Sibbald M, Brandt Vegas D, Russell D, Govaerts M. The impact of entrustment assessments on feedback and learning: trainee perspectives. Med Educ. 2020;54(4):328–36.

Bray MJ, Bradley EB, Martindale JR, Gusic ME. Implementing systematic faculty development to support an EPA-Based program of assessment: strategies, outcomes, and lessons learned. Teach Learn Med. 2021;33(4):434–44.

Lövquist E, Shorten G, Aboulafia A. Virtual reality-based medical training and assessment: the multidisciplinary relationship between clinicians, educators and developers. Med Teach. 2012;34(1):59–64.

Norcini JJ. Peer assessment of competence. Med Educ. 2003;37(6):539–43.

Norcini J, Burch V. Workplace-based assessment as an educational tool: AMEE guide no. 31. Med Teach. 2007;29(9):855–71.

Slattery P, Saeri AK, Bragge P. Research co-design in health: a rapid overview of reviews. Health Res Policy Syst. 2020;18(1):17.

Download references

Acknowledgements

The authors would like to acknowledge the contribution of Mr. Guy Gielis, Mrs. An Stockmans, Mrs. Fran Timmers, and Mrs Karolina Bystram that assisted with coordination of the CCCs. We would also like to thank and acknowledge Prof. dr. Martin Valcke and Dr. Mieke Embo for facilitating this study through the SBO SCAFFOLD project(www.sbo-scaffold.com). Finally, we would like to thank the CCCs members and the trainers and trainees that participated in this study.

This work was supported by the Research Foundation Flanders (FWO) under Grant [S003219N]-SBO SCAFFOLD.

Author information

Authors and affiliations.

Academic Centre for General Practice, Department of Public Health and Primary Care, KU Leuven, Leuven, Belgium

Vasiliki Andreou, Sanne Peters & Birgitte Schoenmakers

School of Health Sciences, Faculty of Medicine, Dentistry and Health Sciences, The University of Melbourne, Melbourne, Australia

Sanne Peters

Department of Cellular and Molecular Medicine, KU Leuven, Leuven, Belgium

Jan Eggermont

Department of Public Health and Primary Care, KU Leuven, Box 7001, Kapucijnenvoer 7, Leuven, 3000, Belgium

Vasiliki Andreou

You can also search for this author in PubMed   Google Scholar

Contributions

All authors (VA, SP, JE, BS) have contributed to designing the study. VA collected the data, led the analysis, and wrote the manuscript. BS analysed the data and critically reviewed the manuscript. SE and JE contributed to critically revising this manuscript. All authors have read and approved the manuscript.

Corresponding author

Correspondence to Vasiliki Andreou.

Ethics declarations

Ethics approval and consent to participate

This study was approved by the Social and Societal Ethics Committee (G-2022-5615-R2(MIN)), and all participants signed an informed consent form prior to participation.

Consent for publication

All the participants gave their consent for publishing their data anonymously.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Andreou, V., Peters, S., Eggermont, J. et al. Co-designing Entrustable Professional Activities in General Practitioner's training: a participatory research study. BMC Med Educ 24, 549 (2024). https://doi.org/10.1186/s12909-024-05530-y


Received: 25 December 2023

Accepted: 07 May 2024

Published: 17 May 2024

DOI: https://doi.org/10.1186/s12909-024-05530-y


Keywords

  • Postgraduate medical education
  • Curriculum design
  • EPA assessment
  • GP Training
  • Workplace-based assessment


Experts explore planning and design principles for quality of life and resilience for older people in long-term residential care

Posted on: 15 May 2024

The Health Research Board (HRB)-funded workshop is open to researchers, policy-makers, providers, design and healthcare professionals, and individuals interested in design and planning of residential long-term care settings in Ireland.


Researchers from the TrinityHaus Research Centre in Trinity's School of Engineering will soon hold their first stakeholder workshop (Friday 7th June) as part of their Health Research Board (HRB)-funded research project "Planning and design for quality of life and resilience in residential long-term care settings for older people in Ireland".

The team, which also includes experts from Age Friendly Ireland, Age Action Ireland, Health Service Executive and The Centre for Excellence in Universal Design (CEUD), aims to provide research findings and recommendations related to the buildings and outdoor spaces (built environment) associated with long-term residential care (LTRC) settings for older people (nursing homes) in Ireland.

This project also involves a number of collaborators, including The London School of Economics and Political Science (LSE), IADNAM, Nursing Homes Ireland, Care Champions, O'Connell Mahon Architects, Maastricht University and Bill Benbow.

The main outcome will be the development of a set of planning and design guidelines for new settings and the adaptation and retrofit of existing settings, all underpinned by Universal Design principles.

The aims of the workshop are:

  • To present and disseminate initial findings on key research activities, as well as gather feedback from stakeholders
  • To initiate discussions on the translation of research findings into guidelines
  • To present the lived-experience findings (application of the daily clock methodology, and outputs from the ‘meaning of home’ workshop series organised and delivered in residential long-term care settings by the current Poetry Ireland Poet in Residence, Anne Tannam)

Understanding the needs and preferences of residents, families, and staff is critical to this research. 

During the workshop, researchers will share the overall engagement strategy implemented, and discuss some of the key outputs framed by expert interviews, online consultations, focus groups, and the adaptation of daily clock exercises to capture people’s perception of the built environment in residential long-term care. 

Stakeholders will have an opportunity to provide feedback on activities, and to experience some of the methods during the workshop.

For more details and to register, visit the Eventbrite page.

Media Contact:

Thomas Deane | Media Relations | [email protected] | +353 1 896 4685

Open access | Published: 18 May 2024

Determinants of appropriate antibiotic and NSAID prescribing in unscheduled outpatient settings in the Veterans Health Administration

  • Michael J. Ward 1,2,3,4,
  • Michael E. Matheny 1,4,5,6,
  • Melissa D. Rubenstein 3,
  • Kemberlee Bonnet 7,
  • Chloe Dagostino 7,
  • David G. Schlundt 7,
  • Shilo Anders 4,8,
  • Thomas Reese 4 &
  • Amanda S. Mixon 1,9

BMC Health Services Research volume 24, Article number: 640 (2024)


Abstract

Background

Despite efforts to enhance the quality of medication prescribing in outpatient settings, potentially inappropriate prescribing remains common, particularly in unscheduled settings where patients can present with infectious and pain-related complaints. Two of the most commonly prescribed medication classes in outpatient settings with frequent rates of potentially inappropriate prescribing are antibiotics and nonsteroidal anti-inflammatory drugs (NSAIDs). In the setting of persistent inappropriate prescribing, we sought to understand a diverse set of perspectives on the determinants of inappropriate prescribing of antibiotics and NSAIDs in the Veterans Health Administration.

Methods

We conducted a qualitative study guided by the Consolidated Framework for Implementation Research and the Theory of Planned Behavior. Semi-structured interviews were conducted with clinicians, stakeholders, and Veterans from March 1, 2021 through December 31, 2021 in unscheduled outpatient settings at the Veterans Affairs Tennessee Valley Healthcare System. Stakeholders included clinical operations leadership and methodological experts. Audio-recorded interviews were transcribed and de-identified. Data coding and analysis were conducted by experienced qualitative methodologists adhering to the Consolidated Criteria for Reporting Qualitative Research (COREQ) guidelines. Analysis was conducted using an iterative inductive/deductive process.

Results

We conducted semi-structured interviews with 66 participants: clinicians (N = 25), stakeholders (N = 24), and Veterans (N = 17). We identified six themes contributing to potentially inappropriate prescribing of antibiotics and NSAIDs: 1) perceived versus actual Veteran expectations about prescribing; 2) the influence of a time-pressured clinical environment on prescribing stewardship; 3) limited clinician knowledge, awareness, and willingness to use evidence-based care; 4) prescriber uncertainty about the Veteran's condition at the time of the clinical encounter; 5) limited communication; and 6) technology barriers of the electronic health record and patient portal.

Conclusions

The diverse perspectives on prescribing underscore the need for interventions that recognize the detrimental impact of high workload on prescribing stewardship and that are designed with the end-user in mind. This study revealed actionable themes that could be addressed to improve guideline-concordant prescribing and reduce patient harm.


Background

Adverse drug events (ADEs) are the most common iatrogenic injury. [1] Efforts to reduce these events have primarily focused on the inpatient setting. However, the emergency department (ED), urgent care, and urgent primary care clinics are desirable targets for interventions to reduce ADEs because approximately 70% of all outpatient encounters occur in one of these settings. [2] Two of the most commonly prescribed drug classes during acute outpatient care visits with frequent rates of potentially inappropriate prescribing are antibiotics and non-steroidal anti-inflammatory drugs (NSAIDs). [3, 4]

An estimated 30% of all outpatient oral antibiotic prescriptions may be unnecessary. [5, 6] The World Health Organization has identified overuse of antibiotics and the resulting antimicrobial resistance as a global threat. [7] The Centers for Disease Control and Prevention (CDC) conservatively estimates that in the US there are nearly 3 million antibiotic-resistant infections causing 48,000 deaths annually. [8] Antibiotics were the second most common source of adverse events, with nearly one ADE resulting in an ED visit for every 100 prescriptions. [9] Inappropriate antibiotic prescriptions (e.g., an antibiotic prescribed for a viral infection) also contribute to resistance and iatrogenic infections such as C. difficile (antibiotic-associated diarrhea) and Methicillin-resistant Staphylococcus aureus (MRSA). [8] NSAID prescriptions, on the other hand, result in an ADE at more than twice the rate of antibiotics (2.2%), [10] are prescribed to patients at an already increased risk of potential ADEs, [4, 11] and frequently interact with other medications. [12] Inappropriate NSAID prescriptions contribute to serious gastrointestinal, [13] renal, [14] and cardiovascular [15, 16] ADEs such as gastrointestinal bleeding, acute kidney injury, and myocardial infarction or heart failure, respectively. Yet the use of NSAIDs is ubiquitous; according to the CDC, between 2011 and 2014, 5% of the US population were prescribed an NSAID and an additional 2% took NSAIDs over the counter. [11]

Interventions to reduce inappropriate antibiotic prescribing commonly take the form of antimicrobial stewardship programs; however, no such national programs exist for NSAIDs, particularly in acute outpatient care settings. There is a substantial body of evidence supporting the effectiveness of such stewardship programs. [17] The CDC recognizes that outpatient programs should consist of four core elements of antimicrobial stewardship: [18] commitment, action for policy and practice, tracking and reporting, and education and expertise. The opportunities to extend antimicrobial stewardship in EDs are vast. Despite this effectiveness, there is a recognized need to understand which implementation strategies work and how to implement multifaceted interventions. [19] Given the unique time-pressured environment of acute outpatient care settings, not all antimicrobial stewardship strategies work in these settings, necessitating the development of approaches tailored to these environments. [19, 20]

Veterans constitute one particularly vulnerable population. With more than 9 million enrollees in the Veterans Health Administration, Veterans who receive care in Veterans Affairs (VA) hospitals and outpatient clinics may be particularly susceptible to ADEs. Older Veterans have greater medical needs than younger patients, given their concomitant medical and mental health conditions as well as cognitive and social issues. Among Veterans seen in VA EDs and Urgent Care Clinics (UCCs), 50% are age 65 and older, [21] nearly three times the rate of non-VA emergency care settings (18%). [22] Inappropriate prescribing in ED and UCC settings is problematic, with inappropriate antibiotic prescribing estimated to be higher than 40%. [23] In a sample of older Veterans discharged from VA ED and UCC settings, NSAIDs were implicated in 77% of drug interactions. [24]

To learn from antimicrobial stewardship programs and apply those lessons to a broader base of prescribing in acute outpatient care settings, it is necessary to understand why potentially inappropriate prescribing remains a problem not only for antibiotics but also for medications (e.g., NSAIDs) that have previously received little stewardship focus. This understanding is essential to develop and implement interventions to reduce iatrogenic harm for vulnerable patients seen in unscheduled settings. In the Veterans Health Administration, we used these two drug classes (antibiotics and NSAIDs) with frequent rates of inappropriate prescribing in unscheduled outpatient care settings to understand a diverse set of perspectives on why potentially inappropriate prescribing continues to occur.

Methods

Selection of participants

Participants were recruited from three groups in outpatient settings representing emergency care, urgent care, and urgent primary care in the VA: 1) clinicians: VA clinicians such as physicians, advanced practice providers, and pharmacists; 2) stakeholders: VA and non-VA clinical operational and clinical content experts, such as local and regional medical directors and national clinical, research, and administrative leadership in emergency care, primary care, and pharmacy, including geriatrics; and 3) Veterans seeking unscheduled care for infectious or pain symptoms.

Clinicians and stakeholders were recruited using email, informational flyers, faculty/staff meetings, national conferences, and snowball sampling, in which existing participants identify additional potential research subjects for recruitment. [25] Snowball sampling is useful for identifying and recruiting participants who may not be readily apparent to investigators and/or are hard to reach. Clinician inclusion criteria were: 1) at least 1 year of VA experience; and 2) ≥ 1 clinical shift in the last 30 days at any VA ED, urgent care, or primary care setting in which unscheduled visits occur. Veterans were recruited in person at the VA by key study personnel. Inclusion criteria were: 1) clinically stable, as determined by the treating clinician; 2) 18 years or older; and 3) seeking care for infectious or pain symptoms in the local VA Tennessee Valley Healthcare System (TVHS). TVHS includes an ED at the Nashville campus with over 30,000 annual visits, an urgent care clinic in Murfreesboro, TN with approximately 15,000 annual visits, and multiple primary care locations throughout the middle Tennessee region. This study was approved by the VA TVHS Institutional Review Board as minimal risk.

Data collection

Semi-structured interview guides (Supplemental Table 1) were developed using the Consolidated Framework for Implementation Research (CFIR) [26] and the Theory of Planned Behavior [27, 28] to understand attitudes and beliefs as they relate to behaviors, and potential determinants of a future intervention. Interview guides were modified and finalized by conducting pilot interviews with three members of each participant group. Guides were tailored to each group of respondents and consisted of questions relating to: 1) determinants of potentially inappropriate prescribing; and 2) integration into practice (Table 1). Clinicians were also asked about knowledge and awareness of evidence-based prescribing practices for antibiotics and NSAIDs. The interviewer asked follow-up questions to elicit clearer and more detailed responses.

Each interview was conducted by a trained interviewer (MDR). Veteran interviews were conducted in person while Veterans waited for clinical care, so as not to disrupt clinical operations. Interviews with clinicians and stakeholders were scheduled virtually. All interviews (including in-person ones) were recorded and transcribed in a manner compliant with VA information security policies using Microsoft Teams (Redmond, WA). The audio-recorded interviews were transcribed and de-identified by a transcriptionist and stored securely behind the VA firewall using Microsoft Teams. Study personnel maintained a recording log on a password-protected server, and each participant was assigned a unique participant ID number. Once 15 interviews were conducted per group, we planned to review the interviews with the study team to discuss content and findings and to decide collectively when thematic saturation was achieved, the point at which no new information was obtained. [29] If saturation was not achieved, we planned to conduct at least 2 additional interviews prior to another group review. We estimated that approximately 20–25 interviews per group would be needed to achieve thematic saturation.
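To make this stopping rule concrete, the short Python sketch below expresses the protocol as a loop. It is purely illustrative: the conduct_interview and review_batch callables, and the idea of representing "new information" as a set of codes, are assumptions made for the example, not part of the study's actual procedure.

    # Illustrative sketch of the saturation stopping rule described above,
    # applied to one participant group: interview an initial batch, review
    # as a team, and stop only when a review round surfaces no new
    # information (here: no unseen codes). All names are hypothetical.

    def interview_until_saturated(conduct_interview, review_batch,
                                  initial_batch=15, step=2):
        transcripts = [conduct_interview() for _ in range(initial_batch)]
        known_codes = set()
        while True:
            new_codes = review_batch(transcripts)   # team review of content
            if new_codes <= known_codes:            # nothing new: saturation
                return transcripts
            known_codes |= new_codes
            # Not saturated: conduct at least 2 more interviews, then re-review.
            transcripts += [conduct_interview() for _ in range(step)]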

Qualitative data coding and analysis were managed by the Vanderbilt University Qualitative Research Core. A hierarchical coding system (Supplemental Table 2) was developed and refined using an iterative inductive/deductive approach [30, 31, 32] guided by a combination of: 1) the Consolidated Framework for Implementation Research (CFIR) [26]; 2) the Theory of Planned Behavior [27, 28]; 3) the interview guide questions; and 4) a preliminary review of the transcripts. Eighteen major categories (Supplemental Table 3) were identified and further divided into subcategories, with some subcategories having additional levels of hierarchical division. Definitions and rules were written for the use of each coding category. The process was iterative in that the coding system was both theoretically informed and derived from the qualitative data. The coding system was finalized after it was piloted by the coders. Data coding and analysis met the Consolidated Criteria for Reporting Qualitative Research (COREQ) guidelines. [33]

Four experienced qualitative coders were trained by independently coding two transcripts from each of the three participant categories. Coding was then compared, and any discrepancies were resolved by reconciliation. After establishing reliability in using the coding system, the coders divided and independently coded the remaining transcripts in sequential order. Each statement was treated as a separate quote and could be assigned up to 21 different codes. Coded transcripts were combined and sorted by code.
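As an illustration of this reliability step (independent double-coding followed by reconciliation), the Python sketch below computes a simple percent agreement over jointly coded quotes. The per-quote code-set layout is an assumed simplification of whatever the team actually recorded, and a chance-corrected statistic such as Cohen's kappa would be a common alternative.

    # Minimal check of inter-coder agreement on double-coded transcripts
    # before coders split the remaining work. The data layout (one code set
    # per quote, per coder) is an assumed simplification.

    def percent_agreement(coder_a, coder_b):
        """Fraction of quotes on which both coders assigned identical code sets."""
        matches = sum(a == b for a, b in zip(coder_a, coder_b))
        return matches / len(coder_a)

    a = [{"barrier", "workload"}, {"expectations"}, {"feedback"}]
    b = [{"barrier", "workload"}, {"expectations", "barrier"}, {"feedback"}]
    print(f"Agreement: {percent_agreement(a, b):.0%}")  # 67%: reconcile, retrain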

Following thematic saturation, the frequency of each code was calculated to understand the distribution of quotes. Quotes coded as barriers were then cross-referenced to understand potential determinants of inappropriate prescribing. A thematic analysis of the barriers was conducted and presented in an iterative process with the research team of qualitative methodologists and clinicians to understand the nuances and to refine the themes and subthemes from the coded transcripts. Transcripts, quotations, and codes were managed using Microsoft Excel and SPSS version 28.0.
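The study managed this bookkeeping in Microsoft Excel and SPSS; the Python sketch below is an illustrative equivalent of the two steps just described, tallying how often each code was applied and pulling out the quotes that also carry a barrier code. The field names and the "barrier" label are assumptions made for the example.

    # Tally code frequencies, then cross-reference quotes coded as barriers,
    # mirroring the analysis steps described above. Field names hypothetical.
    from collections import Counter

    quotes = [  # each statement could carry up to 21 codes in the study
        {"text": "get a patient out of your office", "codes": {"time_pressure", "barrier"}},
        {"text": "patients have strong feelings", "codes": {"expectations"}},
        {"text": "the more the pop ups, the more they get ignored",
         "codes": {"technology", "barrier"}},
    ]

    frequency = Counter(code for q in quotes for code in q["codes"])
    print(frequency.most_common())          # distribution of quotes per code

    barrier_quotes = [q for q in quotes if "barrier" in q["codes"]]
    for q in barrier_quotes:                # input to the thematic analysis
        print(sorted(q["codes"] - {"barrier"}), "-", q["text"])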

Results

We approached 132 individuals, and 66 (50%) agreed to be interviewed. Participants included 25 clinicians, 24 stakeholders, and 17 Veterans, whose demographic characteristics are presented in Table 2. The clinicians, drawn from 14 VA facilities throughout the US, comprised 20 physicians and five advanced practice providers. Of the clinicians, 21 (84%) worked in either an ED or urgent care, while the remainder practiced in primary care. The 24 stakeholders included 13 (54%) clinical service chiefs or deputy chiefs (including medical directors), five (21%) national directors, and six (25%) experts in clinical content and methodology. The 17 Veterans interviewed included 15 (88%) who were seen for pain complaints.

Results are organized by the six thematic categories, with several subthemes in each category. Themes and subthemes are presented in Table 3 and visually represented in Fig. 1. The six themes were: 1) perceived versus actual Veteran expectations about prescribing; 2) the influence of a time-pressured clinical environment on prescribing stewardship; 3) limited clinician knowledge, awareness, and willingness to use evidence-based care; 4) uncertainties about the Veteran's condition at the time of the clinical encounter; 5) limited communication; and 6) technology barriers.

Fig. 1 Visual representation of themes and subthemes from 66 clinician, stakeholder, and Veteran interviews

Theme 1: Perception that Veterans routinely expect a medication from their visit, despite clinical inappropriateness

According to clinicians, Veterans frequently expect to receive a prescription even when this decision conflicts with good clinical practice.

Certainly lots of people would say you know if you feel like you’re up against some strong expectations from the patients or caregivers or families around the utility of an antibiotic when it’s probably not indicated…In the emergency department the bias is to act and assume the worst and assume like the worst for the clinical trajectory for the patient rather than the reverse. [Clinician 49, Physician, ED]

Stakeholders further stated that patient prescription expectations are quite influential and are likely shaped by Veterans' prior experiences.

I think the patients, particularly for antibiotics, have strong feelings about whether they should or shouldn’t get something prescribed. [Stakeholder 34]

You know I think the biggest challenge, I think, is adjusting patients’ expectations because you know they got better the last time they were doing an antibiotic. [Stakeholder 64]

Patient satisfaction and clinician workload may also influence the clinician’s prescription decision.

We have a lot of patients that come in with back pain or knee pain or something. We’ll get an x-ray and see there’s nothing actually wrong physically that can be identified on x-ray at least and you have to do something. Otherwise, patient satisfaction will dip, and patients leave angry. [Clinician 28, Physician, urgent care clinic]

For some clinicians it’s just easier to prescribe an antibiotic when they know that’s the patient’s expectation and it shortens their in-room discussion and evaluation. [Clinician 55, Physician, ED]

Despite clinician perception, Veterans communicated that they did not necessarily expect a prescription and were instead focused on the clinical interaction and the clinician’s decision.

I’m not sure if they’ll give me [unintelligible] a prescription or what they’ll do. I don’t care as long as they stop the pain. [Patient 40, urgent care clinic]

I don’t expect to [receive a prescription], but I mean whatever the doctor finds is wrong with me I will follow what he says. [Patient 31, ED]

Theme 2: Hectic clinical environments and unique practice conditions in unscheduled settings provide little time to focus on prescribing practices

Clinicians and stakeholders reported that the time-constrained clinical environment and need to move onto the next patient were major challenges to prescribing stewardship.

The number one reason is to get a patient out of your office or exam bay and move on to the next one. [Stakeholder 28]

It takes a lot of time and you have to be very patient and understanding. So, you end up having to put a fair bit of emotional investment and intelligence into an encounter to not prescribe. [Stakeholder 1]

Stakeholders also noted that unique shift conditions and clinician perceptions that their patients were “different” might influence prescribing practices.

A common pushback was ‘well my patients are different.’ [Stakeholder 4]

Providers who worked different types of shifts, so if you happened to work on a Monday when the clinics were open and had more adults from the clinics you were more likely to prescribe antibiotics than if you worked over night and had fewer patients. Providers who worked primarily holidays or your Friday prescribing pattern may be very different if you could get them into a primary care provider the next day. [Stakeholder 22]

Clinicians also reported that historical practices in the clinical environment may contribute to inappropriate prescribing.

I came from working in the [outpatient] Clinic as a new grad and they’re very strict about prescribing only according to evidence-based practice. And then when I came here things are with other colleagues are a little more loose with that type of thing. It can be difficult because you start to adopt that practice to. [Clinician 61, Nurse Practitioner, ED]

Theme 3: Limited clinician knowledge, awareness, and willingness to use evidence-based care

Stakeholders felt that clinicians lacked knowledge about appropriate prescribing of NSAIDs and antibiotics.

Sometimes errors are a lack of knowledge or awareness of the need to maybe specifically dose for let’s say impaired kidney function or awareness of current up to date current antibiotic resistance patterns in the location that might inform a more tailored antibiotic choice for a given condition. [Stakeholder 37]

NSAIDs are very commonly used in the emergency department for patients of all ages…the ED clinician is simply not being aware that for specific populations this is not recommended and again just doing routine practice for patients of all ages and not realizing that for older patients you actually probably should not be using NSAIDs. [Stakeholder 40]

Some clinicians may be unwilling to change their prescribing practices due to outright resistance, entrenched habits, or lack of interest in doing so.

It sounds silly but there’s always some opposition to people being mandated to do something. But there are some people who would look and go ‘okay we already have a handle on that so why do we need something else? I know who prescribes inappropriately and who doesn’t. Is this a requirement, am I evaluated on it? That would come from supervisors. Is this one more thing on my annual review?’ [Stakeholder 28]

If people have entrenched habits that are difficult to change and are physicians are very individualistic people who think that they are right more often than the non-physician because of their expensive training and perception of professionalism. [Stakeholder 4]

Theme 4: Uncertainty about whether an adverse event will occur

Clinicians cited the challenge of understanding the entirety of a Veteran's condition, potential drug-drug interactions, and existing comorbidities when judging whether an NSAID prescription might result in an adverse event.

It’s oftentimes a judgement call if someone has renal function that’s right at the precipice of being too poor to merit getting NSAIDs that may potentially cause issues. [Clinician 43, Physician, inpatient and urgent care]

It depends on what the harm is. So, for instance, you can’t always predict allergic reactions. Harm from the non-steroidals would be more if you didn’t pre-identify risk factors for harm. So, they have ulcer disease, they have kidney problems where a non-steroidal would not be appropriate for that patient. Or potential for a drug-drug interaction between that non-steroid and another medication in particular. [Clinician 16, Physician, ED]

Rather than adverse events from the medication itself, stakeholders identified the uncertainty clinicians experience about whether a Veteran may suffer an adverse event from an infection if nothing is done. This uncertainty contributes to the prescription of an antibiotic.

My experience in working with providers at the VA over the years is that they worry more about the consequences of not treating an infection than about the consequences of the antibiotic itself. [Stakeholder 19]

Sometimes folks like to practice conservatively and they’ll say even though I didn’t really see any hard evidence of a bacterial infection, the patient’s older and sicker and they didn’t want to risk it. [Stakeholder 16]

Theme 5: Limited communication during and after the clinical encounter

The role and type of communication about prescribing depended upon the respondent. Clinicians identified inadequate communication and coordination with the Veteran’s primary care physician during the clinical encounter.

I would like to have a little more communication with the primary doctors. They don’t seem to be super interested in talking to anyone in the emergency room about their patients… A lot of times you don’t get an answer from the primary doctor or you get I’m busy in clinic. You can just pick something or just do what you think is right. [Clinician 25, Physician, ED]

Alternatively, stakeholders identified the lack of post-encounter patient outcome and clinical performance feedback as a potential barrier.

Physicians tend to think that they are doing their best for every individual patient and without getting patient by patient feedback there is a strong cognitive bias to think well there must have been some exception and reason that I did it in this setting. [Stakeholder 34]

It’s really more their own awareness of like their clinical performance and how they’re doing. [Stakeholder 40]

Veterans, however, prioritized communication during the clinical encounter. They wanted clear, informative communication with the clinician, including a rationale for the medication choice, medication-specific details, and an opportunity to ask questions.

I expect him to tell me why I’m taking it, what it should do, and probably the side effects. [Patient 25, ED]

I’d like to have a better description of how to take it because I won’t remember all the time and sometimes what they put on the bottle is not quite as clear. [Patient 22, ED]

Veterans reported their desire for a simple way to learn about medication information. They provided feedback on the current approaches to educational materials about prescriptions.

Probably most pamphlets that people get they’re not going to pay attention to them. Websites can be overwhelming. [Patient 3, ED]

Posters can be offsetting. If you’re sick, you’re not going to read them…if you’re sick you may glance at that poster and disregard it. So, you’re not really going to see it but if you give them something in the hand people will tend to look at it because it’s in their hand. [Patient 19, ED]

It would be nice if labels or something just told me what I needed to know. You know take this exactly when and reminds me here’s why you’re taking it for and just real clear and not small letters. [Patient 7, ED]

Theme 6: Technology barriers limited the usefulness of clinical decision support for order checking and patient communication tools

Following the decision to prescribe a medication, clinicians complained that electronic health record pop-ups with clinical decision support warnings for potential safety concerns (e.g., drug-drug interactions) were both excessive and not useful in a busy clinical environment.

The more the pop ups, the more they get ignored. So, it’s finding that sweet spot right where you’re not constantly having to click out of something because you’re so busy. Particularly in our clinical setting where we have very limited amount of time to read the little monograph. Most of the time you click ‘no’ and off you go. [Clinician 16, Physician, ED]

Some of these mechanisms like the EMR [electronic medical record] or pop-up decision-making windows really limit your time. If you know the guidelines appropriately and doing the right thing, even if you’re doing the right thing it takes you a long time to get through something. [Clinician 19, Physician, Primary care clinic]

Building on Theme 5, patients reported that the VA patient portal (MyHealtheVet) was challenging to use for post-encounter communication with their primary care physician and for reviewing the medications they were prescribed.

I’ve got to get help to get onto MyHealtheVet but I would probably like to try and use that, but I haven’t been on it in quite some time. [Patient 22, ED]

I tried it [MyHealtheVet] once and it’s just too complicated so I’m not going to deal with it. [Patient 37, Urgent care]

Discussion

This work examined attitudes and perceptions of barriers to appropriate prescribing of antibiotics and NSAIDs in unscheduled outpatient care settings in the Veterans Health Administration. Expanding on prior qualitative work on antimicrobial stewardship programs, we also examined NSAID prescribing, a medication class that has received little prescribing stewardship attention. This work seeks to advance the understanding of fundamental problems underlying prescribing stewardship in order to facilitate interventions that improve not only the decision to prescribe antibiotics and NSAIDs but also the safety checks once a decision to prescribe is made. Specifically, we identified six themes during these interviews: perceived versus actual Veteran expectations about prescribing; the influence of a time-pressured clinical environment on prescribing stewardship; limited clinician knowledge, awareness, and willingness to use evidence-based care; uncertainties about the Veteran's condition at the time of the clinical encounter; limited communication; and technology barriers.

Sensitive to patient expectations, clinicians believed that Veterans would be dissatisfied if they did not receive an antibiotic prescription, [34] even though most patients presenting to the ED for upper respiratory tract infections do not expect antibiotics. [35] Recent work by Staub et al. found that among patients with respiratory tract infections, receipt of an antibiotic was not independently associated with improved satisfaction; [36] rather, receipt of antibiotics had to match the patient's expectations to affect satisfaction, and the authors recommended that clinicians communicate with their patients about prescribing expectations. This finding complements our results, and communication about expectations is likely just as important for NSAID prescribing.

A commitment to stewardship and modification of clinician behavior may be compromised by the time-pressured clinical environment, numerous potential drug interactions, the comorbidities of a vulnerable Veteran population, and normative practices. The decision to prescribe medications such as antibiotics is a complex clinical decision and may be influenced by both clinical and non-clinical factors. [34, 37, 38] ED crowding, which occurs when the demand for services exceeds a system's ability to provide care, [39] is a well-recognized manifestation of a chaotic clinical environment and is associated with detrimental effects on the hospital system and patient outcomes. [40, 41] Congestion and wait times are unlikely to improve, as the COVID-19 pandemic has exacerbated the already existing crowding and boarding crisis in EDs. [42, 43]

Another theme was uncertainty in anticipating adverse events, exacerbated by the lack of a feedback loop. Feedback on clinical care processes and patient outcomes is uncommonly provided in emergency care settings, [44] yet it may provide an opportunity to change clinician behavior, particularly for antimicrobial stewardship. [45] However, the frequent use of ineffective feedback strategies [46] compromises the ability to implement effective feedback interventions; feedback must be specific [47] and address the intention-to-action gap [48] by including co-interventions that address recipient characteristics (i.e., beliefs and capabilities) and context to maximize impact. Without these, feedback may be ineffective.

An additional barrier identified in this work is the limited communication with primary care following discharge. A 2017 National Quality Forum report on ED care transitions [49] recommended that EDs and their supporting hospital systems expand infrastructure and enhance health information technology to support care transitions, as Veterans may not understand discharge instructions, may not receive post-ED or urgent care follow-up, [50, 51, 52] or may not receive a newly prescribed medication. [24] Mechanisms already exist to communicate between the ED and primary care teams, such as notifications when a Veteran presents to the ED and when an emergency clinician copies a primary care physician on a note, but these mechanisms are insufficient to address care transition gaps and vary in best-practice use. To address this variability, the VA ED PACT Tool was developed using best practices (standardized processes, "closed-loop" communication, embedding into workflow) to facilitate and standardize communication between VA EDs and follow-up care clinicians. [53] While the ED PACT Tool is implemented at the Greater Los Angeles VA and can create a care coordination order upon ED discharge, it is not yet widely adopted throughout the VA.

In the final theme about technology barriers, once the decision has been made to prescribe a medication, the electronic tools that are key components of existing stewardship interventions designed to curtail potentially inappropriate prescriptions may be compromised by their lack of usability. For example, clinician and stakeholder respondents described how usability concerns with electronic health record clinical decision support tools were exacerbated in a time-pressured clinical environment. Clinical decision support is an effective tool to improve healthcare process measures in a diverse group of clinical environments; [54] however, usability remains a barrier when alerts must be frequently overridden. [55, 56] Alert fatigue, as expressed in our interviews about order checking and recognized within the VA's EHR, [57, 58] may contribute to excessive overrides that reduce the benefit of clinical decision support. [56, 59] Notably, there was little discussion about the decision to initiate appropriate prescriptions, which is a key action of the CDC's outpatient antibiotic stewardship campaign. [18] Thus, a potentially more effective, albeit challenging, approach is to "nudge" clinicians towards appropriate prescribing and away from the initial decision to prescribe (e.g., inappropriate antibiotic prescribing for viral upper respiratory tract infections), either with default order sets for symptom management or with reminders about potential contraindications for specific indications (e.g., high-risk comorbidities); a toy sketch of the latter follows below. Beyond EHR-based solutions that might change clinician behavior, the CDC's outpatient antibiotic stewardship program provides a framework to change the normative practices around inappropriate prescribing, comprising commitment to appropriate prescribing, action for policy and change, tracking and reporting, and education and expertise. [18]
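As a toy illustration of that "nudge" idea, the Python sketch below surfaces specific, patient-level reasons to reconsider an NSAID at the moment the order is started, rather than as a generic post-hoc pop-up. The risk factors, thresholds, and field names are illustrative assumptions informed by the Theme 4 quotes above; they are not VA clinical rules or an actual EHR API.

    # Toy sketch of nudging at order entry: flag specific, patient-level
    # reasons to reconsider an NSAID before the order is placed. All
    # thresholds and field names are illustrative, not clinical guidance.

    def nsaid_cautions(patient):
        reasons = []
        if patient.get("egfr", 100) < 30:
            reasons.append("impaired kidney function (eGFR < 30)")
        if patient.get("ulcer_disease", False):
            reasons.append("history of ulcer disease")
        if "anticoagulant" in patient.get("active_meds", []):
            reasons.append("interaction with an active anticoagulant")
        return reasons

    patient = {"egfr": 24, "ulcer_disease": False, "active_meds": ["anticoagulant"]}
    for reason in nsaid_cautions(patient):
        print("Consider an alternative to an NSAID:", reason)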

Patients face a parallel technology barrier in patient-facing electronic tools such as the VA's MyHealtheVet portal, which was developed to enhance patient communication following care transitions and to allow Veterans to review their medications and communicate with their primary care clinical team. Patient portals can be an effective tool for medication adherence [60] and offer promise for patient education [61] following a clinical encounter. However, they are similarly limited by usability concerns, which represent an adoption barrier to broader Veteran use after unscheduled outpatient care visits, [62] particularly in an older patient population.

These interviews further underscored that the poor usability of clinical decision support for order checking arises from ineffective design and is a key barrier preventing health information technology from reaching its promise of improving patient safety. [63] A common and recognized reason for these design challenges is the failure to place the user (i.e., the acute care clinician) at the center of the design process, resulting in underutilization, workarounds, [64] and unintended consequences, [65] all of which diminish patient safety practices and fail to change clinician behavior (i.e., prescribing). Complex adaptive systems work best when the relative strengths of humans (e.g., context sensitivity, situation specificity) are properly integrated with the information processing power of computerized systems. [66] Integrating user-centered design into technology development therefore represents an opportunity to build more clinician- and patient-centric systems of care and to advance prescribing stewardship interventions that have previously lacked broad adoption. As antimicrobial and other prescribing stewardship efforts focus on time-pressured environments where usability is essential to adoption, taking a user-centered design approach, both to the development of electronic tools and to the other identified prescribing barriers, is a promising way to enhance the quality of prescribing.

Limitations

The study findings should be considered in light of its limitations. First, the setting for this work was the Veterans Health Administration, the largest integrated health system in the US, and we focused on the stewardship of two drug classes among the many prescribed in these settings; our findings may not generalize to other settings or drug classes. Second, while clinician and stakeholder perspectives included diverse, national representation, the Veterans interviewed were local to the Tennessee Valley Healthcare System. Given the concurrent COVID-19 pandemic at the time of enrollment, most of the Veterans were seen for pain-related complaints, and only two presented with infection-related complaints; however, we asked all Veterans about antibiotic prescribing as well. Clinician and stakeholder narratives may not completely reflect their practice patterns, as their responses could be influenced by social desirability bias. Third, responses may be subject to recall bias, which may have influenced the data collected. Finally, the themes and subthemes identified may overlap and interact. While we used an iterative process to identify discrete themes and subthemes, prescription decisions represent a complex decision process influenced by numerous patient and contextual factors, and the themes may not be completely independent.

Conclusions

Despite numerous interventions to improve the quality of prescribing, appropriate prescription of antibiotics and NSAIDs in unscheduled outpatient care settings remains a challenge. In the Veterans Health Administration, this study found that challenges to high-quality prescribing include perceived Veteran expectations about receipt of medications, a hectic clinical environment that deprioritizes stewardship, limited clinician knowledge, awareness, and willingness to use evidence-based care, uncertainty about the potential for adverse events, limited communication, and technology barriers. Findings from these interviews suggest that interventions should account for the detrimental impact of high workload on prescribing stewardship, clinician workflow, and the initial decision to prescribe medications, and should incorporate end-users into the intervention design process. Doing so is a promising approach to increase the adoption of high-quality prescribing practices and to improve the quality of, and patient outcomes from, NSAID and antibiotic prescribing.

Availability of data and materials

De-identified datasets used and/or analysed during the current study will be made available from the corresponding author on reasonable request.

Leape LL, Brennan TA, Laird N, et al. The nature of adverse events in hospitalized patients. Results of the Harvard Medical Practice Study II. N Engl J Med. 1991;324(6):377–384.


Pitts SR, Carrier ER, Rich EC, Kellermann AL. Where Americans get acute care: increasingly, it’s not at their doctor’s office. Health Aff (Millwood). 2010;29(9):1620–9.


Palms DL, Hicks LA, Bartoces M, et al. Comparison of antibiotic prescribing in retail clinics, urgent care centers, emergency departments, and traditional ambulatory care settings in the United States. JAMA Intern Med. 2018;178(9):1267–9.


Davis JS, Lee HY, Kim J, et al. Use of non-steroidal anti-inflammatory drugs in US adults: changes over time and by demographic. Open Heart. 2017;4(1):e000550.

Fleming-Dutra KE, Hersh AL, Shapiro DJ, et al. Prevalence of inappropriate antibiotic prescriptions among US ambulatory care visits, 2010–2011. JAMA. 2016;315(17):1864–73.

Shively NR, Buehrle DJ, Clancy CJ, Decker BK. Prevalence of inappropriate antibiotic prescribing in primary care clinics within a Veterans Affairs health care system. Antimicrob Agents Chemother. 2018;62(8):e00337-18. https://doi.org/10.1128/AAC.00337-18.

World Health Organization. Global antimicrobial resistance and use surveillance system (GLASS) report: 2022. 2022.

Centers for Disease Control and Prevention. COVID-19: U.S. Impact on Antimicrobial Resistance, Special Report 2022. Atlanta: U.S. Department of Health and Human Services, CDC; 2022.


Shehab N, Lovegrove MC, Geller AI, Rose KO, Weidle NJ, Budnitz DS. US emergency department visits for outpatient adverse drug events, 2013–2014. JAMA. 2016;316(20):2115–25.

Fassio V, Aspinall SL, Zhao X, et al. Trends in opioid and nonsteroidal anti-inflammatory use and adverse events. Am J Manag Care. 2018;24(3):e61–72.


Centers for Disease Control and Prevention. Chronic Kidney Disease Surveillance System—United States. http://www.cdc.gov/ckd. Accessed 21 March 2023.

Cahir C, Fahey T, Teeling M, Teljeur C, Feely J, Bennett K. Potentially inappropriate prescribing and cost outcomes for older people: a national population study. Br J Clin Pharmacol. 2010;69(5):543–52.

Gabriel SE, Jaakkimainen L, Bombardier C. Risk for Serious Gastrointestinal Complications Related to Use of Nonsteroidal Antiinflammatory Drugs - a Metaanalysis. Ann Intern Med. 1991;115(10):787–96.

Zhang X, Donnan PT, Bell S, Guthrie B. Non-steroidal anti-inflammatory drug induced acute kidney injury in the community dwelling general population and people with chronic kidney disease: systematic review and meta-analysis. BMC Nephrol. 2017;18(1):256.

McGettigan P, Henry D. Cardiovascular risk with non-steroidal anti-inflammatory drugs: systematic review of population-based controlled observational studies. PLoS Med. 2011;8(9): e1001098.


Holt A, Strange JE, Nouhravesh N, et al. Heart Failure Following Anti-Inflammatory Medications in Patients With Type 2 Diabetes Mellitus. J Am Coll Cardiol. 2023;81(15):1459–70.

Davey P, Marwick CA, Scott CL, et al. Interventions to improve antibiotic prescribing practices for hospital inpatients. Cochrane Database Syst Rev. 2017;2(2):CD003543.

Sanchez GV, Fleming-Dutra KE, Roberts RM, Hicks LA. Core Elements of Outpatient Antibiotic Stewardship. MMWR Recomm Rep. 2016;65(6):1–12.

May L, Martin Quiros A, Ten Oever J, Hoogerwerf J, Schoffelen T, Schouten J. Antimicrobial stewardship in the emergency department: characteristics and evidence for effectiveness of interventions. Clin Microbiol Infect. 2021;27(2):204–9.

May L, Cosgrove S, L'Archeveque M, et al. A call to action for antimicrobial stewardship in the emergency department: approaches and strategies. Ann Emerg Med. 2013;62(1):69–77.e2.

Veterans Health Administration Emergency Medicine Management Tool. EDIS GeriatricsAgeReport v3.

Cairns C, Kang K, Santo L. National Hospital Ambulatory Medical Care Survey: 2020 emergency department summary tables. https://www.cdc.gov/nchs/data/nhamcs/web_tables/2020-nhamcs-ed-web-tables-508.pdf. Accessed 20 Dec 2022.

Lowery JL, Alexander B, Nair R, Heintz BH, Livorsi DJ. Evaluation of antibiotic prescribing in emergency departments and urgent care centers across the Veterans’ Health Administration. Infect Control Hosp Epidemiol. 2021;42(6):694–701.

Hastings SN, Sloane RJ, Goldberg KC, Oddone EZ, Schmader KE. The quality of pharmacotherapy in older veterans discharged from the emergency department or urgent care clinic. J Am Geriatr Soc. 2007;55(9):1339–48.

Goodman LA. Snowball sampling. Ann Math Stat. 1961;32(1):148–70.

Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.

Ajzen I. The theory of planned behavior. Organ Behav Hum Decis Process. 1991;50(2):179–211.


Ajzen I. The theory of planned behaviour: reactions and reflections. Psychol Health. 2011;26(9):1113–27. https://doi.org/10.1080/08870446.2011.613995.

Morse JM. The significance of saturation. Qual Health Res. 1995;5(2):147–9.

Azungah T. Qualitative research: deductive and inductive approaches to data analysis. Qual Res J. 2018;18(4):383–400.

Tjora A. Qualitative research as stepwise-deductive induction. Routledge; 2018.

Fereday J, Muir-Cochrane E. Demonstrating rigor using thematic analysis: A hybrid approach of inductive and deductive coding and theme development. Int J Qual Methods. 2006;5(1):80–92.

Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.

Patel A, Pfoh ER, Misra Hebert AD, et al. Attitudes of High Versus Low Antibiotic Prescribers in the Management of Upper Respiratory Tract Infections: a Mixed Methods Study. J Gen Intern Med. 2020;35(4):1182–8.

May L, Gudger G, Armstrong P, et al. Multisite exploration of clinical decision making for antibiotic use by emergency medicine providers using quantitative and qualitative methods. Infect Control Hosp Epidemiol. 2014;35(9):1114–25.

Staub MB, Pellegrino R, Gettler E, et al. Association of antibiotics with veteran visit satisfaction and antibiotic expectations for upper respiratory tract infections. Antimicrob Steward Healthc Epidemiol. 2022;2(1): e100.

Schroeck JL, Ruh CA, Sellick JA Jr, Ott MC, Mattappallil A, Mergenhagen KA. Factors associated with antibiotic misuse in outpatient treatment for upper respiratory tract infections. Antimicrob Agents Chemother. 2015;59(7):3848–52.

Hruza HR, Velasquez T, Madaras-Kelly KJ, Fleming-Dutra KE, Samore MH, Butler JM. Evaluation of clinicians’ knowledge, attitudes, and planned behaviors related to an intervention to improve acute respiratory infection management. Infect Control Hosp Epidemiol. 2020;41(6):672–9.

American College of Emergency Physicians Policy Statement. Crowding. https://www.acep.org/globalassets/new-pdfs/policy-statements/crowding.pdf . Published 2019. Accessed 11 Oct 2023.

Bernstein SL, Aronsky D, Duseja R, et al. The effect of emergency department crowding on clinically oriented outcomes. Acad Emerg Med. 2009;16(1):1–10.

Rasouli HR, Esfahani AA, Nobakht M, et al. Outcomes of crowding in emergency departments; a systematic review. Arch Acad Emerg Med. 2019;7(1):e52.


Janke AT, Melnick ER, Venkatesh AK. Monthly Rates of Patients Who Left Before Accessing Care in US Emergency Departments, 2017–2021. JAMA Netw Open. 2022;5(9): e2233708.

Janke AT, Melnick ER, Venkatesh AK. Hospital Occupancy and Emergency Department Boarding During the COVID-19 Pandemic. JAMA Netw Open. 2022;5(9): e2233964.

Lavoie CF, Plint AC, Clifford TJ, Gaboury I. “I never hear what happens, even if they die”: a survey of emergency physicians about outcome feedback. CJEM. 2009;11(6):523–8.

Ivers N, Jamtvedt G, Flottorp S, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012;(6):CD000259. https://doi.org/10.1002/14651858.CD000259.pub3 .

Hysong SJ, SoRelle R, Hughes AM. Prevalence of Effective Audit-and-Feedback Practices in Primary Care Settings: A Qualitative Examination Within Veterans Health Administration. Hum Factors. 2022;64(1):99–108.

Presseau J, McCleary N, Lorencatto F, Patey AM, Grimshaw JM, Francis JJ. Action, actor, context, target, time (AACTT): a framework for specifying behaviour. Implement Sci. 2019;14(1):102.

Desveaux L, Ivers NM, Devotta K, Ramji N, Weyman K, Kiran T. Unpacking the intention to action gap: a qualitative study understanding how physicians engage with audit and feedback. Implement Sci. 2021;16(1):19.

National Quality Forum. Emergency Department Transitions of Care: A Quality Measurement Framework—Final Report: DHHS contract HHSM‐500–2012–000091, Task Order HHSM‐500‐T0025. Washington, DC: National Quality Forum; 2017.

Kyriacou DN, Handel D, Stein AC, Nelson RR. Brief report: factors affecting outpatient follow-up compliance of emergency department patients. J Gen Intern Med. 2005;20(10):938–42.

Vukmir RB, Kremen R, Ellis GL, DeHart DA, Plewa MC, Menegazzi J. Compliance with emergency department referral: the effect of computerized discharge instructions. Ann Emerg Med. 1993;22(5):819–23.

Engel KG, Heisler M, Smith DM, Robinson CH, Forman JH, Ubel PA. Patient comprehension of emergency department care and instructions: are patients aware of when they do not understand? Ann Emerg Med. 2009;53(4):454–61.e15.

Cordasco KM, Saifu HN, Song HS, et al. The ED-PACT Tool Initiative: Communicating Veterans’ Care Needs After Emergency Department Visits. J Healthc Qual. 2020;42(3):157–65.

Bright TJ, Wong A, Dhurjati R, et al. Effect of clinical decision-support systems: a systematic review. Ann Intern Med. 2012;157(1):29–43.

Weingart SN, Toth M, Sands DZ, Aronson MD, Davis RB, Phillips RS. Physicians’ decisions to override computerized drug alerts in primary care. Arch Intern Med. 2003;163(21):2625–31.

van der Sijs H, Aarts J, Vulto A, Berg M. Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc. 2006;13(2):138–47.

Shah T, Patel-Teague S, Kroupa L, Meyer AND, Singh H. Impact of a national QI programme on reducing electronic health record notifications to clinicians. BMJ Qual Saf. 2019;28(1):10–4.

Lin CP, Payne TH, Nichol WP, Hoey PJ, Anderson CL, Gennari JH. Evaluating clinical decision support systems: monitoring CPOE order check override rates in the Department of Veterans Affairs’ Computerized Patient Record System. J Am Med Inform Assoc. 2008;15(5):620–6.

Middleton B, Bloomrosen M, Dente MA, et al. Enhancing patient safety and quality of care by improving the usability of electronic health record systems: recommendations from AMIA. J Am Med Inform Assoc. 2013;20(e1):e2-8.

Han HR, Gleason KT, Sun CA, et al. Using Patient Portals to Improve Patient Outcomes: Systematic Review. JMIR Hum Factors. 2019;6(4): e15038.

Johnson AM, Brimhall AS, Johnson ET, et al. A systematic review of the effectiveness of patient education through patient portals. JAMIA Open. 2023;6(1):ooac085.

Lazard AJ, Watkins I, Mackert MS, Xie B, Stephens KK, Shalev H. Design simplicity influences patient portal use: the role of aesthetic evaluations for technology acceptance. J Am Med Inform Assoc. 2016;23(e1):e157-161.

IOM. Health IT and patient safety: building safer systems for better care. Washington, DC: National Academies Press; 2012.

Koppel R, Wetterneck T, Telles JL, Karsh BT. Workarounds to barcode medication administration systems: their occurrences, causes, and threats to patient safety. J Am Med Inform Assoc. 2008;15(4):408–23.

Ash JS, Sittig DF, Poon EG, Guappone K, Campbell E, Dykstra RH. The extent and importance of unintended consequences related to computerized provider order entry. J Am Med Inform Assoc. 2007;14(4):415–23.

Hollnagel E, Woods D. Joint Cognitive Systems: Foundations of Cognitive Systems Engineering. Boca Raton: CRC Press; 2006.


Acknowledgements

This material is based upon work supported by the Department of Veterans Affairs, Veterans Health Administration, Office of Research and Development, Health Services Research and Development (I01HX003057). The content is solely the responsibility of the authors and does not necessarily represent the official views of the VA.

Author information

Authors and Affiliations

Geriatric Research, Education, and Clinical Center (GRECC), VA Tennessee Valley Healthcare System, 2525 West End Avenue, Ste. 1430, Nashville, TN, 37203, USA

Michael J. Ward, Michael E. Matheny & Amanda S. Mixon

Medicine Service, Tennessee Valley Healthcare System, Nashville, TN, USA

Michael J. Ward

Department of Emergency Medicine, Vanderbilt University Medical Center, Nashville, TN, USA

Michael J. Ward & Melissa D. Rubenstein

Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN, USA

Michael J. Ward, Michael E. Matheny, Shilo Anders & Thomas Reese

Department of Biostatistics, Vanderbilt University Medical Center, Nashville, TN, USA

Michael E. Matheny

Division of General Internal Medicine & Public Health, Vanderbilt University Medical Center, Nashville, TN, USA

Department of Psychology, Vanderbilt University, Nashville, TN, USA

Kemberlee Bonnet, Chloe Dagostino & David G. Schlundt

Center for Research and Innovation in Systems Safety, Vanderbilt University Medical Center, Nashville, TN, USA

Shilo Anders

Section of Hospital Medicine, Vanderbilt University Medical Center, Nashville, TN, USA

Amanda S. Mixon


Contributions

Conceptualization: MJW, ASM, MEM, DS, SA. Methodology: MJW, ASM, MEM, DS, KB, SA, TR. Formal analysis: KB, DS, CD, MJW. Investigation: MJW, MDR, DS. Resources: MJW, MEM. Writing—Original Draft Preparation: MJW, ASM, KB, MDR. Writing—Review & Editing: All investigators. Supervision: MJW, ASM, MEM. Funding acquisition: MJW, MEM.

Corresponding author

Correspondence to Michael J. Ward.

Ethics declarations

Ethics approval and consent to participate

This study was approved by the VA Tennessee Valley Healthcare System Institutional Review Board as minimal risk (#1573619). A waiver of signed informed consent was approved, and verbal consent was obtained from each subject prior to the interviews. The IRB determined that all requirements for human subjects research set forth in 38 CFR 16.111 were satisfied. All methods were carried out according to the relevant guidelines and regulations.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Ward, M.J., Matheny, M.E., Rubenstein, M.D. et al. Determinants of appropriate antibiotic and NSAID prescribing in unscheduled outpatient settings in the Veterans Health Administration. BMC Health Serv Res 24, 640 (2024). https://doi.org/10.1186/s12913-024-11082-0


Received: 11 October 2023

Accepted: 07 May 2024

Published: 18 May 2024

DOI: https://doi.org/10.1186/s12913-024-11082-0

Keywords

  • Non-Steroidal Anti-Inflammatory Drugs
  • Antibiotics
  • Qualitative Methods
  • Emergency Department
  • Urgent Care
  • Primary Care
  • Prescribing Stewardship



Cyberattack on major health-tech company was caused by weak security infrastructure, Northeastern cybersecurity experts say

By Cesareo Contreras, February 27, 2024

A sign outside of a UnitedHealth Group building. (AP Photo/Jim Mone, File)

A nearly weeklong cyberattack at Change Healthcare has caused prescription delays at thousands of pharmacies throughout the country, highlighting the fragility of our health care systems and their reliance on third-party software makers for key infrastructure, says Kevin Fu, a Northeastern College of Engineering professor and cybersecurity expert.

“I think it’s really a house of cards,” says Fu. “I think a lot of times companies, whether they are big or small, don’t realize how much they depend upon thousands of pieces of software. This particular [software] happens to be keystone to the whole practice of the delivery of health care. It’s deeply embedded into pharmacies. That’s why we are seeing these outages.” 

Change Healthcare is a health-tech company that provides thousands of pharmacies and health care providers in the U.S. with tools that allow them to process claims and other essential payment and revenue management practices. The company reported it was under a cyberattack last Wednesday. 


A day later, it informed the U.S. Securities and Exchange Commission of the incident, noting that it had “identified a suspected nation-state associated cyber security threat actor who had gained access to some of the Change Healthcare information technology systems.” 

In response to the attack, the company, which is a subsidiary of UnitedHealth Group, took its systems offline as it worked to investigate and resolve the issue, causing prescription delays at pharmacies like CVS and Walgreens.

As of Tuesday, Feb. 27, its systems remain offline, but 90% of the pharmacies affected by the attack have found workarounds to continue to provide services to customers, according to a statement Change Healthcare’s parent company, UnitedHealth, provided to CNBC.

Reuters has reported the attack was carried out by hackers who are part of the notorious ransomware gang BlackCat. Change Healthcare representatives, however, have not confirmed that or shared more details on the attackers.

Fu says the fact that the company had to shut down its systems at all is a major indication that its systems were not designed properly with cybersecurity in mind. 

“If the cybersecurity designs were done right, we wouldn’t have needed to pull the plug, but there’s quite a lot of legacy software out there that is simply not resilient against an adversary,” he says. “Essential clinical functions need to be available for performing, whether or not the network goes down. … But today, the way things are written it’s all too common that if one piece goes down, the entire house of cards falls as well.” 
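Fu’s point suggests a well-known engineering pattern: wrap the remote dependency in a circuit breaker with a local fallback, so the essential function keeps working when the network, or the vendor, goes down. The sketch below is purely illustrative Python under assumed names (ClaimsGateway, remote_submit, and the queue-then-reconcile behavior are all hypothetical); it is not a description of Change Healthcare’s actual systems.

    import time

    class ClaimsGateway:
        """Wraps a remote claims API behind a simple circuit breaker.
        Hypothetical sketch; names and behavior are illustrative only."""

        def __init__(self, remote_submit, failure_threshold=3, retry_after=60.0):
            self.remote_submit = remote_submit    # callable: claim -> receipt
            self.failure_threshold = failure_threshold
            self.retry_after = retry_after        # seconds before probing remote again
            self.failures = 0
            self.opened_at = None                 # "open" circuit = remote bypassed
            self.local_queue = []                 # offline buffer, drained on recovery

        def submit(self, claim):
            # While the circuit is open, skip the remote call entirely.
            if self.opened_at is not None:
                if time.monotonic() - self.opened_at < self.retry_after:
                    return self._queue_locally(claim)
                self.opened_at = None             # half-open: try the remote again
            try:
                receipt = self.remote_submit(claim)
                self.failures = 0                 # success resets the breaker
                return receipt
            except ConnectionError:
                self.failures += 1
                if self.failures >= self.failure_threshold:
                    self.opened_at = time.monotonic()
                return self._queue_locally(claim)

        def _queue_locally(self, claim):
            # Degraded mode: record the claim locally so work can continue.
            self.local_queue.append(claim)
            return {"status": "queued_offline", "queued": len(self.local_queue)}

In this pattern the pharmacy-facing workflow never blocks on the vendor: queued claims are reconciled once the remote service is reachable again, roughly the kind of workaround the affected pharmacies reportedly improvised by hand.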

Aanjhan Ranganathan, a professor in the Khoury College of Computer Sciences and a cybersecurity expert, says these attacks highlight the need for systems that are more distributed, less tied down, and more flexible and resilient in the face of attack.


“I think the biggest lesson again and again that these attacks are teaching us is the requirement for decentralized systems, being able to not have a single point of failure.” 

Building these kinds of systems is not easy, Ranganathan explains, as it often requires operators to rethink and rebuild their networking systems from the ground up. 

“It’s one of those things where you always go for functionality and you don’t build systems with security and privacy by design,” he says. “There has been a recent trend with building systems with privacy and security by design.” 

But what does a decentralized cybersecurity system look like? 

“For example, you could first of all, not store everything in one place,” says Ranganathan. “You could store all critical data in multiple places with different keys. There are ways in which you can store parts of the data in different places, and even if one part is inaccessible, you can recover that part based on information that you have in other places. By doing this you are forcing an attacker to successfully target more than one endpoint.” 

He adds, “You’re kind of building the infrastructure in such a way that there is no one place to take down the entire system. You have to take down many different parts of the puzzle to actually cause any impact.” 
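One standard way to get the recover-a-missing-part property Ranganathan describes is erasure coding. The minimal sketch below uses XOR parity across three hypothetical stores: any single store can be offline, or deliberately taken down, and its piece is rebuilt from the other two. This illustrates the general idea, not anyone’s actual design; a production system would also encrypt each share under a different key (his point above), or use a threshold scheme such as Shamir secret sharing, so a single compromised store reveals nothing.

    def split_with_parity(data: bytes) -> dict:
        # Split data into two halves plus an XOR parity block, one per store.
        half = (len(data) + 1) // 2
        a = data[:half]
        b = data[half:].ljust(half, b"\x00")      # pad so both halves align
        parity = bytes(x ^ y for x, y in zip(a, b))
        return {"store_a": a, "store_b": b, "store_parity": parity,
                "orig_len": len(data)}

    def recover(shares: dict) -> bytes:
        # Any one of the three stores may be missing; XOR rebuilds it.
        a, b, p = (shares.get(k) for k in ("store_a", "store_b", "store_parity"))
        if a is None:
            a = bytes(x ^ y for x, y in zip(b, p))
        if b is None:
            b = bytes(x ^ y for x, y in zip(a, p))
        return (a + b)[: shares["orig_len"]]      # drop padding

    record = b"rx:amoxicillin 500mg x10"
    shares = split_with_parity(record)
    shares.pop("store_b")                         # one endpoint down or compromised
    assert recover(shares) == record

Because no single location holds the whole record, an attacker has to hit more than one endpoint to cause loss, which is exactly the take-down-many-parts property Ranganathan describes.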



COMMENTS

  1. How to run a Design Research Interview

    Give you and your participants time to get comfortable with the format of the interview. Open the discussion with broad questions; you'll narrow in later. Where do you live, what do you do for a ...

  2. 30 Design Researcher Interview Questions and Answers

    The InterviewPrep Team is a highly skilled and diverse assembly of career counselors and subject matter experts. Leveraging decades of experience, they ...

  3. Design Thinking Expert Interview Questions, Template & Tips

    The template is a Design Thinking tool that can be used by up to four stakeholders. This is because multiple expert interviews provide a more accurate form of diverse data. Before getting to the questions, make sure you and your interview subjects are prepared. Provide the experts with context around the why and what of the interview session ...

  4. 12 Design Research methods to get inspired by users

    Learn more about the structure of a Design Research Interview here. Expert interview As with the user interview, it's a 1-2 hour interview in the expert's normal environment.

  5. Introduction: Expert Interviews

    Abstract. Before we go any further, we would like to begin by providing the reader with a step-by-step introduction to the methodological debate surrounding expert interviews. In doing so, we will start with a brief discussion of the generally accepted advantages and risks of expert interviews in research practice (1).

  6. 'The problem-centred expert interview'. Combining qualitative

    The epistemological interest in expert knowledge. Based on the existing literature, Bogner and Menz (2009) distinguish three types of expert interviews according to their epistemological functions. The first type is the exploratory expert interview, which is frequently used to gain knowledge and orientation in unknown or hardly known fields. This helps to structure a complex field and ...

  7. How to Conduct an Effective Interview; A Guide to Interview Design in

    Vancouver, Canada. Abstract: Interviews are one of the most promising ways of collecting qualitative data through the establishment of a communication between researcher and the interviewee. ...

  8. Types of Interviews in Research

    There are several types of interviews, often differentiated by their level of structure. Structured interviews have predetermined questions asked in a predetermined order. Unstructured interviews are more free-flowing. Semi-structured interviews fall in between. Interviews are commonly used in market research, social science, and ethnographic ...

  9. 7 ways to prepare for a design research interview

    7. Remember your role. The most important thing you can do during design research is to listen. Don't get wrapped up in the discussion guide you created or your own hypothesis of how things should work. Move the interview at a pace that is comfortable for the participants and just be patient.

  10. How To Do Qualitative Interviews For Research

    If you need 10 interviews, it is a good idea to plan for 15. Likely, a few will cancel, delay, or not produce useful data. 5. Not keeping your golden thread front of mind. We touched on this a little earlier, but it is a key point that should be central to your entire research process.

  11. PDF Introduction: Expert Interviews

    ...constitutes an expert, the differences between the various forms of expert interviews and their role in research design, as well as the specifics of interviewing and interaction in comparison to other qualitative interview forms. The use of expert interviews has long been popular in social research. ...

  12. Expert Interview

    Schedule the interviews with the selected experts, ensuring that adequate time has been allotted for each session. During the interview, ask open-ended questions and encourage the experts to provide detailed, nuanced answers. Take thorough notes or record the conversation with the participant's consent. 7.

  13. Design Research Methods: In-Depth Interviews

    In-depth interviews are one of the most common qualitative research methods used in design thinking and human-centered design processes. They allow you to gather a lot of information at once, with relative logistical ease. In-depth interviews are a form of ethnographic research, where researchers observe participants in their real-life environment.

  14. What is Design Research?

    What is Design Research? Design research is the practice of gaining insights by observing users and understanding industry and market shifts. For example, in service design it involves designers' using ethnography—an area of anthropology—to access study participants, to gain the best insights and so be able to start to design popular ...

  15. How to get the most out of an interview with a subject matter expert

    So if you have the chance to interview an expert, take a second to think about your interview process and what you plan to ask. You might find that a little preparation can yield a ton of valuable insights. Kai Wong is a UX Designer, Author, and Data Visualization advocate. His latest book, Data Persuasion, talks about learning Data ...

  16. Talking to People III: Expert Interviews and Elite Interviews

    An expert interview is a qualitative semi-structured or open interview with a person holding 'expert knowledge'. It is a method often used in policy analysis, be it as part of a more comprehensive set of methods or as a stand-alone method. The methodological literature on expert interviews in the field of communication policy studies is ...

  17. Qualitative Research: Semi-structured Expert Interview

    The successful completion of expert interviews, according to Mayer (2008), depends significantly on the flexible handling of the interview guideline by asking the right questions (theme complexes) at the right time and not by using the guideline as a standardized scheme of conduct. ... The research design and the research questions were ...

  18. Planning Qualitative Research: Design and Decision Making for New

    While many books and articles guide various qualitative research methods and analyses, there is currently no concise resource that explains and differentiates among the most common qualitative approaches. We believe novice qualitative researchers, students planning the design of a qualitative study or taking an introductory qualitative research course, and faculty teaching such courses can ...

  19. PDF Why Do We Speak to Experts? Reviving the Strength of the Expert

    strategies serve to make qualitative expert interviews even more effective research tools. Using Expert Interviews: Broadly understood, experts have specific knowledge about an issue, development, or event. Hence, following Dexter's (2006) classic understanding, an expert is any person who has specialized information on or who has ...

  20. 'The problem-centred expert interview'. Combining qualitative

    According to Bogner and Menz (2009), the theory-generating expert interview is not linked to a specific interview design or technique. Instead, the authors propose selecting the interview design in accordance with the actual research needs and remaining flexible during the research process.

  21. PDF CONDUCTING IN-DEPTH INTERVIEWS: A Guide for Designing and Conducting In

    The process for conducting in-depth interviews follows the same general process as is followed for other research: plan, develop instruments, collect data, analyze data, and disseminate findings. More detailed steps are given below. 1. Plan. • Identify stakeholders who will be involved.

  22. 'The problem-centred expert interview'. Combining qualitative

    The expert interview as a method of qualitative empirical research has been a widely-discussed qualitative method in political and social research since the early 1990s. Mainly cited in the ...

  23. How to Design Interview Questions for Primary Research

    1. Define your research question. 2. Choose your interview type. 3. Write your interview questions. 4. Test and refine your interview questions. 5. Conduct your interviews.

  24. Community-based participatory-research through co-design: supporting

    Process of co-production and preparation for co-design. Co-production was chosen as the planning method for the study, as the inclusion of community members (Rocky Bay Lived experience experts and Staff) in each step of the research process would increase buy-in and make the research more likely to meet their needs. An example of co-planning (part of co-production) includes the study ...

  25. Concepts of lines of therapy in cancer treatment: findings from an

    Qualitative expert interviews [7, 8] were conducted by posing open questions within a semi-structured framework. An interview manual delineated this framework and was developed based on existing literature about oncological LOTs and associated concepts (see Additional File 1). Before the interviews, the interview manual was pre-tested with an ...

  26. Collaborative and life cycle-based project delivery for environmentally

    Design/methodology/approach. In order to realize the objective of this study, the development of a theoretical framework based on the literature review was followed by a qualitative study in which 21 semi-structured interviews were conducted with Finnish project professionals representing clients, design/planning experts, constructors and building operation/maintenance experts to explore their ...

  27. Co-designing Entrustable Professional Activities in General

    In medical education, Entrustable Professional Activities (EPAs) have been gaining momentum for the last decade. Such novel educational interventions necessitate accommodating competing needs, those of curriculum designers, and those of users in practice, in order to be successfully implemented. We employed a participatory research design, engaging diverse stakeholders in designing an EPA ...

  28. Experts explore planning and design principles for quality of life and

    Researchers from TrinityHaus Research Centre in Trinity's School of Engineering will soon hold their first stakeholder workshop (Friday 7th June) as part of their Health Research Board (HRB)-funded research project "Planning and design for quality of life and resilience in residential long-term care settings for older people in Ireland". The team, which also includes experts from Age ...

  29. Determinants of appropriate antibiotic and NSAID prescribing in

    Semi-structured interview guides (Supplemental Table 1) were developed using the Consolidated Framework for Implementation Research (CFIR) and the Theory of Planned Behavior [27, 28] to understand attitudes and beliefs as they relate to behaviors, and potential determinants of a future intervention. Interview guides were modified and finalized ...

  30. What Caused the Cyberattack on Change Healthcare?

    Cyberattack on major health-tech company was caused by weak security infrastructure, Northeastern cybersecurity experts say. by Cesareo Contreras. February 27, 2024. Change Healthcare's systems have been offline since Wednesday, causing disruptions at major pharmacies in the United States. (AP Photo/Jim Mone, File)