
7 Examples of Justification (of a project or research)

The justification is the part of a research project that sets out the reasons that motivated the research. It is the section that explains the importance of the work and the reasons that led the researcher to carry it out.

The justification explains to the reader why and for what purpose the chosen topic was investigated. In general, the reasons a researcher can give in a justification are that the work allows theories to be built or refuted; brings a new approach or perspective to the subject; contributes to the solution of a specific problem (social, economic, environmental, etc.) that affects certain people; generates meaningful and reusable empirical data; or clarifies the causes and consequences of a specific phenomenon of interest; among others.

The criteria used to write a justification include the usefulness of the research for other academics or for other social sectors (public officials, companies, civil society), its lasting significance over time, the contribution of new research tools or techniques, and the updating of existing knowledge. In addition, the language should be formal and descriptive.

Examples of justification

  • This research will focus on studying the reproduction habits of salmon in the Mediterranean region of Europe, since recent ecological changes in the water and temperatures of the region, produced by human economic activity, have modified the behavior of these animals. The present work would thus show the changes the species has developed to adapt to the new circumstances of its ecosystem, deepen theoretical knowledge about accelerated adaptation processes, and offer a comprehensive look at the environmental damage caused by unsustainable economic growth, helping to raise awareness among the local population.
  • We therefore propose to investigate the evolution of the theoretical conceptions of class struggle and economic structure throughout the work of Antonio Gramsci, since we consider that previous analyses have overlooked the fundamentally dynamic and unstable conception of human society that is present in Gramsci’s works, and that is of vital importance to fully understand the author’s thought.
  • The reasons that led us to investigate the effects of regular cell phone use on the health of middle-class young people under 18 years of age center on the fact that, due to their cultural and social habits, this vulnerable sector of the population is exposed to the risks of continuous cell phone use to a greater extent than the rest of society. We intend to help raise awareness of these dangers, as well as to generate knowledge that aids in treating the effects produced by overuse of this technology.
  • We believe that a detailed analysis of the evolution of financial transactions carried out on the world’s main stock exchanges during the period 2005-2010, together with an inquiry into how financial and banking agents perceived the state of the financial system, will allow us to clarify the economic mechanisms that enable the development of an economic crisis of global dimensions such as the one the world has experienced since 2009, and thus improve the design of regulatory and counter-cyclical public policies that favor the stability of the local and international financial system.
  • Our study of the applications and programs developed in the three analyzed programming languages (Java, C++, and Haskell) can allow us to clearly distinguish the potential that each of these languages (and similar ones) offers for solving specific problems in a specific area of activity. This would allow us not only to increase efficiency in long-term development projects, but also to plan coding strategies with better results in projects that are already underway, and to improve teaching plans for programming and computer science.
  • This in-depth study of the expansion of the Chinese empire under the Xia dynasty will clarify the socioeconomic, military, and political processes that allowed the consolidation of one of the oldest states in history, and will also help us understand the spread of metallurgical and administrative technologies along the coastal region of the Pacific Ocean. A deep understanding of these phenomena will shed light on this little-known period of Chinese history, which was of vital importance for the social transformations that the peoples of the region went through at the time.
  • Research on the efficacy of captopril in the treatment of cardiovascular conditions (in particular hypertension and heart failure) will allow us to determine whether its action on angiotensin is of vital importance in the processes of blocking the peptidase protein, or whether, on the contrary, these effects can be attributed to other components present in the formulas of drugs frequently prescribed to patients after medical consultation.


How to Justify Your Methods in a Thesis or Dissertation

1st May 2023

Writing a thesis or dissertation is hard work. You’ve devoted countless hours to your research, and you want your results to be taken seriously. But how does your professor or evaluating committee know that they can trust your results? You convince them by justifying your research methods.

What Does Justifying Your Methods Mean?

In simple terms, your methods are the tools you use to obtain your data, and the justification (also called the methodology) is the analysis of those tools. In your justification, your goal is to demonstrate that your research is both rigorously conducted and replicable, so your audience recognizes that your results are legitimate.

The formatting and structure of your justification will depend on your field of study and your institution’s requirements, but below, we’ve provided questions to ask yourself as you outline your justification.

Why Did You Choose Your Method of Gathering Data?

Does your study rely on quantitative data, qualitative data, or both? Certain types of data work better for certain studies. How did you choose to gather that data? Evaluate your approach to collecting data in light of your research question. Did you consider any alternative approaches? If so, why did you decide not to use them? Highlight the pros and cons of various possible methods if necessary. Research results aren’t valid unless the data are valid, so you have to convince your reader that they are.

How Did You Evaluate Your Data?

Collecting your data was only the first part of your study. Once you had them, how did you use them? Do your results involve cross-referencing? If so, how was this accomplished? Which statistical analyses did you run, and why did you choose them? Are they common in your field? How did you establish that your results were statistically significant? Is your effect size small, medium, or large? Numbers don’t always lend themselves to an obvious outcome. Here, you want to provide a clear link between the Methods and Results sections of your paper.
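Effect size in particular trips up many writers. As a minimal sketch of what such a calculation involves (using hypothetical scores and the conventional thresholds of roughly 0.2, 0.5, and 0.8, which vary by field), Cohen's d for a two-group comparison could be computed like this:

```python
from statistics import mean, stdev

# Hypothetical scores for two groups (e.g., control vs. treatment)
control = [12, 14, 11, 13, 15, 12, 14]
treatment = [16, 18, 15, 17, 19, 16, 18]

def cohens_d(a, b):
    """Effect size: difference in means divided by the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(b) - mean(a)) / pooled_var ** 0.5

d = cohens_d(control, treatment)
# Rough conventional thresholds: 0.2 small, 0.5 medium, 0.8 large
size = "large" if d >= 0.8 else "medium" if d >= 0.5 else "small"
print(f"Cohen's d = {d:.2f} ({size})")  # → Cohen's d = 2.83 (large)
```

Whatever measure you report, the justification should state why that measure, and those thresholds, are appropriate for your field.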

Did You Use Any Unconventional Approaches in Your Study?

Most fields have standard approaches to the research they use, but these approaches don’t work for every project. Did you use methods that other fields normally use, or did you need to come up with a different way of obtaining your data? Your reader will look at unconventional approaches with a more critical eye. Acknowledge the limitations of your method, but explain why the strengths of the method outweigh those limitations.


What Relevant Sources Can You Cite?

You can strengthen your justification by referencing existing research in your field. Citing these references can demonstrate that you’ve followed established practices for your type of research. Or you can discuss how you decided on your approach by evaluating other studies. Highlight the use of established techniques, tools, and measurements in your study. If you used an unconventional approach, justify it by providing evidence of a gap in the existing literature.

Two Final Tips:

  • When you’re writing your justification, write for your audience. Your purpose here is to provide more than a technical list of details and procedures. This section should focus more on the why and less on the how.

  • Consider your methodology as you’re conducting your research. Take thorough notes as you work to make sure you capture all the necessary details correctly. Eliminating any possible confusion or ambiguity will go a long way toward helping your justification.

In Conclusion:

Your goal in writing your justification is to explain not only the decisions you made but also the reasoning behind those decisions. It should be overwhelmingly clear to your audience that your study used the best possible methods to answer your research question. Properly justifying your methods will let your audience know that your research was effective and its results are valid.



How to Write the Rationale of the Study in Research (Examples)


What is the Rationale of the Study?

The rationale of the study is the justification for taking on a given study. It explains the reason the study was conducted or should be conducted. This means the study rationale should explain to the reader or examiner why the study is/was necessary. It is also sometimes called the “purpose” or “justification” of a study. While this is not difficult to grasp in itself, you might wonder how the rationale of the study is different from your research question or from the statement of the problem of your study, and how it fits into the rest of your thesis or research paper. 

The rationale of the study links the background of the study to your specific research question and justifies the need for the latter on the basis of the former. In brief, you first provide and discuss existing data on the topic, and then you tell the reader, based on the background evidence you just presented, where you identified gaps or issues and why you think it is important to address those. The problem statement, lastly, is the formulation of the specific research question you choose to investigate, following logically from your rationale, and the approach you are planning to use to do that.

Table of Contents:

  • How to write a rationale for a research paper
  • How do you justify the need for a research study?
  • Study Rationale Example: Where Does It Go In Your Paper?

The basis for writing a research rationale is preliminary data or a clear description of an observation. If you are doing basic/theoretical research, then a literature review will help you identify gaps in current knowledge. In applied/practical research, you base your rationale on an existing issue with a certain process (e.g., vaccine proof registration) or practice (e.g., patient treatment) that is well documented and needs to be addressed. By presenting the reader with earlier evidence or observations, you can (and have to) convince them that you are not just repeating what other people have already done or said and that your ideas are not coming out of thin air. 

Once you have explained where you are coming from, you should justify the need for doing additional research–this is essentially the rationale of your study. Finally, when you have convinced the reader of the purpose of your work, you can end your introduction section with the statement of the problem of your research that contains clear aims and objectives and also briefly describes (and justifies) your methodological approach. 

When is the Rationale for Research Written?

The author can present the study rationale both before and after the research is conducted. 

  • Before conducting research: The study rationale is a central component of the research proposal. It represents the plan of your work, constructed before the study is actually executed.
  • Once research has been conducted: After the study is completed, the rationale is presented in a research article or PhD dissertation to explain why you focused on this specific research question. When writing the study rationale for this purpose, the author should link the rationale of the research to the aims and outcomes of the study.

What to Include in the Study Rationale

Although every study rationale is different and discusses different specific elements of a study’s method or approach, there are some elements that should be included to write a good rationale. Make sure to touch on the following:

  • A summary of conclusions from your review of the relevant literature
  • What is currently unknown (gaps in knowledge)
  • Inconclusive or contested results from previous studies on the same or similar topic
  • The necessity to improve or build on previous research, such as to improve methodology or utilize newer techniques and/or technologies

There are different types of limitations that you can use to justify the need for your study. In applied/practical research, the justification for investigating something is always that an existing process/practice has a problem or is not satisfactory. Let’s say, for example, that people in a certain country/city/community commonly complain about hospital care on weekends (not enough staff, not enough attention, no decisions being made), but you looked into it and realized that nobody ever investigated whether these perceived problems are actually based on objective shortages/non-availabilities of care or whether the lower numbers of patients who are treated during weekends are commensurate with the provided services.

In this case, “lack of data” is your justification for digging deeper into the problem. Or, if it is obvious that there is a shortage of staff and provided services on weekends, you could decide to investigate which of the usual procedures are skipped during weekends as a result and what the negative consequences are. 

In basic/theoretical research, lack of knowledge is of course a common and accepted justification for additional research—but make sure that it is not your only motivation. “Nobody has ever done this” is only a convincing reason for a study if you explain to the reader why you think we should know more about this specific phenomenon. If there is earlier research but you think it has limitations, then those can usually be classified into “methodological”, “contextual”, and “conceptual” limitations. To identify such limitations, you can ask specific questions and let those questions guide you when you explain to the reader why your study was necessary:

Methodological limitations

  • Did earlier studies try but failed to measure/identify a specific phenomenon?
  • Was earlier research based on incorrect conceptualizations of variables?
  • Were earlier studies based on questionable operationalizations of key concepts?
  • Did earlier studies use questionable or inappropriate research designs?

Contextual limitations

  • Have recent changes in the studied problem made previous studies irrelevant?
  • Are you studying a new/particular context that previous findings do not apply to?

Conceptual limitations

  • Do previous findings only make sense within a specific framework or ideology?

Study Rationale Examples

Let’s look at an example from one of our earlier articles on the statement of the problem to clarify how your rationale fits into your introduction section. This is a very short introduction for a practical research study on the challenges of online learning. Your introduction might be much longer (especially the context/background section), and this example does not contain any sources (which you will have to provide for all claims you make and all earlier studies you cite)—but please pay attention to how the background presentation, rationale, and problem statement blend into each other in a logical way so that the reader can follow and has no reason to question your motivation or the foundation of your research.

Background presentation

Since the beginning of the Covid pandemic, most educational institutions around the world have transitioned to a fully online study model, at least during peak times of infections and social distancing measures. This transition has not been easy, and even two years into the pandemic, problems with online teaching and studying persist (reference needed).

While the increasing gap between those with access to technology and equipment and those without has been identified as one of the main challenges (reference needed), others claim that online learning offers more opportunities for many students by breaking down barriers of location and distance (reference needed).

Rationale of the study

Since teachers and students cannot wait for circumstances to go back to normal, the measures that schools and universities have implemented during the last two years, their advantages and disadvantages, and the impact of those measures on students’ progress, satisfaction, and well-being need to be understood so that improvements can be made and demographics that have been left behind can receive the support they need as soon as possible.

Statement of the problem

To identify what changes in the learning environment were considered the most challenging and how those changes relate to a variety of student outcome measures, we conducted surveys and interviews among teachers and students at ten institutions of higher education in four different major cities, two in the US (New York and Chicago), one in South Korea (Seoul), and one in the UK (London). Responses were analyzed with a focus on different student demographics and how they might have been affected differently by the current situation.

How long is a study rationale?

In a research article bound for journal publication, your rationale should not be longer than a few sentences (no longer than one brief paragraph). A dissertation or thesis usually allows for a longer description; depending on the length and nature of your document, this could be up to a couple of paragraphs in length. A completely novel or unconventional approach might warrant a longer and more detailed justification than an approach that slightly deviates from well-established methods and approaches.



How to Write a Compelling Justification of Your Research

When it comes to conducting research, a well-crafted justification is crucial. It not only helps you convince others of the importance and relevance of your work but also serves as a roadmap for your own research journey. In this blog post, we will focus on the art of writing compelling justifications, highlighting common pitfalls that junior researchers tend to fall into and providing an example of how to write a justification properly.

The Importance of a Strong Justification

Before we delve into the dos and don’ts of writing a justification, let’s first understand why it is so important. A strong justification sets the stage for your research by clearly outlining its purpose, significance, and potential impact. It helps you answer the question, “Why is this research worth pursuing?” and provides a solid foundation for the rest of your work.

Pitfalls to Avoid

As junior researchers, it’s common to make certain mistakes when writing a justification. Here are a few pitfalls to watch out for:

  • Lack of Clarity: One of the biggest mistakes is failing to clearly articulate the problem or research question. Make sure your justification clearly explains what you intend to investigate and why it matters.
  • Insufficient Background: Providing a strong background is essential to demonstrate your knowledge of existing literature and the context of your research. Avoid the trap of assuming that your readers are already familiar with the topic.
  • Weak Significance: Your justification should emphasize the significance of your research. Highlight the potential benefits, practical applications, or theoretical contributions that your work can offer.
  • Lack of Originality: It’s important to showcase the novelty of your research. Avoid simply replicating previous studies or rehashing existing ideas. Instead, highlight the unique aspects of your approach or the gaps in current knowledge that your research aims to fill.

Writing a Proper Justification

Now that we’ve covered the common pitfalls, let’s take a look at an example of how to write a proper justification. Imagine you are conducting research on the low proportion of uncontrolled hypertension in a specific population. Here’s how you could structure your justification:

Introduction: Begin by providing an overview of the problem and its significance. Explain why uncontrolled hypertension is a critical health issue and the potential consequences it can have on individuals and society.

Background: Offer a comprehensive review of the existing literature on hypertension, highlighting the current knowledge gaps and limitations. Discuss the prevalence of uncontrolled hypertension and the factors contributing to its low proportion in the specific population you are studying.

Objectives: Clearly state the objectives of your research. For example, your objectives could be to identify the barriers to hypertension control, evaluate the effectiveness of current interventions, and propose strategies to improve the management of uncontrolled hypertension.

Methodology: Briefly describe the research methods you plan to employ, such as surveys, interviews, or data analysis. Explain how these methods will help you address the research objectives and fill the existing knowledge gaps.

Expected Outcomes: Highlight the potential outcomes and impact of your research. Discuss how your findings could contribute to improving hypertension control rates, enhancing healthcare policies, or guiding future research in this field.

Conclusion: Summarize the main points of your justification and reiterate the significance of your research. Emphasize why your work is unique and necessary to advance knowledge and address the problem of the low proportion of controlled hypertension.

Remember, a compelling justification should be concise, persuasive, and grounded in evidence. It should convince your audience that your research is not only relevant but also necessary. By avoiding common pitfalls and following a structured approach, you can craft a justification that captivates readers and sets the stage for a successful research endeavor.


Grad Coach

How To Write The Methodology Chapter

The what, why & how explained simply (with examples).

By: Jenna Crossley (PhD) | Reviewed By: Dr. Eunice Rautenbach | September 2021 (Updated April 2023)

So, you’ve pinned down your research topic and undertaken a review of the literature – now it’s time to write up the methodology section of your dissertation, thesis or research paper. But what exactly is the methodology chapter all about – and how do you go about writing one? In this post, we’ll unpack the topic, step by step.

Overview: The Methodology Chapter

  • The purpose of the methodology chapter
  • Why you need to craft this chapter (really) well
  • How to write and structure the chapter
  • Methodology chapter example
  • Essential takeaways

What (exactly) is the methodology chapter?

The methodology chapter is where you outline the philosophical underpinnings of your research and outline the specific methodological choices you’ve made. The point of the methodology chapter is to tell the reader exactly how you designed your study and, just as importantly, why you did it this way.

Importantly, this chapter should comprehensively describe and justify all the methodological choices you made in your study: for example, the approach you took to your research (i.e., qualitative, quantitative or mixed), who you collected data from (i.e., your sampling strategy), how you collected your data and, of course, how you analysed it. If that sounds a little intimidating, don’t worry – we’ll explain all these methodological choices in this post.


Why is the methodology chapter important?

The methodology chapter plays two important roles in your dissertation or thesis:

Firstly, it demonstrates your understanding of research theory, which is what earns you marks. A flawed research design or methodology would mean flawed results. So, this chapter is vital, as it allows you to show the marker that you know what you’re doing and that your results are credible.

Secondly, the methodology chapter is what helps to make your study replicable. In other words, it allows other researchers to undertake your study using the same methodological approach, and compare their findings to yours. This is very important within academic research, as each study builds on previous studies.

The methodology chapter is also important in that it allows you to identify and discuss any methodological issues or problems you encountered (i.e., research limitations), and to explain how you mitigated their impact. Every research project has its limitations, so it’s important to acknowledge these openly and highlight your study’s value despite them. Doing so demonstrates your understanding of research design, which will earn you marks. We’ll discuss limitations in a bit more detail later in this post, so stay tuned!


How to write up the methodology chapter

First off, it’s worth noting that the exact structure and contents of the methodology chapter will vary depending on the field of research (e.g., humanities, chemistry or engineering) as well as the university . So, be sure to always check the guidelines provided by your institution for clarity and, if possible, review past dissertations from your university. Here we’re going to discuss a generic structure for a methodology chapter typically found in the sciences.

Before you start writing, it’s always a good idea to draw up a rough outline to guide your writing. Don’t just start writing without knowing what you’ll discuss where. If you do, you’ll likely end up with a disjointed, ill-flowing narrative. You’ll then waste a lot of time rewriting in an attempt to stitch all the pieces together. Do yourself a favour and start with the end in mind.

Section 1 – Introduction

As with all chapters in your dissertation or thesis, the methodology chapter should have a brief introduction. In this section, you should remind your readers what the focus of your study is, especially the research aims. As we’ve discussed many times on the blog, your methodology needs to align with your research aims, objectives and research questions. Therefore, it’s useful to frontload this component to remind the reader (and yourself!) what you’re trying to achieve.

In this section, you can also briefly mention how you’ll structure the chapter. This will help orient the reader and provide a bit of a roadmap so that they know what to expect. You don’t need a lot of detail here – just a brief outline will do.

The intro provides a roadmap to your methodology chapter

Section 2 – The Methodology

The next section of your chapter is where you’ll present the actual methodology. In this section, you need to detail and justify the key methodological choices you’ve made in a logical, intuitive fashion. Importantly, this is the heart of your methodology chapter, so you need to get specific – don’t hold back on the details here. This is not one of those “less is more” situations.

Let’s take a look at the most common components you’ll likely need to cover. 

Methodological Choice #1 – Research Philosophy

Research philosophy refers to the underlying beliefs (i.e., the worldview) regarding how data about a phenomenon should be gathered, analysed and used. The research philosophy will serve as the core of your study and underpin all of the other research design choices, so it’s critically important that you understand which philosophy you’ll adopt and why you made that choice. If you’re not clear on this, take the time to get clarity before you make any further methodological choices.

While several research philosophies exist, two commonly adopted ones are positivism and interpretivism. These two sit roughly on opposite sides of the research philosophy spectrum.

Positivism states that the researcher can observe reality objectively and that there is only one reality, which exists independently of the observer. As a consequence, it is quite commonly the underlying research philosophy in quantitative studies and is oftentimes the assumed philosophy in the physical sciences.

Contrasted with this, interpretivism, which is often the underlying research philosophy in qualitative studies, assumes that the researcher plays a role in observing the world around them and that reality is unique to each observer. In other words, reality is observed subjectively.

These are just two philosophies (there are many more), but they demonstrate markedly different approaches to research and have a significant impact on all the methodological choices. Therefore, it’s vital that you clearly outline and justify your research philosophy at the beginning of your methodology chapter, as it sets the scene for everything that follows.

The research philosophy is at the core of the methodology chapter

Methodological Choice #2 – Research Type

The next thing you would typically discuss in your methodology section is the research type. The starting point for this is to indicate whether the research you conducted is inductive or deductive.

Inductive research takes a bottom-up approach, where the researcher begins with specific observations or data and then draws general conclusions or theories from those observations. Therefore, these studies tend to be exploratory in approach.

Conversely, deductive research takes a top-down approach, where the researcher starts with a theory or hypothesis and then tests it using specific observations or data. Therefore, these studies tend to be confirmatory in approach.

Related to this, you’ll need to indicate whether your study adopts a qualitative, quantitative or mixed approach. As we’ve mentioned, there’s a strong link between this choice and your research philosophy, so make sure that your choices are tightly aligned. When you write this section up, remember to clearly justify your choices, as they form the foundation of your study.

Methodological Choice #3 – Research Strategy

Next, you’ll need to discuss your research strategy (also referred to as a research design). This methodological choice refers to the broader strategy in terms of how you’ll conduct your research, based on the aims of your study.

Several research strategies exist, including experimental research, case studies, ethnography, grounded theory, action research and phenomenology. Let’s take a look at two of these, experimental and ethnographic research, to see how they contrast.

Experimental research makes use of the scientific method, where one group is the control group (in which no variables are manipulated) and another is the experimental group (in which a specific variable is manipulated). This type of research is undertaken under strict conditions in a controlled, artificial environment (e.g., a laboratory). By having firm control over the environment, experimental research typically allows the researcher to establish causation between variables. Therefore, it can be a good choice if you have research aims that involve identifying causal relationships.

Ethnographic research, on the other hand, involves observing and capturing the experiences and perceptions of participants in their natural environment (for example, at home or in the office) – in other words, in an uncontrolled environment. Naturally, this means that this research strategy would be far less suitable if your research aims involve identifying causation, but it would be very valuable if you’re looking to explore and examine a group culture, for example.

As you can see, the right research strategy will depend largely on your research aims and research questions – in other words, what you’re trying to figure out. Therefore, as with every other methodological choice, it’s essential to justify why you chose the research strategy you did.

Methodological Choice #4 – Time Horizon

The next thing you’ll need to detail in your methodology chapter is the time horizon. There are two options here: cross-sectional and longitudinal. In other words, whether the data for your study were all collected at one point in time (cross-sectional) or at multiple points in time (longitudinal).

The choice you make here depends again on your research aims, objectives and research questions. If, for example, you aim to assess how a specific group of people’s perspectives regarding a topic change over time, you’d likely adopt a longitudinal time horizon.

Another important factor to consider is simply whether you have the time necessary to adopt a longitudinal approach (which could involve collecting data over multiple months or even years). Oftentimes, the time pressures of your degree program will force your hand into adopting a cross-sectional time horizon, so keep this in mind.

Methodological Choice #5 – Sampling Strategy

Next, you’ll need to discuss your sampling strategy. There are two main categories of sampling: probability and non-probability sampling.

Probability sampling involves a random (and therefore representative) selection of participants from a population, whereas non-probability sampling entails selecting participants in a non-random (and therefore non-representative) manner – for example, selecting participants based on ease of access (this is called a convenience sample).

The right sampling approach depends largely on what you’re trying to achieve in your study. Specifically, whether you’re trying to develop findings that are generalisable to a population or not. Practicalities and resource constraints also play a large role here, as it can oftentimes be challenging to gain access to a truly random sample. In the video below, we explore some of the most common sampling strategies.

Methodological Choice #6 – Data Collection Method

Next up, you’ll need to explain how you’ll go about collecting the necessary data for your study. Your data collection method (or methods) will depend on the type of data that you plan to collect – in other words, qualitative or quantitative data.

Typically, quantitative research relies on surveys, data generated by lab equipment, analytics software or existing datasets. Qualitative research, on the other hand, often makes use of collection methods such as interviews, focus groups, participant observations and ethnography.

So, as you can see, there is a tight link between this section and the design choices you outlined in earlier sections. Strong alignment between these sections, as well as with your research aims and questions, is therefore very important.

Methodological Choice #7 – Data Analysis Methods/Techniques

The final major methodological choice that you need to address is that of analysis techniques. In other words, how you’ll go about analysing your data once you’ve collected it. Here it’s important to be very specific about your analysis methods and/or techniques – don’t leave any room for interpretation. Also, as with all choices in this chapter, you need to justify each choice you make.

What exactly you discuss here will depend largely on the type of study you’re conducting (i.e., qualitative, quantitative, or mixed methods). For qualitative studies, common analysis methods include content analysis, thematic analysis and discourse analysis. In the video below, we explain each of these in plain language.

For quantitative studies, you’ll almost always make use of descriptive statistics, and in many cases, you’ll also use inferential statistical techniques (e.g., correlation and regression analysis). In the video below, we unpack some of the core concepts involved in descriptive and inferential statistics.

In this section of your methodology chapter, it’s also important to discuss how you prepared your data for analysis, and what software you used (if any). For example, quantitative data will often require some initial preparation, such as removing duplicates or incomplete responses. Similarly, qualitative data will often require transcription and perhaps even translation. As always, remember to state both what you did and why you did it.

Section 3 – The Methodological Limitations

With the key methodological choices outlined and justified, the next step is to discuss the limitations of your design. No research methodology is perfect – there will always be trade-offs between the “ideal” methodology and what’s practical and viable, given your constraints. Therefore, this section of your methodology chapter is where you’ll discuss the trade-offs you had to make, and why these were justified given the context.

Methodological limitations can vary greatly from study to study, ranging from common issues such as time and budget constraints to issues of sample or selection bias. For example, you may find that you didn’t manage to draw in enough respondents to achieve the desired sample size (and therefore, statistically significant results), or your sample may be skewed heavily towards a certain demographic, thereby negatively impacting representativeness.

In this section, it’s important to be critical of the shortcomings of your study. There’s no use trying to hide them (your marker will be aware of them regardless). By being critical, you’ll demonstrate to your marker that you have a strong understanding of research theory, so don’t be shy here. At the same time, don’t beat your study to death. State the limitations, why these were justified, how you mitigated their impacts to the best degree possible, and how your study still provides value despite these limitations.

Section 4 – Concluding Summary

Finally, it’s time to wrap up the methodology chapter with a brief concluding summary. In this section, you’ll want to concisely summarise what you’ve presented in the chapter. Here, it can be a good idea to use a figure to summarise the key decisions, especially if your university recommends using a specific model (for example, Saunders’ Research Onion).

Importantly, this section needs to be brief – a paragraph or two maximum (it’s a summary, after all). Also, make sure that when you write up your concluding summary, you include only what you’ve already discussed in your chapter; don’t add any new information.

Keep it simple

Methodology Chapter Example

In the video below, we walk you through an example of a high-quality research methodology chapter from a dissertation. We also unpack our free methodology chapter template so that you can see how best to structure your chapter.

Wrapping Up

And there you have it – the methodology chapter in a nutshell. As we’ve mentioned, the exact contents and structure of this chapter can vary between universities, so be sure to check in with your institution before you start writing. If possible, try to find dissertations or theses from former students of your specific degree program – this will give you a strong indication of the expectations and norms when it comes to the methodology chapter (and all the other chapters!).

Also, remember the golden rule of the methodology chapter – justify every choice! Make sure that you clearly explain the “why” for every “what”, and reference credible methodology textbooks or academic sources to back up your justifications.

If you need a helping hand with your research methodology (or any other component of your research), be sure to check out our private coaching service, where we hold your hand through every step of the research journey. Until next time, good luck!


learnonline

Research proposal, thesis, exegesis, and journal article writing for business, social science and humanities (BSSH) research degree candidates

Topic outline, introduction and research justification.


Introduction and research justification, business, social sciences, humanities

Introduction.

  • Signalling the topic in the first sentence
  • The research justification or 'problem' statement 
  • The 'field' of literature
  • Summary of contrasting areas of research
  • Summary of the 'gap' in the literature
  • Research aims and objectives

Summary of the research design

Example research proposal introductions.

This topic outlines the steps in the introduction of the research proposal. As discussed in the first topic in this series of web resources, there are three key elements or conceptual steps within the main body of the research proposal. In this resource, these elements are referred to as the research justification, the literature review and the research design. These three steps also structure the proposal introduction (typically, though not always, in this order), which contains an outline of the proposed research.

These steps pertain to the key questions of reviewers:

  • What problem or issue does the research address? (research justification)
  • How will the research contribute to existing knowledge? (the 'gap' in the literature, sometimes referred to as the research 'significance')
  • How will the research achieve its stated objectives? (the research design)

Reviewers look to find a summary of the case for the research in the introduction, which, in essence, involves providing summary answers to each of the questions above.

The introduction of the research proposal usually includes the following content:

  • a research justification or statement of a problem (which also serves to introduce the topic)
  • a summary of the key points in the literature review (a summary of what is known and how the research aims to contribute to what is known)
  • the research aim or objective
  • a summary of the research design
  • concise definitions of any contested or specialised terms that will be used throughout the proposal (provided the first time the term is used).

This topic will consider how to write about each of these in turn.

Signalling the topic in the first sentence

The first task of the research proposal is to signal the area of the research or 'topic' so the reader knows what subject will be discussed in the proposal. This step is ideally accomplished in the opening sentence or the opening paragraph of the research proposal. It is also indicated in the title of the research proposal. It is important not to provide tangential information in the opening sentence or title because this may mislead the reader about the core subject of the proposal.

A ‘topic’ includes:


  • the context or properties of the subject (the particular aspect or properties of the subject that are of interest).

Questions to consider in helping to clarify the topic:

  • What is the focus of my research?
  • What do I want to understand?
  • What domain/s of activity does it pertain to?
  • What will I investigate in order to shed light on my focus?

The research justification or the ‘problem’ statement

The goal of the first step of the research proposal is to get your audience's attention: to show them why your research matters, and to make them want to know more about your research. The first step within the research proposal is sometimes referred to as the research justification or the statement of the 'problem'. This step involves providing the reader with critical background or contextual information that introduces the topic area and indicates why the research is important. Research proposals often open by outlining a central concern, issue, question or conundrum to which the research relates.

The research justification should be provided in an accessible and direct manner in the introductory section of the research proposal. The number of words required to complete this first conceptual step will vary widely depending on the project.

Writing about the research justification, like writing about the literature and your research design, is a creative process involving careful decision making on your part. The research justification should lead up to the topic of your research and frame your research, and, when you write your thesis, exegesis or journal article conclusion, you will again return to the research justification to wrap up the implications of your research. That is to say, your conclusions will refer back to the problem and reflect on what the findings suggest about how we should treat the problem. For this reason, you may find the need to go back and reframe your research justification as your research and writing progresses.

The most common way of establishing the importance of the research is to refer to a real world problem. Research may aim to produce knowledge that will ultimately be used to:

  • advance national and organisational goals (health, clean environment, quality education),
  • improve policies and regulations,
  • manage risk,
  • contribute to economic development,
  • promote peace and prosperity,
  • promote democracy,
  • test assumptions (theoretical, popular, policy) about human behaviour, the economy, society,
  • understand human behaviour, the economy and social experience,
  • understand or critique social processes and values.

Examples of 'research problems' in opening sentences and paragraphs of research writing

Management

The concept of meritocracy is one replicated and sustained in much discourse around organisational recruitment, retention and promotion. Women have a firm belief in the concept of merit, believing that hard work, education and talent will in the end be rewarded (McNamee and Miller, 2004). This belief in workplace meritocracy could in part be due to the advertising efforts of employers themselves, who, since the early 1990s, attempt to attract employees through intensive branding programs and aggressive advertising which emphasise equality of opportunity. The statistics, however, are less than convincing, with 2008 data from the Equal Employment for Women in the Workplace agency signalling that women are disproportionately represented in senior management levels compared to men, and that the numbers of women at Chief Executive Officer level in corporate Australia have actually decreased (Equal Opportunity for Women Agency, 2008). Women, it seems, are still unable to shatter the glass ceiling and are consistently overlooked at executive level.

Psychology

Tension-type headache is extremely prevalent and is associated with significant personal and social costs.

Education

One of the major challenges of higher education health programs is developing the cognitive abilities that will assist undergraduate students' clinical decision making. This is achieved by stimulating enquiry analysis, creating independent judgement and developing cognitive skills that are in line with graduate practice (Hollingworth and McLoughlin 2001; Bedard, 1996).

Visual arts

In the East, the traditional idea of the body was not as something separate from the mind. In the West, however, the body is still perceived as separate, as a counterpart of the mind. The body is increasingly at the centre of the changing cultural environment, particularly the increasingly visual culture exemplified by the ubiquity of the image, the emergence of virtual reality, voyeurism and surveillance culture. Within the contemporary visual environment, the body's segregation from the mind has become more intense than ever, conferring upon the body a 'being watched' or 'manufacturable' status, further undermining the sense of the body as an integral part of our being.


Literature review summary

The next step following the research justification in the introduction is the literature review summary statement. This part of the introduction summarises the literature review section of the research proposal, providing a concise statement that signals the field of research and the rationale for the research question or aim.

It can be helpful to think about the literature review element as comprising four parts. The first is a reference to the field or discipline the research will contribute to. The second is a summary of the main questions, approaches or accepted conclusions in your topic area in the field or discipline at present ('what is known'). This summary of existing research acts as a contrast to highlight the significance of the third part, your statement of a 'gap'. The fourth part rephrases this 'gap' in the form of a research question, aim, objective or hypothesis.

For example

Scholars writing about ... (the problem area) in the field of ... (discipline or sub-discipline, part one) have observed that ... ('what is known', part two). Others describe ... ('what is known', part two). A more recent perspective chronicles changes that, in broad outline, parallel those that have occurred in ... ('what is known', part two). This study differs from these approaches in that it considers ... ('gap', research focus, part three). This research draws on ... to consider ... (research objective, part four).  

More information about writing these four parts of the literature review summary is provided below.

1. The 'field' of literature

The field of research is the academic discipline within which your research is situated, and to which it will contribute. Some fields grow out of a single discipline, others are multidisciplinary. The field or discipline is linked to university courses and research, academic journals, conferences and other academic associations, and some book publishers. It also describes the expertise of thesis supervisors and examiners. 

The discipline defines the kinds of approaches, theories, methods and styles of writing adopted by scholars and researchers working within it.

For a list of academic disciplines have a look at the wikipedia site at: https://en.wikipedia.org/wiki/List_of_academic_disciplines

The field or discipline is not the same as the topic of the research. The topic is the subject matter or foci of your research. Disciplines or 'fields' refer to globally recognised areas of research and scholarship.

The field or discipline the research aims to contribute to can be signalled in a few key words within the literature review summary, or possibly earlier within the research justification.

Sentence stems to signal the field of research 

  • Within the field of ... there is now agreement that ... .
  • The field of ... is marked by ongoing debate about ... .
  • Following analysis of ... the field of ... turned to an exploration of ... .

2. A summary of contrasting areas of research or what is 'known'

The newness or significance of what you are doing is typically established in a contrast or dialogue with other research and scholarship. The 'gap' (or hole in the donut) only becomes apparent against the surrounding literature (the donut). Sometimes a contrast is provided to show that you are working in a different area to what has been done before, or to show that you are building on previous work, or perhaps working on an unresolved issue within a discipline. It might also be that the approaches of other disciplines to the same problem area or focus are introduced to highlight a new angle on the topic.

3. The summary of the 'gap' in the literature

The 'gap' in the field typically refers to the explanation provided to support the research question. Questions or objectives grow out of areas of uncertainty, or gaps, in the field of research. In most cases, you will not know what the gap in knowledge is until you have reviewed the literature and written up a good part of the literature review section of the proposal. It is often not possible therefore to confidently write the 'gap' statement until you have done considerable work on the literature review. Once your literature review section is sufficiently developed, you can summarise the missing piece of knowledge in a brief statement in the introduction.

Sentence stems for summarising a 'gap' in the literature

Indicate a gap in the previous research by raising a question about it, or extending previous knowledge in some way:

  • However, there is little information/attention/work/data/research on … .
  • However, few studies/investigations/researchers/attempt to … .

Often steps two and three blend together in the same sentence, as in the sentence stems below.

Sentence stems which both introduce research in the field (what is 'known') and summarise a 'gap'

  • The research has tended to focus on … (introduce existing field foci), rather than on … ('gap').
  • These studies have emphasised that … (introduce what is known), but it remains unclear whether … ('gap').
  • Although considerable research has been devoted to … (introduce field areas), rather less attention has been paid to … ('gap').

The 'significance' of the research

When writing the research proposal, it is useful to think about the research justification and the ‘gap in the literature’ as two distinct conceptual elements, each of which must be established separately. Stating a real world problem or outlining a conceptual or other conundrum or concern is typically not, in itself, enough to justify the research. Similarly, establishing that there is a gap in the literature is often not enough on its own to persuade the reader that the research is important. In the first case, reviewers may still wonder ‘perhaps the problem or concern has already been addressed in the literature’, or, in the second, ‘so little has been done on this focus; perhaps the proposed research is simply not important?’. The proposal will ideally establish both that the research is important, and that it will provide something new to the field of knowledge.

In effect, the research justification and the literature review work together to establish the benefit, contribution or 'significance' of the research. The 'significance' of the research is established not in a statement to be incorporated into the proposal, but as something the first two sections of the proposal work to establish. Research is significant when it pertains to something important, and when it provides new knowledge or insights within a field of knowledge.

4. The research aim or objective

The research aim is usually expressed as a concise statement at the close of the literature review. It may be referred to as an objective, a question or an aim. These terms are often used interchangeably to refer to the focus of the investigation. The research focus is the question at the heart of the research, designed to produce new knowledge. To avoid confusing the reader about the purpose of the research, it is best to express it as either an aim, an objective, or a question. It is also important to frame the aims of the research in a succinct manner – no more than three dot points, say. And the aim/objective/question should be framed in more or less the same way wherever it appears in the proposal. This ensures the research focus is clear.

Language use

Research generally aims to produce knowledge, as opposed to, say, recommendations, policy or social change. Research may support policy or social change, and eventually produce it in some of its applications, but it does not typically produce it directly (with the possible exception of action research). For this reason, aims and objectives are framed in terms of knowledge production, using phrases like:

  • to increase understanding, insight, clarity;
  • to evaluate and critique;
  • to test models, theory, or strategies.

These are all knowledge outcomes that can be achieved within the research process.

Reflecting your social philosophy in the research aim

A well written research aim typically carries within it information about the philosophical approach the research will take, even if the researcher is not themselves aware of it, or if the proposal does not discuss philosophy or social theory at any length. If you are interested in social theory, you might consider framing your aim such that it reflects your philosophical or theoretical approach. Since your philosophical approach reflects your beliefs about how 'valid' knowledge can be gained, and therefore the types of questions you ask, it follows that it will be evident within your statement of the research aim. Researchers, variously, hold that knowledge of the world arises through:

  • observations of phenomena (measurements of what we can see, hear, taste, touch);
  • the interactions between interpreting human subjects and objective phenomena in the world;
  • ideology shaped by power, which we may be unconscious of, and which must be interrogated and replaced with knowledge that reflects people's true interests; 
  • the structure of language and of the unconscious;
  • the play of historical relations between human actions, institutional practices and prevailing discourses;
  • metaphoric and other linguistic relations established within language and text.

The philosophical perspective underpinning your research is then reflected in the research aim. For example, depending upon your philosophical perspective, you may aim to find out about:

  • observable phenomena or facts;
  • shared cultural meanings of practices, rituals, events that determine how objective phenomena are interpreted and experienced;
  • social structures and political ideologies that shape experience and distort authentic or empowered experience;
  • the structure of language;
  • the historical evolution of networks of discursive and extra-discursive practices;
  • emerging or actual phenomena untainted by existing representation.

You might check your aim statement to ensure it reflects the philosophical perspective you claim to adopt in your proposal. Check that there are not contradictions in your philosophical claims and that you are consistent in your approach. For assistance with this you may find the Social philosophy of research resources helpful.

Sentence stems for aims and objectives

  • The purpose of this research project is to … .
  • The purpose of this investigation is to … .
  • The aim of this research project is to … .
  • This study is designed to … .

The next step or key element in the research proposal is the research design. The research design explains how the research aims will be achieved. Within the introduction a summary of the overall research design can make the project more accessible to the reader.

The summary statement of the research design within the introduction might include:

  • the method/s that will be used (interviews, surveys, video observation, diary recording);
  • if the research will be phased, how many phases, and what methods will be used in each phase;
  • brief reference to how the data will be analysed.

The statement of the research design is often the last thing discussed in the research proposal introduction.

NB. It is not necessary to explain that a literature review and a detailed outline of the methods and methodology will follow, because academic readers will assume this.

Title: Aboriginal cultural values and economic sustainability: A case study of agro-forestry in a remote Aboriginal community

Further examples can be found at the end of this topic.

In summary, the introduction contains a problem statement, or explanation of why the research is important to the world, a summary of the literature review, and a summary of the research design. The introduction enables the reviewer, as well as yourself and your supervisory team, to assess the logical connections between the research justification, the 'gap' in the literature, research aim and the research design without getting lost in the detail of the project. In this sense, the introduction serves as a kind of map or abstract of the proposed research as well as of the main body of the research proposal.

The following questions may be useful in assessing your research proposal introduction.

  • Have I clearly signalled the research topic in the key words and phrases used in the first sentence and title of the research proposal?
  • Have I explained why my research matters, and the problem or issue that underlies the research, in the opening sentences, paragraphs and page/s?
  • Have I used literature, examples or other evidence to substantiate my understanding of the key issues?
  • Have I explained the problem in a way that grabs the reader’s attention and concern?
  • Have I indicated the field/s within which my research is situated using key words that are recognised by other scholars?
  • Have I provided a summary of previous research and outlined a 'gap' in the literature?
  • Have I provided a succinct statement of the objectives or aims of my research?
  • Have I provided a summary of the research phases and methods?

This resource was developed by Wendy Bastalich.

How to write a fantastic thesis introduction (+15 examples)

The thesis introduction, usually chapter 1, is one of the most important chapters of a thesis. It sets the scene. It previews key arguments and findings. And it helps the reader to understand the structure of the thesis. In short, a lot is riding on this first chapter. With the following tips, you can write a powerful thesis introduction.

Elements of a fantastic thesis introduction

  • Open with a (personal) story, or begin with a problem.
  • Define a clear research gap.
  • Describe the scientific relevance of the thesis.
  • Describe the societal relevance of the thesis.
  • Write down the thesis’ core claim in 1-2 sentences.
  • Support your argument with sufficient evidence.
  • Consider possible objections.
  • Address the empirical research context.
  • Give a taste of the thesis’ empirical analysis.
  • Hint at the practical implications of the research.
  • Provide a reading guide.
  • Briefly summarise all chapters to come.
  • Design a figure illustrating the thesis structure.

An introductory chapter plays an integral part in every thesis. The first chapter has to include quite a lot of information to contextualise the research. At the same time, a good thesis introduction is not too long, but clear and to the point.

A powerful thesis introduction does the following:

  • It captures the reader’s attention.
  • It presents a clear research gap and emphasises the thesis’ relevance.
  • It provides a compelling argument.
  • It previews the research findings.
  • It explains the structure of the thesis.

In addition, a powerful thesis introduction is well-written, logically structured, and free of grammar and spelling errors.

This list can feel quite overwhelming. However, with some easy tips and tricks, you can accomplish all these goals in your thesis introduction. (And if you struggle to find the right wording, have a look at academic key phrases for introductions.)

Ways to capture the reader’s attention

A powerful thesis introduction should spark the reader’s interest on the first pages. A reader should be enticed to continue reading! There are three common ways to capture the reader’s attention.

An established way to capture the reader’s attention in a thesis introduction is by starting with a story. Regardless of how abstract and ‘scientific’ the actual thesis content is, it can be useful to ease the reader into the topic with a short story.

This story can be, for instance, based on one of your study participants. It can also be a very personal account of one of your own experiences, which drew you to study the thesis topic in the first place.

Start by providing data or statistics

Data and statistics are another established way to immediately draw in your reader. Especially surprising or shocking numbers can highlight the importance of a thesis topic in the first few sentences!

So if your thesis topic lends itself to being kick-started with data or statistics, you are in for a quick and easy way to write a memorable thesis introduction.

The third established way to capture the reader’s attention is by starting with the problem that underlies your thesis. It is advisable to keep the problem simple. A few sentences at the start of the chapter should suffice.

Usually, at a later stage in the introductory chapter, it is common to go more in-depth, describing the research problem (and its scientific and societal relevance) in more detail.

You may also like: Minimalist writing for a better thesis

Emphasising the thesis’ relevance

A good thesis is a relevant thesis. No one wants to read about a concept that has already been explored hundreds of times, or that no one cares about.

Of course, a thesis heavily relies on the work of other scholars. However, each thesis is – and should be – unique. If you want to write a fantastic thesis introduction, your job is to point out this uniqueness!

In academic research, a research gap signifies a research area or research question that has not been explored yet, that has been insufficiently explored, or whose insights and findings are outdated.

Every thesis needs a crystal-clear research gap. Spell it out instead of letting your reader figure out why your thesis is relevant.

* This example has been taken from an actual academic paper on toxic behaviour in online games: Liu, J. and Agur, C. (2022). “After All, They Don’t Know Me”: Exploring the Psychological Mechanisms of Toxic Behavior in Online Games. Games and Culture, 1–24. doi: 10.1177/15554120221115397

The scientific relevance of a thesis highlights the importance of your work in terms of advancing theoretical insights on a topic. You can think of this part as your contribution to the (international) academic literature.

Scientific relevance comes in different forms. For instance, you can critically assess a prominent theory explaining a specific phenomenon. Maybe something is missing? Or you can develop a novel framework that combines different frameworks used by other scholars. Or you can draw attention to the context-specific nature of a phenomenon that is discussed in the international literature.

The societal relevance of a thesis highlights the importance of your research in more practical terms. You can think of this part as your contribution beyond theoretical insights and academic publications.

Why are your insights useful? Who can benefit from your insights? How can your insights improve existing practices?

Formulating a compelling argument

Arguments are sets of reasons supporting an idea, which – in academia – often integrate theoretical and empirical insights. Think of an argument as an umbrella statement, or core claim. It should be no longer than one or two sentences.

Including an argument in the introduction of your thesis may seem counterintuitive. After all, the reader will be introduced to your core claim before reading all the chapters of your thesis that led you to this claim in the first place.

But rest assured: A clear argument at the start of your thesis introduction is a sign of a good thesis. It works like a movie teaser to generate interest. And it helps the reader to follow your subsequent line of argumentation.

The core claim of your thesis should be accompanied by sufficient evidence. This does not mean that you have to write 10 pages about your results at this point.

However, you do need to show the reader that your claim is credible and legitimate because of the work you have done.

A good argument already anticipates possible objections. Not everyone will agree with your core claim. Therefore, it is smart to think ahead. What criticism can you expect?

Think about reasons or opposing positions that people can come up with to disagree with your claim. Then, try to address them head-on.

Providing a captivating preview of findings

Similar to presenting a compelling argument, a fantastic thesis introduction also previews some of the findings. When reading an introduction, the reader wants to learn a bit more about the research context. Furthermore, a reader should get a taste of the type of analysis that will be conducted. And lastly, a hint at the practical implications of the findings encourages the reader to read until the end.

If you focus on a specific empirical context, make sure to provide some information about it. The empirical context could be, for instance, a country, an island, a school or a city. Make sure the reader understands why you chose this context for your research, and why it fits your research objective.

If you did all your research in a lab, this section is obviously irrelevant. However, in that case you should explain the setup of your experiment, and so on.

The empirical part of your thesis centers around the collection and analysis of information. What information, and what evidence, did you generate? And what are some of the key findings?

For instance, you can provide a short summary of the different research methods that you used to collect data, followed by a short overview of how you analysed the data and some of the key findings. The reader needs to understand why your empirical analysis is worth reading.

You already highlighted the practical relevance of your thesis in the introductory chapter. However, you should also provide a preview of some of the practical implications that you will develop in your thesis based on your findings.

Presenting a crystal clear thesis structure

A fantastic thesis introduction helps the reader to understand the structure and logic of your whole thesis. This is probably the easiest part to write in a thesis introduction. However, this part can be best written at the very end, once everything else is ready.

A reading guide is an essential part in a thesis introduction! Usually, the reading guide can be found toward the end of the introductory chapter.

The reading guide basically tells the reader what to expect in the chapters to come.

In a longer thesis, such as a PhD thesis, it can be smart to provide a summary of each chapter to come. Think of a paragraph for each chapter, almost in the form of an abstract.

For shorter theses, which also have a shorter introduction, this step is not necessary.

Especially for longer theses, it tends to be a good idea to design a simple figure that illustrates the structure of your thesis. It helps the reader to better grasp the logic of your thesis.

Six Approaches to Justify Sample Sizes

Daniël Lakens; Sample Size Justification. Collabra: Psychology 5 January 2022; 8 (1): 33267. doi: https://doi.org/10.1525/collabra.33267

An important step when designing an empirical study is to justify the sample size that will be collected. The key aim of a sample size justification for such studies is to explain how the collected data is expected to provide valuable information given the inferential goals of the researcher. In this overview article six approaches are discussed to justify the sample size in a quantitative empirical study: 1) collecting data from (almost) the entire population, 2) choosing a sample size based on resource constraints, 3) performing an a-priori power analysis, 4) planning for a desired accuracy, 5) using heuristics, or 6) explicitly acknowledging the absence of a justification. An important question to consider when justifying sample sizes is which effect sizes are deemed interesting, and the extent to which the data that is collected informs inferences about these effect sizes. Depending on the sample size justification chosen, researchers could consider 1) what the smallest effect size of interest is, 2) which minimal effect size will be statistically significant, 3) which effect sizes they expect (and what they base these expectations on), 4) which effect sizes would be rejected based on a confidence interval around the effect size, 5) which ranges of effects a study has sufficient power to detect based on a sensitivity power analysis, and 6) which effect sizes are expected in a specific research area. Researchers can use the guidelines presented in this article, for example by using the interactive form in the accompanying online Shiny app, to improve their sample size justification, and hopefully, align the informational value of a study with their inferential goals.

Scientists perform empirical studies to collect data that helps to answer a research question. The more data that is collected, the more informative the study will be with respect to its inferential goals. A sample size justification should consider how informative the data will be given an inferential goal, such as estimating an effect size, or testing a hypothesis. Even though a sample size justification is sometimes requested in manuscript submission guidelines, when submitting a grant to a funder, or submitting a proposal to an ethical review board, the number of observations is often simply stated, but not justified. This makes it difficult to evaluate how informative a study will be. To prevent such concerns from emerging when it is too late (e.g., after a non-significant hypothesis test has been observed), researchers should carefully justify their sample size before data is collected.

Researchers often find it difficult to justify their sample size (i.e., a number of participants, observations, or any combination thereof). In this review article six possible approaches are discussed that can be used to justify the sample size in a quantitative study (see Table 1). This is not an exhaustive overview, but it includes the most common and applicable approaches for single studies.¹ The first justification is that data from (almost) the entire population has been collected. The second justification centers on resource constraints, which are almost always present, but rarely explicitly evaluated. The third and fourth justifications are based on a desired statistical power or a desired accuracy. The fifth justification relies on heuristics, and finally, researchers can choose a sample size without any justification. Each of these justifications can be stronger or weaker depending on which conclusions researchers want to draw from the data they plan to collect.

All of these approaches to the justification of sample sizes, even the ‘no justification’ approach, give others insight into the reasons that led to the decision for a sample size in a study. It should not be surprising that the ‘heuristics’ and ‘no justification’ approaches are often unlikely to impress peers. However, it is important to note that the value of the information that is collected depends on the extent to which the final sample size allows a researcher to achieve their inferential goals, and not on the sample size justification that is chosen.

The extent to which these approaches make other researchers judge the data that is collected as informative depends on the details of the question a researcher aimed to answer and the parameters they chose when determining the sample size for their study. For example, a badly performed a-priori power analysis can quickly lead to a study with very low informational value. These six justifications are not mutually exclusive, and multiple approaches can be considered when designing a study.

The informativeness of the data that is collected depends on the inferential goals a researcher has, or in some cases, the inferential goals scientific peers will have. A shared feature of the different inferential goals considered in this review article is the question of which effect sizes a researcher considers meaningful to distinguish. This implies that researchers need to evaluate which effect sizes they consider interesting. These evaluations rely on a combination of statistical properties and domain knowledge. In Table 2, six possibly useful considerations are provided. This is not intended to be an exhaustive overview, but it presents common and useful approaches that can be applied in practice. Not all evaluations are equally relevant for all types of sample size justifications. The online Shiny app accompanying this manuscript provides researchers with an interactive form that guides researchers through the considerations for a sample size justification. These considerations often rely on the same information (e.g., effect sizes, the number of observations, the standard deviation, etc.), so these six considerations should be seen as a set of complementary approaches that can be used to evaluate which effect sizes are of interest.

To start, researchers should consider what their smallest effect size of interest is. Second, although only relevant when performing a hypothesis test, researchers should consider which effect sizes could be statistically significant given a choice of an alpha level and sample size. Third, it is important to consider the (range of) effect sizes that are expected. This requires a careful consideration of the source of this expectation and the presence of possible biases in these expectations. Fourth, it is useful to consider the width of the confidence interval around possible values of the effect size in the population, and whether we can expect this confidence interval to reject effects we considered a-priori plausible. Fifth, it is worth evaluating the power of the test across a wide range of possible effect sizes in a sensitivity power analysis. Sixth, a researcher can consider the effect size distribution of related studies in the literature.
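The second consideration above, the smallest effect size that could be statistically significant given the alpha level and sample size, can be computed directly. The sketch below is illustrative only (the helper name `critical_d` is ours, not from the article): it assumes a two-sided, two-sample design with equal group sizes and uses a normal approximation to the t-test, so the exact t-based values would be slightly larger at small n.

```python
from math import sqrt
from statistics import NormalDist

def critical_d(n_per_group: int, alpha: float = 0.05) -> float:
    """Smallest standardized effect size (Cohen's d) that would reach
    statistical significance in a two-sided, two-sample test with
    n_per_group observations per group (normal approximation)."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    # Standard error of a standardized mean difference: sqrt(1/n + 1/n)
    return z_crit * sqrt(2 / n_per_group)

for n in (20, 50, 100, 200):
    print(f"n = {n:>3} per group -> smallest significant d ~ {critical_d(n):.3f}")
```

Any observed effect smaller than this value cannot be significant at the chosen alpha level, which makes the calculation a quick sanity check on whether a planned sample size can even detect the effects one cares about.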

Since all scientists are faced with resource limitations, they need to balance the cost of collecting each additional datapoint against the increase in information that datapoint provides. This is referred to as the value of information (Eckermann et al., 2010). Calculating the value of information is notoriously difficult (Detsky, 1990). Researchers need to specify the cost of collecting data, and weigh the costs of data collection against the increase in utility that having access to the data provides. From a value of information perspective not every data point that can be collected is equally valuable (J. Halpern et al., 2001; Wilson, 2015). Whenever additional observations do not change inferences in a meaningful way, the costs of data collection can outweigh the benefits.

The value of additional information will in most cases be a non-monotonic function, especially when it depends on multiple inferential goals. A researcher might be interested in comparing an effect against a previously observed large effect in the literature, a theoretically predicted medium effect, and the smallest effect that would be practically relevant. In such a situation the expected value of sampling information will lead to different optimal sample sizes for each inferential goal. It could be valuable to collect informative data about a large effect, with additional data having less (or even a negative) marginal utility, up to a point where the data becomes increasingly informative about a medium effect size, with the value of sampling additional information decreasing once more until the study becomes increasingly informative about the presence or absence of a smallest effect of interest.

Because of the difficulty of quantifying the value of information, scientists typically use less formal approaches to justify the amount of data they set out to collect in a study. Even though the cost-benefit analysis is not always made explicit in reported sample size justifications, the value of information perspective is almost always implicitly the underlying framework that sample size justifications are based on. Throughout the subsequent discussion of sample size justifications, the importance of considering the value of information given inferential goals will repeatedly be highlighted.

Measuring (Almost) the Entire Population

In some instances it might be possible to collect data from (almost) the entire population under investigation. For example, researchers might use census data, collect data from all employees at a firm, or study a small population of top athletes. Whenever it is possible to measure the entire population, the sample size justification becomes straightforward: the researcher used all the data that is available.

Resource Constraints

A common reason for the number of observations in a study is that resource constraints limit the amount of data that can be collected at a reasonable cost (Lenth, 2001) . In practice, sample sizes are always limited by the resources that are available. Researchers practically always have resource limitations, and therefore even when resource constraints are not the primary justification for the sample size in a study, it is always a secondary justification.

Despite the omnipresence of resource limitations, the topic often receives little attention in texts on experimental design (for an example of an exception, see Bulus and Dong (2021)). This might make it feel like acknowledging resource constraints is not appropriate, but the opposite is true: Because resource limitations always play a role, a responsible scientist carefully evaluates resource constraints when designing a study. Resource constraint justifications are based on a trade-off between the costs of data collection, and the value of having access to the information the data provides. Even if researchers do not explicitly quantify this trade-off, it is revealed in their actions. For example, researchers rarely spend all the resources they have on a single study. Given resource constraints, researchers are confronted with an optimization problem of how to spend resources across multiple research questions.

Time and money are two resource limitations all scientists face. A PhD student has a limited amount of time to complete a PhD thesis, and is typically expected to complete multiple research lines in this time. In addition to time limitations, researchers have limited financial resources that often directly influence how much data can be collected. A third limitation in some research lines is that there might simply be a very small number of individuals from whom data can be collected, such as when studying patients with a rare disease. A resource constraint justification puts limited resources at the center of the justification for the sample size that will be collected, and starts with the resources a scientist has available. These resources are translated into an expected number of observations (N) that a researcher expects they will be able to collect with a given amount of money in a given time. The challenge is to evaluate whether collecting N observations is worthwhile. How do we decide if a study will be informative, and when should we conclude that data collection is not worthwhile?

When evaluating whether resource constraints make data collection uninformative, researchers need to explicitly consider which inferential goals they have when collecting data (Parker & Berman, 2003) . Having data always provides more knowledge about the research question than not having data, so in an absolute sense, all data that is collected has value. However, it is possible that the benefits of collecting the data are outweighed by the costs of data collection.

It is most straightforward to evaluate whether data collection has value when we know for certain that someone will make a decision, with or without data. In such situations any additional data will reduce the error rates of a well-calibrated decision process, even if only ever so slightly. For example, without data we will not perform better than a coin flip if we guess which of two conditions has a higher true mean score on a measure. With some data, we can perform better than a coin flip by picking the condition that has the highest mean. With a small amount of data we would still very likely make a mistake, but the error rate is smaller than without any data. In these cases, the value of information might be positive, as long as the reduction in error rates is more beneficial than the cost of data collection.
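This coin-flip intuition can be checked with a small simulation. The sketch below is illustrative and not from the article: it assumes normally distributed scores with a true standardized mean difference of 0.3 (an arbitrary choice), and estimates how often the simple pick-the-higher-sample-mean rule chooses the wrong condition at various sample sizes. The function name and parameter values are ours.

```python
import random

def guess_error_rate(n: int, true_diff: float = 0.3,
                     trials: int = 10000, seed: int = 1) -> float:
    """Estimate how often we wrongly pick the lower-mean condition when we
    simply choose whichever condition shows the higher sample mean."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        # Condition A has the lower true mean (0), condition B the higher one.
        mean_a = sum(rng.gauss(0.0, 1.0) for _ in range(n)) / n
        mean_b = sum(rng.gauss(true_diff, 1.0) for _ in range(n)) / n
        if mean_a >= mean_b:  # picking A would be a mistake
            errors += 1
    return errors / trials

print("n =  0 -> error rate 0.500 (a coin flip)")
for n in (5, 20, 80):
    print(f"n = {n:>2} -> error rate ~ {guess_error_rate(n):.3f}")
```

Even a handful of observations per condition beats the coin flip, and the error rate keeps shrinking as n grows, which is exactly the sense in which any data has positive value in a forced-decision setting.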

Another way in which a small dataset can be valuable is if its existence eventually makes it possible to perform a meta-analysis (Maxwell & Kelley, 2011) . This argument in favor of collecting a small dataset requires 1) that researchers share the data in a way that a future meta-analyst can find it, and 2) that there is a decent probability that someone will perform a high-quality meta-analysis that will include this data in the future (S. D. Halpern et al., 2002) . The uncertainty about whether there will ever be such a meta-analysis should be weighed against the costs of data collection.

One way to increase the probability of a future meta-analysis is if researchers commit to performing this meta-analysis themselves, by combining several studies they have performed into a small-scale meta-analysis (Cumming, 2014) . For example, a researcher might plan to repeat a study for the next 12 years in a class they teach, with the expectation that after 12 years a meta-analysis of 12 studies would be sufficient to draw informative inferences (but see ter Schure and Grünwald (2019) ). If it is not plausible that a researcher will collect all the required data by themselves, they can attempt to set up a collaboration where fellow researchers in their field commit to collecting similar data with identical measures. If it is not likely that sufficient data will emerge over time to reach the inferential goals, there might be no value in collecting the data.

Even if a researcher believes it is worth collecting data because a future meta-analysis will be performed, they will most likely perform a statistical test on the data. To make sure their expectations about the results of such a test are well-calibrated, it is important to consider which effect sizes are of interest, and to perform a sensitivity power analysis to evaluate the probability of a Type II error for effects of interest. Of the six ways to evaluate which effect sizes are interesting that will be discussed in the second part of this review, it is useful to consider the smallest effect size that can be statistically significant, the expected width of the confidence interval around the effect size, and effects that can be expected in a specific research area, and to evaluate the power for these effect sizes in a sensitivity power analysis. If a decision or claim is made, a compromise power analysis is worth considering when deciding upon the error rates while planning the study. When reporting a resource constraints sample size justification it is recommended to address the five considerations in Table 3. Addressing these points explicitly makes it easier to evaluate whether the data are worth collecting. An interactive form that implements the recommendations in this manuscript can be found at https://shiny.ieis.tue.nl/sample_size_justification/ .

A-priori Power Analysis

When designing a study where the goal is to test whether a statistically significant effect is present, researchers often want to make sure their sample size is large enough to prevent erroneous conclusions for a range of effect sizes they care about. In this approach to justifying a sample size, the value of information lies in collecting observations up to the point that the probability of an erroneous inference is, in the long run, no larger than a desired value. If a researcher performs a hypothesis test, there are four possible outcomes:

A false positive (or Type I error), determined by the α level. A test yields a significant result, even though the null hypothesis is true.

A false negative (or Type II error), determined by β, or 1 - power. A test yields a non-significant result, even though the alternative hypothesis is true.

A true negative, determined by 1 - α. A test yields a non-significant result when the null hypothesis is true.

A true positive, determined by 1 - β. A test yields a significant result when the alternative hypothesis is true.

Given a specified effect size, alpha level, and desired power, an a-priori power analysis can be used to calculate the number of observations required to achieve those error rates. 3 Figure 1 illustrates how statistical power increases as the number of observations (per group) increases in an independent t test with a two-sided alpha level of 0.05. If we are interested in detecting an effect of d = 0.5, a sample size of 90 per condition would give us more than 90% power. Statistical power can be computed to determine the number of participants, or the number of items (Westfall et al., 2014), but can also be computed for single case studies (Ferron & Onghena, 1996; McIntosh & Rittmo, 2020).

[Figure 1]
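
The power curve in Figure 1 can be reproduced with the power.t.test function in base R. As a check, the following sketch computes the power for 90 observations per group, and then solves for the sample size that achieves 90% power:

```r
# Power for n = 90 per group, d = 0.5, two-sided alpha = 0.05
power.t.test(n = 90, delta = 0.5, sd = 1, sig.level = 0.05,
             type = "two.sample", alternative = "two.sided")$power
# just over 0.90, consistent with Figure 1

# Solve for the sample size per group that achieves 90% power
power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.90)$n
# about 85, so 86 observations per group after rounding up
```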

Although it is common to set the Type I error rate to 5% and aim for 80% power, error rates should be justified (Lakens, Adolfi, et al., 2018). As explained in the section on compromise power analysis, the default recommendation to aim for 80% power lacks a solid justification. In general, the lower the error rates (and thus the higher the power), the more informative a study will be, but the more resources are required. Researchers should carefully weigh the costs of increasing the sample size against the benefits of lower error rates; doing so would probably make studies designed to achieve 90% or 95% power more common in articles reporting a single study. An additional consideration is whether the researcher plans to publish an article consisting of a set of replication and extension studies. In that case the probability of observing multiple Type I errors will be very low, but the probability of observing mixed results even when there is a true effect increases (Lakens & Etz, 2017), which is also a reason to aim for studies with low Type II error rates, perhaps even by slightly increasing the alpha level for each individual study.

Figure 2 visualizes two distributions. The left distribution (dashed line) is centered at 0. This is a model for the null hypothesis. If the null hypothesis is true a statistically significant result will be observed if the effect size is extreme enough (in a two-sided test either in the positive or negative direction), but any significant result would be a Type I error (the dark grey areas under the curve). Formally, statistical power for a null hypothesis significance test is undefined when there is no true effect. Any significant effects observed when the null hypothesis is true are Type I errors, or false positives, which occur at the chosen alpha level. The right distribution (solid line) is centered on an effect of d = 0.5. This is the specified model for the alternative hypothesis in this study, illustrating the expectation of an effect of d = 0.5 if the alternative hypothesis is true. Even though there is a true effect, studies will not always find a statistically significant result. This happens when, due to random variation, the observed effect size is too close to 0 to be statistically significant. Such results are false negatives (the light grey area under the curve on the right). To increase power, we can collect a larger sample size. As the sample size increases, the distributions become narrower, reducing the probability of a Type II error. 4

[Figure 2]

It is important to highlight that the goal of an a-priori power analysis is not to achieve sufficient power for the true effect size. The true effect size is unknown. The goal of an a-priori power analysis is to achieve sufficient power, given a specific assumption of the effect size a researcher wants to detect. Just like a Type I error rate is the maximum probability of making a Type I error conditional on the assumption that the null hypothesis is true, an a-priori power analysis is computed under the assumption of a specific effect size. It is unknown if this assumption is correct. All a researcher can do is to make sure their assumptions are well justified. Statistical inferences based on a test where the Type II error rate is controlled are conditional on the assumption of a specific effect size. They allow the inference that, assuming the true effect size is at least as large as that used in the a-priori power analysis, the maximum Type II error rate in a study is not larger than a desired value.

This point is perhaps best illustrated if we consider a study where an a-priori power analysis is performed both for a test of the presence of an effect and for a test of the absence of an effect. When designing a study, it is essential to consider the possibility that there is no effect (e.g., a mean difference of zero). An a-priori power analysis can be performed both for a null hypothesis significance test and for a test of the absence of a meaningful effect, such as an equivalence test that can statistically provide support for the null hypothesis by rejecting the presence of effects that are large enough to matter (Lakens, 2017; Meyners, 2012; Rogers et al., 1993). When multiple primary tests will be performed based on the same sample, each analysis requires a dedicated sample size justification. If possible, a sample size is collected that guarantees that all tests are informative, which means that the collected sample size is based on the largest sample size returned by any of the a-priori power analyses.

For example, if the goal of a study is to detect or reject an effect size of d = 0.4 with 90% power, and the alpha level is set to 0.05 for a two-sided independent t test, a researcher would need to collect 133 participants in each condition for an informative null hypothesis test, and 136 participants in each condition for an informative equivalence test. Therefore, the researcher should aim to collect 272 participants in total for an informative result for both tests that are planned. This does not guarantee a study has sufficient power for the true effect size (which can never be known), but it guarantees the study has sufficient power given an assumption of the effect a researcher is interested in detecting or rejecting. Therefore, an a-priori power analysis is useful, as long as a researcher can justify the effect sizes they are interested in.
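
The sample size for the null hypothesis test in this example can be reproduced with base R (the equivalence test requires dedicated software, such as the TOSTER package, and is not sketched here):

```r
# Observations per group to detect d = 0.4 with 90% power,
# two-sided alpha = 0.05, independent t test
ceiling(power.t.test(delta = 0.4, sd = 1, sig.level = 0.05,
                     power = 0.90)$n)
# 133 per condition, as reported in the text
```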

If researchers correct the alpha level when testing multiple hypotheses, the a-priori power analysis should be based on this corrected alpha level. For example, if four tests are performed, an overall Type I error rate of 5% is desired, and a Bonferroni correction is used, the a-priori power analysis should be based on a corrected alpha level of .0125.
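
As a sketch (the effect of d = 0.5 and the desired power of 90% are assumptions for illustration), the corrected alpha level is simply entered in the power analysis:

```r
# Sample size per group without, and with, a Bonferroni-corrected alpha
# (four tests, desired overall Type I error rate of 5%)
n_uncorrected <- power.t.test(delta = 0.5, sig.level = 0.05,
                              power = 0.90)$n
n_corrected   <- power.t.test(delta = 0.5, sig.level = 0.05 / 4,
                              power = 0.90)$n
ceiling(c(n_uncorrected, n_corrected))
# the corrected alpha level requires a larger sample size per group
```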

An a-priori power analysis can be performed analytically, or by performing computer simulations. Analytic solutions are faster, but less flexible. A common challenge researchers face when attempting to perform power analyses for more complex or uncommon tests is that available software does not offer analytic solutions. In these cases, simulations can provide a flexible solution to perform power analyses for any test (Morris et al., 2019). The following code is an example of a power analysis in R based on 10,000 simulations for a one-sample t test against zero for a sample size of 20, assuming a true effect of d = 0.5. Each simulation consists of first randomly generating data based on assumptions about the data generating mechanism (e.g., a normal distribution with a mean of 0.5 and a standard deviation of 1), followed by a test performed on the data. By computing the percentage of significant results, power can be computed for any design.

p <- numeric(10000) # to store p-values
for (i in 1:10000) { # simulate 10,000 tests
  x <- rnorm(n = 20, mean = 0.5, sd = 1) # simulate data
  p[i] <- t.test(x)$p.value # store p-value
}
sum(p < 0.05) / 10000 # compute power

There is a wide range of tools available to perform power analyses. Whichever tool a researcher decides to use, it will take time to learn how to use the software correctly to perform a meaningful a-priori power analysis. Resources to educate psychologists about power analysis consist of book-length treatments (Aberson, 2019; Cohen, 1988; Julious, 2004; Murphy et al., 2014), general introductions (Baguley, 2004; Brysbaert, 2019; Faul et al., 2007; Maxwell et al., 2008; Perugini et al., 2018), and an increasing number of applied tutorials for specific tests (Brysbaert & Stevens, 2018; DeBruine & Barr, 2019; P. Green & MacLeod, 2016; Kruschke, 2013; Lakens & Caldwell, 2021; Schoemann et al., 2017; Westfall et al., 2014). It is important to be trained in the basics of power analysis, and it can be extremely beneficial to learn how to perform simulation-based power analyses. At the same time, it is often recommended to enlist the help of an expert, especially when a researcher lacks experience with a power analysis for a specific test.

When reporting an a-priori power analysis, make sure that the power analysis is completely reproducible. If power analyses are performed in R it is possible to share the analysis script and information about the version of the package. In many software packages it is possible to export the power analysis that is performed as a PDF file. For example, in G*Power analyses can be exported under the ‘protocol of power analysis’ tab. If the software package provides no way to export the analysis, add a screenshot of the power analysis to the supplementary files.

[Figure 3]

The reproducible report needs to be accompanied by justifications for the choices that were made with respect to the values used in the power analysis. If the effect size used in the power analysis is based on previous research the factors presented in Table 5 (if the effect size is based on a meta-analysis) or Table 6 (if the effect size is based on a single study) should be discussed. If an effect size estimate is based on the existing literature, provide a full citation, and preferably a direct quote from the article where the effect size estimate is reported. If the effect size is based on a smallest effect size of interest, this value should not just be stated, but justified (e.g., based on theoretical predictions or practical implications, see Lakens, Scheel, and Isager (2018)). For an overview of all aspects that should be reported when describing an a-priori power analysis, see Table 4.

Planning for Precision

Some researchers have suggested justifying sample sizes based on a desired level of precision of the estimate (Cumming & Calin-Jageman, 2016; Kruschke, 2018; Maxwell et al., 2008). The goal when justifying a sample size based on precision is to collect data to achieve a desired width of the confidence interval around a parameter estimate. The width of the confidence interval around the parameter estimate depends on the standard deviation and the number of observations. The only aspects a researcher needs to justify for a sample size justification based on accuracy are the desired width of the confidence interval with respect to their inferential goal, and their assumption about the population standard deviation of the measure.

If a researcher has determined the desired accuracy, and has a good estimate of the true standard deviation of the measure, it is straightforward to calculate the sample size needed for a desired level of accuracy. For example, when measuring the IQ of a group of individuals a researcher might desire to estimate the IQ score within an error range of 2 IQ points for 95% of the observed means, in the long run. The required sample size to achieve this desired level of accuracy (assuming normally distributed data) can be computed by:

N = (z × sd / error)^2

where N is the number of observations, z is the critical value related to the desired confidence interval, sd is the standard deviation of IQ scores in the population, and error is the width of the confidence interval within which the mean should fall, with the desired error rate. In this example, (1.96 × 15 / 2)^2 = 216.1 observations. If a researcher desires 95% of the means to fall within a 2 IQ point range around the true population mean, 217 observations should be collected. If a desired accuracy for a non-zero mean difference is computed, accuracy is based on a non-central t-distribution. For these calculations an expected effect size estimate needs to be provided, but it has relatively little influence on the required sample size (Maxwell et al., 2008). It is also possible to incorporate uncertainty about the observed effect size in the sample size calculation, known as assurance (Kelley & Rausch, 2006). The MBESS package in R provides functions to compute sample sizes for a wide range of tests (Kelley, 2007).
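
In base R this computation reads:

```r
z     <- qnorm(1 - 0.05 / 2)   # 1.96 for a 95% confidence interval
sd_iq <- 15                    # population standard deviation of IQ scores
error <- 2                     # estimate the mean within 2 IQ points
N <- (z * sd_iq / error)^2     # 216.1
ceiling(N)                     # collect 217 observations
```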

What is less straightforward is to justify how a desired level of accuracy is related to inferential goals. There is no literature that helps researchers to choose a desired width of the confidence interval. Morey (2020) convincingly argues that most practical use-cases of planning for precision involve an inferential goal of distinguishing an observed effect from other effect sizes (for a Bayesian perspective, see Kruschke (2018)). For example, a researcher might expect an effect size of r = 0.4 and would treat observed correlations that differ more than 0.2 (i.e., 0.2 < r < 0.6) differently, in that effects of r = 0.6 or larger are considered too large to be caused by the assumed underlying mechanism (Hilgard, 2021), while effects smaller than r = 0.2 are considered too small to support the theoretical prediction. If the goal is indeed to get an effect size estimate that is precise enough so that two effects can be differentiated with high probability, the inferential goal is actually a hypothesis test, which requires designing a study with sufficient power to reject effects (e.g., testing a range prediction of correlations between 0.2 and 0.6).

If researchers do not want to test a hypothesis, for example because they prefer an estimation approach over a testing approach, then in the absence of clear guidelines that help researchers to justify a desired level of precision, one solution might be to rely on a generally accepted norm of precision to aim for. This norm could be based on ideas about a certain resolution below which measurements in a research area no longer lead to noticeably different inferences. Just as researchers normatively use an alpha level of 0.05, they could plan studies to achieve a desired confidence interval width around the observed effect that is determined by a norm. Future work is needed to help researchers choose a confidence interval width when planning for accuracy.

Heuristics

When a researcher uses a heuristic, they are not able to justify their sample size themselves, but they trust in a sample size recommended by some authority. When I started as a PhD student in 2005 it was common to collect 15 participants in each between-subjects condition. When asked why this was common practice, no one was really sure, but people trusted there was a justification somewhere in the literature. Now, I realize there was no justification for the heuristics we used. As Berkeley (1735) already observed: “Men learn the elements of science from others: And every learner hath a deference more or less to authority, especially the young learners, few of that kind caring to dwell long upon principles, but inclining rather to take them upon trust: And things early admitted by repetition become familiar: And this familiarity at length passeth for evidence.”

Some papers provide researchers with simple rules of thumb about the sample size that should be collected. Such papers clearly fill a need, and are cited a lot, even when the advice in these articles is flawed. For example, Wilson VanVoorhis and Morgan (2007) translate an absolute minimum of 50 + 8m observations for regression analyses (where m is the number of predictors) suggested by a rule of thumb examined in S. B. Green (1991) into the recommendation to collect ~50 observations. Green actually concludes in his article that “In summary, no specific minimum number of subjects or minimum ratio of subjects-to-predictors was supported”. He does discuss how a general rule of thumb of N = 50 + 8m provided an accurate minimum number of observations for the ‘typical’ study in the social sciences because these have a ‘medium’ effect size, as Green claims by citing Cohen (1988). Cohen actually did not claim that the typical study in the social sciences has a ‘medium’ effect size, and instead said (1988, p. 13): “Many effects sought in personality, social, and clinical-psychological research are likely to be small effects as here defined”. We see how a string of mis-citations eventually leads to a misleading rule of thumb.

Rules of thumb seem to primarily emerge due to mis-citations and/or overly simplistic recommendations. Simmons, Nelson, and Simonsohn (2011) recommended that “Authors must collect at least 20 observations per cell”. A later recommendation by the same authors presented at a conference suggested to use n > 50, unless you study large effects (Simmons et al., 2013). Regrettably, this advice is now often mis-cited as a justification to collect no more than 50 observations per condition without considering the expected effect size. If authors justify a specific sample size (e.g., n = 50) based on a general recommendation in another paper, either they are mis-citing the paper, or the paper they are citing is flawed.

Another common heuristic is to collect the same number of observations as were collected in a previous study. This strategy is not recommended in scientific disciplines with widespread publication bias, and/or where novel and surprising findings from largely exploratory single studies are published. Using the same sample size as a previous study is only a valid approach if the sample size justification in the previous study also applies to the current study. Instead of stating that you intend to collect the same sample size as an earlier study, repeat the sample size justification, and update it in light of any new information (such as the effect size in the earlier study, see Table 6 ).

Peer reviewers and editors should carefully scrutinize sample size justifications based on rules of thumb, because such justifications can make it seem like a study has high informational value for an inferential goal even when the study will yield uninformative results. Whenever one encounters a sample size justification based on a heuristic, ask yourself: ‘Why is this heuristic used?’ It is important to know what the logic behind a heuristic is to determine whether the heuristic is valid for a specific situation. In most cases, heuristics are based on weak logic, and not widely applicable. That said, fields might develop valid heuristics for sample size justifications. For example, it is possible that a research area reaches widespread agreement that effects smaller than d = 0.3 are too small to be of interest, and all studies in a field use sequential designs (see below) that have 90% power to detect a d = 0.3. Alternatively, it is possible that a field agrees that data should be collected with a desired level of accuracy, irrespective of the true effect size. In these cases, valid heuristics would exist based on generally agreed goals of data collection. For example, Simonsohn (2015) suggests designing replication studies with sample sizes 2.5 times as large as the original study, as this provides 80% power for an equivalence test against an equivalence bound set to the effect the original study had 33% power to detect, assuming the true effect size is 0. As original authors typically do not specify which effect size would falsify their hypothesis, the heuristic underlying this ‘small telescopes’ approach is a good starting point for a replication study with the inferential goal to reject the presence of an effect as large as was described in an earlier publication. It is the responsibility of researchers to gain the knowledge to distinguish valid heuristics from mindless heuristics, and to be able to evaluate whether a heuristic will yield an informative result given the inferential goal of the researchers in a specific study.
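
The effect size an original study had 33% power to detect can be computed by solving power.t.test for the effect size. The following sketch assumes a hypothetical original study with 20 observations per group:

```r
n_orig <- 20  # assumed sample size per group in the original study

# Effect size the original study had 33% power to detect
# (power.t.test solves for delta when delta is left unspecified)
d_33 <- power.t.test(n = n_orig, power = 1/3, sig.level = 0.05)$delta

# Small telescopes: a replication sample 2.5 times the original
n_replication <- ceiling(2.5 * n_orig)
```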

No Justification

It might sound like a contradictio in terminis , but it is useful to distinguish a final category where researchers explicitly state they do not have a justification for their sample size. Perhaps the resources were available to collect more data, but they were not used. A researcher could have performed a power analysis, or planned for precision, but they did not. In those cases, instead of pretending there was a justification for the sample size, honesty requires you to state there is no sample size justification. This is not necessarily bad. It is still possible to discuss the smallest effect size of interest, the minimal statistically detectable effect, the width of the confidence interval around the effect size, and to plot a sensitivity power analysis, in relation to the sample size that was collected. If a researcher truly had no specific inferential goals when collecting the data, such an evaluation can perhaps be performed based on reasonable inferential goals peers would have when they learn about the existence of the collected data.

Do not try to spin a story where it looks like a study was highly informative when it was not. Instead, transparently evaluate how informative the study was given effect sizes that were of interest, and make sure that the conclusions follow from the data. The lack of a sample size justification might not be problematic, but it might mean that a study was not informative for most effect sizes of interest, which makes it especially difficult to interpret non-significant effects, or estimates with large uncertainty.

The inferential goal of data collection is often in some way related to the size of an effect. Therefore, to design an informative study, researchers will want to think about which effect sizes are interesting. First, it is useful to consider three effect sizes when determining the sample size. The first is the smallest effect size a researcher is interested in, the second is the smallest effect size that can be statistically significant (only in studies where a significance test will be performed), and the third is the effect size that is expected. Beyond considering these three effect sizes, it can be useful to evaluate ranges of effect sizes. This can be done by computing the width of the expected confidence interval around an effect size of interest (for example, an effect size of zero), and examining which effects could be rejected. Similarly, it can be useful to plot a sensitivity curve and evaluate the range of effect sizes the design has decent power to detect, as well as to consider the range of effects for which the design has low power. Finally, there are situations where it is useful to consider the range of effect sizes that is likely to be observed in a specific research area.

What is the Smallest Effect Size of Interest?

The strongest possible sample size justification is based on an explicit statement of the smallest effect size that is considered interesting. A smallest effect size of interest can be based on theoretical predictions or practical considerations. For a review of approaches that can be used to determine a smallest effect size of interest in randomized controlled trials, see Cook et al. (2014) and Keefe et al. (2013); for reviews of different methods to determine a smallest effect size of interest, see King (2011) and Copay, Subach, Glassman, Polly, and Schuler (2007); and for a discussion focused on psychological research, see Lakens, Scheel, et al. (2018).

It can be challenging to determine the smallest effect size of interest whenever theories are not very developed, or when the research question is far removed from practical applications, but it is still worth thinking about which effects would be too small to matter. A first step forward is to discuss which effect sizes are considered meaningful in a specific research line with your peers. Researchers will differ in the effect sizes they consider large enough to be worthwhile (Murphy et al., 2014) . Just as not every scientist will find every research question interesting enough to study, not every scientist will consider the same effect sizes interesting enough to study, and different stakeholders will differ in which effect sizes are considered meaningful (Kelley & Preacher, 2012) .

Even though it might be challenging, there are important benefits of being able to specify a smallest effect size of interest. The population effect size is always uncertain (indeed, estimating this is typically one of the goals of the study), and therefore whenever a study is powered for an expected effect size, there is considerable uncertainty about whether the statistical power is high enough to detect the true effect in the population. However, if the smallest effect size of interest can be specified and agreed upon after careful deliberation, it becomes possible to design a study that has sufficient power (given the inferential goal to detect or reject the smallest effect size of interest with a certain error rate). A smallest effect of interest may be subjective (one researcher might find effect sizes smaller than d = 0.3 meaningless, while another researcher might still be interested in effects larger than d = 0.1), and there might be uncertainty about the parameters required to specify the smallest effect size of interest (e.g., when performing a cost-benefit analysis), but after a smallest effect size of interest has been determined, a study can be designed with a known Type II error rate to detect or reject this value. For this reason an a-priori power analysis based on a smallest effect size of interest is generally preferred, whenever researchers are able to specify one (Aberson, 2019; Albers & Lakens, 2018; Brown, 1983; Cascio & Zedeck, 1983; Dienes, 2014; Lenth, 2001).

The Minimal Statistically Detectable Effect

The minimal statistically detectable effect, or the critical effect size, provides information about the smallest effect size that, if observed, would be statistically significant given a specified alpha level and sample size (Cook et al., 2014). For any critical t value (e.g., t = 1.96 for α = 0.05, for large sample sizes) we can compute a critical mean difference (Phillips et al., 2001), or a critical standardized effect size. For a two-sided independent t test the critical mean difference is:

M_crit = t_crit × √(sd₁²/n₁ + sd₂²/n₂)

and the critical standardized mean difference is:

d_crit = t_crit × √(1/n₁ + 1/n₂)
In Figure 4 the distribution of Cohen’s d is plotted for 15 participants per group when the true effect size is either d = 0 or d = 0.5. This figure is similar to Figure 2 , with the addition that the critical d is indicated. We see that with such a small number of observations in each group only observed effects larger than d = 0.75 will be statistically significant. Whether such effect sizes are interesting, and can realistically be expected, should be carefully considered and justified.

[Figure 4]
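
The critical d in Figure 4 follows directly from the critical t value; a sketch in base R:

```r
n <- 15  # observations per group
t_crit <- qt(1 - 0.05 / 2, df = 2 * n - 2)  # critical t, two-sided alpha = 0.05
d_crit <- t_crit * sqrt(1 / n + 1 / n)      # critical standardized mean difference
round(d_crit, 2)  # 0.75
```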

G*Power provides the critical test statistic (such as the critical t value) when performing a power analysis. For example, Figure 5 shows that for a correlation based on a two-sided test, with α = 0.05, and N = 30, only effects larger than r = 0.361 or smaller than r = -0.361 can be statistically significant. This reveals that when the sample size is relatively small, the observed effect needs to be quite substantial to be statistically significant.

[Figure 5]
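
The critical correlation reported by G*Power can be verified with the relation between t and r for a correlation test, r = t / √(t² + df); a sketch in base R:

```r
N <- 30  # total sample size
t_crit <- qt(1 - 0.05 / 2, df = N - 2)       # critical t value
r_crit <- t_crit / sqrt(t_crit^2 + (N - 2))  # critical correlation
round(r_crit, 3)  # 0.361
```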

It is important to realize that due to random variation each study has a probability to yield effects larger than the critical effect size, even if the true effect size is small (or even when the true effect size is 0, in which case each significant effect is a Type I error). Computing a minimal statistically detectable effect is useful for a study where no a-priori power analysis is performed, both for studies in the published literature that do not report a sample size justification (Lakens, Scheel, et al., 2018) and for researchers who rely on heuristics for their sample size justification.

It can be informative to ask yourself whether the critical effect size for a study design is within the range of effect sizes that can realistically be expected. If not, then whenever a significant effect is observed in a published study, either the effect size is surprisingly larger than expected, or more likely, it is an upwardly biased effect size estimate. In the latter case, given publication bias, published studies will lead to biased effect size estimates. If it is still possible to increase the sample size, for example by ignoring rules of thumb and instead performing an a-priori power analysis, then do so. If it is not possible to increase the sample size, for example due to resource constraints, then reflecting on the minimal statistically detectable effect should make it clear that an analysis of the data should not focus on p values, but on the effect size and the confidence interval (see Table 3 ).

It is also useful to compute the minimal statistically detectable effect if an ‘optimistic’ power analysis is performed. For example, if you believe a best case scenario for the true effect size is d = 0.57 and use this optimistic expectation in an a-priori power analysis, effects smaller than d = 0.4 will not be statistically significant when you collect 50 observations in a two independent group design. If your worst case scenario for the alternative hypothesis is a true effect size of d = 0.35 your design would not allow you to declare a significant effect if effect size estimates close to the worst case scenario are observed. Taking into account the minimal statistically detectable effect size should make you reflect on whether a hypothesis test will yield an informative answer, and whether your current approach to sample size justification (e.g., the use of rules of thumb, or letting resource constraints determine the sample size) leads to an informative study, or not.

What is the Expected Effect Size?

Although the true population effect size is always unknown, there are situations where researchers have a reasonable expectation of the effect size in a study, and want to use this expected effect size in an a-priori power analysis. Even if expectations for the observed effect size are largely a guess, it is always useful to explicitly consider which effect sizes are expected. A researcher can justify a sample size based on the effect size they expect, even if such a study would not be very informative with respect to the smallest effect size of interest. In such cases a study is informative for one inferential goal (testing whether the expected effect size is present or absent), but not highly informative for the second goal (testing whether the smallest effect size of interest is present or absent).

There are typically three sources for expectations about the population effect size: a meta-analysis, a previous study, or a theoretical model. It is tempting for researchers to be overly optimistic about the expected effect size in an a-priori power analysis, as higher effect size estimates yield lower sample sizes, but being too optimistic increases the probability of observing a false negative result. When reviewing a sample size justification based on an a-priori power analysis, it is important to critically evaluate the justification for the expected effect size used in power analyses.

Using an Estimate from a Meta-Analysis

In a perfect world effect size estimates from a meta-analysis would provide researchers with the most accurate information about which effect size they could expect. Due to widespread publication bias in science, effect size estimates from meta-analyses are regrettably not always accurate. They can be biased, sometimes substantially so. Furthermore, meta-analyses typically have considerable heterogeneity, which means that the meta-analytic effect size estimate differs for subsets of studies that make up the meta-analysis. So, although it might seem useful to use a meta-analytic effect size estimate of the effect you are studying in your power analysis, you need to take great care before doing so.

If a researcher wants to enter a meta-analytic effect size estimate in an a-priori power analysis, they need to consider three things (see Table 5). First, the studies included in the meta-analysis should be similar enough to the study they are performing that it is reasonable to expect a similar effect size. In essence, this requires evaluating the generalizability of the effect size estimate to the new study. It is important to carefully consider differences between the meta-analyzed studies and the planned study, with respect to the manipulation, the measure, the population, and any other relevant variables.

Second, researchers should check whether the effect sizes reported in the meta-analysis are homogeneous. If not, and there is considerable heterogeneity in the meta-analysis, it means that not all included studies can be expected to estimate the same true effect size. A meta-analytic estimate should then be based on the subset of studies that most closely represents the planned study. Note that heterogeneity remains a possibility (even direct replication studies can show heterogeneity when unmeasured variables moderate the effect size in each sample (Kenny & Judd, 2019; Olsson-Collentine et al., 2020)), so the main goal of selecting similar studies is to use existing data to increase the probability that your expectation is accurate, without guaranteeing it will be.

Third, the meta-analytic effect size estimate should not be biased. Check if the bias detection tests that are reported in the meta-analysis are state-of-the-art, or perform multiple bias detection tests yourself (Carter et al., 2019), and consider bias-corrected effect size estimates (even though these estimates might still be biased, and do not necessarily reflect the true population effect size).

Using an Estimate from a Previous Study

If a meta-analysis is not available, researchers often rely on an effect size from a previous study in an a-priori power analysis. The first issue that requires careful attention is whether the two studies are sufficiently similar. Just as when using an effect size estimate from a meta-analysis, researchers should consider if there are differences between the studies in terms of the population, the design, the manipulations, the measures, or other factors that should lead one to expect a different effect size. For example, intra-individual reaction time variability increases with age, and therefore a study performed on older participants should expect a smaller standardized effect size than a study performed on younger participants. If an earlier study used a very strong manipulation, and you plan to use a more subtle manipulation, a smaller effect size should be expected. Finally, effect sizes do not generalize to studies with different designs. For example, the effect size for a comparison between two groups is most often not similar to the effect size for an interaction in a follow-up study where a second factor is added to the original design (Lakens & Caldwell, 2021).

Even if a study is sufficiently similar, statisticians have warned against using effect size estimates from small pilot studies in power analyses. Leon, Davis, and Kraemer (2011) write:

Contrary to tradition, a pilot study does not provide a meaningful effect size estimate for planning subsequent studies due to the imprecision inherent in data from small samples.

The two main reasons researchers should be careful when using effect sizes from studies in the published literature in power analyses are that effect size estimates from studies can differ from the true population effect size due to random variation, and that publication bias inflates effect sizes. Figure 6 shows the distribution of ηp² for a study with three conditions with 25 participants in each condition when the null hypothesis is true and when there is a ‘medium’ true effect of ηp² = 0.0588 (Richardson, 2011). As in Figure 4 the critical effect size is indicated, which shows observed effects smaller than ηp² = 0.08 will not be significant with the given sample size. If the null hypothesis is true, effects larger than ηp² = 0.08 will be a Type I error (the dark grey area), and when the alternative hypothesis is true, effects smaller than ηp² = 0.08 will be a Type II error (the light grey area). It is clear that all significant effects are larger than the true effect size (ηp² = 0.0588), so power analyses based on a significant finding (e.g., because only significant results are published in the literature) will be based on an overestimate of the true effect size, introducing bias.

[Figure 6: Distribution of ηp² under the null hypothesis and under a ‘medium’ true effect of ηp² = 0.0588]
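The critical ηp² shown in Figure 6 follows directly from the critical F value. A minimal sketch, assuming a one-way ANOVA with 3 conditions and 25 participants each (so df1 = 2 and df2 = 72) and alpha = 0.05:

```python
# Critical F value and the corresponding critical partial eta squared for a
# one-way ANOVA with 3 groups of 25 participants (df1 = 2, df2 = 72).
from scipy import stats

df1, df2, alpha = 2, 72, 0.05
f_crit = stats.f.ppf(1 - alpha, df1, df2)        # critical F value
eta_crit = f_crit * df1 / (f_crit * df1 + df2)   # convert F to partial eta squared
print(f"F_crit = {f_crit:.2f}, critical eta_p^2 = {eta_crit:.3f}")
```

This reproduces the critical effect size of ηp² ≈ 0.08 used in the figure.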

But even if we had access to all effect sizes (e.g., from pilot studies you have performed yourself), due to random variation the observed effect size will sometimes be quite small. Figure 6 shows it is quite likely to observe an effect of ηp² = 0.01 in a small pilot study, even when the true effect size is 0.0588. Entering an effect size estimate of ηp² = 0.01 in an a-priori power analysis would suggest a total sample size of 957 observations to achieve 80% power in a follow-up study. If researchers only follow up on pilot studies when they observe an effect size in the pilot study that, when entered into a power analysis, yields a sample size that is feasible to collect for the follow-up study, these effect size estimates will be upwardly biased, and power in the follow-up study will be systematically lower than desired (Albers & Lakens, 2018).
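The 957-observation figure above (a G*Power result) can be approximated with a small solver based on the noncentral F distribution. The implementation below is an illustrative sketch of my own; rounding conventions may make its result differ from G*Power by a few observations:

```python
# A-priori power analysis for a one-way ANOVA with 3 groups, entering
# eta_p^2 = 0.01, solving for the total sample size that gives 80% power.
from math import ceil
from scipy import stats
from scipy.optimize import brentq

def anova_power(n_total, eta_p2, k=3, alpha=0.05):
    """Power of a one-way ANOVA, computed via the noncentral F distribution."""
    f2 = eta_p2 / (1 - eta_p2)             # Cohen's f squared
    df1, df2 = k - 1, n_total - k
    ncp = f2 * n_total                     # noncentrality parameter
    f_crit = stats.f.ppf(1 - alpha, df1, df2)
    return 1 - stats.ncf.cdf(f_crit, df1, df2, ncp)

n = brentq(lambda n: anova_power(n, 0.01) - 0.80, 10, 5000)
n_total = ceil(n / 3) * 3                  # round up to equal group sizes
print(f"Total sample size for 80% power: {n_total}")
```

The solver lands in the mid-950s, consistent with the 957 observations reported in the text.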

In essence, the problem with using small studies to estimate the effect size that will be entered into an a-priori power analysis is that due to publication bias or follow-up bias the effect sizes researchers end up using for their power analysis do not come from a full F distribution, but from what is known as a truncated F distribution (Taylor & Muller, 1996). For example, imagine there is extreme publication bias in the situation illustrated in Figure 6. The only studies that would be accessible to researchers would come from the part of the distribution where ηp² > 0.08, and the test result would be statistically significant. It is possible to compute an effect size estimate that, based on certain assumptions, corrects for bias. For example, imagine we observe a result in the literature for a One-Way ANOVA with 3 conditions, reported as F(2, 42) = 4.5, p = 0.017, ηp² = 0.176. If we took this effect size at face value and entered it as our effect size estimate in an a-priori power analysis, the analysis would suggest we need to collect 17 observations in each condition to achieve 80% power.

However, if we assume bias is present, we can use the BUCSS R package (S. F. Anderson et al., 2017) to perform a power analysis that attempts to correct for bias. A power analysis that takes bias into account (under a specific model of publication bias, based on a truncated F distribution where only significant results are published) suggests collecting 73 participants in each condition. It is possible that the bias-corrected estimate of the non-centrality parameter used to compute power is zero, in which case it is not possible to correct for bias using this method. As an alternative to formally modeling a correction for publication bias whenever researchers assume an effect size estimate is biased, researchers can simply use a more conservative effect size estimate, for example by computing power based on the lower limit of a 60% two-sided confidence interval around the effect size estimate, which Perugini, Gallucci, and Costantini (2014) refer to as safeguard power. Both of these approaches lead to a more conservative power analysis, but not necessarily a more accurate one. It is simply not possible to perform an accurate power analysis on the basis of an effect size estimate from a study that might be biased and/or had a small sample size (Teare et al., 2014). If it is not possible to specify a smallest effect size of interest, and there is great uncertainty about which effect size to expect, it might be more efficient to perform a study with a sequential design (discussed below).

To summarize, an effect size estimate from a previous study can be used in an a-priori power analysis if three conditions are met (see Table 6). First, the previous study is sufficiently similar to the planned study. Second, there was a low risk of bias (e.g., the effect size estimate comes from a Registered Report, or from an analysis for which results would not have impacted the likelihood of publication). Third, the sample size is large enough to yield a relatively accurate effect size estimate, based on the width of a 95% CI around the observed effect size estimate. There is always uncertainty around the effect size estimate, and entering the upper and lower limit of the 95% CI around the effect size estimate might be informative about the consequences of the uncertainty in the effect size estimate for an a-priori power analysis.

Using an Estimate from a Theoretical Model

When your theoretical model is sufficiently specific such that you can build a computational model, and you have knowledge about key parameters in your model that are relevant for the data you plan to collect, it is possible to estimate an effect size based on the predictions of the computational model. For example, if one had strong ideas about the weights for the features that stimuli share and differ on, it would be possible to compute predicted similarity judgments for pairs of stimuli based on Tversky’s contrast model (Tversky, 1977), and estimate the predicted effect size for differences between experimental conditions. Although computational models that make point predictions are relatively rare, whenever they are available, they provide a strong justification for the effect size a researcher expects.
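As a toy sketch of this idea, the contrast model can be implemented in a few lines. The feature sets and weights below are entirely hypothetical and serve only to show how predicted similarities, and hence predicted condition differences, could be derived from a formal model:

```python
# Tversky's contrast model: sim(a, b) = theta*f(A∩B) - alpha*f(A-B) - beta*f(B-A),
# here with f = set size. All stimuli and weights are hypothetical examples.
def contrast_similarity(a, b, theta=1.0, alpha=0.5, beta=0.5):
    """Predicted similarity of two stimuli represented as feature sets."""
    common = len(a & b)        # features shared by both stimuli
    distinct_a = len(a - b)    # features unique to a
    distinct_b = len(b - a)    # features unique to b
    return theta * common - alpha * distinct_a - beta * distinct_b

robin = {"feathers", "flies", "small", "sings"}
sparrow = {"feathers", "flies", "small", "brown"}
penguin = {"feathers", "swims", "large"}

print(contrast_similarity(robin, sparrow))   # many shared features -> 2.0
print(contrast_similarity(robin, penguin))   # few shared features -> -1.5
```

Given predicted similarity scores for the stimulus pairs in each experimental condition (plus an assumed error variance), a predicted standardized effect size for the condition difference could then be computed and entered into a power analysis.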

Compute the Width of the Confidence Interval around the Effect Size

If a researcher can estimate the standard deviation of the observations that will be collected, it is possible to compute an a-priori estimate of the width of the 95% confidence interval around an effect size (Kelley, 2007). Confidence intervals represent a range around an estimate that is wide enough so that in the long run the true population parameter will fall inside the confidence interval 100 × (1 − α)% of the time. In any single study the true population effect either falls in the confidence interval, or it doesn’t, but in the long run one can act as if the confidence interval includes the true population effect size (while keeping the error rate in mind). Cumming (2013) calls the difference between the observed effect size and the upper bound of the 95% confidence interval (or the lower bound of the 95% confidence interval) the margin of error.

If we compute the 95% CI for an effect size of d = 0 based on the t statistic and sample size (Smithson, 2003), we see that with 15 observations in each condition of an independent t test the 95% CI ranges from d = -0.72 to d = 0.72. The margin of error is half the width of the 95% CI, 0.72. A Bayesian estimator who uses an uninformative prior would compute a credible interval with the same (or a very similar) upper and lower bound (Albers et al., 2018; Kruschke, 2011), and might conclude that after collecting the data they would be left with a range of plausible values for the population effect that is too large to be informative. Regardless of the statistical philosophy you plan to rely on when analyzing the data, the evaluation of what we can conclude based on the width of our interval tells us that with 15 observations per group we will not learn a lot.
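The interval above can be reproduced by inverting the noncentral t distribution, the approach described by Smithson (2003); the function below is my own sketch of that procedure for two independent groups:

```python
# 95% CI around Cohen's d for two independent groups, obtained by finding the
# noncentrality parameters whose distributions place the observed t statistic
# at the 97.5th and 2.5th percentiles.
from math import sqrt
from scipy import stats
from scipy.optimize import brentq

def d_confidence_interval(d, n_per_group, level=0.95):
    df = 2 * n_per_group - 2
    scale = sqrt(2 / n_per_group)
    t_obs = d / scale                     # observed t statistic for this d
    tail = (1 - level) / 2
    ncp_lo = brentq(lambda ncp: stats.nct.cdf(t_obs, df, ncp) - (1 - tail), -50, 50)
    ncp_hi = brentq(lambda ncp: stats.nct.cdf(t_obs, df, ncp) - tail, -50, 50)
    return ncp_lo * scale, ncp_hi * scale

lo, hi = d_confidence_interval(0, 15)
print(f"95% CI for d = 0, n = 15 per group: [{lo:.2f}, {hi:.2f}]")  # [-0.72, 0.72]
```

Applying the same function to the critical effect size (d ≈ 0.75 for this design) yields approximately [0.00, 1.48], the interval discussed later in the text.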

One useful way of interpreting the width of the confidence interval is based on the effects you would be able to reject if the true effect size is 0. In other words, if there is no effect, which effect sizes would you have been able to reject given the collected data, and which effect sizes would you not be able to reject? Effect sizes in the range of d = 0.7 are findings such as “People become aggressive when they are provoked”, “People prefer their own group to other groups”, and “Romantic partners resemble one another in physical attractiveness” (Richard et al., 2003). The width of the confidence interval tells you that you can only reject the presence of effects that are so large that, if they existed, you would probably already have noticed them. If it is true that most effects that you study are realistically much smaller than d = 0.7, there is a good possibility that we do not learn anything we didn’t already know by performing a study with n = 15. Even without data, in most research lines we would not consider certain large effects plausible (although the effect sizes that are plausible differ between fields, as discussed below). On the other hand, in large samples where researchers can for example reject the presence of effects larger than d = 0.2, if the null hypothesis was true, this analysis of the width of the confidence interval would suggest that peers in many research lines would likely consider the study to be informative.

We see that the margin of error is almost, but not exactly, the same as the minimal statistically detectable effect (d = 0.75). The small difference arises because the 95% confidence interval is calculated based on the t distribution. If the true effect size is not zero, the confidence interval is calculated based on the non-central t distribution, and the 95% CI is asymmetric. Figure 7 visualizes three t distributions: one central distribution centered at 0, and two non-central distributions with a noncentrality parameter (the normalized difference between the means) of 2 and 3. The asymmetry is most clearly visible in very small samples (the distributions in the plot have 5 degrees of freedom) but remains noticeable in larger samples when calculating confidence intervals and statistical power. For example, a true effect size of d = 0.5 observed with 15 observations per group would yield ds = 0.50, 95% CI [-0.23, 1.22]. If we compute the 95% CI around the critical effect size, we would get ds = 0.75, 95% CI [0.00, 1.48]. We see the 95% CI ranges from exactly 0.00 to 1.48, in line with the relation between a confidence interval and a p value, where the 95% CI excludes zero if the test is statistically significant. As noted before, the different approaches recommended here to evaluate how informative a study is are often based on the same information.

[Figure 7: Central and non-central t distributions with noncentrality parameters of 0, 2, and 3 (5 degrees of freedom)]

Plot a Sensitivity Power Analysis

A sensitivity power analysis fixes the sample size, desired power, and alpha level, and answers the question of which effect size a study could detect with a desired power. A sensitivity power analysis is therefore performed when the sample size is already known. Sometimes data has already been collected to answer a different research question, or the data is retrieved from an existing database, and you want to perform a sensitivity power analysis for a new statistical analysis. Other times, you might not have carefully considered the sample size when you initially collected the data, and want to reflect on the statistical power of the study for (ranges of) effect sizes of interest when analyzing the results. Finally, it is possible that the sample size will be collected in the future, but you know that due to resource constraints the maximum sample size you can collect is limited, and you want to reflect on whether the study has sufficient power for effects that you consider plausible and interesting (such as the smallest effect size of interest, or the effect size that is expected).

Assume a researcher plans to perform a study where 30 observations will be collected in total, 15 in each between-participant condition. Figure 8 shows how to perform a sensitivity power analysis in G*Power for a study where we have decided to use an alpha level of 5%, and desire 90% power. The sensitivity power analysis reveals the designed study has 90% power to detect effects of at least d = 1.23. Perhaps a researcher believes that a desired power of 90% is quite high, and is of the opinion that it would still be interesting to perform a study if the statistical power was lower. It can then be useful to plot a sensitivity curve across a range of smaller effect sizes.

[Figure 8: Sensitivity power analysis in G*Power (n = 15 per group, alpha = 0.05, desired power = 90%)]
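The G*Power result above can be checked with a small solver; the design parameters (15 per group, alpha = 0.05, 90% power) come from the text, while the implementation is my own illustrative sketch:

```python
# Sensitivity power analysis: fix n, alpha, and desired power, solve for the
# effect size. Uses the noncentral t distribution for a two-sided test.
from math import sqrt
from scipy import stats
from scipy.optimize import brentq

def power_two_sample_t(d, n_per_group, alpha=0.05):
    """Two-sided power of an independent t test via the noncentral t."""
    df = 2 * n_per_group - 2
    ncp = d * sqrt(n_per_group / 2)
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

d_sensitive = brentq(lambda d: power_two_sample_t(d, 15) - 0.90, 0.01, 3)
print(f"Effect detectable with 90% power: d = {d_sensitive:.2f}")
```

The solver should land close to the d = 1.23 reported by G*Power.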

The two dimensions of interest in a sensitivity power analysis are the effect sizes, and the power to observe a significant effect assuming a specific effect size. These two dimensions can be plotted against each other to create a sensitivity curve. For example, a sensitivity curve can be plotted in G*Power by clicking the ‘X-Y plot for a range of values’ button, as illustrated in Figure 9. Researchers can examine how much power they would have for an a-priori plausible range of effect sizes, or they can examine which effect sizes would provide reasonable levels of power. In simulation-based approaches to power analysis, sensitivity curves can be created by performing the power analysis for a range of possible effect sizes. Even if 50% power is deemed acceptable (in which case deciding to act as if the null hypothesis is true after a non-significant result is a relatively noisy decision procedure), Figure 9 shows a study design where power is extremely low for a large range of effect sizes that are reasonable to expect in most fields. Thus, a sensitivity power analysis provides an additional approach to evaluate how informative the planned study is, and can inform researchers that a specific design is unlikely to yield a significant effect for a range of effects that one might realistically expect.

[Figure 9: Sensitivity curve of power across a range of effect sizes]
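The curve in Figure 9 can also be approximated in code rather than in G*Power. This sketch (my own implementation of two-sided t test power via the noncentral t, using the 15-per-group design from the text) tabulates power across a range of effect sizes:

```python
# Sensitivity curve: power of an independent t test with 15 observations per
# group, computed for a range of assumed true effect sizes.
from math import sqrt
from scipy import stats

def power_two_sample_t(d, n_per_group, alpha=0.05):
    df = 2 * n_per_group - 2
    ncp = d * sqrt(n_per_group / 2)
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

for d in [0.2, 0.4, 0.6, 0.8, 1.0, 1.2]:
    print(f"d = {d:.1f}: power = {power_two_sample_t(d, 15):.2f}")
```

The tabulated values make the point of the figure concrete: power is very low for the small-to-medium effects most fields can realistically expect with this design.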

If the number of observations per group had been larger, the evaluation might have been more positive. We might not have had any specific effect size in mind, but if we had collected 150 observations per group, a sensitivity analysis could have shown that power was sufficient for a range of effects we believe is most interesting to examine, and we would still have approximately 50% power for quite small effects. For a sensitivity analysis to be meaningful, the sensitivity curve should be compared against a smallest effect size of interest, or a range of effect sizes that are expected. A sensitivity power analysis has no clear cut-offs to examine (Bacchetti, 2010) . Instead, the idea is to make a holistic trade-off between different effect sizes one might observe or care about, and their associated statistical power.

The Distribution of Effect Sizes in a Research Area

In my personal experience the most commonly entered effect size estimate in an a-priori power analysis for an independent t test is Cohen’s benchmark for a ‘medium’ effect size, because of what is known as the default effect. When you open G*Power, a ‘medium’ effect is the default option for an a-priori power analysis. Cohen’s benchmarks for small, medium, and large effects should not be used in an a-priori power analysis (Cook et al., 2014; Correll et al., 2020), and Cohen regretted having proposed these benchmarks (Funder & Ozer, 2019). The large variety in research topics means that any ‘default’ or ‘heuristic’ that is used to compute statistical power is not just unlikely to correspond to your actual situation, but it is also likely to lead to a sample size that is substantially misaligned with the question you are trying to answer with the collected data.

Some researchers have wondered what a better default would be, if researchers have no other basis to decide upon an effect size for an a-priori power analysis. Brysbaert (2019) recommends d = 0.4 as a default in psychology, which is the average effect size observed in replication projects and several meta-analyses. It is impossible to know if this average effect size is realistic, but it is clear there is huge heterogeneity across fields and research questions. Any average effect size will often deviate substantially from the effect size that should be expected in a planned study. Some researchers have suggested changing Cohen’s benchmarks based on the distribution of effect sizes in a specific field (Bosco et al., 2015; Funder & Ozer, 2019; Hill et al., 2008; Kraft, 2020; Lovakov & Agadullina, 2017). As always, when effect size estimates are based on the published literature, one needs to evaluate the possibility that the effect size estimates are inflated due to publication bias. Due to the large variation in effect sizes within a specific research area, there is little use in choosing a large, medium, or small effect size benchmark based on the empirical distribution of effect sizes in a field to perform a power analysis.

Having some knowledge about the distribution of effect sizes in the literature can be useful when interpreting the confidence interval around an effect size. If in a specific research area almost no effects are larger than the value you could reject in an equivalence test (e.g., if the observed effect size is 0, the design would only reject effects larger than for example d = 0.7), then it is a-priori unlikely that collecting the data would tell you something you didn’t already know.

It is more difficult to defend the use of a specific effect size derived from an empirical distribution of effect sizes as a justification for the effect size used in an a-priori power analysis. One might argue that the use of an effect size benchmark based on the distribution of effects in the literature will outperform a wild guess, but this is not a strong enough argument to form the basis of a sample size justification. There is a point where researchers need to admit they are not ready to perform an a-priori power analysis due to a lack of clear expectations (Scheel et al., 2020) . Alternative sample size justifications, such as a justification of the sample size based on resource constraints, perhaps in combination with a sequential study design, might be more in line with the actual inferential goals of a study.

So far, the focus has been on justifying the sample size for quantitative studies. There are a number of related topics that can be useful to design an informative study. First, in addition to a-priori or prospective power analysis and sensitivity power analysis, it is important to discuss compromise power analysis (which is useful) and post-hoc or retrospective power analysis (which is not useful; e.g., Zumbo and Hubley (1998), Lenth (2007)). When sample sizes are justified based on an a-priori power analysis, it can be very efficient to collect data in sequential designs where data collection is continued or terminated based on interim analyses of the data. Furthermore, it is worthwhile to consider ways to increase the power of a test without increasing the sample size. An additional point of attention is to have a good understanding of your dependent variable, especially its standard deviation. Finally, sample size justification is just as important in qualitative studies, and although there has been much less work on sample size justification in this domain, some proposals exist that researchers can use to design an informative study. Each of these topics is discussed in turn.

Compromise Power Analysis

In a compromise power analysis the sample size and the effect size are fixed, and the error rates of the test are calculated, based on a desired ratio between the Type I and Type II error rate. A compromise power analysis is useful both when a very large number of observations will be collected and when only a small number of observations can be collected.

In the first situation a researcher might be fortunate enough to be able to collect so many observations that the statistical power for a test is very high for all effect sizes that are deemed interesting. For example, imagine a researcher has access to 2000 employees who are all required to answer questions during a yearly evaluation in a company that is testing an intervention that should reduce subjectively reported stress levels. The researcher is quite confident that an effect smaller than d = 0.2 is not large enough to be subjectively noticeable for individuals (Jaeschke et al., 1989). With an alpha level of 0.05 the researcher would have a statistical power of 0.994, or a Type II error rate of 0.006. This means that for a smallest effect size of interest of d = 0.2 the researcher is 8.30 times more likely to make a Type I error than a Type II error.
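The error rates in this example can be verified directly (a sketch assuming 1000 employees per condition and a two-sided independent t test):

```python
# Error rates for the 2000-employee example: d = 0.2, alpha = 0.05,
# 1000 observations per condition, two-sided independent t test.
from math import sqrt
from scipy import stats

n, d, alpha = 1000, 0.2, 0.05
df = 2 * n - 2
ncp = d * sqrt(n / 2)                       # noncentrality parameter
t_crit = stats.t.ppf(1 - alpha / 2, df)
power = (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)
beta = 1 - power                            # Type II error rate
print(f"power = {power:.3f}, beta = {beta:.3f}, alpha/beta ratio = {alpha / beta:.1f}")
```

The power is approximately 0.994, so the alpha/beta ratio is roughly 8.3: a Type I error is about eight times more likely than a Type II error for the smallest effect size of interest.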

Although the original idea of designing studies that control Type I and Type II error rates was that researchers would need to justify their error rates (Neyman & Pearson, 1933), a common heuristic is to set the Type I error rate to 0.05 and the Type II error rate to 0.20, treating a Type I error as four times as serious as a Type II error. The default use of 80% power (or a 20% Type II or β error) is based on a personal preference of Cohen (1988), who writes:

It is proposed here as a convention that, when the investigator has no other basis for setting the desired power value, the value .80 be used. This means that β is set at .20. This arbitrary but reasonable value is offered for several reasons (Cohen, 1965, pp. 98-99). The chief among them takes into consideration the implicit convention for α of .05. The β of .20 is chosen with the idea that the general relative seriousness of these two kinds of errors is of the order of .20/.05, i.e., that Type I errors are of the order of four times as serious as Type II errors. This .80 desired power convention is offered with the hope that it will be ignored whenever an investigator can find a basis in his substantive concerns in his specific research investigation to choose a value ad hoc.

We see that conventions are built on conventions: the norm to aim for 80% power is built on the norm to set the alpha level at 5%. What we should take away from Cohen is not that we should aim for 80% power, but that we should justify our error rates based on the relative seriousness of each error. This is where compromise power analysis comes in. If you share Cohen’s belief that a Type I error is 4 times as serious as a Type II error, and building on our earlier study of 2000 employees, it makes sense to adjust the Type I error rate when the Type II error rate is low for all effect sizes of interest (Cascio & Zedeck, 1983). Indeed, Erdfelder, Faul, and Buchner (1996) created the G*Power software in part to give researchers a tool to perform compromise power analyses.

Figure 10 illustrates how a compromise power analysis is performed in G*Power when a Type I error is deemed to be equally costly as a Type II error, which for a study with 1000 observations per condition would lead to a Type I error and a Type II error of 0.0179. As Faul, Erdfelder, Lang, and Buchner (2007) write:

Of course, compromise power analyses can easily result in unconventional significance levels greater than α = .05 (in the case of small samples or effect sizes) or less than α = .001 (in the case of large samples or effect sizes). However, we believe that the benefit of balanced Type I and Type II error risks often offsets the costs of violating significance level conventions.

[Figure 10: Compromise power analysis in G*Power (equal Type I and Type II error rates)]
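The balanced error rate from Figure 10 can be reproduced with a small solver; the design (1000 observations per condition, d = 0.2) is from the text, and the implementation is my own sketch:

```python
# Compromise power analysis: find the alpha level at which the Type I and
# Type II error rates are equal, for d = 0.2 and 1000 observations per group.
from math import sqrt
from scipy import stats
from scipy.optimize import brentq

def beta_error(alpha, d=0.2, n_per_group=1000):
    """Type II error rate of a two-sided independent t test."""
    df = 2 * n_per_group - 2
    ncp = d * sqrt(n_per_group / 2)
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    power = (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)
    return 1 - power

balanced_alpha = brentq(lambda a: beta_error(a) - a, 1e-6, 0.5)
print(f"alpha = beta = {balanced_alpha:.4f}")
```

The solver converges near 0.0179, the value G*Power reports for this design.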

This brings us to the second situation where a compromise power analysis can be useful, which is when we know the statistical power in our study is low. Although it is highly undesirable to make decisions when error rates are high, if one finds oneself in a situation where a decision must be made based on little information, Winer (1962) writes:

The frequent use of the .05 and .01 levels of significance is a matter of convention having little scientific or logical basis. When the power of tests is likely to be low under these levels of significance, and when Type I and Type II errors are of approximately equal importance, the .30 and .20 levels of significance may be more appropriate than the .05 and .01 levels.

For example, if we plan to perform a two-sided t test, can feasibly collect at most 50 observations in each independent group, and expect a population effect size of 0.5, we would have 70% power if we set our alpha level to 0.05. We can choose to weigh both types of error equally, and set the alpha level to 0.149, to end up with a statistical power for an effect of d = 0.5 of 0.851 (given a 0.149 Type II error rate). The choice of α and β in a compromise power analysis can be extended to take prior probabilities of the null and alternative hypothesis into account (Maier & Lakens, 2022; Miller & Ulrich, 2019; Murphy et al., 2014) .
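The same balancing logic applies to the small-sample example above; a sketch (my own solver, using the noncentral t distribution) for d = 0.5 with 50 observations per group:

```python
# Compromise power analysis with a small sample: d = 0.5, 50 observations per
# group, weighing Type I and Type II errors equally.
from math import sqrt
from scipy import stats
from scipy.optimize import brentq

def beta_error(alpha, d=0.5, n_per_group=50):
    """Type II error rate of a two-sided independent t test."""
    df = 2 * n_per_group - 2
    ncp = d * sqrt(n_per_group / 2)
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    power = (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)
    return 1 - power

balanced_alpha = brentq(lambda a: beta_error(a) - a, 1e-6, 0.5)
print(f"alpha = beta = {balanced_alpha:.3f}, power = {1 - balanced_alpha:.3f}")
```

This lands near alpha = 0.149 and power = 0.851, the values given in the text.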

A compromise power analysis requires a researcher to specify the sample size. This sample size itself requires a justification, so a compromise power analysis will typically be performed together with a resource constraint justification for a sample size. It is especially important to perform a compromise power analysis if your resource constraint justification is strongly based on the need to make a decision, in which case a researcher should think carefully about the Type I and Type II error rates stakeholders are willing to accept. However, a compromise power analysis also makes sense if the sample size is very large, but a researcher did not have the freedom to set the sample size. This might happen if, for example, data collection is part of a larger international study and the sample size is based on other research questions. In designs where the Type II error rate is very small (and power is very high) some statisticians have also recommended to lower the alpha level to prevent Lindley’s paradox, a situation where a significant effect ( p < α ) is evidence for the null hypothesis (Good, 1992; Jeffreys, 1939) . Lowering the alpha level as a function of the statistical power of the test can prevent this paradox, providing another argument for a compromise power analysis when sample sizes are large (Maier & Lakens, 2022) . Finally, a compromise power analysis needs a justification for the effect size, either based on a smallest effect size of interest or an effect size that is expected. Table 7 lists three aspects that should be discussed alongside a reported compromise power analysis.

What to do if Your Editor Asks for Post-hoc Power?

Post-hoc, retrospective, or observed power describes the statistical power of a test computed under the assumption that the effect size estimated from the collected data is the true effect size (Lenth, 2007; Zumbo & Hubley, 1998). A post-hoc power analysis is therefore not performed before looking at the data based on effect sizes that are deemed interesting, as an a-priori power analysis is, and unlike a sensitivity power analysis it does not evaluate a range of interesting effect sizes. Because a post-hoc or retrospective power analysis is based on the effect size observed in the data that has been collected, it does not add any information beyond the reported p value; it merely presents the same information in a different way. Despite this fact, editors and reviewers often ask authors to perform a post-hoc power analysis to interpret non-significant results. This is not a sensible request, and whenever it is made, you should not comply with it. Instead, you should perform a sensitivity power analysis, and discuss the power for the smallest effect size of interest and a realistic range of expected effect sizes.

Post-hoc power is directly related to the p value of the statistical test (Hoenig & Heisey, 2001). For a z test where the p value is exactly 0.05, post-hoc power is always 50%. The reason for this relationship is that when the observed p value equals the alpha level of the test (e.g., 0.05), the observed z score is exactly equal to the critical value of the test (e.g., z = 1.96 in a two-sided test with a 5% alpha level). Whenever the alternative hypothesis is centered on the critical value, half the values we expect to observe if this alternative hypothesis is true fall below the critical value, and half fall above it. Therefore, a test where we observed a p value identical to the alpha level will have exactly 50% power in a post-hoc power analysis, as the analysis assumes the observed effect size is true.
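This relationship is easy to verify numerically. The sketch below (assuming scipy) computes observed power for a two-sided z test, treating the observed z score as the true location of the alternative distribution:

```python
from scipy import stats

def posthoc_power_z(z_obs, alpha=0.05):
    """'Observed power' of a two-sided z test, taking the observed z as the true effect."""
    z_crit = stats.norm.ppf(1 - alpha / 2)
    return (1 - stats.norm.cdf(z_crit - z_obs)) + stats.norm.cdf(-z_crit - z_obs)

z_at_p05 = stats.norm.ppf(0.975)  # observed z when p is exactly 0.05, two-sided
print(posthoc_power_z(z_at_p05))  # ~0.50
```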

For other statistical tests, where the alternative distribution is not symmetric (such as for the t test, where the alternative hypothesis follows a non-central t distribution, see Figure 7 ), a p = 0.05 does not directly translate to an observed power of 50%, but by plotting post-hoc power against the observed p value we see that the two statistics are always directly related. As Figure 11 shows, if the p value is non-significant (i.e., larger than 0.05) the observed power will be less than approximately 50% in a t test. Lenth (2007) explains how observed power is also completely determined by the observed p value for F tests, although the statement that a non-significant p value implies a power less than 50% no longer holds.

[Figure 11. Observed (post-hoc) power plotted against the observed p value for a t test.]
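Because observed power is a deterministic transformation of the p value, it can be computed directly from p. A sketch for the two-sample t test (scipy assumed; the function recovers |t| from the two-sided p value and uses it as the noncentrality parameter):

```python
from scipy import stats

def posthoc_power_t(p, n_per_group, alpha=0.05):
    """Observed power of a two-sided two-sample t test, computed from the observed p value."""
    df = 2 * n_per_group - 2
    t_obs = stats.t.ppf(1 - p / 2, df)       # recover |t| from the two-sided p value
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # The observed effect size implies a noncentrality parameter equal to the observed t.
    return (1 - stats.nct.cdf(t_crit, df, t_obs)) + stats.nct.cdf(-t_crit, df, t_obs)

for p in (0.01, 0.05, 0.20, 0.50):
    print(p, round(posthoc_power_t(p, 50), 3))
```

At p = 0.05 the observed power is close to (but, due to the skew of the noncentral t distribution, not exactly) 50%, and for non-significant p values it drops well below 50%.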

When editors or reviewers ask researchers to report post-hoc power analyses they would like to be able to distinguish between true negatives (concluding there is no effect, when there is no effect) and false negatives (a Type II error, concluding there is no effect, when there actually is an effect). Since reporting post-hoc power is just a different way of reporting the p value, reporting the post-hoc power will not provide an answer to the question editors are asking (Hoenig & Heisey, 2001; Lenth, 2007; Schulz & Grimes, 2005; Yuan & Maxwell, 2005) . To be able to draw conclusions about the absence of a meaningful effect, one should perform an equivalence test, and design a study with high power to reject the smallest effect size of interest (Lakens, Scheel, et al., 2018) . Alternatively, if no smallest effect size of interest was specified when designing the study, researchers can report a sensitivity power analysis.

Sequential Analyses

Whenever the sample size is justified based on an a-priori power analysis it can be very efficient to collect data in a sequential design. Sequential designs control error rates across multiple looks at the data (e.g., after 50, 100, and 150 observations have been collected) and can reduce the average expected sample size that is collected compared to a fixed design where data is only analyzed after the maximum sample size is collected (Proschan et al., 2006; Wassmer & Brannath, 2016) . Sequential designs have a long history (Dodge & Romig, 1929) , and exist in many variations, such as the Sequential Probability Ratio Test (Wald, 1945) , combining independent statistical tests (Westberg, 1985) , group sequential designs (Jennison & Turnbull, 2000) , sequential Bayes factors (Schönbrodt et al., 2017) , and safe testing (Grünwald et al., 2019) . Of these approaches, the Sequential Probability Ratio Test is most efficient if data can be analyzed after every observation (Schnuerch & Erdfelder, 2020) . Group sequential designs, where data is collected in batches, provide more flexibility in data collection, error control, and corrections for effect size estimates (Wassmer & Brannath, 2016) . Safe tests provide optimal flexibility if there are dependencies between observations (ter Schure & Grünwald, 2019) .
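As a toy illustration of how a group sequential design controls the Type I error rate across looks, the simulation below checks the classic Pocock boundary for two equally spaced looks (critical z ≈ 2.178 for an overall α of 0.05, assuming equal information increments). This is only a sketch; dedicated software such as rpact or gsDesign should be used for real designs:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims = 200_000
z_pocock = 2.178  # Pocock boundary for 2 looks, overall alpha = 0.05

# Under H0: the z statistic at look 1 is N(0, 1); with equal information
# increments the z statistic at look 2 is (z1 + independent N(0, 1)) / sqrt(2).
z1 = rng.standard_normal(n_sims)
z2 = (z1 + rng.standard_normal(n_sims)) / np.sqrt(2)

# Reject if the boundary is crossed at either look.
reject = (np.abs(z1) > z_pocock) | (np.abs(z2) > z_pocock)
print(reject.mean())  # ~0.05: the error rate is controlled across both looks
```

Using the fixed-design critical value of 1.96 at both looks would instead inflate the overall Type I error rate above 5%, which is exactly what the corrected boundary prevents.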

Sequential designs are especially useful when there is considerable uncertainty about the effect size, or when it is plausible that the true effect size is larger than the smallest effect size of interest the study is designed to detect (Lakens, 2014). In such situations data collection can terminate early if the effect size is larger than the smallest effect size of interest, but data collection can continue to the maximum sample size if needed. Sequential designs can prevent waste when testing hypotheses, both by stopping early when the null hypothesis can be rejected and by stopping early if the presence of a smallest effect size of interest can be rejected (i.e., stopping for futility). Group sequential designs are currently the most widely used approach to sequential analyses, and can be planned and analyzed using rpact (Wassmer & Pahlke, 2019) or gsDesign (K. M. Anderson, 2014). 6

Increasing Power Without Increasing the Sample Size

The most straightforward approach to increase the informational value of studies is to increase the sample size. Because resources are often limited, it is also worthwhile to explore approaches that increase the power of a test without increasing the sample size. The first option is to use directional tests where relevant. Researchers often make directional predictions, such as 'we predict X is larger than Y'. The statistical test that logically follows from this prediction is a directional (or one-sided) t test. A directional test moves the Type I error rate to one tail of the distribution, which lowers the critical value, and therefore requires fewer observations to achieve the same statistical power.

Although there is some discussion about when directional tests are appropriate, they are perfectly defensible from a Neyman-Pearson perspective on hypothesis testing (Cho & Abe, 2013), which makes a (preregistered) directional test a straightforward approach to increase both the power of a test and the riskiness of the prediction. However, there might be situations where you do not want to ask a directional question. Sometimes, especially in research with applied consequences, it might be important to examine whether a null effect can be rejected, even if the effect is in the direction opposite to the one predicted. For example, when you are evaluating a recently introduced educational intervention, and you predict the intervention will increase the performance of students, you might want to explore the possibility that students perform worse, to be able to recommend abandoning the new intervention. In such cases it is also possible to distribute the error rate in a 'lop-sided' manner, for example assigning a stricter error rate to effects in the negative than in the positive direction (Rice & Gaines, 1994).
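The power gain from a directional test can be illustrated by comparing one- and two-sided critical values for the same design. The sketch below assumes scipy, and reuses the n = 50 per group, d = 0.5 example from the compromise power analysis section:

```python
import math
from scipy import stats

def power_t(d, n_per_group, alpha, one_sided=False):
    """Power of an independent-samples t test for an effect in the predicted direction."""
    df = 2 * n_per_group - 2
    ncp = d * math.sqrt(n_per_group / 2)
    if one_sided:
        t_crit = stats.t.ppf(1 - alpha, df)  # lower critical value: all alpha in one tail
        return 1 - stats.nct.cdf(t_crit, df, ncp)
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

print(power_t(0.5, 50, 0.05))                  # two-sided, ~0.70
print(power_t(0.5, 50, 0.05, one_sided=True))  # one-sided: higher power, same n
```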

Another approach to increase the power without increasing the sample size is to increase the alpha level of the test, as explained in the section on compromise power analysis. Obviously, this comes at the cost of an increased probability of making a Type I error. The risk of making either type of error should be carefully weighed, which typically requires taking into account the prior probability that the null hypothesis is true (Cascio & Zedeck, 1983; Miller & Ulrich, 2019; Mudge et al., 2012; Murphy et al., 2014). If you have to make a decision, or want to make a claim, and the data you can feasibly collect is limited, increasing the alpha level is justified, either based on a compromise power analysis, or based on a cost-benefit analysis (Baguley, 2004; Field et al., 2004).

Another widely recommended approach to increase the power of a study is to use a within-participants design where possible. In almost all cases where a researcher is interested in detecting a difference between groups, a within-participants design will require fewer participants than a between-participants design. The reason for the decrease in the sample size is explained by the equation below from Maxwell, Delaney, and Kelley (2017). The number of participants needed in a two group within-participants design (NW) relative to the number of participants needed in a two group between-participants design (NB), assuming normal distributions, is:

$$N_W = \frac{N_B (1 - \rho)}{2}$$

The required number of participants is divided by two because in a within-participants design with two conditions every participant provides two data points. The extent to which this reduces the sample size compared to a between-participants design also depends on the correlation between the dependent variables (e.g., the correlation between the measure collected in a control task and an experimental task), as indicated by the (1 − ρ) part of the equation. If the correlation is 0, a within-participants design simply needs half as many participants as a between-participants design (e.g., 64 instead of 128 participants). The higher the correlation, the larger the relative benefit of within-participants designs, and whenever the correlation is negative (up to -1) the relative benefit disappears. Especially when dependent variables in within-participants designs are positively correlated, within-participants designs will greatly increase the power you can achieve given the sample size you have available. Use within-participants designs when possible, but weigh the benefits of higher power against the downsides of order effects or carryover effects that might be problematic in a within-participants design (Maxwell et al., 2017). 7 For designs with multiple factors with multiple levels it can be difficult to specify the full correlation matrix that specifies the expected population correlation for each pair of measurements (Lakens & Caldwell, 2021). In these cases sequential analyses might provide a solution.
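The equation translates directly into a sample size conversion. A minimal sketch (the ceiling is applied because participants come in whole numbers):

```python
import math

def n_within(n_between, rho):
    """Participants needed in a two-condition within-participants design, given
    the N for a between-participants design and the correlation rho between the
    two measurements (Maxwell, Delaney, & Kelley, 2017): N_W = N_B * (1 - rho) / 2."""
    return math.ceil(n_between * (1 - rho) / 2)

print(n_within(128, 0.0))   # 64: half the participants, each measured twice
print(n_within(128, 0.5))   # 32: a positive correlation helps further
print(n_within(128, -1.0))  # 128: the relative benefit disappears
```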

In general, the smaller the variation, the larger the standardized effect size (because we are dividing the raw effect by a smaller standard deviation) and thus the higher the power given the same number of observations. Some additional recommendations are provided in the literature (Allison et al., 1997; Bausell & Li, 2002; Hallahan & Rosenthal, 1996) , such as:

Use more effective screening procedures in studies where participants need to be screened before participation.

Assign participants unequally to conditions (if data in the control condition is much cheaper to collect than data in the experimental condition, for example).

Use reliable measures that have low error variance (Williams et al., 1995) .

Make smart use of preregistered covariates (Meyvis & Van Osselaer, 2018).

It is important to consider whether these ways of reducing the variation in the data come at too large a cost for external validity. For example, in an intention-to-treat analysis in randomized controlled trials participants who do not comply with the protocol are kept in the analysis, such that the effect size from the study accurately represents the effect of implementing the intervention in the population, and not the effect of the intervention only on those people who perfectly follow the protocol (Gupta, 2011). Similar trade-offs between reducing the variance and external validity exist in other research areas.

Know Your Measure

Although it is convenient to talk about standardized effect sizes, it is generally preferable if researchers can interpret effects in the raw (unstandardized) scores, and have knowledge about the standard deviation of their measures (Baguley, 2009; Lenth, 2001) . To make it possible for a research community to have realistic expectations about the standard deviation of measures they collect, it is beneficial if researchers within a research area use the same validated measures. This provides a reliable knowledge base that makes it easier to plan for a desired accuracy, and to use a smallest effect size of interest on the unstandardized scale in an a-priori power analysis.

In addition to knowledge about the standard deviation it is important to have knowledge about the correlations between dependent variables (for example because Cohen's dz for a dependent t test relies on the correlation between the paired measurements). The more complex the model, the more aspects of the data-generating process need to be known to make predictions. For example, in hierarchical models researchers need knowledge about variance components to be able to perform a power analysis (DeBruine & Barr, 2019; Westfall et al., 2014). Finally, it is important to know the reliability of your measure (Parsons et al., 2019), especially when relying on an effect size from a published study that used a measure with different reliability, or when the same measure is used in different populations, in which case it is possible that measurement reliability differs between populations. With the increasing availability of open data, it will hopefully become easier to estimate these parameters using data from earlier studies.

If we calculate a standard deviation from a sample, this value is an estimate of the true value in the population. In small samples, our estimate can be quite far off, but due to the law of large numbers, as our sample size increases, we will measure the standard deviation more accurately. Since the sample standard deviation is an estimate with uncertainty, we can calculate a confidence interval around the estimate (Smithson, 2003), and design pilot studies that will yield a sufficiently reliable estimate of the standard deviation. The confidence interval for the variance σ² is provided in the following formula, and the confidence interval for the standard deviation is given by the square root of these limits:

$$\frac{(N-1)s^2}{\chi^2_{N-1;\,1-\alpha/2}} \leq \sigma^2 \leq \frac{(N-1)s^2}{\chi^2_{N-1;\,\alpha/2}}$$
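As an illustration (assuming scipy), the interval for the standard deviation can be computed from chi-square quantiles, which also shows how imprecise the estimate from a small pilot study is:

```python
import math
from scipy import stats

def sd_confidence_interval(s, n, conf=0.95):
    """Confidence interval for the population SD, given a sample SD s from n
    observations, via the chi-square interval for the variance."""
    df = n - 1
    lo = math.sqrt(df * s**2 / stats.chi2.ppf(1 - (1 - conf) / 2, df))
    hi = math.sqrt(df * s**2 / stats.chi2.ppf((1 - conf) / 2, df))
    return lo, hi

print(sd_confidence_interval(1.0, 100))  # roughly (0.88, 1.16)
print(sd_confidence_interval(1.0, 20))   # much wider in a small pilot study
```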

Whenever there is uncertainty about parameters, researchers can use sequential designs to perform an internal pilot study   (Wittes & Brittain, 1990) . The idea behind an internal pilot study is that researchers specify a tentative sample size for the study, perform an interim analysis, use the data from the internal pilot study to update parameters such as the variance of the measure, and finally update the final sample size that will be collected. As long as interim looks at the data are blinded (e.g., information about the conditions is not taken into account) the sample size can be adjusted based on an updated estimate of the variance without any practical consequences for the Type I error rate (Friede & Kieser, 2006; Proschan, 2005) . Therefore, if researchers are interested in designing an informative study where the Type I and Type II error rates are controlled, but they lack information about the standard deviation, an internal pilot study might be an attractive approach to consider (Chang, 2016) .
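The re-estimation step in an internal pilot study can be sketched as follows, using the normal-approximation sample size formula for a two-group comparison (scipy assumed; the specific planning numbers are hypothetical):

```python
import math
from scipy import stats

def n_per_group(sd, delta, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided two-sample comparison
    of a raw mean difference delta, given the standard deviation sd."""
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return math.ceil(2 * ((z_a + z_b) * sd / delta) ** 2)

# Planning stage: assume sd = 1.0 for a raw difference of 0.5.
print(n_per_group(1.0, 0.5))  # 63 per group
# Internal pilot: the blinded interim estimate of the sd turns out to be 1.2,
# so the final sample size is updated accordingly.
print(n_per_group(1.2, 0.5))  # 91 per group
```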

Conventions as meta-heuristics

Even when a researcher might not use a heuristic to directly determine the sample size in a study, there is an indirect way in which heuristics play a role in sample size justifications. Sample size justifications based on inferential goals such as a power analysis, accuracy, or a decision all require researchers to choose values for a desired Type I and Type II error rate, a desired accuracy, or a smallest effect size of interest. Although it is sometimes possible to justify these values as described above (e.g., based on a cost-benefit analysis), a solid justification of these values might require dedicated research lines. Performing such research lines will not always be possible, and these studies might themselves not be worth the costs (e.g., it might require less resources to perform a study with an alpha level that most peers would consider conservatively low, than to collect all the data that would be required to determine the alpha level based on a cost-benefit analysis). In these situations, researchers might use values based on a convention.

When it comes to a desired width of a confidence interval, a desired power, or any other input values required to perform a sample size computation, it is important to transparently report the use of a heuristic or convention (for example by using the accompanying online Shiny app). A convention such as the use of a 5% Type I error rate and 80% power practically functions as a lower threshold of the minimum informational value peers are expected to accept without any justification (whereas with a justification, higher error rates can also be deemed acceptable by peers). It is important to realize that none of these values are set in stone. Journals are free to specify that they desire a higher informational value in their author guidelines (e.g., Nature Human Behaviour requires registered reports to be designed to achieve 95% statistical power, and my own department has required staff to submit ERB proposals where, whenever possible, the study was designed to achieve 90% power). Researchers who choose to design studies with a higher informational value than a conventional minimum should receive credit for doing so.

In the past some fields have changed conventions, such as the 5 sigma threshold now used in physics to declare a discovery instead of a 5% Type I error rate. In other fields such attempts have been unsuccessful (e.g., Johnson (2013) ). Improved conventions should be context dependent, and it seems sensible to establish them through consensus meetings (Mullan & Jacoby, 1985) . Consensus meetings are common in medical research, and have been used to decide upon a smallest effect size of interest (for an example, see Fried, Boers, and Baker (1993) ). In many research areas current conventions can be improved. For example, it seems peculiar to have a default alpha level of 5% both for single studies and for meta-analyses, and one could imagine a future where the default alpha level in meta-analyses is much lower than 5%. Hopefully, making the lack of an adequate justification for certain input values in specific situations more transparent will motivate fields to start a discussion about how to improve current conventions. The online Shiny app links to good examples of justifications where possible, and will continue to be updated as better justifications are developed in the future.

Sample Size Justification in Qualitative Research

A value of information perspective to sample size justification also applies to qualitative research. A sample size justification in qualitative research should be based on the consideration that the cost of collecting data from additional participants does not yield new information that is valuable enough given the inferential goals. One widely used application of this idea is known as saturation and is indicated by the observation that new data replicates earlier observations, without adding new information (Morse, 1995) . For example, let’s imagine we ask people why they have a pet. Interviews might reveal reasons that are grouped into categories, but after interviewing 20 people, no new categories emerge, at which point saturation has been reached. Alternative philosophies to qualitative research exist, and not all value planning for saturation. Regrettably, principled approaches to justify sample sizes have not been developed for these alternative philosophies (Marshall et al., 2013) .

When sampling, the goal is often not to pick a representative sample, but a sample that contains a sufficiently diverse set of subjects such that saturation is reached efficiently. Fugard and Potts (2015) show how to move towards a more informed justification for the sample size in qualitative research based on 1) the number of codes that exist in the population (e.g., the number of reasons people have pets), 2) the probability a code can be observed in a single information source (e.g., the probability that someone you interview will mention each possible reason for having a pet), and 3) the number of times you want to observe each code. They provide an R formula based on binomial probabilities to compute a required sample size to reach a desired probability of observing codes.
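The binomial logic in Fugard and Potts (2015) can be sketched as follows (an illustrative reimplementation, not their actual R code): find the smallest number of information sources for which the probability of observing a code at least k times reaches a desired level.

```python
import math

def sources_needed(p_code, prob_observe=0.95, k=1):
    """Smallest number of information sources n such that a code with per-source
    probability p_code is observed at least k times with probability prob_observe."""
    n = k
    while True:
        # P(X >= k) for X ~ Binomial(n, p_code)
        p_at_least_k = 1 - sum(
            math.comb(n, i) * p_code**i * (1 - p_code) ** (n - i) for i in range(k)
        )
        if p_at_least_k >= prob_observe:
            return n
        n += 1

print(sources_needed(0.10))       # 29 interviews for a code mentioned by 10% of people
print(sources_needed(0.10, k=2))  # more sources are needed to see the code twice
```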

A more advanced approach is used in Rijnsoever (2017) , which also explores the importance of different sampling strategies. In general, purposefully sampling information from sources you expect will yield novel information is much more efficient than random sampling, but this also requires a good overview of the expected codes, and the sub-populations in which each code can be observed. Sometimes, it is possible to identify information sources that, when interviewed, would at least yield one new code (e.g., based on informal communication before an interview). A good sample size justification in qualitative research is based on 1) an identification of the populations, including any sub-populations, 2) an estimate of the number of codes in the (sub-)population, 3) the probability a code is encountered in an information source, and 4) the sampling strategy that is used.

Providing a coherent sample size justification is an essential step in designing an informative study. There are multiple approaches to justifying the sample size in a study, depending on the goal of the data collection, the resources that are available, and the statistical approach that is used to analyze the data. An overarching principle in all these approaches is that researchers consider the value of the information they collect in relation to their inferential goals.

The process of justifying a sample size when designing a study should sometimes lead to the conclusion that it is not worthwhile to collect the data, because the study does not have sufficient informational value to justify the costs. There will be cases where it is unlikely there will ever be enough data to perform a meta-analysis (for example because of a lack of general interest in the topic), the information will not be used to make a decision or claim, and the statistical tests do not allow you to test a hypothesis with reasonable error rates or to estimate an effect size with sufficient accuracy. If there is no good justification to collect the maximum number of observations that one can feasibly collect, performing the study anyway is a waste of time and/or money (Brown, 1983; Button et al., 2013; S. D. Halpern et al., 2002) .

The awareness that sample sizes in past studies were often too small to meet any realistic inferential goals is growing among psychologists (Button et al., 2013; Fraley & Vazire, 2014; Lindsay, 2015; Sedlmeier & Gigerenzer, 1989) . As an increasing number of journals start to require sample size justifications, some researchers will realize they need to collect larger samples than they were used to. This means researchers will need to request more money for participant payment in grant proposals, or that researchers will need to increasingly collaborate (Moshontz et al., 2018) . If you believe your research question is important enough to be answered, but you are not able to answer the question with your current resources, one approach to consider is to organize a research collaboration with peers, and pursue an answer to this question collectively.

A sample size justification should not be seen as a hurdle that researchers need to pass before they can submit a grant, ethical review board proposal, or manuscript for publication. When a sample size is simply stated, instead of carefully justified, it can be difficult to evaluate whether the value of the information a researcher aims to collect outweighs the costs of data collection. Being able to report a solid sample size justification means a researcher knows what they want to learn from a study, and makes it possible to design a study that can provide an informative answer to a scientific question.

This work was funded by VIDI Grant 452-17-013 from the Netherlands Organisation for Scientific Research. I would like to thank Shilaan Alzahawi, José Biurrun, Aaron Caldwell, Gordon Feld, Yoav Kessler, Robin Kok, Maximilian Maier, Matan Mazor, Toni Saari, Andy Siddall, and Jesper Wulff for feedback on an earlier draft. A computationally reproducible version of this manuscript is available at https://github.com/Lakens/sample_size_justification. An interactive online form to complete a sample size justification implementing the recommendations in this manuscript can be found at https://shiny.ieis.tue.nl/sample_size_justification/.

I have no competing interests to declare.


The topic of power analysis for meta-analyses is outside the scope of this manuscript, but see Hedges and Pigott (2001) and Valentine, Pigott, and Rothstein (2010) .

It is possible to argue we are still making an inference, even when the entire population is observed, because we have observed a metaphorical population from one of many possible worlds, see Spiegelhalter (2019) .

Power analyses can be performed based on standardized effect sizes or effect sizes expressed on the original scale. It is important to know the standard deviation of the effect (see the ‘Know Your Measure’ section) but I find it slightly more convenient to talk about standardized effects in the context of sample size justifications.

These figures can be reproduced and adapted in an online shiny app: http://shiny.ieis.tue.nl/d_p_power/ .

Confidence intervals around effect sizes can be computed using the MOTE Shiny app: https://www.aggieerin.com/shiny-server/

Shiny apps are available for both rpact: https://rpact.shinyapps.io/public/ and gsDesign: https://gsdesign.shinyapps.io/prod/

You can compare within- and between-participants designs in this Shiny app: http://shiny.ieis.tue.nl/within_between .




Appendix C: Sample Budget Justification


The budget justification is one of the most important non-technical sections of the proposal, and it is often required by the sponsor. In this section, the Principal Investigator (PI) provides additional detail for expenses within each budget category and articulates the need for the items/expenses listed. The information provided in the budget justification may be the definitive criteria used by sponsor review panels and administrative officials when determining the amount of funding to be awarded.

The following format is a sample only; not all components will apply to every proposal. Many sponsors prefer that budget justifications follow their own format. In all cases, however, it is best to present the justification for each budget category in the same order as that provided in the budget itself.

Salaries and Wages:

Note: The quantification of unfunded effort (e.g., "The PI will donate 5% effort...") in the proposal narrative, budget, or budget justification is considered Voluntary Committed Cost Sharing. This is a legal commitment which must be documented in the University's accounting system. Consider quantifying effort only for the requested salary support. See http://www.dfa.cornell.edu/treasurer/policyoffice/policies/volumes/academic/costsharing.cfm for additional information.

  • Principal Investigator: This proposal requests salary support for _______% of effort during the academic year and 100% of effort for _______ months during the summer.
  • Other Professional Support: List the title and level of effort proposed for funding. Other personnel categories (Research Associates, Postdoctoral Associates, Technicians) may be included here.
  • Administrative and Clerical: List the circumstances requiring direct charging of these services, which must be readily and specifically identifiable to the project with a high degree of accuracy. Provide a brief description of actual job responsibilities, the proposed title, and the level of effort. (See the note at the end of this Appendix regarding direct charging of costs that are normally considered indirect.)
  • Graduate Students: List the number and a brief description of each project role. Include stipend, GRA allowance (tuition), and health insurance.
  • Undergraduate Students: List the number and a brief description of each project role.
  • Employee Benefits: Benefits have been proposed at a rate of ______% for all non-student compensation, as approved by the Department of Health and Human Services. See https://www.dfa.cornell.edu/capitalassets/cost/employee.

Capital Equipment:  The following equipment will be necessary for the completion of the project: Include item description(s), estimated cost of each item, and total cost. Provide a brief statement on necessity and suitability.

Travel:  For each trip, list destination, duration, purpose, relationship to the project, and total cost. Indicate any plans for foreign travel.

Technical Supplies and Materials:  Include type of supplies, per unit price, quantity, and cost. When the cost is substantial, provide a brief statement justifying the necessity.

Publications:  Page charges (number of pages multiplied by the per-page charge).

Services:  Include type of services, cost per type, and total cost.

Consultants: Include the consultant's name, rate, number of days, total cost per consultant, and total consultant cost. Provide a brief statement outlining each individual's expertise and justifying the anticipated need for consultant services. Note: Justifying a specific consultant in the proposal may avoid the need to competitively bid consulting services.

Subcontracts: Include the subcontractor's name, amount, and total cost. Provide a brief description of the work to be performed and the basis for selecting the subcontractor. A separate budget and corresponding budget justification, required by many agencies, should be completed by the subcontractor. Note: Justifying a specific subcontractor in the proposal may avoid the need to competitively bid subcontracted services. Post-award changes to subcontracts (additions, deletions, scope or budget modifications) may require sponsor approval.

Other Expenses:  May include conferences and seminars (see  Appendix D ), Repair and Maintenance, Academic and User Fees.

Facilities and Administrative Costs (F&A): F&A costs have been proposed at a rate of _____% of Modified Total Direct Cost (MTDC) as approved in Cornell's rate agreement with the Department of Health and Human Services. A copy of this agreement may be found at https://www.dfa.cornell.edu/capitalassets/cost/facilities. MTDC exclusions include Capital Equipment, GRA Allowance and Health Insurance, and Subcontract costs in excess of $25,000 per subcontract.
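The MTDC calculation above can be sketched in a few lines. This is an illustrative model only: the category names, the 64% rate, and the dollar amounts are placeholders, not Cornell's actual figures; the per-subcontract $25,000 cap and the excluded categories follow the description in the paragraph above.

```python
# Hypothetical sketch of an MTDC-based F&A calculation.
# Category names, amounts, and the F&A rate below are illustrative placeholders.

MTDC_EXCLUDED = {"capital_equipment", "gra_allowance", "health_insurance"}
SUBAWARD_MTDC_CAP = 25_000  # only the first $25,000 of each subcontract carries F&A


def mtdc_base(direct_costs: dict, subawards: list) -> float:
    """Modified Total Direct Cost: direct costs minus excluded categories,
    plus at most $25,000 per subcontract."""
    base = sum(v for k, v in direct_costs.items() if k not in MTDC_EXCLUDED)
    base += sum(min(s, SUBAWARD_MTDC_CAP) for s in subawards)
    return base


def fa_cost(direct_costs: dict, subawards: list, fa_rate: float) -> float:
    """F&A charged as a flat rate on the MTDC base."""
    return mtdc_base(direct_costs, subawards) * fa_rate


direct = {"salaries": 100_000, "capital_equipment": 40_000, "supplies": 10_000}
subawards = [60_000]  # only the first 25,000 enters the MTDC base

# MTDC base = 100,000 + 10,000 + 25,000 = 135,000
# F&A at a placeholder 64% rate = 135,000 * 0.64 = 86,400
total_fa = fa_cost(direct, subawards, 0.64)
```

Note how the $40,000 of capital equipment and $35,000 of the subcontract drop out of the base before the rate is applied.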

Annual escalations are proposed in accordance with University policy as outlined HERE.
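Escalation is simple compounding across budget years. A minimal sketch, assuming a placeholder 3% annual rate (the actual rate is set by University policy):

```python
# Illustrative multi-year escalation; the 3% rate is a placeholder,
# not the rate prescribed by University policy.

def escalate(base: float, rate: float, years: int) -> list:
    """Return the escalated amount for each budget year, year 1 unescalated."""
    return [round(base * (1 + rate) ** y, 2) for y in range(years)]


# A $50,000 salary request over three budget years at 3% annual escalation:
salary_by_year = escalate(50_000, 0.03, 3)
# year 1: 50,000.00; year 2: 51,500.00; year 3: 53,045.00
```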

Special information for direct charging costs that are normally considered indirect. Many costs, such as administrative and clerical salaries, office supplies, monthly telephone and network charges, general-purpose equipment, and postage, are not typically considered direct costs. These may be proposed as direct costs where "unlike and different" circumstances exist. In such cases, a budget justification detailing the request must be submitted to OSP for review and approval. Please read the University policy at https://www.dfa.cornell.edu/sites/default/files/policy/vol3_14.pdf or contact your Grant and Contract Officer for additional assistance.


