Academia Insider

What is the average IQ of PhD students and academics? Are they REALLY smart?

The question of intelligence and its relationship to academic achievement, particularly at the level of a PhD, has always been a topic of discussion within education policy and academia.

It is often assumed that those who attain a PhD are “really smart” and possess superior intelligence, as measured by traditional IQ tests. 

While the average IQ score of PhD students and academics, according to some studies, falls in the ‘superior’ range of around 125, this doesn’t exclude those with an average IQ from undertaking a PhD.

IQ tests can only measure a fraction of what we consider as intelligence, and the score obtained is influenced by a range of factors.

Many university professors and PhD students exemplify not just high cognitive abilities but also exhibit qualities like dedication, creativity, and problem-solving skills. 

The average IQ of PhD graduates and students

According to some sources, the average IQ score for people with PhDs is around 125, which is considered superior.

However, this does not mean that people with lower or higher IQs cannot obtain a PhD, as IQ is only one of many factors that influence academic success.

Moreover, IQ is not a fixed or definitive measure of intelligence, as it can vary depending on the test used, the context, and the individual’s background and motivation.

Here is a table of the IQ ranges reported for PhDs in various studies, with citations:

Can You Get a Ph.D with an Average IQ?

Yes, you can certainly pursue and successfully complete a Ph.D. with an average IQ.

Completing a Ph.D. is not just about high intelligence or academic prowess.

It requires a unique combination of traits such as resilience, passion for the research subject, and the ability to execute ideas into practice.

Understanding that the path to success involves overcoming numerous challenges and setbacks is crucial.

  • Developing a genuine enjoyment for your research will help maintain motivation during the years spent pursuing the degree.
  • Bridging the gap between theoretical knowledge and its practical application through effective execution is essential.

Therefore, having an average IQ does not preclude you from achieving a Ph.D., provided you can cultivate and leverage these other critical traits and skills.

The IQ Myth in Academia

Completing a PhD is not just about having a high IQ or acing undergraduate courses.

It entails a mixture of different characteristics that, when combined, contribute to one’s potential to successfully finish a doctoral program.

These traits can be better encapsulated in the following table:

Not everyone who gets a PhD has a high IQ. 

I have seen many average students get a PhD, because success often relies on much more than IQ: it is a combination of luck, persistence, IQ, and supervisor choice.

The term “smart” is multifaceted and can mean different things to different people. However, in a general sense, we often use it to denote a combination of:

  • education,
  • intelligence,
  • and wisdom.

In this context, individuals who have obtained a PhD degree are certainly well-educated.

They have pursued a subject matter in depth, contributed original research to the field, and demonstrated the ability to think critically and solve complex problems.

That said, the choice to pursue a PhD isn’t necessarily an indication of “smartness” if we consider wisdom or the ability to make well-informed, prudent decisions.

Some people embark on a PhD journey without a clear understanding or vision of what they want to achieve at the end, which might not be the wisest decision.

In some cases, individuals may choose to do a PhD because they’re uncertain about their career path and see it as the path of least resistance.

This, too, might not be considered a smart move, as a PhD requires significant investment of time, energy, and often money.

It can also be a stressful and challenging endeavor, so it’s crucial to have a clear purpose and motivation when making this commitment.

On the other hand, a PhD can indeed be a smart choice for those who have a clear objective that aligns with the skills and opportunities offered by such a program.

This might include a deep interest in research and academia, a desire to contribute to a particular field of knowledge, or specific career goals that require or are enhanced by a PhD.

The features of a person with high IQ scores

A person with high IQ scores might exhibit the following features that may be beneficial for a PhD:

  • Longevity & Health : A high IQ is associated with longer lifespan and better overall health.
  • Resistance to Stress Disorders : High IQ individuals may be more resilient to conditions like post-traumatic stress disorder.
  • Success in Education & Occupation : They are likely to achieve educational success and gain higher occupational status.
  • Higher Income : High IQ often correlates with higher income levels.
  • Less Sensitivity to Disgust : High IQ individuals tend to be less disgust-sensitive, although the exact reason is unclear.
  • Impulsivity : Despite their higher IQ, these individuals do not necessarily exhibit less impulsive behavior.
  • Intelligence Despite Disorder : It is possible to have high IQ and still have disorganized behavior or difficulty implementing long-term plans.
  • No Correlation with Industriousness : Surprisingly, there’s no apparent correlation between high IQ and industriousness (the tendency to work hard and diligently). The reasons for this remain unclear.
  • Physiological Differences : People with higher IQs tend to have slightly bigger heads and brains (when controlled for body size), thicker axons on their neurons for more efficient electrical message transmission, and faster simple reflexes.

The above are general tendencies observed in some studies and may not apply to all individuals with high IQ scores.

Also, correlation does not imply causation, and many factors can influence these outcomes.

Wrapping up – Intelligence and PhDs

While it’s common to link high IQ scores to academic achievement, specifically at the PhD level, this isn’t a definitive measure of intelligence.

The average IQ of PhD students and graduates is approximately 125, but that doesn’t exclude those with an average IQ from pursuing a PhD.

Success in academia depends on more than IQ; resilience, research passion, and the ability to turn theories into practice are all key traits.

It’s important to highlight that undertaking a Ph.D. isn’t inherently an indication of “smartness”.

Some choose this path without clear goals, which, given the investment required, might not be the wisest decision.

On the other hand, for those with specific career goals or a deep interest in research and academia, pursuing a PhD can be beneficial.

High IQ individuals tend to have certain traits, like longevity, resistance to stress disorders, and higher income, but these are general tendencies and not applicable to everyone.

Ultimately, the relationship between intelligence and PhD attainment is nuanced and shaped by several factors.

Dr Andrew Stapleton has a Masters and PhD in Chemistry from the UK and Australia. He has many years of research experience and has worked as a Postdoctoral Fellow and Associate at a number of universities. Despite having secured funding for his own research, he left academia to help others with his YouTube channel all about the inner workings of academia and how to make it work for you.

PhD students aren’t what they used to be either

  • Post author: Emil O. W. Kirkegaard
  • Post published: 19. May 2021
  • Post category: Education / intelligence / IQ / cognitive ability

One very basic fact about humans is that the ability to do anything varies. This is also true for the ability to do cognitively demanding jobs whose main requirement is general intelligence, a trait that follows a well-known normalish distribution in the population. In general, then, it follows that the more people society dedicates to some activity, the lower the average ability, in this case intelligence, of those people. This is equally true whether we are talking about dentists, mechanics, or various kinds of students. Thus, the more we increase the proportion of the population that enrolls in some level of academic study, the lower the average ability of such students. A lot of confusion in the literature results from ignoring this consequence and comparing educational levels as if they were invariant indicators over time. For instance, this gives us headlines such as “The value of a high school degree has collapsed since 1980”, with conclusions:

  • The average income of high school grads has declined 12% during the past 40 years.
  • Advanced degree holders, meanwhile, have enjoyed an 18% increase in income during the same time.
  • The value of a high school degree has declined along with a loss of manufacturing jobs and an increase in low-wage service jobs.

It is a rather trivial consequence of the falling human capital of people who have only completed high school. In a nice 2014 blogpost, Todd Schoellman and Lutz Hendricks explain this with graphs:

Schoellman Figure 1

Over at Audacious Epigone , we can find the mean IQs by degree over time:

The increased uptake of course also means that the curriculum and associated exams must be continuously made easier for the students to keep up their pass rates. They don’t make them like they used to do.
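The selection argument above can be sketched numerically. As a rough illustration (my own toy model, not from the post), assume IQ is normally distributed with mean 100 and SD 15, and that, in the simplest case, exactly the top fraction p of the population enrolls; the enrollees' mean then follows the standard truncated-normal formula:

```python
from statistics import NormalDist

# Toy model (illustration only): IQ ~ N(100, 15), and the top fraction p
# of the population enrolls. The mean of the selected group is
# mu + sigma * pdf(z) / p, where z is the standardized enrollment cutoff
# (the standard truncated-normal mean formula).
std = NormalDist()  # standard normal, for the cutoff and density

def mean_iq_of_top(p: float) -> float:
    z = std.inv_cdf(1 - p)              # enrollment cutoff in SD units
    return 100 + 15 * std.pdf(z) / p    # mean IQ of those above the cutoff

for p in (0.01, 0.10, 0.30):
    print(f"top {p:.0%} enroll -> mean IQ ~ {mean_iq_of_top(p):.1f}")
```

Under these assumptions, enrolling only the top 1% gives a mean around 140, while expanding to the top 30% drops it to about 117, which is the same direction as the trends discussed above.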

The main reason I am writing this post, though, is this pretty cool study using Danish data:

  • Akcigit, U., Pearce, J. G., & Prato, M. (2020). Tapping into talent: Coupling education and innovation policies for economic growth (No. w27862). National Bureau of Economic Research.
How do innovation and education policy affect individual career choice and aggregate productivity? This paper analyzes the various layers that connect R&D subsidies and higher education policy to productivity growth. We put the development of scarce talent and career choice at the center of a new endogenous growth framework with individual-level heterogeneity in talent, frictions, and preferences. We link the model to micro-level data from Denmark and uncover a host of facts about the links between talent, higher education, and innovation. We use these facts to calibrate the model and study counterfactual policy exercises. We find that R&D subsidies, while less effective than standard models, can be strengthened when combined with higher education policy that alleviates financial frictions for talented youth. Education and innovation policies not only alleviate different frictions, but also impact innovation at different time horizons. Education policy is also more effective in societies with high income inequality.

The key figure:

[Figure 14: average IQ of enrolling PhD students falls as the number of PhD slots increases]

As the authors explain:

Starting in 2002, the Danish Government required the universities to increase the number of PhD slots, as part of a larger initiative to support education and innovation in Denmark (see Section 2 for further institutional details). Figure 14 shows that as the number of slots for PhDs increases, the average IQ of the enrolling students falls. This indicates that there is heterogeneous quality of enrollees and expanding slots may draw in a marginal researcher less talented than the average researcher from the existing pool. Thus, even though policy can increase the supply of researchers, there is a trade-off between expanding the pool of PhDs and the average talent of PhDs in the economy.

The authors’ method of treating the IQ data seems inappropriate: one has to take the mean of the IQs and then transform it to a centile, since taking the mean of centiles produces a downward bias. With this in mind, we see that the average IQ centile decreased from about the 83rd to the 77th in less than 10 years after this policy change. This represents a drop from about 114.3 to 111.1 IQ (in R: qnorm(c(.83, .77), 100, 15)). These IQs are quite low already. I did some simulations to see how large this bias is, but it seems to be only about 1 IQ point, so not a big deal here. Other plots from the paper:
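The same centile-to-IQ conversion can be done with Python's standard library (a quick check of the qnorm call above, using statistics.NormalDist rather than R):

```python
from statistics import NormalDist

# Equivalent of R's qnorm(c(.83, .77), 100, 15): convert IQ centiles
# to scores on the usual mean-100, SD-15 scale.
iq = NormalDist(mu=100, sigma=15)
for centile in (0.83, 0.77):
    print(f"{centile:.0%} centile -> IQ {iq.inv_cdf(centile):.1f}")
```

This reproduces the 114.3 and 111.1 figures quoted above.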

[Plot: probability of obtaining a PhD by father’s income centile and by IQ centile]

The main thing to note here is simply the values on the Y axis. Having a father in 99th income centile gives about 2.2% chance of obtaining a PhD, but having an IQ in the 99th centile gives about 6.5% chance, or nearly 3x the effect size.


Med School Insiders

Are Doctors Smart? IQ by Profession

  • By Kevin Jubbal, M.D.
  • June 27, 2021

Did you know that the average IQ varies significantly by profession? Here are the smartest professionals, as demonstrated by the scientific literature.

Most of you will find this interesting, but a small portion will be deeply offended by the data. After all, the year is 2021 and being offended is one of the most widely practiced sports across hyperwoke social justice warriors eager to virtue signal how truly woke they are.

What is IQ?

First, what is IQ? IQ stands for Intelligence Quotient, a standardized test with numeric scoring designed to assess human intelligence. The concept of measuring IQ arose in the 1910s with either Wilhelm Stern or Lewis Terman, depending on which source you believe.

The population’s average is 100 with a standard deviation of 15. This means approximately two-thirds of the population scores between 85 and 115, while roughly 2.5% score above 130 and 2.5% below 70.
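Those shares follow directly from the normal curve; here is a quick check (my own illustration, not from the original article) using Python's standard library:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)  # population IQ distribution

within_one_sd = iq.cdf(115) - iq.cdf(85)  # between 85 and 115
above_130 = 1 - iq.cdf(130)               # more than two SDs above the mean
below_70 = iq.cdf(70)                     # more than two SDs below the mean

print(f"between 85 and 115: {within_one_sd:.1%}")
print(f"above 130: {above_130:.1%}")
print(f"below 70: {below_70:.1%}")
```

This gives about 68.3% within one standard deviation and about 2.3% in each two-SD tail, close to the rounded figures quoted above.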

It’s important to note that while IQ tests have a high degree of reliability, meaning you’ll score similarly by repeating the test, the validity of the test is limited to the types of intelligence that are necessary to do well in academic work. It does not account for creativity or social intelligence, among other valid and important forms of intelligence.

The Binet-Simon IQ scale classifies scores as follows:

Over 140 – Genius or almost genius

120 – 140 – Very superior intelligence

110 – 119 – Superior intelligence

90 – 109 – Average or normal intelligence

80 – 89 – Dullness

70 – 79 – Borderline deficiency in intelligence

Under 70 – Feeble-mindedness

How Important is IQ?

So how important are IQ scores? When asked his IQ, Stephen Hawking replied, “I have no idea. People who boast about their IQ are losers.” Well said, Dr. Hawking. Well said.

Here are the professions with the highest average IQ, taken from a variety of sources, including Robert Hauser’s “Meritocracy, Cognitive Ability, and the Sources of Occupational Success”.

At the top of the list, in the low 130s, are either physicians and surgeons or professors and researchers, depending on the study you look at. The range amongst physicians and surgeons is tightly clustered, whereas the range for professors and researchers is broader. Below that, in the high 120s, are lawyers, followed by accountants in the low 120s. Pharmacists average around 120 and nurses in the high 110s.

So what does this mean? Not much, actually. It appears that, on average, those with higher IQs gravitate to more intellectually stimulating work. Cue the keyboard warriors enraged in protest that their work isn’t as intellectually stimulating as that of a professor or researcher. Curious to know more, I dug further into intelligence, wealth, and happiness. Can you guess what I found?

IQ and Money

Jay Zagorsky from Ohio State University analyzed a sample of 7,500 adults between the ages of 33 and 41. The analysis initially confirmed findings similar to other studies linking higher intelligence with higher income. More specifically, every point increase in IQ was associated with approximately $200-$600 more income per year. For example, someone with an IQ of 130 would earn approximately $12,000 more than someone with an IQ of around 100. Not surprisingly, those with higher intelligence scores also had greater wealth, meaning a higher average net worth.
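The $12,000 example is just the per-point estimate applied across a 30-point gap; a quick sanity check (my own arithmetic, using the quoted $200-$600 range):

```python
# Reported association: each extra IQ point ~ $200-$600 more income per year.
# The article's $12,000 example corresponds to the ~$400 midpoint of that
# range applied across the 30-point gap between IQ 130 and IQ 100.
per_point_low, per_point_high = 200, 600
gap = 130 - 100  # IQ points

low = gap * per_point_low                               # lower bound
high = gap * per_point_high                             # upper bound
midpoint = gap * (per_point_low + per_point_high) // 2  # midpoint estimate

print(f"IQ 130 vs 100: ${low:,}-${high:,}/year (midpoint ${midpoint:,})")
```

The midpoint works out to $12,000 per year, with the quoted range implying anywhere from $6,000 to $18,000.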

But when performing multivariate regression models and controlling for various factors, such as divorce, years spent in school, type of work, and inheritance, there was no link between IQ and net worth.

Other studies have found a correlation between IQ and income, meaning those with higher IQs tend to make more money each year. These studies find little correlation between IQ and wealth, however, meaning those in the yacht club aren’t on average smarter than those who aren’t.

I find that data questionable. When you look at some of the wealthiest people in the world, including Jeff Bezos, Warren Buffett, Elon Musk, Mark Zuckerberg, Bill Gates, Larry Ellison, Larry Page, and other highly successful entrepreneurs, it’s hard to argue they aren’t all incredibly intelligent. Their immense wealth and high degree of intelligence would surely skew the data, resulting in something statistically significant. But of course, the data sets we’re looking at don’t take these individuals into account.

IQ and Happiness

What about happiness? Do those with higher IQ tend to be happier than those with lower IQ?

In a 2012 review by Veenhoven and Choi, it was concluded that on the micro-level of individuals, there was no correlation between IQ and happiness. But at the macro-level, meaning the average IQ amongst nations, there was a strong positive correlation. The researchers concluded, “together these findings mean that smartness of all pays more than being smarter than others.”

You would think that smarter people should be happier. There is good evidence that IQ predicts more than just performance in school, but also success at work, health, and longevity. But these positive effects could be offset by negative effects, namely in expectations. As the authors write, “school-smart people could expect more of life and therefore end up equally happy as the less smart, who expect less.” They raise other theories too, such as the development of school intelligence involving opportunity costs, namely less time spent on sports or socializing, which are also important to leading a satisfying life.

And why would smarter nations be happier? One possible explanation is that both IQ and happiness depend on shared factors, such as adequate nutrition and health care.

If this video sparked curiosity, I’m glad, and I hope you join me here again. It’s unfortunate that as we grow into adulthood, our childlike curiosity is beaten out of us. I find that one of the most rewarding experiences is to get reconnected with that childlike wonder and explore where your curiosity leads you. If this video offended you, examine what meaning you’re assigning to the data. And as Mae West famously said, “those who are easily shocked should be shocked more often.”

If you enjoyed this piece, check out my article on the competitiveness of medical school versus law school versus nursing school and other professions, or my article exploring the research on whether money can buy happiness. 

5 myths about doing a PhD debunked

10 November 2017

You've seen them lurking in the shadows of the Anatomy Building, carrying mysterious buckets of ice from one room to another, laughing uncontrollably at cat memes in dimly lit rooms, sipping their grandé-sized cups of coffee … and you're intrigued.

Well, my friend, allow me to introduce you to my people: the people of the PhD Students. Yes, we're a little odd (we prefer the term 'driven'), but we're also very keen to tell you about us, what we do for a living, and hopefully, convince you that the rumours you may have heard about doing a PhD aren't exactly all true. So, in light of that, here are 5 common myths about doing a PhD, debunked by a real PhD student.

1. I'm not smart enough to do a PhD

How many PhD students does it take to work a photocopier machine? 4 and a very annoyed technician. Truth is, we never really got around to figuring out how it worked, but somehow, someone somewhere on the admissions board considered us each worthy of a PhD placement.

The general opinion seems to be that people who do a PhD must have an IQ score approximately equal to or above Einstein's. However, that's not quite true. Most people who end up doing a PhD are offered a place because they're passionate about a subject and/or have spent time gaining relevant experience in that field.

Quite frankly, whether your passion is understanding the signs of boredom in ferrets or the behaviour of chickens on the North coast of California on a windy day, if you are a committed, creative, and determined individual with some troubleshooting skills, you too can make a significant contribution to the field of chicken behavioural studies.

2. I can't afford a PhD

So this one isn't exactly a myth. Not many people can afford to self-fund a PhD, which is why most prospective PhD students apply for funded positions. There are many PhD opportunities in the UK that are funded by research councils and charitable bodies, such as the BBSRC, Wellcome Trust, Cancer Research UK, and MRC for the sciences and the ESRC and AHRC for the arts and humanities. (But don't let that stop you from going abroad and seeking equally ravishing opportunities there!).

A lot of research groups also tend to individually list their PhD opportunities on their university or company's website or on others, such as findaphd.com (yes, that's a real website). However, if you're not able to find one that is funded, part time PhDs allow you to have a separate job alongside your studies to financially aid you.

3. A PhD's final destination is a lifetime in academia

Oh dear. Who told you this? Well, I suppose it's true that, in order to qualify for entry into a career in academic teaching and research, you'll need a PhD. However, that's not the only career option PhD graduates have. Oh no, no. The reality of academia is that there are only a limited number of posts available, meaning that the competition can be as high as one french fry amongst thirty potato-hungry pigeons. On the other hand, a PhD isn't just 3 or 4 years of becoming an expert in zebrafish sleep cycles; it's 3 or 4 years of developing excellent research, project management, public speaking and professional networking skills (transferable skills which plenty of non-academic jobs are looking for).

Furthermore, a lot of new PhD programmes (especially those at UCL) partner university research groups with those at large industrial companies, giving the candidate experience of both more traditional academia and the modern complexities of working in industry. To loosely quote Hannah Montana: you've got the best of both worlds there.

4. Doing a PhD is isolating and depressing

So I have both good and bad news about this. The somewhat bad news is that doing a PhD can become emotionally taxing, especially when you reach a 'slow' period in your data gathering or writing. However, NEVER FEAR! The good news is that it's pretty easy to avoid and/or deal with it.

Like any job, your happiness level whilst doing a PhD can rely quite heavily on your environment, how well you get along with your peers and supervisor, as well as how interested you are in your project. It therefore kinda makes sense that a lot of how you'll do in your PhD is dependent on how much research you do beforehand. For example, doing a masters could help you clarify your interest for a subject. Also, meeting supervisors before applying for a PhD with them can help you suss out each other's working styles.

A lot of universities also have PhD anxiety management courses as well as appropriate support measures in place. These include secondary and tertiary supervisors and graduate tutors if anything should come up. The most important thing to remember is that you won't be alone during your PhD, nor is it inherently depressing - there will always be someone there to help you through any difficulty you may come across. And there will always be cake.

5. I won't have time to do anything else

So don't tell my supervisor this*, but the truth is: you will have quite a substantial amount of free time on your hands.

Depending on your project, chances are you won't need to spend every waking hour/weekend/day of your life in the lab/library. Plenty of my PhD-doing friends dedicated their spare time to learning languages, travelling the world, getting married, having children, becoming professional models, and, well, I'm writing this article, so obviously things aren't all that bad.

One of the greatest lessons you will learn during your PhD isn't how to train artificial intelligence robots to make a decent cup of tea, but rather how to manage your time effectively. Spreadsheets, calendars, Cortana, diaries, and - my favourite - sticky notes shaped like strawberries are all great methods of planning your days to suit both you and your research. PhDs are flexible, stimulating, and - most of all - fun little experiences that are, ultimately, forever rewarding. So go on, join us; we have cookies (on Wednesdays).

*just kidding, he knows.

How to develop a researcher mindset as a PhD student

Entering the postgraduate sphere is a whole new ball game. Shaif Uddin Ahammed shows how to hone a PhD mindset

Shaif Uddin Ahammed


Life as a PhD student is challenging – and one of the most testing aspects of it is the change in mindset it requires. 

You switch from being a consumer of knowledge to a producer of knowledge. In other words, you transition from passively absorbing information to actively generating new insights through original research. To do that, you have to develop the mindset of a researcher. Here, I’ll reflect on my own academic journey and experiences of supervising others, to share my thoughts on how to do just that.

Have a career plan

A PhD can be long, and the prospect of writing a thesis is daunting. It can even be distracting, because it tempts you to put the very idea of long-term goals on the back burner.

That’s exactly why it’s worth having a career plan. It will remind you why you’re doing all of this and carry you through the more draining aspects of your studies and research. Trust me, this will help. 

But there’s a difference between simply having goals and having a plan. A plan involves steps to help you achieve the goals you’re aiming towards and gives you boxes to tick. For example, your plan could involve attending conferences, publishing articles and teaching and supporting students. It should also identify skills gaps and outline plans to address them. 

Make sure your targets are realistic and achievable, and discuss them with your supervisor, who will guide you accordingly. Having a well-considered plan will help to motivate you and provide a map to help you chart your progress. Aside from anything else, this is important in helping you maintain a healthy work-life balance. 

Take every opportunity that you can to learn

If you’re studying towards a PhD, you have already demonstrated a desire to learn. Make sure you now take every opportunity to do so and that you learn from sources beyond your supervisor or supervisory team. 

Postgraduate research students can attend regular events and workshops organised by the academic skills teams and career advisors within their universities. By leveraging these resources, you can develop the knowledge and skills required to complete your doctoral degree and also learn about the skills required to secure a job with potential employers. 

It is particularly important to attend workshops organised by the university’s doctoral school. I would strongly urge you not to ignore these sessions. Some students choose to select only those workshops they believe will be beneficial, but attending all workshops – particularly in the early stages of your degree – will help you to develop skills and knowledge that could prove vital in the future. 

For instance, if you are a qualitative researcher, you might choose only to attend workshops related to qualitative research. However, in a future job you might need to teach quantitative methodology or be involved in research using quantitative methods. So it’s good practice not to be selective and to attend all workshops, allowing you to gain wider knowledge and develop networks with individuals from diverse backgrounds.

Involve yourself in academic activities

In research-related careers, applicants are generally expected to have experience of teaching, so it’s hugely important to actively seek teaching and supervisory opportunities both within your university and outside of it. You should also engage in grant applications with others, including your supervisory team – this will provide hands-on experience of the daily challenges faced by academics. 

Many PhD students – and even some supervisors – think these activities could delay the completion of a doctoral degree, but they really do help you to acquire the skills you will need going forward. Supervising undergraduate and postgraduate students will offer insight into mentoring and managing expectations, including those of your supervisor. Involvement with teaching and assessments will give you an intuition when it comes to academic life, and the opportunity to directly apply new skills with the students you work with. This will foster the mindset that you are not only a PhD student but also an active academic. 

Attend conferences and engage with journals

Seek out opportunities to publish in academic journals and attend relevant conferences. If you don’t, your work might not have the desired impact, regardless of its merit. 

Conferences offer a platform for feedback, peer review opportunities, research visibility and invaluable networking. Similarly, involvement in publications and conferences can inspire new ideas and perspectives for research.

The PhD journey is never an easy one, given the number of commitments involved. Remind yourself that you are a researcher and an academic, and that your work has the potential to shape knowledge and understanding for years to come. Research is challenging – but if you’re in a position to study for a PhD, you already have the tools to overcome those challenges. 

Shaif Uddin Ahammed is programme leader of MSc International Management and lecturer in strategy and leadership at the University of the West of Scotland. 

Psychologenie

Average IQ Score According to Various Occupational Groups

'IQ' stands for intelligence quotient. Find out average IQ scores of various occupational groups in this PsycholoGenie article.


Intelligence is the ability to learn, to understand, and to deal with new or trying situations. An ‘intelligence quotient’, or IQ, is a standardized measure of a person’s intelligence, obtained using specially designed tests. By construction, the average IQ is 100.

A score above 100 is considered above average, while a score below 100 is considered below average. Scores much below 50 or much above 150 are rarely observed. Studies show that the IQ of half of the population falls between 90 and 110, while 25% score higher and 25% score lower. Einstein’s IQ is often estimated at around 160, although he never took a formal test. Mensa is a society for people with IQs in the top 2% of the population (about 1 in 50).

Interpretation

Interpreting an IQ score is helpful in measuring certain aspects of intelligence. The score reflects a person’s level of performance on a set of tasks, but it is not 100% accurate and should be treated only as a guideline. You cannot judge a person’s worth from their IQ, and the younger the child, the less reliable the score.

You may compare a person’s IQ with the average scores of others in the same category. However, IQ is not closely linked with some important qualities and outcomes, such as career achievement and happiness, so emotional intelligence should be considered alongside the intelligence quotient. The IQ scale itself is the same for everyone, regardless of age: the average IQ for a 13-year-old, for example, is also 100, because scores are standardized within each age group.
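Age-independence comes from the way modern tests are normed: a raw score is standardized against peers of the same age and mapped onto a scale with mean 100 and standard deviation 15 (a "deviation IQ"). A minimal sketch of that arithmetic, using invented raw scores rather than any real test's norms:

```python
from statistics import mean, stdev

def deviation_iq(raw_score, peer_scores):
    """Map a raw test score to a deviation IQ (mean 100, SD 15),
    standardizing against raw scores from the same age group."""
    z = (raw_score - mean(peer_scores)) / stdev(peer_scores)
    return 100 + 15 * z

# Hypothetical raw scores from one age cohort
peers = [38, 42, 45, 47, 50, 52, 55, 58, 60, 63]
deviation_iq(51, peers)   # exactly the cohort mean, so IQ 100
```

Because the standardization is done within each age group, a 13-year-old and a 40-year-old at the middle of their respective cohorts both come out at 100.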

Classification

  • Under 70 – Extremely low (2.2% population, condition of limited mental ability which produces difficulty in adapting to the demands of life.)
  • 70 – 79 – Borderline (6.7% population)
  • 80 – 89 – Low (16.1% population)
  • 90 – 109 – Average (50% population)
  • 110 – 119 – High (16.1% population)
  • 120 – 129 – Superior (6.7% population)
  • 130 and above – Very superior (2.2% population)

Normal Distribution of IQ Scores

  • 50% of the scores fall between 90 and 110
  • 70% fall between 85 and 115
  • 95% fall between 70 and 130
  • 99.5% fall between 60 and 140
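These percentages follow directly from a normal distribution with mean 100 and standard deviation 15, and can be checked with Python's standard library. (The exact normal figures differ slightly from the rounded ones above: about 68% rather than 70% fall between 85 and 115, and about 99.2% rather than 99.5% between 60 and 140.)

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)

def share_between(lo, hi):
    """Fraction of the population scoring between lo and hi."""
    return iq.cdf(hi) - iq.cdf(lo)

share_between(90, 110)   # ~0.50
share_between(85, 115)   # ~0.68
share_between(70, 130)   # ~0.95
share_between(60, 140)   # ~0.99
```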

IQ and Mental Retardation

Severity of mental retardation can be divided into 4 levels:

  • 50 – 70 – Mild mental retardation (85%)
  • 35 – 50 – Moderate mental retardation (10%)
  • 20 – 35 – Severe mental retardation (4%)
  • Below 20 – Profound mental retardation (1%)

High IQ and Genius IQ

Someone with an IQ of around 140 to 145 is considered a genius.

  • 115 – 124 – Above average
  • 125 – 134 – Gifted
  • 135 – 144 – Highly gifted
  • 145 – 154 – Genius
  • 155 – 164 – Genius
  • 165 – 179 – High genius
  • 180 – 200 – Highest genius
  • IQ > 200 – Unmeasurable genius

The following average IQ scores have been observed for various occupational groups:

  • 140 – Top civil servants; professors, and research scientists.
  • 130 – Physicians, surgeons, lawyers, engineers
  • 120 – School teachers, pharmacists, accountants, nurses, stenographers, managers.
  • 110 – Foremen, clerks, telephone operators, salesmen, policemen, electricians.
  • 100 plus – Machine operators, shopkeepers, butchers, welders, sheet metal workers.
  • Below 100 – Warehousemen, carpenters, cooks, bakers, small farmers, truck and van drivers.
  • 90 – Laborers, gardeners, miners, factory packers and sorters.

IQ can also be expressed in percentiles, which are quite different from percentage scores. A percentage reflects the number of items a child answered correctly out of the total number of items, whereas a percentile indicates the proportion of other test takers whose scores an individual’s score equals or exceeds.
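Converting an IQ score to a percentile is just a matter of evaluating the cumulative normal distribution (mean 100, SD 15); a quick sketch:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)

def iq_percentile(score):
    """Percentile rank: percentage of test takers scoring at or below."""
    return 100 * iq.cdf(score)

iq_percentile(100)   # the 50th percentile, by definition
iq_percentile(125)   # roughly the 95th percentile
```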

Being intelligent does not mean being knowledgeable. An IQ score is just a rough measure of academic intelligence. A score lower than hoped for is no cause for alarm, and no one should be discouraged on that basis: many elements besides IQ contribute to success.


Your college major is a pretty good indication of how smart you are


Do students who choose to major in different fields have different academic aptitudes? This question is worth investigating for many reasons, including an understanding of what fields top students choose to pursue, the diversity of talent across various fields, and how this might reflect upon the majors and occupations a culture values.

In order to explore this, I used five different measures of US students’ academic aptitude, which span 1946 to 2014, and discovered that the rank order of cognitive skills of various majors and degree holders has remained remarkably constant for the last seven decades.

An important caveat: The data presented looks only at group averages and does not speak to the aptitude of specific individuals. Obviously there are people with high academic aptitude in every major, and there can be larger aptitude differences between entire schools—for example the University of Chicago and a local community college—than between majors within a school. Also, interests, which are not directly assessed here, likely play an important role in which major someone selects. One could argue that any one specific test and sample may not be an accurate reflection of the aptitude of specific majors, and this would be a valid point. However, this analysis uses five independent measures and samples of academic aptitude at different points in time—which include everything from tests of cognitive abilities to tests of academic achievement—showing these findings replicate and are quite robust.

In 1952, a study by Dael Wolfle and Toby Oxtoby  published in Science  examined the academic aptitudes of college seniors and recent graduates by discipline. The first sample used to investigate this question was standardized test scores on the Army General Classification Test (AGCT) scale from a sample of 10,000 US college graduates from 40 universities in 1946. The AGCT was originally used as a selection test of general learning ability in the military, and its modern equivalent is the Armed Services Vocational Aptitude Battery (ASVAB) , which is still in use today.


The second sample was scores from 38,420 US college seniors who took the Selective Service College Qualification Test (SSCQT) put on the AGCT scale in 1951. This was a 150-item test measuring students’ mathematical and verbal ability that does not appear to be in use today.


In both samples, the pattern was nearly identical. Students who had chosen to major in education and agriculture had the lowest average academic aptitude, whereas the opposite was found for engineering and physical sciences.

The next source of data comes from a research paper I published with colleagues David Lubinski and Camilla Benbow in the Journal of Educational Psychology  (pdf). Project Talent is a stratified random sample of the US population of about 400,000 students who were tested in high school on math, verbal, and spatial aptitude and graduated in the early 1970s. They were followed up 11 years after high school graduation to assess their educational, occupational, and broader life outcomes. The following chart shows general average academic aptitude by major, along with the pattern of average math, verbal, and spatial aptitude within each major as well as between students who earned bachelor’s, master’s, and PhD degrees.

Project talent: General, math, verbal, and spatial aptitude


The pattern across majors was, again, nearly identical to the independent samples in 1946 and 1951, with education at the bottom and math/computer science, physical science, and engineering at the top.

The next sample comes from over 1.2 million students who took the Graduate Record Examination (GRE) between 2002 and 2005 and indicated their intended graduate major. The data were adapted from  the earlier study  (pdf), which also used Project Talent.


Even among select GRE test takers, the pattern of education at the bottom and math/computer science, physical science, and engineering at the top remained the same.

The final sample was based on the Scholastic Assessment Test (SAT) and comes from “ The 2014 SAT Report on College & Career Readiness.” The average math and verbal aptitude was taken for college bound seniors who indicated their area of study based on a total sample of about 1.6 million.


Reflecting back on the graphs from 1946 and 1951, both agriculture and education were also at the bottom. And again, the traditional science, technology, engineering, and mathematics (STEM) fields such as engineering, physical sciences, and mathematics/statistics tended to be at the top. However, the SAT data allowed a more detailed look due to the larger number of categories available. Interestingly, social sciences appear to be at the top along with the STEM majors. And yet psychology, a social science, appears near the bottom. My hypothesis is that the higher average for social sciences is due in part to the fact that schools that select students with the highest test scores—such as Harvard University, Columbia University, Stanford University, the University of Chicago, and Washington University in St. Louis—have “social sciences” as their most popular major, according to data from US News. Finally, business appeared near the bottom from 1946 to 2005, but by 2014 had risen to the middle of the pack. This shows business is attracting more able students in recent years, perhaps due to the value of this major among current employers.

Similar patterns are found in many other sources of data, including within a select sample of students in the top 1% of academic aptitude. Even within participants at the World Economic Forum in Davos and billionaires, a similar pattern is found across the sectors in which they operate or made their money.

Why have STEM majors consistently been at the top?

According to a recent Payscale college salary report , STEM majors tend to be the most highly compensated. That STEM majors have consistently had the highest average academic aptitude may also reflect the fact that STEM disciplines are highly complex and require such aptitude. Even scientists in the “hard” STEM fields (e.g. physics, math) tend to believe that these fields require brilliance or genius according to a recent paper published in Science by Sarah-Jane Leslie and colleagues, perhaps because it is true, at least in part. In some of my research , even within the top 1% on the SAT-Mathematics (SAT-M) for talented test takers at age 12, a higher score was associated with a higher likelihood of these students eventually earning a STEM PhD, publication, patent, and university tenure. Additionally,  Stephen Hsu and James Schombert  used five years of university academic records to show that the probability of success of being at the top of one’s cohort in a physics or math major (but not other majors such as sociology, history, English, or biology) was highly dependent on an individual’s SAT-M score. For example, earning a score of roughly below 600 on the math portion made the probability of attaining a superior academic record in physics or math very low. Perhaps the STEM disciplines have always selected on academic aptitude and employers have rewarded that aptitude and skillset due to STEM’s usefulness in a variety of fields .

Why have education majors consistently been at the bottom?

These data show that US students who choose to major in education, essentially the bulk of people who become teachers, have for at least the last seven decades been selected from students at the lower end of the academic aptitude pool. A 2010 McKinsey report  (pdf) by Byron Auguste, Paul Kihn, and Matt Miller noted that top performing school systems, such as those in Singapore, Finland, and South Korea, “recruit 100% of their teacher corps from the top third of the academic cohort.” The US certainly recruits some of its teachers from the top of the aptitude distribution, including at top education schools such as Harvard University and Vanderbilt University. Additionally, Teach for America often selects students from highly selective institutions, which have already filtered students based on academic aptitude.

Andrew Yang, founder of Venture for America, has argued that what top students choose to study greatly influences a society down the road. The McKinsey team stated that closing the talent gap by following the lead of some other countries and selecting teachers from the high end of the academic aptitude continuum may help improve education for US students. We really don’t know if this strategy would work, but given that the rank order of academic aptitudes for various majors has remained stubbornly constant for the last seven or more decades, it will be extremely difficult to shift what our culture values from traditional STEM (including medical) disciplines to education and teaching, at least in the short term. A recent article by Dan Goldhaber and Joe Walch in Education Next highlights, however, that the SAT scores of first-year teachers have recently been on the rise.

How this reflects what US culture values

Why has the rank order of average academic aptitude across various areas been strikingly the same? That remains unclear. For one thing, however, it reflects upon the majors and resulting occupations that US culture has consistently valued for the last seven or more decades. We will have to wait and see if in the next seven decades, this pattern of academic aptitude across majors will change, and if so, in what ways. What majors and occupations future generations of top students choose to pursue directly impacts a nation’s future economy.


How Much Does Education Improve Intelligence? A Meta-Analysis

Stuart J. Ritchie

1 Department of Psychology, The University of Edinburgh

2 Centre for Cognitive Ageing and Cognitive Epidemiology, The University of Edinburgh

Elliot M. Tucker-Drob

3 Department of Psychology, University of Texas at Austin

4 Population Research Center, University of Texas at Austin

Associated Data

Supplemental material, RitchieOpenPracticesDisclosure for How Much Does Education Improve Intelligence? A Meta-Analysis by Stuart J. Ritchie and Elliot M. Tucker-Drob in Psychological Science

Supplemental material, RitchieSupplementalMaterial for How Much Does Education Improve Intelligence? A Meta-Analysis by Stuart J. Ritchie and Elliot M. Tucker-Drob in Psychological Science

Intelligence test scores and educational duration are positively correlated. This correlation could be interpreted in two ways: Students with greater propensity for intelligence go on to complete more education, or a longer education increases intelligence. We meta-analyzed three categories of quasiexperimental studies of educational effects on intelligence: those estimating education-intelligence associations after controlling for earlier intelligence, those using compulsory schooling policy changes as instrumental variables, and those using regression-discontinuity designs on school-entry age cutoffs. Across 142 effect sizes from 42 data sets involving over 600,000 participants, we found consistent evidence for beneficial effects of education on cognitive abilities of approximately 1 to 5 IQ points for an additional year of education. Moderator analyses indicated that the effects persisted across the life span and were present on all broad categories of cognitive ability studied. Education appears to be the most consistent, robust, and durable method yet to be identified for raising intelligence.

There is considerable interest in environmental factors that might improve the cognitive skills measured by intelligence tests, and for good reason: These skills are linked not just to higher educational attainment but to superior performance at work ( Kuncel & Hezlett, 2010 ; Schmidt, Oh, & Shaffer, 2016 ), better physical and mental health ( Gale et al., 2012 ; Wrulich et al., 2014 ), and greater longevity ( Calvin et al., 2017 ). The current meta-analysis focused on a potential intelligence-boosting factor that is routinely experienced by children and young adults throughout the world: education. We addressed the question of whether increases in normal-range educational duration after early childhood have positive effects on a student’s later intelligence.

On its face, the positive correlation between intelligence test scores and years of completed education ( Strenze, 2007 ) might suggest that the experience of prolonged education has a beneficial effect on intelligence. However, the association could also result from a selection process, whereby more intelligent children progress further in education ( Deary & Johnson, 2010 ). Indeed, there is ample evidence that pervasive selection processes operate in the intelligence-education association: Longitudinal studies demonstrate the predictive power of early intelligence test scores for later educational attainment ( Deary, Strand, Smith, & Fernandes, 2007 ; Roth et al., 2015 ). The existence of selection processes does not necessarily gainsay any causal effects of education, but it does create an endogeneity problem that renders causal hypotheses difficult to test in observational data. In recent years, however, researchers have increasingly capitalized on a number of sophisticated study designs that circumvent the endogeneity problem, testing the causal hypothesis that more education leads to higher intelligence. This unique class of studies serves as the basis for the current meta-analysis.

In a seminal review of the effects of educational duration on intelligence, Ceci (1991) adduced evidence from a wide variety of research designs, including studies of intermittent school attendance, studies of the “summer slide” (the drop in children’s cognitive performance during summer vacation), and studies using regression-discontinuity methods to separate schooling effects from age effects. Ceci’s conclusion was that “schooling emerges as an extremely important source of variance” in intelligence test scores (p. 719). However, this and several newer reviews ( Deary & Johnson, 2010 ; Gustaffson, 2001 ; Snow, 1996 ; Winship & Korenman, 1997 ) are all exclusively narrative. In recent years, several high-quality studies investigating educational effects on intelligence have been published, but there continues to be no overall quantitative synthesis of this work. We report the first such synthesis.

We analyzed results from the three most prominent quasiexperimental methods for testing the effects of education on intelligence. We defined intelligence as the score on a cognitive test; see below for consideration of how the test scores might relate to the underlying psychological processes. Each method implements a different approach to minimize effects stemming from selection processes. Full meta-analytic inclusion criteria are reported below, but first we describe each of the three designs, providing a canonical example of each.

The first research design, which we label control prior intelligence , is a longitudinal study in which cognitive testing data are collected before and after variation in the duration of education. This allows the relation between education and the second test to be adjusted for by each participant’s earlier ability level. An example of this design is the study by Clouston et al. (2012) , who analyzed data from three large U.S. and UK cohort studies, all of which had both an adolescent and a midlife cognitive test. Results indicated that completing a university education was linked to higher midlife cognitive ability, above and beyond adolescent intelligence.

The second design, policy change , relies on changes in educational duration that are, by all accounts, exogenous to the characteristics of the individuals. An example of this design is the study by Brinch and Galloway (2012) , who used large-scale data from a 1960s educational reform in Norway. This reform increased compulsory education by 2 years; critically, it was staggered across municipalities in the country. This allowed the researchers to estimate the effect of an additional year of school on a later intelligence test, taken by males at entry to military service as part of Norway’s universal military draft. Under the assumption that the policy change affected intelligence only via increasing years of schooling, the authors used an instrumental-variables analysis to estimate the effect of 1 year of schooling on intelligence at approximately 3.7 points on a standard IQ scale ( M = 100, SD = 15 in the population).
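With a binary instrument like this staggered reform, the instrumental-variables estimate reduces to the Wald estimator: the difference in mean outcome between instrument groups divided by the difference in mean schooling. A toy, noise-free sketch with invented numbers chosen to reproduce a 3.7-point effect (this is not the study's data, and the authors' actual analysis was run on large administrative records):

```python
from statistics import mean

def wald_iv(z, x, y):
    """Wald IV estimator for a binary instrument z: the jump in mean
    outcome y across instrument groups, divided by the jump in mean
    treatment x (here, years of schooling)."""
    y1 = mean(yi for zi, yi in zip(z, y) if zi == 1)
    y0 = mean(yi for zi, yi in zip(z, y) if zi == 0)
    x1 = mean(xi for zi, xi in zip(z, x) if zi == 1)
    x0 = mean(xi for zi, xi in zip(z, x) if zi == 0)
    return (y1 - y0) / (x1 - x0)

# z: exposed to the reform; x: years of schooling; y: IQ score
z = [0, 0, 0, 1, 1, 1]
x = [9, 10, 11, 10, 11, 12]
y = [96.3, 100.0, 103.7, 100.0, 103.7, 107.4]
wald_iv(z, x, y)   # 3.7 IQ points per extra year
```

The key identifying assumption is the one stated above: the reform affects IQ only through added schooling.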

The third design takes advantage of a school-age cutoff . These studies use regression-discontinuity analysis to leverage the fact that school districts implement a date-of-birth cutoff for school entry. The first study to use this method was by Baltes and Reinert (1969) , but the most highly cited example is by Cahan and Cohen (1989) , who, in a sample of over 12,000 children across three grades of the Israeli school system (between the ages of approximately 10–12 years), found that schooling exerted positive effects on all of 12 tests covering a variety of cognitive domains. These educational effects were around twice the effect of a year of age. The strict assumptions of this method are sometimes not fully met ( Cliffordson, 2010 ); methodological issues are discussed in more detail below.

After synthesizing the evidence within and across these three research designs, we addressed two further questions. First, which factors moderate the effect of education on intelligence? Perhaps most important, we examined the moderator of age at the outcome test, thus asking whether any educational effects are subject to decline or “fadeout” with increasing age. Second, to what extent is there publication bias in this literature, such that the meta-analytic effects might be biased by a disproportionate number of positive results?

Inclusion criteria, literature search, and quality control

We included data from published articles as well as books, preprint articles, working papers, dissertations, and theses, as long as they met the meta-analytic inclusion criteria. The criteria were as follows. First, the outcome cognitive measures had to be objective (not, e.g., subjective teacher ratings) and continuous (not, e.g., categorical indicators such as the presence of mild cognitive impairment). Second, variation in education had to be after age 6 (i.e., the meta-analysis was not focused on interventions such as preschool but instead on variation later in the educational process). Third, the population under study had to be generally healthy and neurotypical. We thus did not include studies that focused specifically on samples of patients with dementia, individuals with neurodevelopmental disorders, or other such selected groups.

Fourth, studies had to fit into one of the three study design types described above. That is, they had to (a) use earlier cognitive test scores as a control variable in a model predicting cognitive test scores after some variation in educational duration (control prior intelligence), (b) use data from a natural experiment that specifically affected educational duration prior to the outcome cognitive test or tests (policy change), or (c) use a regression-discontinuity design to analyze cognitive test scores from individuals born on either side of a cutoff date for school entry (school-age cutoff).

We began by searching Google Scholar for articles that had cited Ceci’s (1991) review on the effects of education on intelligence, and then searching through the references within each of those studies. Next, we ran searches of APA PsycINFO, Google Scholar, and the ProQuest Dissertations and Theses online database, using search terms related to the three study design types in our inclusion criteria. These terms included combinations of general terms related to the broad topic—“intelligence,” “cognitive ability,” “cognition,” “mental ability,” “IQ,” “achievement,” “ability,” “reasoning,” “fluid intelligence,” “general intelligence,” “education,” “educational,” “school,” “schooling”—with specific terms related to the study designs, such as “return to education/school,” “influence/effect of education/school,” “regression discontinuity,” “instrumental variables,” “two-stage least squares,” “difference-in-difference,” “natural experiment,” and “quasi-experiment.” Having selected the relevant studies from these searches and removed duplicates, we then searched the references within each report to find any additional studies of interest. Finally, we e-mailed authors of multiple studies to request any unpublished preprint articles or working papers that we had not already found. A flow diagram of the overall literature search process is shown in Figure S1 in the Supplemental Material available online.

After arriving at a set of studies that fit the inclusion criteria, we closely examined each report and removed any studies that we deemed did not fit our quality criterion. No studies were excluded for the control-prior-intelligence design. One study was excluded for the policy-change design as we judged it to have a potentially confounded instrument. See Table S1 in the Supplemental Material for a brief description of the design of each included policy-change study, along with other relevant details. For the school-age-cutoff design, we excluded five studies because they did not explicitly report dealing with threats to the validity of the regression-discontinuity analysis related to selection or noncompliance with the cutoff age. We detail the inclusion criteria and quality control for the school-age-cutoff design in the Supplemental Material .

We also produced one new analysis for inclusion in the meta-analysis, using data from a large longitudinal study to which we had access (the British Cohort Study; Elliot & Shepherd, 2006 ), where the critical control-prior-intelligence analysis had not—to our knowledge—previously been performed. Full details of this analysis are available in the Supplemental Material .

When multiple results were available for a single data set, we coded all relevant cognitive outcomes. However, where multiple estimates from different analyses of the same cognitive outcomes within a data set were available, we used the following criteria to select the estimate for meta-analysis. First, for articles using an instrumental-variables approach in which an alternative ordinary least squares regression analysis was also available, we always took the estimates from the instrumental-variables analysis (although we also recorded the ordinary least squares regression estimates in our data spreadsheet). Second, to reduce heterogeneity due to between-study differences in what covariates were included, we took the analysis that adjusted for the fewest number of covariates. Of the effect sizes that remained after fulfilling the first two criteria, we took the estimate that involved the largest sample size. The precise sources (table, section, or paragraph) for each estimate are described in notes in the master data spreadsheet, available on the Open Science Framework page for this study ( https://osf.io/r8a24/ ). Note that for two of the studies ( Ritchie et al., 2013 ; Ritchie, Bates, & Deary, 2015 ), we had the data from the cohorts available and recalculated the estimates to remove one of the covariates (see the data spreadsheet). For comparison, we also provide an estimate where maximal covariates were included. A full list of all studies included in the final meta-analysis is shown in Table S4 in the Supplemental Material .

Statistical analysis

Calculating effect sizes.

We rescaled each effect size into the number of IQ point units, on the standard IQ scale ( M = 100, SD = 15), associated with 1 additional year of education. We also made the corresponding correction to the standard error associated with each rescaled effect size. For example, we multiplied z -scored per-year effect sizes by 15, and we divided unstandardized per-year effect sizes by the associated standard deviation of the cognitive test before multiplying them by 15 (effect-size calculations are described in the master data spreadsheet). For two studies, we recalculated the effect size using structural equation modeling of the correlation matrix provided in the report (see the Supplemental Material ). Where effect-size recalculation was not possible from the data provided in the original reports—for example, because of missing standard errors or the effect size being in units other than years of education—we contacted the authors to request further information.
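The rescaling described above can be illustrated with a short Python sketch. This is not the authors' code; the function names are ours, and the example values are hypothetical:

```python
def rescale_z_effect(beta_z, se_z, sd_iq=15):
    """Convert a z-scored per-year effect size to IQ points (M = 100, SD = 15)
    by multiplying the estimate and its standard error by 15."""
    return beta_z * sd_iq, se_z * sd_iq

def rescale_raw_effect(beta_raw, se_raw, test_sd, sd_iq=15):
    """Convert an unstandardized per-year effect: divide by the cognitive
    test's own standard deviation, then multiply by the IQ-scale SD."""
    return beta_raw / test_sd * sd_iq, se_raw / test_sd * sd_iq

# e.g. a z-scored effect of 0.08 SD per year of education rescales to
# 1.2 IQ points per year (0.08 * 15)
b, se = rescale_z_effect(0.08, 0.02)
```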

Meta-analytic structural equation models

To produce the main estimates, we used random-effects meta-analytic structural equation modeling, as described by Cheung (2008) . This approach is mathematically equivalent to conventional random-effects meta-analytic approaches but has the added advantage of being able to capitalize on special features of structural equation modeling software, such as the correction of standard errors for nonindependence of observations.

Many studies reported effect sizes for more than one cognitive test outcome. For instance, they might have reported effects on a test of memory and a test of executive function. Instead of producing a per-study average of these estimates, we included them all individually, weighting each estimate by the reciprocal of the number of effect sizes provided from each study. In addition, using the TYPE = COMPLEX and the CLUSTER functions in Mplus (Version 7.3; Muthén & Muthén, 2014 ), we employed a sandwich estimator to correct standard errors for dependencies associated with the clustering of effect sizes within studies.
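The weighting scheme can be sketched with a simplified random-effects model in plain Python. This is not the Mplus analysis used in the study: it substitutes a DerSimonian-Laird estimate of the between-study variance and approximates the clustering correction with the reciprocal weights alone, rather than a sandwich estimator:

```python
import math

def random_effects_meta(effects, ses, study_ids):
    """DerSimonian-Laird random-effects estimate in which each effect size is
    downweighted by the number of effects its study contributes, a rough
    stand-in for the clustered analysis described in the text."""
    k = len(effects)
    counts = {s: study_ids.count(s) for s in set(study_ids)}
    mult = [1.0 / counts[s] for s in study_ids]          # reciprocal weights
    w = [m / se ** 2 for m, se in zip(mult, ses)]        # x inverse variance
    mean_fe = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - mean_fe) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                   # between-study variance
    w_re = [m / (se ** 2 + tau2) for m, se in zip(mult, ses)]
    mean_re = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se_re = math.sqrt(1.0 / sum(w_re))
    return mean_re, se_re, math.sqrt(tau2)
```

With this weighting, a study contributing two effect sizes carries the same total weight as an equally precise study contributing one, which is the intent of the reciprocal weights described above.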

We used the tau (τ) statistic, an estimate of the standard deviation of the true meta-analytic effect, as an index of heterogeneity. To attempt to explain any heterogeneity, we tested a somewhat different set of moderators for each of the three study designs, as appropriate given their methodological differences. For all three designs, we tested the moderators of the age at the outcome test and the outcome test category (classified in two different ways, as described below). For both the control-prior-intelligence and the policy-change designs, we tested the moderators of participant age at the early (control) test or at the policy change (for the control-prior-intelligence design, we also tested the moderator of the gap between the two tests, though this was heavily related to the age at outcome test) and of whether the study was male only or mixed sex (several studies in these designs relied on military draft data and were thus restricted to male participants; note that this variable is confounded with the representativeness of the study because, aside from their single-sex nature, military draft studies will tend to include a more representative sample of the population than others). Where we combined all three study designs, we tested whether design was a moderator.

We classified outcome tests in two ways. The first was into the broad intelligence subtype: fluid tests (tests that assessed skills such as reasoning, memory, processing speed, and other tasks that could be completed without outside knowledge from the world), crystallized tests (tests that assessed skills such as vocabulary and general knowledge), and composite tests (tests that assessed a mixture of fluid and crystallized skills; in one instance, this composite was formally estimated as a latent factor with fluid and crystallized indicators). The second classification method was to highlight tests that might be considered achievement measures. To do this, we classified every test that would likely have involved content that was directly taught at school (including reading, arithmetic, and science tests) as “achievement,” and the remaining tests, which generally involved IQ-type measures (ranging from processing speed to reasoning to vocabulary), as “other” tests.

Publication-bias tests

We used four separate methods to assess the degree of publication bias in the data set. First, we tested whether the effect sizes were larger in peer-reviewed, published studies versus unpublished studies (for example, PhD dissertations or non-peer-reviewed books). If unpublished studies have significantly smaller effect sizes, this may indicate publication bias.

Second, we produced funnel plots, visually inspecting them and testing their symmetry using Egger’s test. Significant funnel plot asymmetry (where, for example, low-precision studies with small effects were systematically missing) was taken as a potential indication of publication bias in the data.
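A minimal sketch of Egger's test follows: ordinary least squares of the standardized effects on precision, with a normal approximation for the intercept's p value. The published analyses may have used a different implementation; this is illustrative only:

```python
import math

def eggers_test(effects, ses):
    """Egger's regression test: regress (effect / SE) on (1 / SE).
    The intercept, not the slope, indexes funnel-plot asymmetry; a
    significant intercept suggests small-study effects."""
    y = [e / s for e, s in zip(effects, ses)]
    x = [1.0 / s for s in ses]
    n = len(y)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    resid2 = sum((yi - intercept - slope * xi) ** 2 for xi, yi in zip(x, y))
    sigma2 = resid2 / (n - 2)                       # residual variance
    se_int = math.sqrt(sigma2 * sxx / (n * sxx - sx * sx))
    z = intercept / se_int
    p = math.erfc(abs(z) / math.sqrt(2))            # two-sided normal p
    return z, p
```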

Third, we used p-curve ( Simonsohn, Simmons, & Nelson, 2015 ) to assess the evidential value of the data set using just the significant p values. A left-skewed p-curve (with more p values near the alpha level, in this case .05) indicates possible publication bias or so-called p-hacking (use of questionable research practices, such as the ad hoc exclusion of participants or inclusion of covariates, in order to turn a nonsignificant result into a significant one) in the data set. Conversely, a right-skewed p-curve indicates evidential value. The shape of the curve is tested using both a binomial test (for the proportion of values where p < .025) and a continuous test, which produces “pp values” (the probability of finding a p value as extreme as or more extreme than the observed p value under the null hypothesis), and combines them to produce a z score using Stouffer’s method. We used the online p-curve app ( http://www.p-curve.com/ ) to compute the analyses.
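The core logic can be sketched as follows. This is a simplification (the actual analyses used the online p-curve app, and real pp values are computed from each test's own distribution; here we assume a simple uniform null on the significant range):

```python
from statistics import NormalDist

def p_curve_tests(p_values, alpha=0.05):
    """Sketch of p-curve's two tests on the significant p values:
    (a) a binomial count of values below alpha/2 probes right skew, and
    (b) each p is converted to a pp value (uniform-null probability),
    then combined into a Stouffer z. A strongly negative z indicates
    right skew, i.e. evidential value."""
    sig = [p for p in p_values if p < alpha]
    n_low = sum(p < alpha / 2 for p in sig)   # numerator of the binomial test
    pp = [p / alpha for p in sig]             # pp under a uniform null on (0, alpha)
    nd = NormalDist()
    z = sum(nd.inv_cdf(v) for v in pp) / len(sig) ** 0.5   # Stouffer's method
    return n_low, len(sig), z
```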

Fourth, we used the Precision Effect Test–Precision Effect Estimate with Standard Errors technique (PET-PEESE; Stanley & Doucouliagos, 2014 ). The method first uses a weighted metaregression of the effect sizes on the standard errors, taking the intercept of this regression—which estimates a hypothetical “perfect” study with full precision, and thus a standard error of zero—as the corrected “true” meta-analytic estimate (called the PET estimate). However, Stanley and Doucouliagos (2014) advised that, where the PET estimate is significantly different from zero, a less biased estimate can be produced by regressing the effect sizes on the sampling variances instead of the standard errors. The intercept of this second regression is the PEESE estimate. We followed this conditional logic in our PET-PEESE analysis. References for all of the analysis software are provided in the Supplemental Material.
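A sketch of the conditional PET-PEESE logic in Python (weighted least squares with 1/SE² weights; an illustration of the published procedure, not the authors' scripts):

```python
import math

def _wls_intercept(x, y, w):
    """Weighted least squares of y on x; returns (intercept, SE of intercept)."""
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, x))
    swy = sum(wi * yi for wi, yi in zip(w, y))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    det = sw * swxx - swx * swx
    slope = (sw * swxy - swx * swy) / det
    intercept = (swy - slope * swx) / sw
    sigma2 = sum(wi * (yi - intercept - slope * xi) ** 2
                 for wi, xi, yi in zip(w, x, y)) / (len(y) - 2)
    return intercept, math.sqrt(sigma2 * swxx / det)

def pet_peese(effects, ses):
    """Conditional PET-PEESE: regress effects on SEs (PET); if the PET
    intercept differs significantly from zero, report instead the intercept
    of the regression on the variances (PEESE)."""
    w = [1.0 / s ** 2 for s in ses]
    pet, pet_se = _wls_intercept(ses, effects, w)
    p = math.erfc(abs(pet / pet_se) / math.sqrt(2))     # two-sided normal p
    if p < 0.05:                                        # conditional step
        return _wls_intercept([s ** 2 for s in ses], effects, w)  # PEESE
    return pet, pet_se
```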

Results

The selection process resulted in a final meta-analytic data set including 142 effect sizes from 42 data sets, analyzed in 28 studies. The total sample size across all three designs was 615,812. See Table 1 for a breakdown of study characteristics by study design. Figure S2 shows forest plots for each design.

Table 1. Descriptive Statistics for Each Study Design

Note: To estimate N from studies with multiple effect sizes with different ns, we averaged sample sizes across effect sizes within each data set and rounded to the nearest integer. “Unpublished” refers to any study not published in a peer-reviewed journal.

Overall meta-analytic estimates

In three separate unconditional random-effects meta-analytic models (one for each study design), we estimated the effect of 1 additional year of education on cognitive outcomes. For all three study designs, there was a significant effect of 1 additional year of education. For control prior intelligence, the effect was 1.197 IQ points (SE = 0.203, p = 3.84 × 10⁻⁹); for policy change, it was 2.056 IQ points (SE = 0.583, p = 4.23 × 10⁻⁴); and for school-age cutoff, it was 5.229 IQ points (SE = 0.530, p = 6.33 × 10⁻²³). An overall model including all estimates from all three designs found an average effect size of 3.394 IQ points for 1 year of education (SE = 0.503, p = 1.55 × 10⁻¹¹).

The overall model, considering all study designs simultaneously and including study design as a nominal moderator variable, found that the estimate for school-age cutoff was significantly larger than that for control prior intelligence (SE = 0.564, p = 1.98 × 10⁻¹³) and for policy change (SE = 0.790, p = 5.34 × 10⁻⁵). There was no significant difference between the estimates for control prior intelligence and policy change (SE = 0.608, p = .116).

The estimates above had minimal covariates included; for 27 of the 142 effect sizes, it was possible to extract an estimate that included a larger number of covariates (see data spreadsheet). This maximal-covariate analysis yielded reduced, though similar and still significant, effect-size estimates for the control-prior-intelligence design (0.903 IQ points, SE = 0.372, p = .015) and for the policy-change design (1.852 IQ points, SE = 0.508, p = 2.71 × 10⁻⁴). There were no additional covariates to include for the school-age-cutoff design.

Heterogeneity and moderator analyses

There was significant heterogeneity in the unconditional meta-analyses from all three designs (control prior intelligence: τ = 0.721, SE = 0.250, p = .004; policy change: τ = 1.552, SE = 0.144, p = 3.40 × 10⁻²⁷; school-age cutoff: τ = 1.896, SE = 0.226, p = 5.38 × 10⁻¹⁷). This was also the case for the overall model including all the data, which included study design as a nominal moderator (τ = 2.353, SE = 0.272, p = 5.72 × 10⁻¹⁸). We explored which moderators might explain the heterogeneity within each of the three study designs. Descriptive statistics for each moderator are shown in Table 1.

Age at early test and time lag between tests

For the control-prior-intelligence design, we tested whether the age at which the participants had taken the initial (control) cognitive test, or the gap between this early test and the outcome test, moderated the effect size. The age at the early test, which did not vary substantially (see Table 1 ), was not significantly related to the effect size (−0.024 IQ points per year, SE = 0.064, p = .706). For the youngest early-test age (10 years), the metaregression model indicated that the effect size of 1 additional year of education was 1.243 IQ points; for the oldest (16 years), the effect size was 1.099 IQ points. Conversely, the time lag between the tests was a significant moderator of the effect size (−0.031 IQ points per year, SE = 0.015, p = .033). This metaregression indicated that at the smallest age gap (5 years), the effect size was 2.398 IQ points, whereas for the largest age gap (72.44 years), the effect size was a substantially smaller 0.317 IQ points. Note that this age gap is almost fully confounded with the age at the outcome test ( r = .988), assessed as a moderator below.
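The two endpoint estimates implied by the metaregression are consistent with the reported slope, as a quick arithmetic check shows:

```python
# Reproducing the implied metaregression endpoints from the reported values:
# effect(age) = effect(age_min) + slope * (age - age_min)
slope = -0.024           # reported: IQ points per year of early-test age
effect_at_10 = 1.243     # reported model-implied effect at the youngest age
effect_at_16 = effect_at_10 + slope * (16 - 10)
print(round(effect_at_16, 3))   # 1.099, matching the reported estimate
```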

Age at intervention

For the policy-change design, we tested whether the age at which the educational policy change produced an increment in compulsory schooling moderated the intervention effect. This was not the case: The effect size increased by a nonsignificant 0.038 IQ points per year of age at the intervention ( SE = 0.228, p = .867). The metaregression model implied that at the youngest intervention age (7.5 years), the effect size was 1.765 IQ points, and at the oldest (19 years) it was 2.204 IQ points.

Age at outcome test

Figure 1 shows the effect sizes in the first two study designs as a function of the participants’ mean age at the outcome test. For the control-prior-intelligence studies, outcome age was a significant moderator: The effect size of education declined by 0.026 IQ points per year of age (SE = 0.012, p = .029). At the youngest age (18 years), the effect size of having had an additional year of education was 2.154 IQ points, whereas at the oldest age (83 years), the effect size was 0.485 IQ points. This effect was smaller but still significant if the largest effect size (> 3 IQ points for an additional year) was excluded (a decline of 0.011 IQ points per year of age, SE = 0.005, p = .018). There was no significant moderating effect of age at outcome for the policy-change studies (0.014 IQ points per year, SE = 0.022, p = .543): The effect at the youngest age (18 years; 1.690 IQ points) was not significantly different from the effect at the oldest age (71 years; 2.413 IQ points). There was comparatively little variation in the age at the outcome test for the school-age-cutoff design (SD = 1.6 years; see Table 1); there was no significant age moderation effect (−0.027 points per year, SE = 0.399, p = .947).

Figure 1.

Effect of 1 additional year of education as a function of age at the outcome test, separately for control-prior-intelligence and policy-change study designs. Bubble size is proportional to the inverse variance for each estimate (larger bubbles = more precise studies). Estimates in these illustrations differ slightly from the final metaregression estimate, which accounted for clustering. The shaded area around the regression line represents the 95% confidence interval.

Outcome test category

Splitting the outcome cognitive tests into three broad categories—composite, fluid, and crystallized tests—we tested whether the category moderated effect sizes. For the control-prior-intelligence design, there were stronger educational impacts on composite tests (1.876 IQ points, SE = 0.467, p = 5.94 × 10⁻⁵) than on fluid tests (0.836 points, SE = 0.097, p = .152), and the difference between composite and fluid was significant (1.039 points, SE = 0.496, p = .036). There was only one crystallized outcome test for the control-prior-intelligence design (1.893 points, SE = 0.348, p = 5.34 × 10⁻⁸), so we did not include it in the moderator comparison here. For the policy-change design, there were significant effects for both composite (2.314 IQ points, SE = 0.869, p = .008) and fluid (2.272 points, SE = 0.765, p = .003) but not crystallized (1.012 points, SE = 1.125, p = .368) tests; however, the effects on the three different categories were not significantly different from one another (all difference p values > .35). Finally, for the school-age-cutoff design, there were significant effects of a year of education on composite (6.534 points, SE = 2.433, p = .007), fluid (5.104 points, SE = 0.621, p = 2.05 × 10⁻¹⁶), and crystallized (5.428 points, SE = 0.170, p = 1.04 × 10⁻²²³) tests; there were, however, no significant differences between effect sizes across outcome types (difference p values > .5).

We then split the outcome tests into “achievement” tests versus “other” tests. There was only one achievement test in the control-prior-intelligence design, so we did not run this analysis. For policy change, there was no significant difference in the educational effect on the 7 achievement tests (2.760 IQ points, SE = 0.968, p = .004) versus the 23 other tests (1.784 points, SE = 0.553, p = .001; difference SE = 1.011, p = .334). However, for school-age cutoff, which had the largest proportion of achievement tests (38 of the 86 tests were classed as achievement tests), achievement tests showed a substantially and significantly larger educational effect (6.231 points, SE = 0.339, p = 2.85 × 10⁻⁷⁵) than other tests (3.839 points, SE = 0.412, p = 1.11 × 10⁻²⁰; difference SE = 0.371, p = 1.19 × 10⁻¹⁰).
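The reported difference p value can be recovered from the two estimates and the standard error of the difference with a normal approximation:

```python
import math

# Check of the achievement-versus-other contrast in the school-age-cutoff
# design, using the reported estimates and difference SE.
est_achievement, est_other, se_diff = 6.231, 3.839, 0.371
z = (est_achievement - est_other) / se_diff
p_two_sided = math.erfc(abs(z) / math.sqrt(2))
# z is approximately 6.45; p is on the order of 10^-10, consistent with
# the reported difference p value
```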

Male-only studies

We tested whether studies that included only male participants showed a differential educational effect. This was not the case for control prior intelligence (effect for the 2 male-only estimates: 2.261 IQ points, SE = 0.897, p = .012; effect for 24 mixed-sex estimates: 1.027 IQ points, SE = 0.110, p = 6.79 × 10⁻²¹; difference SE = 0.905, p = .173) or for policy change (effect for the 8 male-only estimates: 1.683 IQ points, SE = 0.507, p = .001; effect for 22 mixed-sex estimates: 2.215 IQ points, SE = 0.788, p = .005; difference SE = 0.941, p = .572). There were no male-only school-age-cutoff studies.

Multiple moderators

Table 2 shows the results from each study design after the inclusion of multiple moderators. We included as many moderators as possible for each design, though we chose to include the “achievement” categorization of the outcome test for the school-age-cutoff design (instead of the alternative categorization involving composite, fluid, and crystallized tests, which we used for the other designs) because it had such a substantial effect in the single-moderator model. Including multiple moderators reduced the τ statistic—to a larger degree for the control-prior-intelligence design than for the others—though significant heterogeneity remained in all cases. The moderators that were individually significant (e.g., age in the control-prior-intelligence design or achievement tests in the school-age-cutoff design) were also significant in the multiple-moderator model, indicating that their effects were incremental of the other moderators that we included.

Table 2. Simultaneous Multiple-Moderator Analyses for Each Study Design

Note: Values are estimates (in IQ point units); standard errors are in parentheses. The change in the τ statistic refers to that from the unconditional models, as reported in the row above.

Publication status

As an initial test of publication bias, we tested whether the effect sizes were larger in studies published in peer-reviewed journals compared with those that were either unpublished or published elsewhere. For control prior intelligence, there were 4 published versus 22 unpublished estimates; there was no significant difference in their effect sizes (the effect was 0.036 points larger in published studies, SE = 0.343, p = .915). For policy-change studies, for which there were 21 published and 9 unpublished estimates, the effect size was significantly larger in unpublished studies, though there were still significant effects within each set of studies (published effect = 1.635 points, SE = 0.575, p = .004; unpublished effect = 3.469 points, SE = 0.388, p = 4.11 × 10⁻¹⁹; difference SE = 0.710, p = .010). For school-age-cutoff studies, there was no significant difference between published and unpublished studies (for which there were 64 and 22 estimates, respectively; difference = 0.509 points higher in published studies, SE = 0.689, p = .460).

Funnel plots

Funnel plots for the three study designs are shown in Figure 2. Note that, for the school-age-cutoff design, 42 of the 86 standard errors were reported as approximate or as averages; because they were inexact, we used them in the estimates of the meta-analytic effects above but did not use them to estimate the funnel plots (or for the PET-PEESE analysis below). Egger’s test found no evidence of funnel plot asymmetry for any of the designs (control prior intelligence: z = −1.378, p = .168; policy change: z = −0.486, p = .627; school-age cutoff: z = 0.941, p = .347). However, only the plot for control prior intelligence displayed an approximately funnel-like shape. See Figure S3 in the Supplemental Material for funnel plots including studies from all three designs, one using the raw effect sizes and one using effect sizes residualized for the moderators shown in Table 2.

Figure 2.

Funnel plots showing standard error as a function of effect size, separately for each of the three study designs. The dotted lines form a triangular region (with a central vertical line showing the mean effect size) where 95% of estimates should lie in the case of zero within-group heterogeneity in population effect sizes. Note that 42 of the total 86 standard errors reported as approximate or as averages in the original studies were not included for the school-age-cutoff design.

p-curve

Next, we used p-curve to examine the distribution of study p values. The p-curves are shown in Figure 3. For the control-prior-intelligence design, the binomial test for a right-skewed p-curve was significant, indicating evidential value (p = .0007); this was also the case for the continuous test (full p-curve: z = −18.50, p = 2.06 × 10⁻⁷⁶; half p-curve: z = −19.19, p = 4.49 × 10⁻⁸²). For the policy-change design, the binomial test was significant (p = .0461), as was the continuous test (full p-curve: z = −15.59, p = 8.51 × 10⁻⁵⁵; half p-curve: z = −17.64, p = 1.21 × 10⁻⁶⁹). For school-age-cutoff studies, all three tests were significant (binomial test p < .0001; continuous test full p-curve: z = −43.72, p ≈ .00; half p-curve: z = −42.61, p ≈ .00). For all three designs, p-curve estimated that the statistical power of the tests included was 99%. Thus, overall, p-curve indicated that all three designs provided evidential value, and there was no evidence for publication bias or p-hacking in the studies with statistically significant results (full output from the p-curve app is available on the Open Science Framework page for the present study).

Figure 3.

p-curves illustrating the distribution of significant p values for each of the three study designs.

PET-PEESE

Finally, we used PET-PEESE to obtain an estimate of the effect size for each study design in a hypothetical study with perfect precision. For all three designs, the PET estimate (intercept) was significant, so we went on to use the PEESE estimate of the intercept, which represents the predicted effect sizes under a counterfactual of no publication bias (control prior intelligence: PET estimate = 1.034 IQ points per year, SE = 0.153, p = 5.45 × 10⁻⁷; PEESE estimate = 1.091 points, SE = 0.117, p = 1.76 × 10⁻⁹; policy change: PET estimate = 1.286 points, SE = 0.153, p = 3.69 × 10⁻⁹; PEESE estimate = 1.371 IQ points, SE = 0.142, p = 2.08 × 10⁻¹⁰; school-age cutoff: PET estimate = 3.299 IQ points, SE = 1.166, p = .007; PEESE estimate = 4.244 IQ points, SE = 0.718, p = 5.26 × 10⁻⁷). Note that only the exact standard errors (i.e., not those reported as approximate or averages, as noted for the funnel plots above) were used for the PET-PEESE analysis. For all three designs, the PEESE test indicated effect sizes that were slightly smaller than in the original estimate but still statistically significant. Graphs of the PET-PEESE estimates for each design are shown in Figure S4 in the Supplemental Material.

Overall, four different publication-bias tests broadly indicated minimal systematic bias in the results: Where there was an unexpected result—unpublished studies producing larger estimates for the policy-change design—this was in the opposite direction to what would be expected under publication bias.

Discussion

In a meta-analysis of three quasiexperimental research designs, we found highly consistent evidence that longer educational duration is associated with increased intelligence test scores. Each of the designs implemented a different approach for limiting endogeneity confounds resulting from selection processes, where individuals with a propensity toward higher intelligence tend to complete more years of education. Thus, the results support the hypothesis that education has a causal effect on intelligence test scores. The effect of 1 additional year of education—contingent on study design, inclusion of moderators, and publication-bias correction—was estimated at approximately 1 to 5 standardized IQ points.

Each research design had its own strengths and weaknesses. The control-prior-intelligence design produced precise, long-range estimates of the educational effect, taking into account the full range of educational variation. However, this approach did not employ a specific instrument for introducing differences in educational duration, instead capitalizing on naturally occurring variation, which is itself multidetermined. Moreover, because the early and outcome tests were rarely identical (and because the early ability tests likely contained measurement error), the control for preexisting ability levels was likely only partial.

The policy-change design produced causal estimates across large, population-based data sets. However, estimates from this approach were relatively imprecise, as is typical of instrumental-variable analyses. Furthermore, because the policies used as instruments typically increased educational duration only for the subset of individuals who would otherwise have attended school at the preexisting minimum compulsory level, this design should be interpreted as producing a “local average treatment effect” that might not generalize across the full educational range ( Morgan & Winship, 2015 , p. 305).

The school-age-cutoff design produced the largest number of estimates across a wide range of cognitive abilities, but it was restricted to comparisons across adjacent school years. In this design, the critical causal estimate is based on comparing test scores in a given grade with a counterfactual formed by extrapolating within-grade age trends beyond the cutoff dates. This approach is powerful, but the key assumption—that the age trend extrapolates—is difficult to test. Moreover, although this approach produced large effect-size estimates, we did not identify any studies that tested whether these effects persisted into adulthood. These estimates should thus be regarded with caution.

The finding of educational effects on intelligence raises a number of important questions that we could not fully address with our data. First, are the effects on intelligence additive across multiple years of education? We might expect the marginal cognitive benefits of education to diminish with increasing educational duration, such that the education-intelligence function eventually reaches a plateau. Unfortunately, we are not aware of any studies that have directly addressed this question using a rigorous quasiexperimental method.

Second, are there individual differences in the magnitude of the educational effect? One possibility is the Matthew effect ( Stanovich, 1986 ), whereby children at greater initial cognitive (or socioeconomic) advantage benefit more from additional education than those at lower advantage. Another possibility is that education acts as an equalizer, such that children at lower levels of initial advantage benefit most ( Downey, von Hippel, & Broh, 2004 ). Indeed, some evidence of an equalizing effect was reported in a single study by Hansen, Heckman, and Mullen (2004) .

Third, why were the effects obtained from the control-prior-intelligence and policy-change designs—which generally came from increases in educational duration that were not explicitly targeted cognitive interventions—still apparent in later life, when effects from targeted educational interventions, such as preschool, have tended to show fade-out into early adulthood (Bailey, Duncan, Odgers, & Yu, 2017; Protzko, 2015)? Even in the control-prior-intelligence design, where the effects showed a decline across time (Fig. 1), estimates remained statistically significant into the eighth and ninth decades of life. One intriguing possibility is that, unlike targeted interventions, increases in educational attainment have lasting influences on a range of downstream social processes, such as occupational complexity (Kohn & Schooler, 1973), that help to maintain the initial cognitive benefits.

Fourth, which cognitive abilities were impacted? It is important to consider whether specific skills—those described as “malleable but peripheral” by Bailey et al. (2017 , p. 15)—or general abilities—such as the general g factor of intelligence—have been improved ( Jensen, 1989 ; Protzko, 2016 ). The vast majority of the studies in our meta-analysis considered specific tests and not a latent g factor, so we could not reliably address this question. However, it is of important theoretical and practical interest whether the more superficial test scores or the true underlying cognitive mechanisms are subject to the education effect. In our analyses with test category as a moderator, we generally found educational effects on all broad categories measured (we did observe some differences between the test categories, but it should be noted that differential reliability of the tests might have driven some of these differences). However, further studies are needed to assess educational effects on both specific and general cognitive variables, directly comparing between the two (e.g., Ritchie, Bates, & Deary, 2015 ).

Fifth, how important are these effects? There is strong evidence from industrial and organizational psychology and cognitive epidemiology studies that IQ is associated with occupational, health, and other outcomes (e.g., Calvin et al., 2017 ), but to our knowledge, no studies have explicitly tested whether the additional IQ points gained as a result of education themselves go on to improve these outcomes (see Ackerman, 2017 , for discussion of this criterion problem in intelligence research). A quasiexperimental study by Davies, Dickson, Davey Smith, van den Berg, and Windmeijer (2018) found that raising the school-leaving age improved not only IQ but also a variety of indicators of health and well-being. It is possible that the educational benefits to the upstream variables were partly mediated via the IQ increases (or vice versa), but this would need explicitly to be investigated.

Finally, what are the underlying psychological mechanisms of the educational effect on intelligence? Ceci (1991) outlined a number of promising pathways, including the teaching of material directly relevant to the tests, the training of thinking styles such as abstract reasoning, and the instilling of concentration and self-control. Studies that attempt to pin down the proximal educational processes that might, in part, drive the effect (such as reading; Ritchie, Bates, & Plomin, 2015; Stanovich, 1993; though see Watkins & Styck, 2017); studies that focus on the differences between the educational effect on specific subtests (e.g., Ritchie et al., 2013); and studies that address effects of variation in the quality, not just the quantity, of education (e.g., Allensworth, Moore, Sartain, & de la Torre, 2017; Becker, Lüdtke, Trautwein, Köller, & Baumert, 2012; Gustaffson, 2001) are all promising ways to progress toward clarifying a mechanism.

The results reported here indicate strong, consistent evidence for effects of education on intelligence. Although the effects—on the order of a few IQ points for a year of education—might be considered small, at the societal level they are potentially of great consequence. A crucial next step will be to uncover the mechanisms of these educational effects on intelligence in order to inform educational policy and practice.

Supplemental Material

Acknowledgments.

We are grateful to Sorel Cahan, Sean Clouston, Neil Davies, James Gambrell, Emma Gorman, Dua Jabr, Daniel Kämhofer, Ben Southwood, and Tengfei Wang for their assistance in finding manuscripts, locating effect-size estimates, and providing additional information on relevant data sets. We thank the Centre for Longitudinal Studies, Institute of Education, and the UK Data Service for the British Cohort Study data. These bodies are not responsible for our analysis or interpretation.

Action Editor: Brent W. Roberts served as action editor for this article.

Author Contributions: Both authors developed the study concept. S. J. Ritchie performed the literature search and the initial data coding, then both authors performed quality control on the studies to be included in the meta-analysis and agreed on the final data set. E. M. Tucker-Drob developed the framework for the Mplus syntax and wrote the baseline Mplus scripts (for the meta-analytic models), which were adapted for these analyses by S. J. Ritchie. S. J. Ritchie wrote the R analysis scripts (for the publication-bias tests and figures), which were adapted for the multimoderator figure by E. M. Tucker-Drob. Both authors interpreted the analyses, drafted the manuscript, and approved the final manuscript for submission.

Declaration of Conflicting Interests: The author(s) declared that there were no conflicts of interest with respect to the authorship or the publication of this article.

Funding: E. M. Tucker-Drob’s contribution to this study was supported by National Institutes of Health (NIH) Research Grant R01HD083613. The Population Research Center at the University of Texas at Austin is supported by NIH Grant R24HD042849.

Supplemental Material: Additional supporting information can be found at http://journals.sagepub.com/doi/suppl/10.1177/0956797618774253


All meta-analytic data and all codebooks and analysis scripts (for Mplus and R) are publicly available at the study’s associated page on the Open Science Framework (https://osf.io/r8a24/). These data and scripts are described in the Supplemental Material. The study was not formally preregistered. The complete Open Practices Disclosure for this article can be found at http://journals.sagepub.com/doi/suppl/10.1177/0956797618774253. This article has received the badge for Open Data. More information about the Open Practices badges can be found at http://www.psychologicalscience.org/publications/badges.

  • Ackerman P. L. (2017). Adult intelligence: The construct and the criterion problem. Perspectives on Psychological Science, 12, 987–998.
  • Allensworth E. M., Moore P. T., Sartain L., de la Torre M. (2017). The educational benefits of attending higher performing schools: Evidence from Chicago high schools. Educational Evaluation and Policy Analysis, 39, 175–197.
  • Bailey D., Duncan G. J., Odgers C. L., Yu W. (2017). Persistence and fadeout in the impacts of child and adolescent interventions. Journal of Research on Educational Effectiveness, 10, 7–39.
  • Baltes P. B., Reinert G. (1969). Cohort effects in cognitive development of children as revealed by cross-sectional sequences. Developmental Psychology, 1, 169–177.
  • Becker M., Lüdtke O., Trautwein U., Köller O., Baumert J. (2012). The differential effects of school tracking on psychometric intelligence: Do academic-track schools make students smarter? Journal of Educational Psychology, 104, 682–699.
  • Brinch C. N., Galloway T. A. (2012). Schooling in adolescence raises IQ scores. Proceedings of the National Academy of Sciences, USA, 109, 425–430.
  • Cahan S., Cohen N. (1989). Age versus schooling effects on intelligence development. Child Development, 60, 1239–1249.
  • Calvin C. M., Batty G. D., Der G., Brett C. E., Taylor A., Pattie A., . . . Deary I. J. (2017). Childhood intelligence in relation to major causes of death in 68 year follow-up: Prospective population study. British Medical Journal, 357, Article j2708. doi:10.1136/bmj.j2708
  • Ceci S. J. (1991). How much does schooling influence general intelligence and its cognitive components? A reassessment of the evidence. Developmental Psychology, 27, 703–722.
  • Cheung M. W.-L. (2008). A model for integrating fixed-, random-, and mixed-effects meta-analyses into structural equation modeling. Psychological Methods, 13, 182–202.
  • Cliffordson C. (2010). Methodological issues in investigations of the relative effects of schooling and age on school performance: The between-grade regression discontinuity design applied to Swedish TIMSS 1995 data. Educational Research and Evaluation, 16, 39–52.
  • Clouston S. A., Kuh D., Herd P., Elliott J., Richards M., Hofer S. M. (2012). Benefits of educational attainment on adult fluid cognition: International evidence from three birth cohorts. International Journal of Epidemiology, 41, 1729–1736.
  • Davies N. M., Dickson M., Davey Smith G., van den Berg G. J., Windmeijer F. (2018). The causal effects of education on health outcomes in the UK Biobank. Nature Human Behaviour, 2, 117–125.
  • Deary I. J., Johnson W. (2010). Intelligence and education: Causal perceptions drive analytic processes and therefore conclusions. International Journal of Epidemiology, 39, 1362–1369.
  • Deary I. J., Strand S., Smith P., Fernandes C. (2007). Intelligence and educational achievement. Intelligence, 35, 13–21.
  • Downey D. B., von Hippel P. T., Broh B. A. (2004). Are schools the great equalizer? School and non-school sources of inequality in cognitive skills. American Sociological Review, 69, 613–635.
  • Elliott J., Shepherd P. (2006). Cohort profile: 1970 British Birth Cohort (BCS70). International Journal of Epidemiology, 35, 836–843.
  • Gale C. R., Batty G. D., Osborn D. P., Tynelius P., Whitley E., Rasmussen F. (2012). Association of mental disorders in early adulthood and later psychiatric hospital admissions and mortality in a cohort study of more than 1 million men. Archives of General Psychiatry, 69, 823–831.
  • Gustaffson J.-E. (2001). Schooling and intelligence: Effects of track of study on level and profile of cognitive abilities. International Education Journal, 2, 166–186.
  • Hansen K. T., Heckman J. J., Mullen K. J. (2004). The effect of schooling and ability on achievement test scores. Journal of Econometrics, 121, 39–98.
  • Jensen A. R. (1989). Raising IQ without increasing g? A review of The Milwaukee Project: Preventing mental retardation in children at risk. Developmental Review, 9, 234–258.
  • Kohn M. L., Schooler C. (1973). Occupational experience and psychological functioning: An assessment of reciprocal effects. American Sociological Review, 38, 97–118.
  • Kuncel N. R., Hezlett S. A. (2010). Fact and fiction in cognitive ability testing for admissions and hiring decisions. Current Directions in Psychological Science, 19, 339–345.
  • Morgan S. L., Winship C. (2015). Counterfactuals and causal inference (2nd ed.). Cambridge, England: Cambridge University Press.
  • Muthén L. K., Muthén B. O. (2014). Mplus user’s guide (7th ed.). Los Angeles, CA: Author.
  • Protzko J. (2015). The environment in raising early intelligence: A meta-analysis of the fadeout effect. Intelligence, 53, 202–210.
  • Protzko J. (2016). Does the raising IQ-raising g distinction explain the fadeout effect? Intelligence, 56, 65–71.
  • Ritchie S. J., Bates T. C., Deary I. J. (2015). Is education associated with improvements in general cognitive ability, or in specific skills? Developmental Psychology, 51, 573–582.
  • Ritchie S. J., Bates T. C., Der G., Starr J. M., Deary I. J. (2013). Education is associated with higher later life IQ scores, but not with faster cognitive processing speed. Psychology and Aging, 28, 515–521.
  • Ritchie S. J., Bates T. C., Plomin R. (2015). Does learning to read improve intelligence? A longitudinal multivariate analysis in identical twins from age 7 to 16. Child Development, 86, 23–36.
  • Roth B., Becker N., Romeyke S., Schäfer S., Domnick F., Spinath F. M. (2015). Intelligence and school grades: A meta-analysis. Intelligence, 53, 118–137.
  • Schmidt F. L., Oh I.-S., Shaffer J. A. (2016). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 100 years of research findings (Fox School of Business Research Paper). Retrieved from https://ssrn.com/abstract=2853669
  • Simonsohn U., Simmons J. P., Nelson L. D. (2015). Better P-curves: Making P-curve analysis more robust to errors, fraud, and ambitious P-hacking, a reply to Ulrich and Miller (2015). Journal of Experimental Psychology: General, 144, 1146–1152.
  • Snow R. E. (1996). Aptitude development and education. Psychology, Public Policy, and Law, 2, 536–560.
  • Stanley T. D., Doucouliagos H. (2014). Meta-regression approximations to reduce publication selection bias. Research Synthesis Methods, 5, 60–78.
  • Stanovich K. E. (1986). Matthew effects in reading: Some consequences of individual differences in the acquisition of literacy. Reading Research Quarterly, 21, 360–407.
  • Stanovich K. E. (1993). Does reading make you smarter? Literacy and the development of verbal intelligence. Advances in Child Development and Behavior, 24, 133–180.
  • Strenze T. (2007). Intelligence and socioeconomic success: A meta-analytic review of longitudinal research. Intelligence, 35, 401–426.
  • Watkins M. W., Styck K. M. (2017). A cross-lagged panel analysis of psychometric intelligence and achievement in reading and math. Journal of Intelligence, 5(3), Article 31. doi:10.3390/jintelligence5030031
  • Winship C., Korenman S. (1997). Does staying in school make you smarter? The effect of education on IQ in the bell curve. In Devlin B., Fienberg S. E., Resnick D. P., Roeder K. (Eds.), Intelligence, genes, and success (pp. 215–234). New York, NY: Springer.
  • Wrulich M., Brunner M., Stadler G., Schalke D., Keller U., Martin R. (2014). Forty years on: Childhood intelligence predicts health in middle adulthood. Health Psychology, 33, 292–296.

YourTango

Woman With A PhD Thinks She's Smarter Than Marine With A High School Diploma — Until They Both Take An IQ Test

The test challenges the notion that education alone determines intelligence.

  • Megan Quinn

Written on Dec 04, 2023


When six strangers gathered to take an IQ test, one of them openly expressed her assumption, suggesting that she believed one of them would not fare as well as the others. However, she was proved wrong when all of their results were revealed. 

The woman with a Ph.D. believed that she would score higher on an IQ test than a marine with a high school diploma. 

In a YouTube video from the channel “Jubilee,” as part of their “Ranking” series, six people came together to take an IQ test. 

One of the subjects was 30-year-old Maria, who has a Ph.D. in cancer biology, completed her graduate education at the University of South Carolina, and said she now works in the biotech industry. 

Most of the other subjects shared that they also had a college education from prestigious schools, including Harvard and Yale. The only one who had not received a college degree was 21-year-old Tyler, who has a high school education and serves in the U.S. Marine Corps. 

RELATED:  Woman Called Out For Saying Her Husband Is 'Not Well Educated Or Well Read' Because He's In The Military

When the group was asked what intelligence meant to them, Tyler responded, “I think intelligence is better defined as your adaptability and your problem-solving skills more than it is your education.” He later added that common sense is the most essential quality to have when it comes to intelligence, more so than an education. 

According to Maria, it is one’s EQ, IQ, common sense, and street smarts that contribute to their overall intelligence. 

The group was asked to rank themselves in order from most to least intelligent. 

Maria ranked herself as the second most intelligent of the group given her impressive background. She ranked Tyler as the least intelligent. “It has nothing to do with your background,” she said to him. “I don’t really think you have the highest EQ out of all of us personally.” 

She later went on to say that she ranked Tyler as the least intelligent due to his “demeanor, body language, and the way he carries himself.” She was not the only one who felt this way. Out of the six members, three others also ranked Tyler as the least intelligent. 

Despite the group’s beliefs, Tyler confidently ranked himself as the most intelligent of the group. “I know what I’m about, and I’m sure of that,” he said. 

The group members were then asked to put their intelligence to the test by taking an IQ test. An IQ test is a standardized assessment designed to measure a person's cognitive abilities in various areas, including problem-solving, logical reasoning, memory, and comprehension. 

RELATED:  Two Philosophers Explain The Difference Between People Who Have 'Great Minds' & Those Who Are Less Talented

To everyone’s surprise, the IQ test results ranked Tyler as the third most intelligent of the group, with a score of 131, while Maria ranked the lowest, with a score of 112. 

However, after learning their results, not all of the group members were satisfied. 

One member named Sada claimed that IQ tests are now known to be “flawed” and based on incorrect sample sizes. She argued that the test could not prove how intelligent she actually was since she has dyslexia. 

Maria stood by her claims that one’s intelligence is based on common sense and street smarts, and cannot be determined solely with an IQ test. “There’s more to that person than that test,” she said. 

Others believed that one could perform better on an IQ test if they had studied for it beforehand. 

Tyler admitted that until that day, he had never taken an IQ test before, or knew what it entailed. “This is my villain redemption arc!” he said, referring to how most of the group initially perceived him. “I’m climbing up the ladder!” 

Intelligence is not solely determined by one’s level of education. Multiple factors make up one’s intelligence, including innate abilities, adaptability, creativity, and one’s willingness to learn. 

In the end, it doesn’t matter if you have a Ph.D. or a high school diploma. Your education is just one of many components that contribute to your overall intelligence. 

RELATED:  Customer Goes Off On Man Who Asks Restaurant If They Offer A Military Discount — 'All Of You Think You're Entitled'

Megan Quinn is a writer at YourTango who covers entertainment and news, self, love, and relationships.


Aspiring PhDs: the (un)surprising relation between doctoral students and research productivity

  • Review Paper
  • Published: 22 January 2023
  • Volume 3, article number 32 (2023)


  • Cristóbal Rodríguez-Montoya   ORCID: orcid.org/0000-0002-8988-0248 1 ,
  • Carlos Zerpa-García   ORCID: orcid.org/0000-0002-7150-384X 2 &
  • Mirnalin Cherubin   ORCID: orcid.org/0000-0003-4296-4046 3  


Knowledge is a significant driver of economic growth. For higher education institutions (HEIs), the prime knowledge generators, as well as for nations, research productivity is a priority. The contribution of PhD students to research productivity is not entirely visible. This lack of visibility may have implications for policy making at the institutional and national level. This research employed a bi-level, mixed-method approach: qualitative at the micro-level (institutionally and individually) for inductive insights about the connection of PhD programs and students to research productivity; and quantitative at the macro-level, analyzing data from 78 countries, from 2014 to 2019. We found a statistically significant correlation between the number of PhD students and the quantity of papers published, explaining over 90% of the variance (R² = 0.904, F(1, 365) = 3431.9, p < 0.01). Participant observation provided theoretical insights about the “how” and “why” of the students’ connection to research productivity.
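As a rough sketch of the macro-level calculation reported above, the snippet below fits a simple least-squares line relating PhD-student counts to paper counts and reports R²; the five country-level data points are invented placeholders, not the UNESCO/Scopus figures the authors actually used.

```python
def ols_r2(x, y):
    """R^2 of a simple (one-predictor) least-squares regression."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy**2 / (sxx * syy)  # squared correlation = R^2 in simple OLS

# Hypothetical country-level counts: (PhD students, papers published)
phd_students = [1000, 5000, 20000, 80000, 150000]
papers = [900, 4800, 21000, 76000, 155000]

print(f"R^2 = {ols_r2(phd_students, papers):.3f}")
```

An R² near 0.9, as reported in the abstract, would mean roughly 90% of the cross-country variance in paper counts is accounted for by the PhD-student count alone.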


Data availability.

The datasets analyzed during the current study are available from the UIS (Institute for Statistics) information service of the United Nations Educational, Scientific and Cultural Organization (UNESCO): ISCED (International Standard Classification of Education) Level 8 (doctorate) data, 2014–2019, by country; and from the Scopus database (via SCImago Lab): published papers by country.


This research is part of the normal work activity of the authors; no additional funding was necessary.

Author information

Authors and affiliations.

School of Business, Pontificia Universidad Católica Madre y Maestra, Santiago, Dominican Republic

Cristóbal Rodríguez-Montoya

Department of Behavioral Science and Technology, Universidad Simón Bolívar, Caracas, Venezuela

Carlos Zerpa-García

School of Industrial Engineering, Pontificia Universidad Católica Madre y Maestra, Santo Domingo, Dominican Republic

Mirnalin Cherubin


Contributions

All the authors made substantial contributions to the conception or design of the work; and the acquisition, analysis, or interpretation of the data; drafted the work or revised it critically for important intellectual content; approved the version to be published; and agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Corresponding author

Correspondence to Cristóbal Rodríguez-Montoya .

Ethics declarations

Conflict of interest.

The authors have no conflict of interest, and neither relevant financial nor non-financial interests to disclose.

Research involving human and animal participants

This research does not contain any studies performed by any of the authors with human participants other than the authors themselves; therefore, consent is both implied and explicit.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Rodríguez-Montoya, C., Zerpa-García, C. & Cherubin, M. Aspiring PhDs: the (un)surprising relation between doctoral students and research productivity. SN Soc Sci 3, 32 (2023). https://doi.org/10.1007/s43545-023-00616-8

Download citation

Received : 20 March 2022

Accepted : 05 January 2023

Published : 22 January 2023

DOI : https://doi.org/10.1007/s43545-023-00616-8


  • Higher education
  • Universities
  • PhD programs
  • PhD students
  • Research productivity
  • Knowledge generation

Eleanor Munson, PhD

Dallas Educational Consultants

The Five Levels of Giftedness

Five Levels of Giftedness

The label of ‘gifted’ is assigned once a psychologist – or another person who is qualified to administer and interpret IQ tests – has evaluated a child with an intelligence test, most commonly one of the Wechsler tests (WPPSI, WISC, or WAIS).  IQ scores for our population fall along a bell-shaped curve, meaning that 50% of the population scores around the average (IQ scores of 90-109), and as the curve drops on either end, the percentage of people scoring in that range gets smaller and smaller.
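The percentile figures quoted in the level descriptions below follow directly from this bell-curve model: Wechsler IQ scores are standardized to a mean of 100 and a standard deviation of 15, so a score’s percentile is simply the normal distribution’s cumulative probability at that score. A quick sketch:

```python
from statistics import NormalDist

# Wechsler IQ scores are scaled to mean 100, standard deviation 15
iq = NormalDist(mu=100, sigma=15)

for score in (100, 120, 130, 141):
    percentile = iq.cdf(score) * 100  # fraction of the population at or below
    print(f"IQ {score:3d} is roughly the {percentile:.1f}th percentile")
```

Running this reproduces the cutoffs used here: an IQ of 130 sits near the 98th percentile, and 141+ is at the 99th percentile and above.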

Deborah Ruf, Ph.D. has spent her career focused on that small area at the far right end, or tail, of the curve: the individuals who make up the most intellectually gifted of our society.  One might think that the individuals who score in this area are more similar than not, but through her research, Dr. Ruf has discovered and defined five distinctly different levels of giftedness.  The differences between the levels are quite striking and have significant implications for a child’s home and school life.  A description of each of Dr. Ruf’s levels follows.

Five Levels of Giftedness: The Scores & a Summary of What They Mean*

Level One:  Moderately Gifted to Gifted

  • IQ scores of 120-129 represent the 90th-98th percentiles
  • what most of us think of as bright
  • make up a large proportion of students in gifted programs
  • like being read to before age one
  • can do simple addition and subtraction before age four
  • reading 2-3 years beyond grade level by age seven
  • parents realize children are not being challenged and contact someone for help between grades two and four

Level Two:  Highly Gifted

  • IQ scores of 130-135 represent approximately 98th – 99th percentiles
  • can pay attention while being read to by five to nine months
  • can count to 5 (or higher) by age two
  • know many sight words and may be reading by age four
  • master most kindergarten skills by age four
  • are independent on the computer by age four and a half
  • are impatient with the repetition and slow pace of school by age six to seven

Level Three:  Exceptionally Gifted

  • IQ scores of 136-140 represent approximately the 99th percentile
  • independently look at and turn pages of books before ten months
  • question Santa or the tooth fairy by age three or four
  • rarely go through any stage of phonetically sounding out words
  • intense interest in mazes between ages four and five
  • spontaneously read (with or without instruction) before kindergarten
  • read 2-5 years beyond grade level by age six

Level Four:  Exceptionally to Profoundly Gifted

  • IQ scores of 141+ represent the 99th percentile
  • books are a favorite interest by three to four months
  • knows the entire alphabet by fifteen to twenty-two months
  • at four or five years can perform many academic and intellectual functions of an eight-year-old
  • reading for pleasure and information by age five
  • can play adult level card games and board games by age five and a half
  • most are capable of completing all academic work through 8th grade by 3rd or 4th grade
  • these are the kids that attend college at ages ten, eleven, and twelve

Level Five:  Exceptionally to Profoundly Gifted

  • knows numbers, letters, colors, and shapes before they can talk
  • can speak in full, complex sentences by fifteen months
  • has kindergarten skills by age two
  • spontaneously reads, understands fairly complex math problems, and has existential concerns by ages four to five (with or without instruction)
  • frequently one parent must postpone their career to advocate for their child’s education


From the descriptions above it’s easy to see that differences between levels of giftedness can be significant.  Children who fall in these categories need, and frequently don’t get, a customized educational plan that addresses their intellectual strengths.  If you see signs of giftedness in your child, seek out a specialist to help ensure that their academic needs are met.

*Information from Dr. Deborah Ruf’s book, 5 Levels of Gifted: School Issues and Educational Options.  Dr. Ruf, a high-intelligence specialist, founded Educational Options in 1999 to provide information, support, and guidance to families with gifted children.  Her book is an excellent resource for parents, educators, and anyone who wants to learn more about gifted children and adults.

© Eleanor Munson, Ph.D. Unauthorized use and/or duplication of this material without express and written permission from Eleanor Munson, Ph.D. is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Eleanor Munson, Ph.D. with appropriate and specific direction to the original content.

More on Giftedness from Dr. Munson:

  • Defining “Gifted”
  • Is My Child Gifted?
  • Do You Have to be “Gifted” to go to a Top School?
  • Consulting on Gifted Students with Dr. Munson
  • Contact Dr. Munson

About Eleanor Munson, PhD

About Eleanor Munson, PhD

Dr. Eleanor Munson is a graduate of the Hockaday School and an expert in private schools in the Dallas, TX area. Rely on her experience to help make wise, well-informed decisions about your child’s educational environment, from preschool through high school. Contact Dr. Munson today to get started!

Sabbatical Announcement

Dr. Munson is currently on sabbatical, and is not accepting new clients.



What is the average IQ of a Master’s Student?

  • by Ernest Panfiloff

A master’s degree is an advanced degree that a student can earn after completing a bachelor’s program. A master’s program can be as short as one year or run four or more years, and some take more than five years to complete. A master’s degree can unlock a number of career opportunities that a bachelor’s alone will not.

How Hard is it to Get a Master’s Degree?

Earning a master’s degree can be a challenging endeavor, but for many, it’s the next logical step after earning a bachelor’s degree. Some students are intimidated by the idea of earning one. Fortunately, several programs are geared toward adult learners who are returning to school years after earning a bachelor’s degree to pursue an advanced degree.

Earning a master’s degree is statistically more difficult than earning a bachelor’s degree, so if you’re considering pursuing a graduate degree, the chances are good that you’re also thinking about how long it will take you to earn it. Many schools require anywhere from two to five years of coursework and three to five years of internship, research, and other post-graduate work.

Getting a master’s or doctorate is an accomplishment that few people can achieve, no matter the field. Earning a graduate degree can be difficult, however, and there are many obstacles that you could encounter along the way. Getting a master’s degree isn’t a simple task. It requires years of study, and you’d better have the drive and ability to commit to your studies if you don’t want to fall behind.

The Essence of Getting a Master’s Degree

Getting a master’s or doctoral degree is often thought of as a “finish line” to education. However, to truly get a master’s degree, you need more than just a diploma. A master’s program takes dedication, commitment, and hard work. It should also be something that will help you succeed in your career.

Getting a master’s degree does more than just set you apart. It can give you a boost in pay, help you land your dream job, and show potential employers that you’re on top of the latest trends in your field. But, as with any big decision, it’s important to do your homework before making the big leap. If you’re considering earning a master’s degree in education, read on for everything you need to know about getting a master’s degree.

It’s one thing to get an undergraduate degree but another to graduate with a master’s degree. So what does it take to earn that coveted degree? According to a new study, it takes about 1,000 hours of learning, roughly 20 hours a week for a year. The study surveyed thousands of people who graduated with a master’s. The researchers found that it was the years of schooling, more than the hours spent studying, that mattered most. (The study also found that going to a more expensive school didn’t necessarily lead to a higher earning potential.)

How Do They Measure the IQ?

Your IQ score alone does not determine how smart you are. It is one of the least complete ways to measure intelligence: an IQ score simply shows where you fall on one particular spectrum, and it doesn’t account for factors such as age. That said, many experts still consider general intelligence an important predictor of success, so it’s worth knowing how it is measured.

IQ, or intelligence quotient, is a measurement of intelligence, usually assessing a person’s ability to think and reason. Contrary to popular belief, IQ tests are not designed to catalog the special skills an individual possesses. Rather, they measure general intelligence, or general mental ability, focusing on a person’s problem-solving skills, reasoning, and reaction times.

What Is the Average IQ Of a Master’s Student?

The average IQ of a master’s student is higher than that of the average undergraduate: about 126, compared with the average undergraduate’s 115. The difference between the two is most clearly seen in their understanding of abstract concepts. Even so, graduate students tend to have a more difficult time finding a job, possibly because they take more classes and take longer to finish.

Can You Get Masters with Low IQ Or Below-Average?

Many people believe that learning is not possible for those with a below-average IQ. However, IQ is not the only factor that determines whether someone can succeed in a degree program. Motivation, dedication, emotion-management skills, and self-awareness all play an important part. If you have an average or above-average IQ, assume you have the basic academic skills to succeed. However, you need to ensure that you select a program that is the right fit for you. A degree from a bad program might do more harm than good.

The short answer is yes. You can get a master’s degree with a below-average IQ, but you will need to work hard, do research, and study. As with anything else, education is your ticket to a better life, and a low or merely average IQ isn’t a hindrance to pursuing a master’s; it is simply one contributing factor. Many universities offer master’s programs that don’t screen applicants by IQ, and the exact requirements depend on the faculty and institution. These degrees, however, are not offered in every subject.

Even though the average intelligence of master’s students is higher than that of undergraduates, master’s students on average performed worse on a critical thinking test than undergraduates. The research also found that 66% of undergraduates tested well below their measured intelligence level, while only 17% of master’s students did, so master’s students may come closer to performing at their actual ability.

Rob Henderson

5 Seriously Stunning Facts About Higher Education in America

Research disturbs commonly held assumptions about college.

Posted March 17, 2019 | Reviewed by Jessica Schrader

Recently, some rich people and well-known celebrities got caught cheating to get their kids into elite universities. They bribed sports coaches, cheated on the SAT, and fabricated phony credentials.

Beyond the scams of the rich and famous, though, there are other surprising facts about college in America. Facts closer to our everyday experiences. Here are five.

1. 4 out of 10 college students fail to complete their degrees. According to research from the National Student Clearinghouse Research Center, only 58 percent of students manage to complete their degree programs within 6 years. Bill Gates has called this figure “tragic.” He has written, “Based on the latest college completion trends, only about half of all those students will leave college with a diploma. The rest—most of them low-income, first-generation, and minority students—will not finish a degree. They’ll drop out.” Sadly, community college figures are even more dismal. A recent study in California found that 70 percent of community college students don’t finish.

2. Attending an elite university doesn’t boost income. What matters is the ability to get in. Economists Stacy Dale and Alan B. Krueger looked at two groups, totaling about 19,000 students. One group gained admission into elite universities, attended, and graduated. Another group also gained admission to elite universities. But this group, rather than attending the elite schools, chose to attend less selective schools instead. More than 20 years after they graduated, Dale and Krueger measured their incomes. They found no difference. A student who got into Princeton but attended Penn State made as much as a student who got into, and attended, Princeton.

3. Graduating from a non-selective college doesn't boost income. In a book about social class in America, researchers looked at how differently ranked colleges affected earnings. They found that students who attended the country’s most elite institutions earned about 84 percent more on average compared to those who had not graduated from college. Graduates of “somewhat selective” private colleges and “leading state universities” earned about 52 percent more than non-graduates. However, they found “no income advantage” for those who graduated from a “non-selective” college compared to those who did not attend college.

4. A person with average academic ability has a higher than 50 percent chance of dropping out of college. For the general population, the average IQ score is 100. Research has found that, among white, American college students, those with a 105 IQ score have a 50 percent chance of dropping out of college. They also report that the average IQ of a college graduate is about 114. But they also show that having a high IQ is no guarantee of graduating. Those who score 130 (very rare; about 2 percent of the population) still have a 10 percent dropout rate.
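The percentages in point 4 fall straight out of the normal curve that IQ tests are conventionally normed to (mean 100, standard deviation 15). As a sketch, not a claim from the research itself, Python's standard library can reproduce them:

```python
from statistics import NormalDist

# IQ scores are conventionally normed to mean 100, standard deviation 15.
iq = NormalDist(mu=100, sigma=15)

# Fraction of the population scoring 130 or above (the "very rare" group).
top = 1 - iq.cdf(130)
print(f"IQ >= 130: {top:.1%} of the population")  # about 2.3%

# Percentile rank of the average college graduate (IQ ~114, per the article).
print(f"IQ 114 percentile: {iq.cdf(114):.0%}")
```

A score of 130 sits two standard deviations above the mean, which is why roughly 2 percent of the population clears it.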

5. SAT coaching and test prep aren't important. Many people have heard that private SAT prep courses and private tutoring produce substantial gains. Test prep companies tout that their users receive boosts of 100 points or more after only a few weeks of study. Research doesn’t support this. A meta-analysis from researchers at Harvard found that, on average, SAT coaching produces a 10-point gain. They conclude that this gain is “too small to be practically important.” More recent research from Stanford supports this. They found that students receive an 11-15-point gain from SAT coaching, which roughly corresponds to getting one or two additional questions correct. Perhaps even more surprising, a study from 2015 found that private tutoring has no effect on SAT gains. As they put it, "our hypothesis that more elite forms of test prep (private tutor) would predict higher SAT scores was not confirmed. The only form of that prep actually associated with higher SAT scores was participation in a private test prep course, which translated into an 11-point gain on the SAT when compared to students with no preparation."

The Purpose of College

The economist Bryan Caplan has written a provocative book titled The Case Against Education. According to Caplan, the value of college isn’t in what you learn. It's in getting the degree. “Teachers have a foolproof way to make their students cheer: cancel class . . . such jubilation is bizarre. Since you go to school to acquire skills, a teacher who cancels class rips you off.” Unless the purpose of attending college isn’t to obtain skills. Maybe the purpose is actually just to obtain the degree.

Suppose you had a choice: Attend college for 4 years, gain the skills, but have no degree at the end. Or get a degree right now, fully accredited, but not attend a single class. Caplan would not be surprised if you selected the second option. This suggests that education is less valuable than a degree. At least for earnings.

In sum, there are many odd facts about college that upset our commonly held assumptions. Before you pursue a new educational goal, it is worth looking into these research findings.

Rob Henderson received a Ph.D. in Psychology from the University of Cambridge (St. Catharine's College). He obtained a B.S. in Psychology from Yale University and is a veteran of the U.S. Air Force.



Interdisciplinary Quantitative Biology

  • Learn from over 70 diverse research faculty & peers
  • Master cutting-edge quantitative techniques in core courses
  • Collaborate with students, faculty & alumni
  • Rotate in world-renowned labs to gain new skills
  • Celebrate your cohort's successes

The Interdisciplinary Quantitative (IQ) Biology Graduate Certificate is a program of CU Boulder's BioFrontiers Institute. The certificate is earned alongside a PhD from one of CU Boulder's academic departments. Students learn interdisciplinary quantitative skills while also gaining in-depth knowledge of their field through their degree-granting PhD program with one of eleven partner academic departments.

Through IQ Biology, students learn the essential competencies demonstrated by knowledgeable and well-rounded researchers who collaborate effectively across disciplines. These competencies are attained through cross-departmental lab rotations, courses, interdisciplinary projects, outreach activities, and science engagement.

IQ Biology students regularly present and network at conferences, including the Gordon Geobiology Conference (Tristan Caro), the Infectious Disease Dynamics conference (Kate Bubar), and ESA 2022 in Montréal (Sierra Jech).

When the PhD path leads to career struggles

A doctoral degree is a major commitment. Think carefully.

I appreciated reading Kara Miller’s The Big Idea column “PhD: Pretty heavily disappointed” (Business, May 22), about people with doctoral degrees struggling to build careers in academia. It made me think back to a conversation I had when I was about to graduate from high school.

I happened to run into a former track coach of mine, and as we were reminiscing he asked me what I planned as a major in college. “History,” I responded. He said, “Why don’t you take some computer classes also? It never hurts to be able to do something useful.”

I did not reflect on his motivation at the time, but my track coach was a young guy, and he was probably giving me advice straight from his own life, as a parent trying to raise his own young children. I did take computer classes in college and ultimately received a PhD in chemical engineering. I always remember that conversation as being a kind of turning point.

Earning a doctoral degree is a life commitment of great proportion. It can take, as Miller notes, between four and seven years. If we think of working life as roughly between the ages of 22 and 65, then a PhD requires more than 10 percent of a person’s working life. People need to think carefully about that investment.

Two powerful arguments in favor of the path of science, technology, engineering, and math are that there tend to be more STEM jobs for PhDs, and many universities’ STEM departments are generous in covering their PhD students’ tuition and cost of studies, including a stipend toward food, rent, and other expenses.

Stuart Gallant

Not much has changed in 30 years

As I prepared to graduate in 1995 with a doctor of education degree from the Harvard Graduate School of Education, my mother memorably said to me, “Of my four children, you are the one with the most education and the smallest salary.” Apparently not much has changed in 30 years.

I must congratulate these students, however, on following their passion rather than following the money. I can’t help but think that their lives, though stressful, may contain greater happiness.

Peggy Clark

Lawyers & electricians & philosophers, oh my!

Kara Miller’s column on the career challenges for people with doctoral degrees generated more than 260 comments on Boston.Globe.com. The following is an edited sample of readers’ reactions:

Lots of law school grads are underemployed as well. (PL)

So true, PL. The market in Massachusetts is flooded with talented lawyers seeking work. (Roforma)

Supply and demand, the market at work. (guk)

Investing in education and research in all fields is the hallmark of a society with staying power. Disinvesting from these endeavors signals decline and decay. (Massachusetts citizen)

Electricians, plumbers, mechanics, and other skilled technical professions have no problems getting $100k jobs with great benefits. (ramsen)

Not enough turnover from tenured professors, leaving little space for new faculty. Although the tenured, well-established professors are needed, it’s the junior faculty who are hungry and with new ideas that help build new programs. The whole graduate program model is a bad model. I worked two jobs, had my tuition and some type of minimal student health insurance and could barely cover the rent with my stipend, and the second job paid for everything else. Though I was working on many faculty projects, it was the faculty who said this would be good for me. Never did they say it was also good for them. (TravelerofNJ2)

I just retired from a tenured faculty position in science. I’m in my early 70s. I have colleagues who are still doing what they do well into their 70s, a couple approaching 80. There is no active incentive from the university to move the older faculty on, to make way for a new generation. (Lola-lola)

The next step is for adjuncts to go on strike across the nation and hold colleges and universities accountable. The current system is completely absurd. (Wordsmith2358)

Universities should be required to release disclosure data about the fate of their PhD graduates. (davidman820)

I knew an attorney who managed a Cheesecake Factory. She had worked in food services through school. As an attorney, she really did not make that much money and was not doing the field of law of her choice. How many real estate closings can you do without dying of boredom? She went into management in the food industry and makes the same salary. (Antietem)

It was always a question and puzzling to me why people study philosophy. (Blazer27)

phd students iq

Globe Opinion

Florida State University

FSU | The Graduate School


International Admissions

We are now accepting graduate applications for Fall 2024 and Spring 2025.* *Please contact your department for application open terms and deadlines.

Our community welcomes you

Each year, the university enrolls approximately 1,700 international students. We are proud to offer extensive support and services to our international population. International applicants should plan to apply early so they have ample time to obtain their immigration documents and make living arrangements in the U.S. Any F-1/J-1 students planning to obtain their I-20/DS-2019 should contact the Center for Global Engagement at [email protected]. Please check with your department regarding deadlines.

International Admissions Requirements

In addition to meeting graduate university admissions requirements, international applicants must also meet the following University requirements to be considered for admission. 

English Language Proficiency Requirement

Official English Language Proficiency results are required of all international applicants whose native language is not English. The following are the minimum scores required for admission to the University, although some departments require higher scores at the graduate level: 

Internet-based TOEFL (IBTOEFL): 80

Paper-based TOEFL (TOEFL): 550

International English Language Testing System (Academic IELTS): 6.5

Pearson Test of English (PTE): 55

Duolingo: 120 (Summer 2022 and Forward)

Cambridge C1 Advanced Level: 180 (Fall 2022 and Forward)

Michigan Language Assessment: 55 (Fall 2022 and Forward)

Although official scores are required, most departments will begin to review your application with self-reported scores, while they are waiting for the official scores to arrive. You can self-report your scores on your Online Status Page, after you submit your application. 

The English Language Proficiency requirement can be waived, at the university level, for applicants who have earned a minimum of a BA or higher in the U.S. or in an English-speaking country. Please note, your department may still require proof of English-language proficiency. *A variety of countries are exempt from the English language proficiency requirement.

Transcript and Credential Evaluation Requirements

All transcripts/academic records that are not in English must be accompanied by certified English translations. 

To be considered "certified," documents should be true copies that are signed and dated by an educational official familiar with academic records. Any translated record should be literal and not an interpretive translation. Documents signed by a notary or other public official with no educational affiliation will not be accepted. 

If the transcript/academic record does not indicate the degree earned and date the degree was awarded, separate proof of degree is required. 

International applicants or degrees earned from international institutions must submit their official transcripts through the SpanTran pathway portal, or from another NACES approved evaluator. SpanTran has created a custom application for Florida State University that will make sure you select the right kind of evaluation at a discounted rate. Florida State University recommends SpanTran as our preferred credential evaluation because it offers an easy way to streamline the application process.

Please read more about our general transcript requirements on our  Graduate Admissions page. 

International Transfer Credit

International transfer credit is awarded for coursework completed at an accredited (recognized) institution of higher learning. No credit is awarded for technical, vocational, or below-college-level coursework, or courses completed with grades below "D-." An official course-by-course evaluation is required for all academic records from non-U.S. institutions. We recommend the evaluation be done by a member of the  National Association of Credential Evaluation Services . 

Link to Center for Global Engagement Website

SUPPORT TO HELP YOU THROUGHOUT THE PROCESS.

The Center for Global Engagement (CGE) and its staff are here to serve international students and their families. They may advise you about:

  • F and J visa requirements
  • Cultural adjustment
  • Employment matters
  • Housing assistance
  • Assistance with personal concerns
  • Maintaining your visa status

Many academic programs only accept applications for a specific admit term. Contact your academic department to determine which admit term to apply. It is recommended that you submit your application as soon as the admit term opens. CGE also assists students throughout the New International Student Checklist and Process . You may learn more about what CGE has to offer by emailing [email protected]

Link to Center for Intensive English Studies Website

Center for Intensive English Studies

Need to improve your English skills? FSU’s Center for Intensive English Studies can help! At CIES, you will be given personalized instruction by highly qualified teachers in a safe, friendly environment.

Please note that admission to and completion of the CIES program does not necessarily guarantee admission to the University as a degree-seeking student.

CIES also offers:

  • TEFL certification  opportunities
  • Credit-bearing courses and workshops  to enhance your English speaking ability

Learn more about how the Center for Intensive English Studies can help you.  

Florida State University is required by U.S. federal regulations to verify the financial resources of each applicant prior to issuing the Form I-20. If granted admission to the University, an email with instructions on how to complete the I-20 will be sent from the Center for Global Engagement (CGE). You will provide information verifying your financial support (bank statements, award letters, scholarships, etc.) through the I-20 application. FSU requires proof of financial support for the first year of study and demonstrated availability of funds for the length of your academic program.

Estimated International Student Costs:

For more information on estimated costs of living and the I-20 process, please visit  CGE’s website .  

I-20 Application

Shortly after admission, students will receive an email with instructions for completing the online I-20 application to demonstrate proof of adequate funding. Florida State University is required by U.S. federal regulations to verify the financial resources of each applicant prior to issuing the Form I-20. Applicants must show proof of financial support for the first year of study and confirm availability of funds for the length of the academic program.

For more information, contact the Center for Global Engagement at [email protected] .

US Federal Grants and Loans are not Awarded to International Students

Graduate students may apply to their respective departments for assistantships or fellowships, although funds are very limited. For further information, please contact your academic department directly. 

SPEAK (Speaking Proficiency English Assessment Kit) is a test for evaluating the English speaking ability of non-native speakers of English. At FSU, the SPEAK test is administered by the Center for Intensive English Studies to international students who have been appointed or will be appointed as teaching assistants in an academic department at Florida State University.

For more information, click here .

  Explore Funding Opportunities 

May the TOEFL be waived?

The TOEFL may only be waived as a test requirement if the student has received a bachelor's or master's degree from a U.S. institution.

Can you review my documents prior to applying?

Students must submit the application, application fee, and any required departmental materials before their application can be reviewed.

Can the application fee be waived?

Unfortunately, the Office of Admissions is unable to waive the application fee payment for graduate applicants. In order to complete your application for review, you must submit the application fee payment by logging in to your Application Status Check, along with any other documents required by the department.

When will I receive a decision?

Applications are reviewed holistically by each graduate department. Please contact your department for information about decision timelines. Please note that the application must first be completed before it can be reviewed. Contact your department for more information.

Can the GRE be waived?

FSU is currently waiving the GRE requirement for most master’s and specialist programs through Fall 2026*. For more information on whether the requirement can be waived, please contact your graduate department. 

* Excludes the College of Business

What if I don’t meet the English Language Proficiency score requirements?

The FSU Center for Intensive English Studies (CIES) offers comprehensive courses to help students improve their English skills. Students who complete the top level of the CIES program will not have to take an English Language Proficiency test.

What is the F-1 visa/I-20 process?

  • Students can learn more about the I-20 process here .
  • Students can learn about the visa here .

Do you have funding available for International students?

  • The Graduate School offers fellowship and grant opportunities for graduate students. For current FSU students, the  Office of Graduate Fellowships and Awards  assists in identifying and applying for external funding opportunities. In addition,  here is some more information  about additional funding opportunities for international students. 
  • There may also be additional funding opportunities through your department. Please contact your graduate representative for assistance. If you do not know who to contact, please email us at [email protected] for assistance.

Are there on-campus housing opportunities?

University housing costs are not included in the tuition and fees at Florida State University. If you want the option of living on campus, you can apply for housing online as soon as you are officially admitted to FSU. Housing at university-owned residence halls and apartments fill quickly. You can also find off-campus housing options by clicking here .

University of South Florida


From oncology nurse to researcher, PhD student researches cancer patient care

  • May 28, 2024

Awards & Accolades , Research , Student Success

Cancer patients across the globe endure severe side effects from treatments that can drastically affect their quality of life. Kailei Yan, a third-year PhD student at the USF Health College of Nursing, is trailblazing research on the role of self-efficacy in mediating the relationship between symptoms and quality of life in cancer patients.

Yan was inspired to pursue research after Dr. Theresa Beckie, a researcher and faculty member at the college, presented in one of Yan’s undergraduate courses.

“That was the moment I became motivated to become a scientist,” says Yan. “And now, Dr. Beckie is my advisor!”

After earning her BSN from the college, Yan worked as an oncology nurse at the Moffitt Cancer Center. Watching patients endure the side effects of cancer and its treatment influenced her decision to pursue cancer research.

“I saw a lot of suffering,” says Yan. “I feel there should be some ways to relieve that. That’s my motivation.”

Yan says she is grateful for all the support she has received as a PhD student at the college. She was recently selected for the Southern Nursing Research Society Dissertation Research Award, an award Dr. Theresa Beckie recommended Yan for.

"The work that Kailei Yan proposes is important because it seeks to determine the mechanism of action of interventions designed to improve quality of life of patients with cancer," says Dr. Beckie. 

With the continued support of the college and this grant, Yan will be able to delve deeper into her research and further contribute to the field of study.

The application for the USF Health College of Nursing’s PhD program opens on August 15.

Learn more about our PhD program


About Department News

USF Health College of Nursing News highlights the great work of our trailblazing faculty, staff, and students! The College of Nursing is an integral part of USF Health and the University of South Florida. USF Health College of Nursing -- Where Nursing Trailblazers Belong!  

Contact the NAU Office of Graduate & Professional Studies


NAU Office of Graduate & Professional Studies admission deadlines

  • International students must apply on or before March 1st for fall admission, if an earlier deadline is not stipulated below.
  • The deadlines listed below are subject to change, but are reviewed and updated regularly. For the most accurate deadline information, please check the NAU Office of Graduate & Professional Studies Admissions Application for the specific program.
  • For full consideration of available funding (GA, tuition waivers, or scholarships) it is best to apply to the program early. Contact the program for specific funding deadlines.

Definitions:

  • Priority: If a priority deadline has been specified, it is highly recommended that you submit your application on or before this date. Students who meet this deadline may be given special consideration for assistantships, scholarships, fellowships, etc., if available.
  • Rolling admission: No specific deadline has been identified; students can apply for admission up until the start of any given term or session.
  • Space available basis: Applications will be accepted and considered if space is available in the program.
  • Final: Applications will not be accepted past this date.
  • Admission not available: Admission applications are not accepted for the specific term.

Graduate program application deadlines




COMMENTS

  1. What is the average IQ of PhD students and academics? Are they REALLY

    The average IQ of PhD graduates and students. According to some sources, the average IQ score for people with PhDs is around 125, which is considered superior. However, this does not mean that people with lower or higher IQs cannot obtain a PhD, as IQ is only one of many factors that influence academic success.

  2. What is the Average IQ of a PhD Student?

    Many of us have pondered this question at one point or another. Here's the gist: the average IQ of a PhD student hovers around 130. Now, this is quite high considering the average IQ for the general population stands at about 100. It's a noteworthy fact, but by no means does it encapsulate the entire picture. Now that we've answered the ...

  3. PhD students aren't what they used to be either

    Starting in 2002, the Danish Government required the universities to increase the number of PhD slots, as part of a larger initiative to support education and innovation in Denmark (see Section 2 for further institutional details). Figure 14 shows that as the number of slots for PhDs increases, the average IQ of the enrolling students falls.

  4. PDF Genius, Creativity, and Talent

    High-IQ definitions: 140 = genius-level IQ (top 1%), about average for PhDs in physics or Phi Beta Kappa graduates; 150 = fewer than 1 in 10,000 score this high; 160 = eligibility for the Four Sigma Society, 1 in 30,000 score this high; 165 = 1 in a million, eligibility for the Mega Society; 228 = record IQ claimed by columnist

  5. Can You Get a Ph.D. with an Average IQ?

    Yes, you can get a Ph.D. with low to average intelligence, provided you have the right motivation and desire. Most Ph.D. programs require you to score between 130 and 140 on the GRE, but this is by no means a guarantee. Some people are just born smart, while others have to work extra hard to be considered smart.

  6. Attractiveness and the IQ Levels of College Disciplines

    What Dr. Srivastava found is that the disciplines with the highest hotness ratings tended to be those nearer to the bottom when ranked by the average IQ of graduate students (IQ was estimated by ...

  7. Are Doctors Smart? IQ by Profession

    The Simon-Binet IQ Scale classifies scores as the following: Over 140 - Genius or almost genius. 120 - 140 - Very superior intelligence. 110 - 119 - Superior intelligence. 90 - 109 - Average or normal intelligence. 80 - 89 - Dullness. 70 - 79 - Borderline deficiency in intelligence. Under 70 - Feeble-mindedness.

  8. 5 myths about doing a PhD debunked

    The general opinion seems to be that people who do a PhD must have an IQ score approximately equal or above that of Einstein's. ... Not many people can afford to self-fund a PhD, which is why most prospective PhD students apply for funded positions. There are many PhD opportunities in the UK that are funded by research councils and charitable ...

  9. How to develop a researcher mindset as a PhD student

    Created in partnership with. Life as a PhD student is challenging - and one of the most testing aspects of it is the change in mindset it requires. You switch from being a consumer of knowledge to a producer of knowledge. In other words, you transition from passively absorbing information to actively generating new insights through original ...

  10. What predicts grad school success?

    The report, which examined 1,753 studies, found that GRE scores could help predict students' graduate grade point averages, first-year GPAs, ratings from faculty, exam scores, degree attainment and number of citations earned. The specific subject tests were even stronger indicators of students' performance.

  11. Can a person of average intelligence get a PhD in physics or math if he

    I recently reread Malcolm Gladwell's book Outliers, which has a discussion of this very point, i.e., IQ versus academic success. The gist of it is that, like it or not, the IQ test robustly measures something, so that a score below X will, with high probability, prevent academic success at level Y. E.g. a child with an IQ below 50 will have trouble taking classes with other students.

  12. Average PHD IQ : r/cognitiveTesting

    cambridge faculty - 125iq or so. average phd student in denmark - 111iq. grad degree holder in usa - at least 105, though the wordsum was used, not an iq test. imo the true figure is probably close to the danish one. grad degree holder in usa - 108.6 using the WAIS and WISC.

  13. Do Harvard and MIT students have a 145 IQ average?

    An IQ score of 145 is considered to be in the "genius" range, indicating exceptional intelligence and cognitive abilities. It is well above the average IQ score of 100, and only about 0.1% of the population falls into this category. 2. Is it true that Harvard and MIT students have a 145 IQ average?

  14. Average IQ Score According to Various Occupational Groups

    The average IQ is 100. A score above 100 is considered as above average, while a score below 100 is considered as a below average. An IQ score much below 50 or above 150 is usually not noticed. Studies show that the IQ of half of the population is between 90 and 110, while 25% have higher IQ's, and 25% have lower IQ's.

  15. What your college major says about your intelligence

    The next sample comes from over 1.2 million students who took the Graduate Record Examination (GRE) between 2002 and 2005 and indicated their intended graduate major.

  16. How Much Does Education Improve Intelligence? A Meta-Analysis

    This was not the case: The effect size increased by a nonsignificant 0.038 IQ points per year of age at the intervention ( SE = 0.228, p = .867). The metaregression model implied that at the youngest intervention age (7.5 years), the effect size was 1.765 IQ points, and at the oldest (19 years) it was 2.204 IQ points.

  17. Woman With A PhD Thinks She's Smarter Than Marine With A ...

    To everyone's surprise, the IQ test results ranked Tyler as the third highest intelligent of the group, with a score of 131, while Maria was the one who was ranked the lowest, with a score of 112.

  18. How Much Does Education Really Boost Intelligence?

    The three study types respectively yielded estimated IQ increases of approximately one point, two points, and five points per additional year of schooling. The results are "not really ...

  19. Aspiring PhDs: the (un)surprising relation between doctoral students

    We found a statistically significant correlation between the number of PhD students and the quantity of papers published: over 90% (R² = 0.904, F(1, 365) = 3431.9, p < 0.01). Participant observation provided theoretical insights about the "how" and "why" of the student's connection to research productivity. Knowledge is a significant ...

  20. How Gifted is Your Child?

    reading 2-3 years beyond grade level by age seven. parents realize children are not being challenged and contact someone for help between grades two and four. Level Two: Highly Gifted. IQ scores of 130-135 represent approximately 98th - 99th percentiles. can pay attention while being read to by five to nine months.

  21. Intelligence and Achievement Testing: Is the Half-Full Glass Getting

    Today, however, achievement and IQ tests have the potential to identify talented students from all walks of life - and thus to level the playing field of public education. Of course, the playing field is not yet level, but there are some signs that things are getting better. One sign is that IQ and achievement tests continue to be refined and ...

  22. What is the average IQ of a Master's Student?

    Though their average IQ might be higher, graduate students tend to have an even more difficult time finding a job, possibly because they tend to take more classes and longer to finish. The average IQ of Master's students is 126, compared to the average undergraduate's 115. The difference between the two is most clearly seen in their ...

  23. 5 Seriously Stunning Facts About Higher Education in America

    For the general population, the average IQ score is 100. Research has found that, among white, American college students, those with a 105 IQ score have a 50-percent chance of dropping out of ...

  24. Interdisciplinary Quantitative Biology

    Interdisciplinary Quantitative (IQ) Biology Graduate Certificate is a program through CU Boulder's BioFrontiers Institute. The certificate is earned in concordance with a PhD from one of CU Boulder's academic departments. Students learn Interdisciplinary Quantitative skills, while also gaining in-depth knowledge of their field through their ...

  25. When the PhD path leads to career struggles

    When the PhD path leads to career struggles. Updated May 28, 2024, 2:30 a.m. A bird flew past a rainbow on the horizon, as viewed from Morrissey Boulevard in Dorchester. Pat Greenhouse/Globe Staff ...

  26. International Admissions

    International applicants should plan to apply early so they have ample time to obtain their immigration documents and make living arrangements in the U.S. Any F-1/J-1 students planning to obtain their I-20/DS-2019 should contact the Center for Global Engagement at [email protected]. Please check with your department regarding deadlines.

  27. From oncology nurse to researcher, PhD student researches cancer

    Kailei Yan, a third-year PhD student at the USF Health College of Nursing, is trailblazing research on the role of self-efficacy in mediating the relationship between symptoms and quality of life in cancer patients. Yan was inspired to pursue research after Dr. Theresa Beckie, a researcher and faculty member at the college, presented in one of ...

  28. PDF Division of MedicalSciences FY25 Graduate Student Rates

    FY25 Graduate Student Rates 7/1/2024 - 6/30/2025 _____ The DMS Program Fee is due in the Fall semester. FY25 rates will be in effect for the period of 7/1/2024 - 6/30/2025. For budgeting purposes apply an inflationary increase to each category to estimate FY26 costs. Update Date: 5/7/2024. G1/G2 Student HMS Obligation Faculty Obligation Total

  29. Graduate Degrees Awarded

    Among the 2021-22 Research Doctorates awarded nationally, 4.6% were Black/African Americans and 5.8% were Hispanic/Latino. International students made up more than one-third (34.1%) of all research doctorates awarded in 2021-22. For Oakland University, Black/African American students made up 11% of our awarded research doctorates and ...

  30. Important NAU Graduate Program Deadlines

    NAU Office of Graduate & Professional Studies admission deadlines. International students must apply on or before March 1st for fall admission, if an earlier deadline is not stipulated below. The deadlines listed below are subject to change, but are reviewed and updated regularly. For the most accurate deadline information, please check the NAU ...
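Several of the snippets above quote percentile figures for the standard IQ scale: roughly half the population scoring between 90 and 110 (item 14), and only about 0.1% at 145 or above (item 13). Both follow directly from the normal distribution's CDF. A minimal Python check, assuming the usual mean-100/SD-15 convention (the snippets themselves do not state the standard deviation):

```python
from statistics import NormalDist  # stdlib normal distribution

iq = NormalDist(mu=100, sigma=15)  # conventional IQ scale

# Share of the population scoring between 90 and 110 (item 14 claims ~50%)
mid = iq.cdf(110) - iq.cdf(90)

# Share scoring 145 or above (item 13 claims ~0.1%)
top = 1 - iq.cdf(145)

print(f"90-110: {mid:.1%}")   # ~49.5%
print(f">=145:  {top:.2%}")   # ~0.13%

# The ~125 average often quoted for PhD holders sits near the 95th percentile
print(f"IQ 125 percentile: {iq.cdf(125):.0%}")  # ~95%
```

Under these assumptions the quoted figures check out almost exactly, which suggests the sources derived them from the normal curve rather than from empirical samples.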