Why Is It So Hard to Be Rational?

By Joshua Rothman

I met the most rational person I know during my freshman year of college. Greg (not his real name) had a tech-support job in the same computer lab where I worked, and we became friends. I planned to be a creative-writing major; Greg told me that he was deciding between physics and economics. He’d choose physics if he was smart enough, and economics if he wasn’t—he thought he’d know within a few months, based on his grades. He chose economics.

We roomed together, and often had differences of opinion. For some reason, I took a class on health policy, and I was appalled by the idea that hospital administrators should take costs into account when providing care. (Shouldn’t doctors alone decide what’s best for their patients?) I got worked up, and developed many arguments to support my view; I felt that I was right both practically and morally. Greg shook his head. He pointed out that my dad was a doctor, and explained that I was engaging in “motivated reasoning.” My gut was telling me what to think, and my brain was figuring out how to think it. This felt like thinking, but wasn’t.

The next year, a bunch of us bought stereos. The choices were complicated: channels, tweeters, woofers, preamps. Greg performed a thorough analysis before assembling a capable stereo. I bought one that, in my opinion, looked cool and possessed some ineffable, tonal je ne sais quoi. Greg’s approach struck me as unimaginative, utilitarian. Later, when he upgraded to a new sound system, I bought his old equipment and found that it was much better than what I’d chosen.

In my senior year, I began considering graduate school. One of the grad students I knew warned me off—the job prospects for English professors were dismal. Still, I made the questionable decision to embark on a Ph.D. Greg went into finance. We stayed friends, often discussing the state of the world and the meta subject of how to best ascertain it. I felt overwhelmed by how much there was to know—there were too many magazines, too many books—and so, with Greg as my Virgil, I travelled deeper into the realm of rationality. There was, it turned out, a growing rationality movement, with its own ethos, thought style, and body of knowledge, drawn heavily from psychology and economics. Like Greg, I read a collection of rationality blogs—Marginal Revolution, Farnam Street, Interfluidity, Crooked Timber. I haunted the Web sites of the Social Science Research Network and the National Bureau of Economic Research, where I could encounter just-published findings; I internalized academic papers on the cognitive biases that slant our thinking, and learned a simple formula for estimating the “expected value” of my riskier decisions. When I was looking to buy a house, Greg walked me through the trade-offs of renting and owning (just rent); when I was contemplating switching careers, he stress-tested my scenarios (I switched). As an emotional and impulsive person by nature, I found myself working hard at rationality. Even Greg admitted that it was difficult work: he had to constantly inspect his thought processes for faults, like a science-fictional computer that had just become sentient.

Often, I asked myself, How would Greg think? I adopted his habit of tracking what I knew and how well I knew it, so that I could separate my well-founded opinions from my provisional views. Bad investors, Greg told me, often had flat, loosely drawn maps of their own knowledge, but good ones were careful cartographers, distinguishing between settled, surveyed, and unexplored territories. Through all this, our lives unfolded. Around the time I left my grad program to try out journalism, Greg swooned over his girlfriend’s rational mind, married her, and became a director at a hedge fund. His net worth is now several thousand times my own.

Meanwhile, half of Americans won’t get vaccinated; many believe in conspiracy theories or pseudoscience. It’s not that we don’t think—we are constantly reading, opining, debating—but that we seem to do it on the run, while squinting at trolls in our phones. This summer, on my phone, I read a blog post by the economist Arnold Kling, who noted that an unusually large number of books about rationality were being published this year, among them Steven Pinker’s “Rationality: What It Is, Why It Seems Scarce, Why It Matters” (Viking) and Julia Galef’s “The Scout Mindset: Why Some People See Things Clearly and Others Don’t” (Portfolio). It makes sense, Kling suggested, for rationality to be having a breakout moment: “The barbarians sack the city, and the carriers of the dying culture repair to their basements to write.” In a polemical era, rationality can be a kind of opinion hygiene—a way of washing off misjudged views. In a fractious time, it promises to bring the court to order. When the world changes quickly, we need strategies for understanding it. We hope, reasonably, that rational people will be more careful, honest, truthful, fair-minded, curious, and right than irrational ones.

And yet rationality has sharp edges that make it hard to put at the center of one’s life. It’s possible to be so rational that you are cut off from warmer ways of being—like the student Bazarov, in Ivan Turgenev’s “Fathers and Sons,” who declares, “I look up to heaven only when I want to sneeze.” (Greg, too, sometimes worries that he is rational to excess—that he is becoming a heartless boss, a cold fish, a robot.) You might be well-intentioned, rational, and mistaken, simply because so much in our thinking can go wrong. (“RATIONAL, adj.: Devoid of all delusions save those of observation, experience and reflection,” Ambrose Bierce wrote, in his “Devil’s Dictionary.”) You might be rational and self-deceptive, because telling yourself that you are rational can itself become a source of bias. It’s possible that you are trying to appear rational only because you want to impress people; or that you are more rational about some things (your job) than others (your kids); or that your rationality gives way to rancor as soon as your ideas are challenged. Perhaps you irrationally insist on answering difficult questions yourself when you’d be better off trusting the expert consensus. Possibly, like Mr. Spock, of “Star Trek,” your rational calculations fail to account for the irrationality of other people. (Surveying Spock’s predictions, Galef finds that the outcomes Spock has determined to be impossible actually happen about eighty per cent of the time, often because he assumes that other people will be as “logical” as he is.)

Not just individuals but societies can fall prey to false or compromised rationality. In a 2014 book, “The Revolt of the Public and the Crisis of Authority in the New Millennium,” Martin Gurri, a C.I.A. analyst turned libertarian social thinker, argued that the unmasking of allegedly pseudo-rational institutions had become the central drama of our age: people around the world, having concluded that the bigwigs in our colleges, newsrooms, and legislatures were better at appearing rational than at being so, had embraced a nihilist populism that sees all forms of public rationality as suspect. COVID deniers and climate activists are different kinds of people, but they’re united in their frustration with the systems built by experts on our behalf—both groups picture élites shuffling PowerPoint decks in Davos while the world burns. From this perspective, the root cause of mass irrationality is the failure of rationalists. People would believe in the system if it actually made sense.


And yet modern life would be impossible without those rational systems; we must improve them, not reject them. We have no choice but to wrestle with rationality—an ideal that, the sociologist Max Weber wrote, “contains within itself a world of contradictions.” We want to live in a more rational society, but not in a falsely rationalized one. We want to be more rational as individuals, but not to overdo it. We need to know when to think and when to stop thinking, when to doubt and when to trust. Rationality is one of humanity’s superpowers. How do we keep from misusing it?

Writing about rationality in the early twentieth century, Weber saw himself as coming to grips with a titanic force—an ascendant outlook that was rewriting our values. He talked about rationality in many different ways. We can practice the instrumental rationality of means and ends (how do I get what I want?) and the value rationality of purposes and goals (do I have good reasons for wanting what I want?). We can pursue the rationality of affect (am I cool, calm, and collected?) or develop the rationality of habit (do I live an ordered, or “rationalized,” life?). Rationality was obviously useful, but Weber worried that it was turning each individual into a “cog in the machine,” and life into an “iron cage.” Today, rationality and the words around it are still shadowed with Weberian pessimism and cursed with double meanings. You’re rationalizing the org chart: are you bringing order to chaos, or justifying the illogical?

The Weberian definitions of rationality are by no means canonical. In “The Rationality Quotient: Toward a Test of Rational Thinking” (M.I.T.), from 2016, the psychologists Keith E. Stanovich, Richard F. West, and Maggie E. Toplak call rationality “a torturous and tortured term,” in part because philosophers, sociologists, psychologists, and economists have all defined it differently. For Aristotle, rationality was what separated human beings from animals. For the authors of “The Rationality Quotient,” it’s a mental faculty, parallel to but distinct from intelligence, which involves a person’s ability to juggle many scenarios in her head at once, without letting any one monopolize her attention or bias her against the rest. It’s because some people are better jugglers than others that the world is full of “smart people doing dumb things”: college kids getting drunk the night before a big exam, or travellers booking flights with impossibly short layovers.

Galef, who hosts a podcast called “Rationally Speaking” and co-founded the nonprofit Center for Applied Rationality, in Berkeley, barely uses the word “rationality” in her book on the subject. Instead, she describes a “scout mindset,” which can help you “to recognize when you are wrong, to seek out your blind spots, to test your assumptions and change course.” (The “soldier mindset,” by contrast, encourages you to defend your positions at any cost.) Galef tends to see rationality as a method for acquiring more accurate views. Pinker, a cognitive and evolutionary psychologist, sees it instrumentally, as “the ability to use knowledge to attain goals.” By this definition, to be a rational person you have to know things, you have to want things, and you have to use what you know to get what you want. Intentions matter: a person isn’t rational, Pinker argues, if he solves a problem by stumbling on a strategy “that happens to work.”

Introspection is key to rationality. A rational person must practice what the neuroscientist Stephen Fleming, in “Know Thyself: The Science of Self-Awareness” (Basic Books), calls “metacognition,” or “the ability to think about our own thinking”—“a fragile, beautiful, and frankly bizarre feature of the human mind.” Metacognition emerges early in life, when we are still struggling to make our movements match our plans. (“Why did I do that?” my toddler asked me recently, after accidentally knocking his cup off the breakfast table.) Later, it allows a golfer to notice small differences between her first swing and her second, and then to fine-tune her third. It can also help us track our mental actions. A successful student uses metacognition to know when he needs to study more and when he’s studied enough: essentially, parts of his brain are monitoring other parts.

In everyday life, the biggest obstacle to metacognition is what psychologists call the “illusion of fluency.” As we perform increasingly familiar tasks, we monitor our performance less rigorously; this happens when we drive, or fold laundry, and also when we think thoughts we’ve thought many times before. Studying for a test by reviewing your notes, Fleming writes, is a bad idea, because it’s the mental equivalent of driving a familiar route. “Experiments have repeatedly shown that testing ourselves—forcing ourselves to practice exam questions, or writing out what we know—is more effective,” he writes. The trick is to break the illusion of fluency, and to encourage an “awareness of ignorance.”

Fleming notes that metacognition is a skill. Some people are better at it than others. Galef believes that, by “calibrating” our metacognitive minds, we can improve our performance and so become more rational. In a section of her book called “Calibration Practice,” she offers readers a collection of true-or-false statements (“Mammals and dinosaurs coexisted”; “Scurvy is caused by a deficit of Vitamin C”); your job is to weigh in on the veracity of each statement while also indicating whether you are fifty-five, sixty-five, seventy-five, eighty-five, or ninety-five per cent confident in your determination. A perfectly calibrated individual, Galef suggests, will be right seventy-five per cent of the time about the answers in which she is seventy-five per cent confident. With practice, I got fairly close to “perfect calibration”: I still answered some questions wrong, but I was right about how wrong I would be.
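Galef’s scoring rule lends itself to a few lines of code. The sketch below groups answers by stated confidence and compares each group to its actual hit rate; the sample data are invented for illustration, not drawn from her book.

```python
from collections import defaultdict

def calibration(answers):
    """Group (stated_confidence, was_correct) pairs by confidence level
    and return the actual hit rate at each level."""
    buckets = defaultdict(list)
    for confidence, correct in answers:
        buckets[confidence].append(correct)
    return {c: sum(hits) / len(hits) for c, hits in sorted(buckets.items())}

# Invented sample: four answers given at 75% confidence, two at 95%.
answers = [(0.75, True), (0.75, True), (0.75, True), (0.75, False),
           (0.95, True), (0.95, True)]
print(calibration(answers))  # {0.75: 0.75, 0.95: 1.0}
```

A perfectly calibrated answerer is one whose hit rate at each level matches the level itself—right three times out of four when seventy-five per cent sure.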

There are many calibration methods. In the “equivalent bet” technique, which Galef attributes to the decision-making expert Douglas Hubbard, you imagine that you’ve been offered two ways of winning ten thousand dollars: you can either bet on the truth of some statement (for instance, that self-driving cars will be on the road within a year) or reach blindly into a box full of balls in the hope of retrieving a marked ball. Suppose the box contains four balls. Would you prefer to answer the question, or reach into the box? (I’d prefer the odds of the box.) Now suppose the box contains twenty-four balls—would your preference change? By imagining boxes with different numbers of balls, you can get a sense of how much you really believe in your assertions. For Galef, the box that’s “equivalent” to her belief in the imminence of self-driving cars contains nine balls, suggesting that she has eleven-per-cent confidence in that prediction. Such techniques may reveal that our knowledge is more fine-grained than we realize; we just need to look at it more closely. Of course, we could be making out detail that isn’t there.
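The arithmetic behind the equivalent bet is just a ratio—assuming, as in the examples above, one marked ball per box:

```python
def implied_confidence(balls_in_box, marked=1):
    """If drawing a marked ball from the box feels like the same bet as
    your claim being true, your implied confidence is marked / balls."""
    return marked / balls_in_box

print(round(implied_confidence(9), 2))   # 0.11 — Galef's nine-ball box
print(round(implied_confidence(4), 2))   # 0.25 — the four-ball box
```

The technique works by inverting the question: rather than asking yourself for a probability directly, you search for the box size at which your preference flips, and read the probability off the box.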

Knowing about what you know is Rationality 101. The advanced coursework has to do with changes in your knowledge. Most of us stay informed straightforwardly—by taking in new information. Rationalists do the same, but self-consciously, with an eye to deliberately redrawing their mental maps. The challenge is that news about distant territories drifts in from many sources; fresh facts and opinions aren’t uniformly significant. In recent decades, rationalists confronting this problem have rallied behind the work of Thomas Bayes, an eighteenth-century mathematician and minister. So-called Bayesian reasoning—a particular thinking technique, with its own distinctive jargon—has become de rigueur.

There are many ways to explain Bayesian reasoning—doctors learn it one way and statisticians another—but the basic idea is simple. When new information comes in, you don’t want it to replace old information wholesale. Instead, you want it to modify what you already know to an appropriate degree. The degree of modification depends both on your confidence in your preëxisting knowledge and on the value of the new data. Bayesian reasoners begin with what they call the “prior” probability of something being true, and then find out if they need to adjust it.

Consider the example of a patient who has tested positive for breast cancer—a textbook case used by Pinker and many other rationalists. The stipulated facts are simple. The prevalence of breast cancer in the population of women—the “base rate”—is one per cent. When breast cancer is present, the test detects it ninety per cent of the time. The test also has a false-positive rate of nine per cent: that is, nine per cent of the time it delivers a positive result when it shouldn’t. Now, suppose that a woman tests positive. What are the chances that she has cancer?

When actual doctors answer this question, Pinker reports, many say that the woman has a ninety-per-cent chance of having it. In fact, she has about a nine-per-cent chance. The doctors have the answer wrong because they are putting too much weight on the new information (the test results) and not enough on what they knew before the results came in—the fact that breast cancer is a fairly infrequent occurrence. To see this intuitively, it helps to shuffle the order of your facts, so that the new information doesn’t have pride of place. Start by imagining that we’ve tested a group of a thousand women: ten will have breast cancer, and nine will receive positive test results. Of the nine hundred and ninety women who are cancer-free, eighty-nine will receive false positives. Now you can allow yourself to focus on the one woman who has tested positive. To calculate her chances of getting a true positive, we divide the number of positive tests that actually indicate cancer (nine) by the total number of positive tests (ninety-eight). That gives us about nine per cent.
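The same head count can be expressed directly as Bayes’ rule. A minimal sketch, using only the stipulated numbers (one-per-cent base rate, ninety-per-cent detection rate, nine-per-cent false-positive rate):

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) by Bayes' theorem."""
    true_positives = prior * sensitivity                     # 0.009
    false_positives = (1 - prior) * false_positive_rate      # 0.0891
    return true_positives / (true_positives + false_positives)

print(round(posterior(0.01, 0.90, 0.09), 3))  # 0.092 — about nine per cent
```

The denominator is what the doctors in Pinker’s example neglect: the positives that come from the vast cancer-free majority swamp the positives that come from the tiny group who are actually sick.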

Bayesian reasoning is an approach to statistics, but you can use it to interpret all sorts of new information. In the early hours of September 26, 1983, the Soviet Union’s early-warning system detected the launch of intercontinental ballistic missiles from the United States. Stanislav Petrov, a forty-four-year-old duty officer, saw the warning. He was charged with reporting it to his superiors, who probably would have launched a nuclear counterattack. But Petrov, who in all likelihood had never heard of Bayes, nevertheless employed Bayesian reasoning. He didn’t let the new information determine his reaction all on its own. He reasoned that the probability of an attack on any given night was low—comparable, perhaps, to the probability of an equipment malfunction. Simultaneously, in judging the quality of the alert, he noticed that it was in some ways unconvincing. (Only five missiles had been detected—surely a first strike would be all-out?) He decided not to report the alert, and saved the world.

Bayesian reasoning implies a few “best practices.” Start with the big picture, fixing it firmly in your mind. Be cautious as you integrate new information, and don’t jump to conclusions. Notice when new data points do and do not alter your baseline assumptions (most of the time, they won’t alter them), but keep track of how often those assumptions seem contradicted by what’s new. Beware the power of alarming news, and proceed by putting it in a broader, real-world context.

In a sense, the core principle is mise en place. Keep the cooked information over here and the raw information over there; remember that raw ingredients often reduce over heat. But the real power of the Bayesian approach isn’t procedural; it’s that it replaces the facts in our minds with probabilities. Where others might be completely convinced that G.M.O.s are bad, or that Jack is trustworthy, or that the enemy is Eurasia, a Bayesian assigns probabilities to these propositions. She doesn’t build an immovable world view; instead, by continually updating her probabilities, she inches closer to a more useful account of reality. The cooking is never done.

Applied to specific problems—Should you invest in Tesla? How bad is the Delta variant?—the techniques promoted by rationality writers are clarifying and powerful. But the rationality movement is also a social movement; rationalists today form what is sometimes called the “rationality community,” and, as evangelists, they hope to increase its size. The rationality community has its own lingua franca. If a rationalist wants to pay you a big compliment, she might tell you that you have caused her to “revise her priors”—that is, to alter some of her well-justified prior assumptions. (On her mental map, a mountain range of possibilities has gained or lost probabilistic altitude.) That same rationalist might talk about holding a view “on the margin”—a way of saying that an idea or fact will be taken into account, as a kind of tweak on a prior, the next time new information comes in. (Economists use the concept of “marginal utility” to describe how we value things in series: the first nacho is delightful, but the marginal utility of each additional nacho decreases relative to that of a buffalo wing.) She might speak about “updating” her opinions—a cheerful and forward-looking locution, borrowed from the statistical practice of “Bayesian updating,” which rationalists use to destigmatize the act of admitting a mistake. In use, this language can have a pleasingly deliberate vibe, evoking the feeling of an edifice being built. “Every so often a story comes along that causes me to update my priors,” the economist Tyler Cowen wrote, in 2019, in response to the Jeffrey Epstein case. “I am now, at the margin, more inclined to the view that what keeps many people on good behavior is simply inertia.”

In Silicon Valley, people wear T-shirts that say “Update Your Priors,” but talking like a rationalist doesn’t make you one. A person can drone on about base rates with which he’s only loosely familiar, or say that he’s revising his priors when, in fact, he has only ordinary, settled opinions. Google makes it easy to project faux omniscience. A rationalist can give others and himself the impression of having read and digested a whole academic subspecialty, as though he’d earned a Ph.D. in a week; still, he won’t know which researchers are trusted by their colleagues and which are ignored, or what was said after hours at last year’s conference. There’s a difference between reading about surgery and actually being a surgeon, and the surgeon’s priors are what we really care about. In a recent interview, Cowen—a superhuman reader whose blog, Marginal Revolution, is a daily destination for info-hungry rationalists—told Ezra Klein that the rationality movement has adopted an “extremely culturally specific way of viewing the world.” It’s the culture, more or less, of winning arguments in Web forums. Cowen suggested that to understand reality you must not just read about it but see it firsthand; he has grounded his priors in visits to about a hundred countries, once getting caught in a shoot-out between a Brazilian drug gang and the police.

Clearly, we want people in power to be rational. And yet the sense that rationalists are somehow unmoored from direct experience can make the idea of a rationalist with power unsettling. Would such a leader be adrift in a matrix of data, more concerned with tending his map of reality than with the people contained in that reality? In a sketch by the British comedy duo Mitchell and Webb, a government minister charged with ending a recession asks his analysts if they’ve considered “killing all the poor.” “I’m not saying do it—I’m just saying run it through the computer and see if it would work,” he tells them. (After they say it won’t, he proposes “blue-skying” an even more senseless alternative: “Raise V.A.T. and kill all the poor.”) This caricature echoes a widespread skepticism of rationality as a value system. When the Affordable Care Act was wending its way through Congress, conservatives worried that similar proposals would pop up on “death panels,” where committees of rational experts would suggest lowering health-care costs by killing the aged. This fear, of course, was sharpened by the fact that we really do spend too much money on health care in the last few years of life. It’s up to rationalists to do the uncomfortable work of pointing out uncomfortable truths; sometimes in doing this they seem a little too comfortable.

In our personal lives, the dynamics are different. Our friends don’t have power over us; the best they can do is nudge us in better directions. Elizabeth Bennet, the protagonist of “Pride and Prejudice,” is intelligent, imaginative, and thoughtful, but it’s Charlotte Lucas, her best friend, who is rational. Charlotte uses Bayesian reasoning. When their new acquaintance, Mr. Darcy, is haughty and dismissive at a party, she gently urges Lizzy to remember the big picture: Darcy is “so very fine a young man, with family, fortune, everything in his favour”; in meeting him, therefore, one’s prior should be that rich, good-looking people often preen at parties; such behavior is not, in itself, revelatory. When Charlotte marries Mr. Collins, an irritating clergyman with a secure income, Lizzy is appalled at the match—but Charlotte points out that the success of a marriage depends on many factors, including financial ones, and suggests that her own chances of happiness are “as fair as most people can boast on entering the marriage state.” (In modern times, the base rates would back her up: although almost fifty per cent of marriages end in divorce, the proportion is lower among higher-income people.) It’s partly because of Charlotte’s example that Lizzy looks more closely at Mr. Darcy, and discovers that he is flawed in predictable ways but good in unusual ones. Rom-com characters often have passionate friends who tell them to follow their hearts, but Jane Austen knew that really it’s rational friends we need.

In fact, as Charlotte shows, the manner of a kind rationalist can verge on courtliness, which hints at deeper qualities. Galef describes a typically well-mannered exchange on the now defunct Web site ChangeAView. A male blogger, having been told that one of his posts was sexist, strenuously defended himself at first. Then, in a follow-up post titled “Why It’s Plausible I’m Wrong,” he carefully summarized the best arguments made against him; eventually, he announced that he’d been convinced of the error of his ways, apologizing not just to those he’d offended but to those who had sided with him for reasons that he now believed to be mistaken. Impressed by his sincere and open-minded approach, Galef writes, she sent the blogger a private message. Reader, they got engaged.

The rationality community could make a fine setting for an Austen novel written in 2021. Still, we might ask, How much credit should rationality get for drawing Galef and her husband together? It played a role, but rationality isn’t the only way to understand the traits she perceived. I’ve long admired my friend Greg for his rationality, but I’ve since updated my views. I think it’s not rationality, as such, that makes him curious, truthful, honest, careful, perceptive, and fair, but the reverse.

In “Rationality,” “The Scout Mindset,” and other similar books, irrationality is often presented as a form of misbehavior, which might be rectified through education or socialization. This is surely right in some cases, but not in all. One spring, when I was in high school, a cardinal took to flying at our living-room window, and my mother—who was perceptive, funny, and intelligent, but not particularly rational—became convinced that it was a portent. She’d sometimes sit in an armchair, waiting for it, watchful and unnerved. Similar events—a torn dollar bill found on the ground, a flat tire on the left side of the car rather than the right—could cast shadows over her mood for days, sometimes weeks. As a voter, a parent, a worker, and a friend, she was driven by emotion. She had a stormy, poetic, and troubled personality. I don’t think she would have been helped much by a book about rationality. In a sense, such books are written for the already rational.

My father, by contrast, is a doctor and a scientist by profession and disposition. When I was a kid, he told me that Santa Claus wasn’t real long before I figured it out; we talked about physics, computers, biology, and “Star Trek,” agreeing that we were Spocks, not Kirks. My parents divorced decades ago. But recently, when my mother had to be discharged from a hospital into a rehab center, and I was nearly paralyzed with confusion about what I could or should do to shape where she’d end up, he patiently, methodically, and judiciously walked me through the scenarios on the phone, exploring each forking path, sorting the inevitabilities from the possibilities, holding it all in his head and communicating it dispassionately. All this was in keeping with his character.

I’ve spent decades trying to be rational. So why did I feel paralyzed while trying to direct my mother’s care? Greg tells me that, in his business, it’s not enough to have rational thoughts. Someone who’s used to pondering questions at leisure might struggle to learn and reason when the clock is ticking; someone who is good at reaching rational conclusions might not be willing to sign on the dotted line when the time comes. Greg’s hedge-fund colleagues describe as “commercial”—a compliment—someone who is not only rational but timely and decisive. An effective rationalist must be able to short the mortgage market today, or commit to a particular rehab center now, even though we live in a world of Bayesian probabilities. I know, rationally, that the coronavirus poses no significant risk to my small son, and yet I still hesitated before enrolling him in daycare for this fall, where he could make friends. You can know what’s right but still struggle to do it.

Following through on your own conclusions is one challenge. But a rationalist must also be “metarational,” willing to hand over the thinking keys when someone else is better informed or better trained. This, too, is harder than it sounds. Intellectually, we understand that our complex society requires the division of both practical and cognitive labor. We accept that our knowledge maps are limited not just by our smarts but by our time and interests. Still, like Gurri’s populists, rationalists may stage their own contrarian revolts, repeatedly finding that no one’s opinions but their own are defensible. In letting go, as in following through, one’s whole personality gets involved. I found it possible to be metarational with my dad not just because I respected his mind but because I knew that he was a good and cautious person who had my and my mother’s best interests at heart. I trusted that, unlike the minister in the Mitchell and Webb sketch, he would care enough to think deeply about my problem. Caring is not enough, of course. But, between the two of us, we had the right ingredients—mutual trust, mutual concern, and a shared commitment to reason and to act.

The realities of rationality are humbling. Know things; want things; use what you know to get what you want. It sounds like a simple formula. But, in truth, it maps out a series of escalating challenges. In search of facts, we must make do with probabilities. Unable to know it all for ourselves, we must rely on others who care enough to know. We must act while we are still uncertain, and we must act in time—sometimes individually, but often together. For all this to happen, rationality is necessary, but not sufficient. Thinking straight is just part of the work. ♦


Among the A.I. Doomsayers

By Andrew Marantz

Can We Get Kids Off Smartphones?

By Jessica Winter

Arguing Ourselves to Death

By Jay Caspian Kang

Percival Everett Can’t Say What His Novels Mean

By Maya Binyam

Oxford University Press's Academic Insights for the Thinking World

What is the value of rationality, and why does it matter?


The Value of Rationality

  • By Ralph Wedgwood
  • December 16th 2017

Rationality is a widely discussed term. Economists and other social scientists routinely talk about rational agents making rational choices in light of their rational expectations. It’s also common in philosophy, especially in those areas that are concerned with understanding and evaluating human thinking, actions, and institutions. But what exactly is rationality? In the past, most philosophers assumed that the central notion of rationality is a normative or evaluative concept: to think rationally is to think ‘properly’ or ‘well’—in other words, to think as one ‘should’ think. Rational thinking is in a sense good thinking, while irrational thinking is bad. Recently, however, philosophers have raised several objections to that assumption.

First of all, how can it be true that you should never think irrationally, if you sometimes can’t help it?

Secondly, picture a scenario where you would be punished for thinking rationally—wouldn’t it be good to think irrationally in this case and bad to keep on thinking rationally?

And finally, rationality requires that our mental states (in other words, our beliefs, choices, and attitudes in general) are consistent and coherent. But why is that important, and what is so good about it?

Having considered these three objections, we can now ask which side is right: does thinking ‘rationally’ mean thinking ‘well’ and ‘properly’, or not? Looking at both sides of the issue, it becomes evident that considerable philosophical argument and analysis are still needed before we can reach a conclusion, in part because the problem itself is not clearly defined: we do not yet know the meaning of some of the key terms. As a next step in the analysis, then, we will review some recent work in linguistics, specifically semantics.


Most linguists accept that every concept expressed by ‘should’ implies some concept that can be expressed by ‘can.’ But there are many different shades of ‘can.’ So, even if there is a strong sense of ‘can’ that makes it true that you ‘can’t help’ thinking as irrationally as you do, there could still be a weaker sense of ‘can’ that makes it true that you ‘can’ think more rationally than you do. In this way, we may be able to answer the first objection: the sense in which it is true that we ‘should think rationally’ implies one of these weaker senses of ‘can’, which make it true that we ‘can’ think more rationally than we do.
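The point about grades of ‘can’ can be put schematically in the notation of deontic logic (a sketch in our own notation, not the article’s): ‘should implies can’ is a bridge principle linking an obligation operator to a possibility operator, and different senses of ‘can’ correspond to different possibility operators.

```latex
% "Should implies can" as a bridge principle between a deontic
% operator O ("it should be that") and a possibility operator
% \Diamond ("it can be that"):
\[
  O\varphi \rightarrow \Diamond\varphi
\]
% Different senses of "can" correspond to different possibility
% operators \Diamond_i (physical, psychological, idealised, ...),
% yielding stronger and weaker readings of the principle:
\[
  O\varphi \rightarrow \Diamond_i\,\varphi
\]
```

On this reading, the first objection is answered by choosing an appropriately weak \(\Diamond_i\): the sense in which we ‘should’ think rationally only requires a sense of ‘can’ on which more rational thinking is open to us.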

The same sort of differentiation may help with the second and third objections. The meaning of terms like ‘good’, ‘well’, and ‘properly’ changes in different circumstances. Think about the scenario in which you would be punished for thinking rationally, and rewarded for doing the opposite. In one sense of good, it is good in this case to think irrationally, but in another sense, it remains good for you to think rationally, because rational thinking in itself is always good.

This line of argument, however, raises further questions rather than answering ours: we now need to define the sense of ‘good’ in which rational thinking is always ‘good.’ Here is a proposal about how to answer these further questions. When you have a belief, or when you choose a course of action, you have a goal—the goal of getting things right. After all, it would be absurd to say, “I know that this is the right thing to believe, but why should I believe it?” To get things right, your beliefs and choices must fit with the external world.

However, your beliefs and choices cannot be directly guided by what is happening in the external world. They can only be directly guided by what is going on in your mind. Rationality, in the end, is the feature of your mind that guides you—ideally (if you’re lucky) towards the goal of getting things right.

Suppose that your belief does get things right in this way. The fact that you succeeded in getting things right is explained in part by the fact that you were thinking rationally. In other words, rationality matters because rationality is the means by which we pursue the goal of getting things right.


Ralph Wedgwood is a Professor of Philosophy at the University of Southern California. He is the author of The Value of Rationality, The Nature of Normativity, and around fifty articles in various volumes and philosophy journals.


Recent Comments

Perhaps we should also view ‘rationality’ from a ‘reasonable’ perspective.

For instance, one could reasonably argue that, both qualitatively and quantitatively, any belief (i.e., the perceived content of a well-defined declarative sentence) is necessarily associated with a suitably-defined truth assignation that must fall into one or more of the following three categories:

(i) beliefs that we hold to be ‘true’ in an absolute, Platonic, sense, and have in common with others holding beliefs similarly;

(ii) beliefs that we hold to be ‘true’—short of Platonic belief—since they can be treated as self-evident, and have in common with others who also hold them as similarly self-evident;

(iii) beliefs that we agree to define as ‘true’ on the basis of a convention, and have in common with others who accept the same convention for assigning truth values to such assertions.

Clearly the three categories of beliefs have associated truth assignations with increasing degrees of objective accountability (i.e., accountability based on evidence-based reasoning) which must, in turn, influence the psyche of whoever is exposed to a particular category at a particular moment of time.

Thus, zealots might be categorised as irrational agents since they accept all three as definitive; prophets as reasonable agents since they hold only (ii) and (iii) as definitive; and scientists as rational agents since they hold only (iii) as definitive.

If rational thinking is “good” or “proper” thinking, it has to be better than something else. The article suggests “irrational” thinking, but I don’t find that much help. I suspect most of us would contrast “rational” thinking with “emotional” thinking, which suggests a difference, not just in outcomes, but two fundamentally different kinds of thinking, each rising from very different activities in the brain and body.

I also suspect most of us would consider “rational” thinking to be a later, and more refined evolutionary development – a specifically human kind of thinking – an historical development that came into its own during the time of classical Greek culture.

To evaluate the value and importance of “rational” thinking it should help to know how we came to have it. I suggest “rational” thinking developed as a way to reduce uncertainty in our increasingly complex, culturally driven species.

Most creatures live “in the moment”. They don’t know about tomorrow afternoon, much less a week from Friday, and so they have not developed, and could not use, a kind of thinking that considered all the possible events between now and then. We live in the moment, the hour, the day, the week, the year, the generation, our cultural age, in history. For us, necessity has been the mother of invention. It has brought us stories, history, writing, counting, money, the Rosetta stone, books, libraries, newspapers, radio, television, computers, and artificial intelligence. None of this could come from the kind of thinking that came packaged in the box when our species was new.

We need to plan for layer upon layer of overlapping slices of time and so our level of uncertainty and our need for information is not only vastly greater than any other species, it is continually increasing. “Rational” thinking has been our answer to that need. It has worked pretty well, but at some level we don’t like it. Compared to “emotional” thinking, to going with our “gut” it seems contrived, slightly unnatural.

The author asks, “What is the value of rationality and why does it matter?” I have drifted pretty far from his analysis, but this is where the question led me. I ask the author’s indulgence and thank him for making me think.


The myth of rational thinking

Why our pursuit of rationality leads to explosions of irrationality.



Are human beings uniquely irrational creatures? And if we are, what are the consequences of basing our society on the opposite assumption?

These are questions Justin E.H. Smith, a philosopher at the University of Paris, takes up in his new book, Irrationality: A History of the Dark Side of Reason . He pokes holes in the story humans in the Western world have been telling themselves for centuries: that we were once blinkered by myth and superstition, but then the ancient Greeks discovered reason and, later, the Enlightenment cemented rationality as the highest value in human life.

Smith argues that this is a flattering but false story. Humans, he says, are hardly rational, and in fact, irrationality has defined much of human life and history. And the point is not merely academic. “The desire to impose rationality, to make people or society more rational,” he writes, “mutates ... into spectacular outbursts of irrationality.”

If Smith is right, that leaves us in a precarious position. If we can’t impose order on society, what are we supposed to do? Should we not strive to incentivize rationality as much as possible? Should we rethink the role of reason in human life?

I put these and other questions to Smith in a recent interview. A lightly edited transcript of our conversation follows.

Sean Illing

It’s hard to sum up the thesis of a book like this. How would you characterize it?

Justin E.H. Smith

The thesis is that the 20th-century philosophers T.W. Adorno and Max Horkheimer were basically correct when they argued that the Enlightenment world has an innate tendency to degenerate into myth, reason into unreason. And that this tendency of reason toward unreason is exacerbated by overly ambitious efforts to suppress or eliminate unreason. I think this is true both at the level of individual reason, or “psychology,” as well as at the level of society as a whole.

Some examples of this will help clarify what you mean, but first let’s back up a little. We have this idea, which goes all the way back to Aristotle, that human beings are distinguished from other animals by their capacity for reason. Is this a misleading picture? Should we not think of humans as uniquely rational creatures?

This is the traditional view. There is a counter-tradition, however, which says that human beings are the uniquely irrational animal. On this view, animals are rational to the extent that they do not get mired in deliberation and hesitation, but always just cut right to the chase and execute those actions that are perfectly suited to the sort of creatures they are, while we human beings stand there paralyzed by doubt and worry.

I am sympathetic to this view, though it can be carried too far. Obviously, we have been able to choose the correct course of action enough of the time to survive long enough to reproduce. We are a successful species, but not exceptionally so, and as far as I can tell not in virtue of being exceptionally well-endowed with reason.

That’s certainly one way to think of rationality. By that standard, you might say that human beings are cursed with too much consciousness, that our obsession with thinking creates more problems than it solves.

You might say that. But it’s not as if we think just because we are obsessed with thinking. Presumably, we human beings, as well as our hominid and pre-hominid ancestors, thought for a very long time before we began thinking about how this is possible and how it can go wrong.

Well, let’s talk about how it can go wrong. You write: “The desire to impose rationality, to make people or society more rational, mutates ... into spectacular outbursts of irrationality.” Can you give me an example of what you mean here?

The clearest instance in the book, which I set up as a sort of foundational myth, is the Pythagorean cult in the fifth century BC, which becomes so devoted to the perfect rationality of mathematics that it has trouble dealing with the discovery of the existence of irrational numbers . And so when one of its own, Hippasus of Metapontum, starts telling people outside the group that the world can’t be explained by mathematics alone, legend has it that the leader of the group had him drowned in a fit of anger.

The 18th-century French playwright and activist Olympe de Gouges is another example. In the spirit of reason, she famously argued that whatever the Declaration of the Rights of Man and of the Citizen — the civil rights document produced by the French Revolution in 1789 — said about men must also apply to women. And for that, the Jacobins cut her head off. So the response to her perfect rationality was extreme, murderous irrationality.

Something similar has followed countless revolutions since 1789, and many of these revolutions, notably the Marxist ones, have been at least nominally committed to the rational restructuring of society. I gather some Marxists are perfectly fine with seeing heads roll, and assume that it will only be the right heads that roll next time around and all present-day descendants of Olympe de Gouges will be spared. Or maybe they think it will never actually come to that.


I think it’s obvious enough why humans are irrational, but where does this mania for rationality come from? Why are we so desperate to impose order on the world and society in the first place?

I think we just got a bit carried away. In the modern period, anyway, rationality became a value first in science and technology, where it plainly had its place. Making the correct inferences and following the correct method meant more scientific breakthroughs, which meant faster and more powerful machines.

But then the idea caught on that society itself is a big machine, and that the human being is a small sub-machine within the big machine of society, and that these two kinds of machine can be perfected in the same way that we have managed to perfect the steam engine, the telegraph, and so on.

But this has always been a misguided approach to psychology and politics, based on a weak metaphor drawn from a narrow domain of human life — mechanical engineering — in which we actually do have a pretty good understanding of how things work and of how problems are fixed.

I wonder where all this leaves us. There are obviously limits to reason, and we can only do so much to curb our worst impulses. At the same time, we want a world that is more intelligent, more wise, more compassionate. But we also have to base our social and political systems on a realistic model of human nature.

I don’t really have any formulas to offer here. Caution, pragmatism, case-by-case consideration of questions of justice, all seem advisable to me. I am not a political theorist, let alone a policymaker, and I think I manage to get to the end of the book without pretending to be either of these.

In spite of everything I’ve said, I believe in some amount of redistributive justice, including taking away about 99.9 percent of the fortunes of Bezos, Zuckerberg, and others, and turning the big tech companies into public utilities. I just think this should be done with good laws and broad public support, in such a way as to make it inevitable and ultimately painless for everyone (after all, these men would still be multimillionaires after the great confiscation).

The big error of so many schemes to rationally improve the human condition has been to spread the belief that there must be some great event in order for the new order of things to take hold, that rationality must be stoked by irrationality in order to work. That’s Leninism in a nutshell. But if society is ever going to be organized rationally, getting there is going to be very boring.

I’m curious how you think about progress in a big-picture sense. Reading your book, I thought about the story people like Steven Pinker tell, which is essentially that human history is a bumpy but nonetheless steady march of reason and progress. What’s wrong with this narrative?

Some of the data are pretty compelling about overall improvements in human life. If you look at India just in the past few years, the number of people with access to plumbing has skyrocketed, and disease has correspondingly gone down significantly. This is part of the legacy of Narendra Modi, and it is likely that the new era of authoritarian capitalism, perfected by China with runners-up like Modi, Erdogan, Bolsonaro, and Trump trying to get in on the game, will likely involve some improvements in the standard of living, at least for members of favored groups.

But I’m not sure this counts as overall improvement. For one thing, it is proving to be, in the regimes I’ve cited, at the expense of someone else that the improvements are carried out. What’s more, it will all be for nothing if any of the apocalyptic scenarios of the near future, which all serious people take seriously, comes to pass.

I take Pinker’s point about how quality of life has improved, and yet I look at our civilization’s incapacity to curtail its own destruction. I look at the fact that we’ve built a civilization predicated on the destruction of our own environment, and we’re unable to change course because we’re too blinkered by short-term interests. That doesn’t feel like progress to me.

Pinker probably sincerely believes he’s got an answer to this question, but honestly, when I consider his argument about the steady improvement of things, I just want to say: Well, let’s check back in 50 years. Is the Amazon [River] still there? Have the nuclear weapons been used?

And as for the Enlightenment and the purported achievement of perpetual peace in Western Europe in the 20th century, is it not plain that these two great victories had to do, first of all, with the pillaging of the rest of the world, and, second of all, with the fact that since the end of World War II, Western Europe has been surrounded by two superpowers ready to blow up the world if anyone makes a false move? Of course Europeans have been behaving themselves!

Do you see the global resurgence of nativism and right-wing populism as a rejection of Enlightenment principles?

It’s an old dialectic. The populist right is articulating most of its opinions and aims in terms derived from the Enlightenment — distorted terms, but still the same terms. The clearest example of this is the invocation of “freedom of speech” as a bludgeon for pushing extreme-right ideology into the center of public discourse.

Is that to say that the rational and technocratic world built on Enlightenment principles will always produce these sorts of reactionary crises? And what exactly are these populist movements rejecting?

I think it’s a question of managing these tendencies so that they don’t rise to crisis level: managing them without heavy-handedly suppressing them, and at the same time without nurturing them. That’s a delicate balance, as we’ve been seeing in the past few years.

When I was a kid, I assumed it was good to allow Nazi parades in Skokie or wherever, in part because I believed this was an effective form of containment. I see now that I took for granted that these parades would never build to anything truly threatening, and I think it’s impossible to think that anymore. The parades have moved online, but with that minor difference accounted for, they are much, much larger than they were a few decades ago.

I’ll ask what might seem like a strange question: What’s the utility of irrationality in human life? How do our irrational instincts actually serve us?

I place a lot of good things under the heading “irrationality” — not just dreams but also drunkenness, stonedness, artistic creation, listening to stories by the campfire, enjoyment of music and dancing, all sorts of orgiastic revelry, mass events like concerts and sports matches, and so on. I think most people would agree that these things make life worth living. And I think it’s impossible to account for the value of these things in purely utilitarian terms.

I could make a utilitarian case for some of those things, but I know what you mean. Maybe the point here is that the choice isn’t between a rational or irrational society, but rather a question about how best to manage the tensions between these two forces.

That’s right. It’s all about managing it rather than suppressing it or, the opposite approach, letting it run loose. An analogy: scientists who study addiction have noted that, biochemically, eating disorders in some people are scarcely distinguishable from drug addictions. You can advise a person to quit heroin cold turkey, but what do you tell them if they’re addicted to food? Irrationality is more like food in this regard than like illicit drugs. You can’t eliminate it, but obviously if you’re bingeing, you’ve got a problem and should get some help.

If you’re right that we can’t contain our own stupidity, how should we think about the role of reason in human life?

I think the value of reason is exaggerated by some and downplayed by others. It’s also very often invoked disingenuously, as a bludgeon to assert one’s will. This is what Nietzsche understood so well about the history of rationalist philosophy, and it’s what we see vividly illustrated countless times each day by Twitter’s “reply guys,” who are always ready to jump in with a “Well, actually” to pretty much anything anyone says, and particularly if that person is a woman or someone they think they can easily upstage.

Now, what they are saying might be true and reasonable, but it’s just obvious that the reason they’re saying it has to do with self-glorification, venal ambition, and other base motives. From a certain point of view, the history of philosophy is a history of reply guys who just happen to be very good at masking the true nature of their project. I don’t necessarily think that, but that thought nevertheless comes to me whenever I hear someone exalting too fervently the importance and the power of reason.


Philosophy Now: a magazine of ideas


What Is It To Be Rational?

By V.B. Shneider

What is it to be rational? An individual appears to be rational insofar as his actions are rational. But what does it mean to act in a rational way? Let us turn to the notion of rationality as a characteristic of human activity, and to the phenomena which that notion describes.

The wide-ranging understanding of rationality carries a danger: a chosen meaning may be conveyed inaccurately in various contexts, and so the notion needs to be fixed in a definite meaning. This “fixation” presupposes the formulation of an exact definition. Choosing the basis for such a definition is no simple matter: since the names of notions bear, as a rule, no indication of being associated with this or that meaning, what arguments should be offered in defence of such a choice? There are two main ways to choose the basis for a definition.

The first is to turn to language, to the established tradition of using notions in various contexts. The main danger on this way is that conserving an initial meaning may put an end to unconventional interpretations of the notion, narrowing the sphere of its creative usage. Any notion depends on context, on the system within which it is considered. That is why, although philosophical categories have definite meanings, those meanings are mostly relative and liable to change even within the scope of one and the same philosophical tradition, depending on historical tradition, context, and the aspect of the problem. This is the situation as far as the notion of rationality is concerned.

In the twentieth century the problem of rationality became one of the central problems of philosophical investigation. The wide-ranging manifestations of “rationality phenomena” and the variety of methodological approaches in continental and Anglo-Saxon social philosophy, and in the philosophy of science, account for the great compass of meanings borne by the notion of rationality.

The second way is to turn to reality, to those phenomena for which there are no generally recognised terms, so that it is up to the researcher to choose a name to denote them. Thus an astronomer discovering a new comet is justified in giving it any name, however extravagant it may sound. In a scientific investigation, though, such freedom is greatly restricted by the fact that a word of an actual language carries a train of meanings likely to distort considerably the understanding of the phenomena it is used to denote. That is why the second way necessarily involves elements of the first, so as to provide the happiest notation for a given phenomenon.

Dictionaries of modern European languages (English, French, and German being the basic ones) testify that “norm”, “reason”, and “expediency” are registered among the most fundamental meanings of the word “rationality”. Hence, let us define rationality as reasonably based normativity which guarantees an expedient process of activity. The question “What is it to be rational?” might then be given the following answer, however general it may seem: a man is rational in his actions if they are performed in accordance with sensible reasons which make the aim he pursues attainable. Let us clarify this.

Any activity possesses a universal structure: aim, means, result. The aim, being an ideal image of a final result and a reflection of objectively existing demands, characterises the predictable result of the activity in the consciousness of an individual. It is a fundamental element of that structure: a mode of constructing activity, an integrating principle which reduces various actions to a system, and which possesses the quality of an absolute value within the universal structure of activity, supplying the evaluative basis of the activity itself. Means of activity, in the broadest sense of the word, include the whole complex of conditions, acts and things, methods and ways which make the attainment of the aim, its predictable result, possible. The result, as an element of activity, thus appears as the incarnation of the aim’s ideal image and project.

By this definition human activity is expedient, and since the phenomenon of rationality pertains exclusively to the sphere of human activity, everything rational is expedient. Expediency means the complete subordination of all the elements of an activity to its aim: a set of elements which necessarily results in the attainment of the aim. Let us now turn to another aspect of rationality, namely normativity, which will make our study of the phenomenon still more thorough.

There are two principal types of norms in cultural reality. The first is the traditional norm, which arises spontaneously in the process of social development, is as a rule anonymous, and is handed down by means of custom, imitation, and so on. This type includes customs and the informal norms of different groups. Side by side with such norms, however, there are norms which arise as a result of the reasonable activity of consciousness, or traditional norms critically reflected upon by reason. Norms of this type do not appear spontaneously, and they have an author. They are textually formulated and based on logical argumentation: juridical laws, administrative rules, technological standards, the Code Napoléon, and so forth. To this class we also assign the norms of morality and etiquette, although these possess a traditional character to a greater degree than legal or technological norms, and have neither an unambiguous, strict wording nor a single codificational origin.

We assume that such norms, socially reflected upon, textually expressed, and based on logical argumentation, underlie the rational activity of people. Hence not every kind of normatively regulated activity may be characterised as rational.

A theoretical model of rationality, then, is a model of human behaviour and thinking, of human activity as a whole, carried out in accordance with norms which find their substantiation in the analytical work of human reason.

By a reasonably grounded norm we mean a norm whose adoption follows from a certain piece of reasoning. In the simplest case, the logical form of such grounding is a simple syllogism.

There are, in fact, two modes of reasonably grounding norms: the valuational and the normative. The first appeals to the sphere of values and to the relation between norms and values; it may be used wherever norms can be reduced to values. For instance:

A good action is obligatory. The observance of a technological process is a good action. Consequently, the observance of a technological process is obligatory.

But it is also possible to ground norms within the normative sphere itself. This requires the introduction of normative postulates (so-called presumptions). For example:

An action in accordance with the rules (the law) is obligatory. The observance of a technological process is an action in accordance with the rules (the law). Consequently, the observance of a technological process is obligatory.

It should be noted that these examples illustrate only the principal modes of grounding norms, and at the most elementary level.
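Both examples instantiate a single logical schema. As a sketch in predicate notation (the symbols P, O and a are our labels, not the author's):

```latex
% Our formalisation of the two syllogisms above (not the author's notation).
% P(x) reads "x is a good action" (valuational mode) or "x accords with the
% rules/law" (normative mode); O(x) reads "x is obligatory"; the constant a
% denotes the observance of the technological process.
\[
\frac{\forall x\,\bigl(P(x) \rightarrow O(x)\bigr) \qquad P(a)}{O(a)}
\]
```

The two modes of grounding differ only in how the major premise is itself justified: by appeal to values, or by a normative postulate.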

With the term “rationality” we wish to embrace those aspects of human activity which involve the analytical capacity of reason, methodical planning, pragmatic calculation and expediency. Such activity, in our view, is carried out through normative means.

There is no denying that reason (in the most common sense) gives rise to the “rational”. Reason as a human capacity naturally extends into the sphere of human activity, and the latter acquires a reasonable character. But rationality, in our opinion, chiefly characterises the formal aspects of activity, its technological side. Rationality is connected with the analytical, systematising and calculating functions of human reason, with the idea of method and algorithm.

Thus, rational activity is normatively realised activity, but not merely activity generally accepted as due: it is only that activity which is realised in accordance with reasonably grounded normativity, which necessarily guarantees the achievement of the activity’s aim. That is why such activity is expedient. Let us now consider expediency as a characteristic of normatively realised rational activity.

Let there be an aim of activity and a class of means providing for its attainment. Expediency is then the characteristic of activity which describes the inevitable achievement of the aim through socially normalised means. On this normative interpretation of expediency, the means of activity, in the form of necessary conditions, particular objects, methodological rules and various prescriptions, are consistent with the aim in virtue of their normative status. It is obvious that the normative interpretation narrows expediency as a characteristic of activity to the sphere of influence of social normativity; on this account, not every normative activity is rational. Thus expediency, as a characteristic of rational activity, is the necessary achievement of the aim of an activity grounded in normativity which is itself grounded in the analytical work of human reason. Normativity reflected upon and grounded by reason presupposes a calculated procedure for carrying out the rational activity, expedient standards and rules of realisation, an actual algorithm.

Hence expediency as a characteristic of rational activity means the achievement of the aim by means of a normative programme, an algorithm, which necessarily entails that achievement. An algorithm is a strict, simple and unambiguously interpretable description of the step-by-step solution of any task from a certain class of tasks: for example, the procedures of addition and subtraction, or Euclid’s algorithm. Following the procedure necessarily guarantees a result that is correct from the point of view of the rules governing its use. Reasonably grounded normativity underlies the production and use of any algorithm. The characteristic traits of an algorithm are determinacy, expediency and generality. Speaking of the normativity of algorithms, we would underline that, owing to the prescriptive-descriptive character of the norms underlying them, algorithms are not only descriptions but also prescriptions, rules, recommendations and so on.
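Euclid’s algorithm, named above, is a convenient touchstone: every step is fixed by a rule, and observance of the rules necessarily yields the correct result. A minimal sketch in Python (the language and the function name are our choices, purely for illustration):

```python
def euclid_gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a strict, unambiguous, step-by-step
    procedure whose observance guarantees a correct result."""
    a, b = abs(a), abs(b)
    while b != 0:          # each step is fully determined by the rule:
        a, b = b, a % b    # replace (a, b) with (b, a mod b)
    return a               # the aim is necessarily attained: gcd(a, b)

print(euclid_gcd(48, 36))  # → 12
```

The traits the text ascribes to algorithms are visible here: each step is determinate, the aim (the greatest common divisor) is necessarily attained, and one and the same procedure serves every pair of integers in its class of tasks.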

Such prescriptive determination by grounded norms guarantees the expediency of algorithmic activity as the necessary achievement of the aim, and moreover in the shortest way possible. These properties establish algorithms as attributive means and characteristics of rational activity.

Thus, rational activity is activity which is substantiated by reasonably grounded norms and which is realised according to an algorithmic programme of its accomplishment.

In conclusion we would like to draw the reader’s attention to the problem of the correlation between different normative systems as bases of rational action in social reality. There are several principal normative systems in culture: morality, law, science and so on. Each normative system is constructed, as a rule, without contradictions among its own norms. But norms belonging to different systems may contradict one another (certain norms of morality and law, for example). There is no problem so long as an action is governed by the norms of a single normative system, or by norms that do not conflict. But there are cases in which an action may be interpreted (and actually governed) by contradictory norms from different normative systems. In such situations the bases of rationality are relative, and the agent must choose a normative interpretation under which to act.

Let us clarify our meaning with an example. What should a man do if he learns that his best friend, to whom he owes his life, has committed a serious crime? Should he inform the police or conceal the criminal? Assume he is well aware that this action of his can be proved. The criminal codes of a number of countries include an article (a norm) prosecuting concealment, though the penalties differ; in the USSR the article in question ceased to exist in 1990. In a situation like this there is no point in appealing to rationality as a characteristic of the activity until the individual has made his choice of the basis of action.

In real life an individual participates, both actually and potentially, in various spheres of human existence; in the terminology of game theory, he plays several games at once, his payoff functions differing and being regulated by various normative systems. In real life everything is interlinked: means turn into aims and aims into means. Suppose an individual’s primary aim is to get to work on time. Suppose that to do so he must cross a road at a spot where one is permitted to cross only by the underground passage, and that if he takes the passage he will certainly be late, through no fault of his own but by objective circumstance. What should he do? If arriving at work on time is his ultimate value, we may conclude that he will break the traffic rules and cut across the street by the shortest route. Now suppose a policeman stands guard at that very spot. We can then imagine a situation in which the individual, rational with respect to his ultimate aim, would have to sweep aside “the limb of the law”, an undertaking worthy of a madman in a fit of rage. More plausibly, he will correct his aim, so that crossing the street in accordance with the traffic rules becomes an aim in its own right. We might equally assume that being late for work entails only a severe reprimand, whereas crossing in the wrong place threatens a long term of imprisonment. It is quite possible that in such a situation the individual, like a gambler who throws down his cards as the stakes rise monstrously, will prefer to come late for his job rather than take the risk, even with no policeman in sight.

Thus, in a number of cases the grounds of rationality turn out to be relative. Which normative system should be preferred? How is one to be rational? In such a case the question about rationality is ill-posed. The choice of a ground, of an aspect of normative interpretation of an action, lies outside the scope of rationality. In choosing a normative ground for an action, a hierarchy of social and individual preferences becomes of the utmost importance. The cultural context does supply weighty grounds of this kind, namely judicial and moral sanctions secured by the state and by tradition. Yet there may be a world of difference between socially regulated and individual preferences. That is why, from the individual’s point of view, the choice of a ground in such a situation is a matter of preferences of vital importance. It is an existential choice, and it is yours.

© V.B. Shneider 1991

Vladimir Shneider teaches philosophy at the Sverdlovsk Mining Institute, Sverdlovsk, U.S.S.R.


Encyclopedia of Personality and Individual Differences, pp. 4286–4288

Rational Thinking

Nikki Blacksmith
Reference work entry. First online: 01 January 2020

Synonyms: analytical thinking; rational thinking style; reflective thinking

Rational thinking refers to differences across individuals in their tendency and need to process information in an effortful, analytical manner while using a rule-based system of logic.

Introduction

Rational thinking (or more formally, information processing) refers to differences across individuals in their tendency and need to process information in an effortful, analytical manner using a rule-based system of logic (Epstein et al. 1996; Scott and Bruce 1995; Stanovich and West 1998; Phillips et al. 2016). In other words, rational thinking is one’s preferred manner or style of cognitively processing information from the environment for sense-making. Although rational thinking deals with cognitive functioning, it is not a cognitive ability; it is a conative disposition, a natural tendency, impulse, or directed effort. Cognitive ability (a component of intelligence) refers to the capacity to...


References

Epstein, S., Pacini, R., Denes-Raj, V., & Heier, H. (1996). Individual differences in intuitive-experiential and analytical-rational thinking styles. Journal of Personality and Social Psychology, 71, 390–405.

Evans, J. S. B. T. (2008). Dual-processing accounts of reasoning, judgment, and social cognition. Annual Review of Psychology, 59, 255–278.

Gigerenzer, G., & Goldstein, D. G. (1996). Reasoning the fast and frugal way: Models of bounded rationality. Psychological Review, 103, 650–669.

Hamilton, K., Shih, S., & Mohammed, S. (2017). The predictive validity of the decision styles scale: An evaluation across task types. Personality and Individual Differences, 119, 333–340.

Marks, A. D. G., Hine, D. W., Blore, R. L., & Phillips, W. J. (2008). Assessing individual differences in adolescents’ preference for rational and experiential cognition. Personality and Individual Differences, 44, 42–52.

Phillips, W. J., Fletcher, J. M., Marks, A. D. G., & Hine, D. W. (2016). Thinking styles and decision making: A meta-analysis. Psychological Bulletin, 142, 260–290.

Reeve, C. L., & Bonaccio, S. (2011). The nature and structure of “intelligence.” In T. Chamorro-Premuzic, A. Furnham, & S. von Stumm (Eds.), Handbook of individual differences (pp. 187–216). Oxford, England: Wiley-Blackwell.

Scott, S. G., & Bruce, R. A. (1995). Decision-making style: The development and assessment of a new measure. Educational and Psychological Measurement, 55, 818–828.

Stanovich, K. E., & West, R. F. (1998). Individual differences in rational thought. Journal of Experimental Psychology: General, 127, 161–188.

Stanovich, K. E., West, R. F., & Toplak, M. E. (2016). The rationality quotient: Toward a test of rational thinking. Cambridge, MA: The MIT Press.


© 2020 Springer Nature Switzerland AG


Blacksmith, N. (2020). Rational Thinking. In: Zeigler-Hill, V., Shackelford, T.K. (eds) Encyclopedia of Personality and Individual Differences. Springer, Cham. https://doi.org/10.1007/978-3-319-24612-3_1897



3.1: Critical Thinking in College Writing - From the Personal to the Academic


There is something about the term “critical thinking” that makes you draw a blank every time you think about what it means. It seems so fuzzy and abstract that you end up feeling uncomfortable, as though the term is thrust upon you, demanding an intellectual effort that you may not yet have. But you know it requires you to enter a realm of smart, complex ideas that others have written about and that you have to navigate, understand, and interact with just as intelligently. It’s a lot to ask for. It makes you feel like a stranger in a strange land.

As a writing teacher I am accustomed to reading and responding to difficult texts. In fact, I like grappling with texts that have interesting ideas no matter how complicated they are because I understand their value. I have learned through my years of education that what ultimately engages me, keeps me enthralled, is not just grammatically pristine, fluent writing, but writing that forces me to think beyond the page. It is writing where the writer has challenged herself and then offered up that challenge to the reader, like a baton in a relay race. The idea is to run with the baton.

You will often come across critical thinking and analysis as requirements for assignments in writing and upper-level courses in a variety of disciplines. Instructors have varying explanations of what they actually require of you, but, in general, they expect you to respond thoughtfully to texts you have read. The first thing you should remember is not to be afraid of critical thinking. It does not mean that you have to criticize the text, disagree with its premise, or attack the writer simply because you feel you must. Criticism is the process of responding to and evaluating ideas, argument, and style so that readers understand how and why you value these items.

Critical thinking is also a process that is fundamental to all disciplines. While in this essay I refer mainly to critical thinking in composition, the general principles behind critical thinking are strikingly similar in other fields and disciplines. In history, for instance, it could mean examining and analyzing primary sources in order to understand the context in which they were written. In the hard sciences, it usually involves careful reasoning, making judgments and decisions, and problem solving. While critical thinking may be subject-specific, that is to say, it can vary in method and technique depending on the discipline, most of its general principles such as rational thinking, making independent evaluations and judgments, and a healthy skepticism of what is being read, are common to all disciplines. No matter the area of study, the application of critical thinking skills leads to clear and flexible thinking and a better understanding of the subject at hand.

To be a critical thinker you have to have not only an informed opinion about the text but also a thoughtful response to it. There is no doubt that critical thinking is serious thinking, so here are some steps you can take to become a serious thinker and writer.

Attentive Reading: A Foundation for Critical Thinking

A critical thinker is always a good reader because to engage critically with a text you have to read attentively and with an open mind, absorbing new ideas and forming your own as you go along. Let us imagine you are reading an essay by Annie Dillard, a famous essayist, called “Living like Weasels.” Students are drawn to it because the idea of the essay appeals to something personally fundamental to all of us: how to live our lives. It is also a provocative essay that pulls the reader into the argument and forces a reaction, a good criterion for critical thinking.

So let’s say that in reading the essay you encounter a quote that gives you pause. In describing her encounter with a weasel in Hollins Pond, Dillard says, “I would like to learn, or remember, how to live . . . I don’t think I can learn from a wild animal how to live in particular . . . but I might learn something of mindlessness, something of the purity of living in the physical senses and the dignity of living without bias or motive” (220). You may not be familiar with language like this. It seems complicated, and you have to stop every so often (perhaps after every phrase) to see if you understood what Dillard means. You may ask yourself these questions:

  • What does “mindlessness” mean in this context?
  • How can one “learn something of mindlessness?”
  • What does Dillard mean by “purity of living in the physical senses?”
  • How can one live “without bias or motive?”

These questions show that you are an attentive reader. Instead of simply glossing over this important passage, you have actually stopped to think about what the writer means and what she expects you to get from it. Here is how I read the quote and try to answer the questions above: Dillard proposes a simple and uncomplicated way of life as she looks to the animal world for inspiration. It is ironic that she admires the quality of “mindlessness” since it is our consciousness, our very capacity to think and reason, which makes us human, which makes us beings of a higher order. Yet, Dillard seems to imply that we need to live instinctually, to be guided by our senses rather than our intellect. Such a “thoughtless” approach to daily living, according to Dillard, would mean that our actions would not be tainted by our biases or motives, our prejudices. We would go back to a primal way of living, like the weasel she observes. It may take you some time to arrive at this understanding on your own, but it is important to stop, reflect, and ask questions of the text whenever you feel stumped by it. Often such questions will be helpful during class discussions and peer review sessions.

Listing Important Ideas

When reading any essay, keep track of all the important points the writer makes by jotting down a list of ideas or quotations in a notebook. This list not only allows you to remember ideas that are central to the writer’s argument, ideas that struck you in some way or the other, but it also helps you to get a good sense of the whole reading assignment point by point. In reading Annie Dillard’s essay, we come across several points that contribute toward her proposal for better living and that help us get a better understanding of her main argument. Here is a list of some of her ideas that struck me as important:

  • “The weasel lives in necessity and we live in choice, hating necessity and dying at the last ignobly in its talons” (220).
  • “And I suspect that for me the way is like the weasel’s: open to time and death painlessly, noticing everything, remembering nothing, choosing the given with a fierce and pointed will” (221).
  • “We can live any way we want. People take vows of poverty, chastity, and obedience—even of silence—by choice. The thing is to stalk your calling in a certain skilled and supple way, to locate the most tender and live spot and plug into that pulse” (221).
  • “A weasel doesn’t ‘attack’ anything; a weasel lives as he’s meant to, yielding at every moment to the perfect freedom of single necessity” (221).
  • “I think it would be well, and proper, and obedient, and pure, to grasp your one necessity and not let it go, to dangle from it limp wherever it takes you” (221).

These quotations give you a cumulative sense of what Dillard is trying to get at in her essay, that is, they lay out the elements with which she builds her argument. She first explains how the weasel lives, what she learns from observing the weasel, and then prescribes a lifestyle she admires—the central concern of her essay.

Noticing Key Terms and Summarizing Important Quotes

Within the list of quotations above are key terms and phrases that are critical to your understanding of the ideal life as Dillard describes it. For instance, “mindlessness,” “instinct,” “perfect freedom of a single necessity,” “stalk your calling,” “choice,” and “fierce and pointed will” are weighty terms and phrases, heavy with meaning, that you need to spend time understanding. You also need to understand the relationship between them and the quotations in which they appear. This is how you might work on each quotation to get a sense of its meaning and then come up with a statement that takes the key terms into account and expresses a general understanding of the text:

Quote 1: Animals (like the weasel) live in “necessity,” which means that their only goal in life is to survive. They don’t think about how they should live or what choices they should make like humans do. According to Dillard, we like to have options and resist the idea of “necessity.” We fight death—an inevitable force that we have no control over—and yet ultimately surrender to it as it is the necessary end of our lives.

Quote 2: Dillard thinks the weasel’s way of life is the best way to live. It implies a pure and simple approach to life where we do not worry about the passage of time or the approach of death. Like the weasel, we should live life in the moment, intensely experiencing everything but not dwelling on the past. We should accept our condition, what we are “given,” with a “fierce and pointed will.” Perhaps this means that we should pursue our one goal, our one passion in life, with the same single-minded determination and tenacity that we see in the weasel.

Quote 3: As humans, we can choose any lifestyle we want. The trick, however, is to go after our one goal, one passion like a stalker would after a prey.

Quote 4: While we may think that the weasel (or any animal) chooses to attack other animals, it is really only surrendering to the one thing it knows: its need to live. Dillard tells us there is “the perfect freedom” in this desire to survive because to her, the lack of options (the animal has no other option than to fight to survive) is the most liberating of all.

Quote 5: Dillard urges us to latch on to our deepest passion in life (the “one necessity”) with the tenacity of a weasel and not let go. Perhaps she’s telling us how important it is to have an unwavering focus or goal in life.

Writing a Personal Response: Looking Inward

Dillard’s ideas will have certainly provoked a response in your mind, so if you have some clear thoughts about how you feel about the essay this is the time to write them down. As you look at the quotes you have selected and your explanation of their meaning, begin to create your personal response to the essay. You may begin by using some of these strategies:

  • Tell a story. Has Dillard’s essay reminded you of an experience you have had? Write a story in which you illustrate a point that Dillard makes or hint at an idea that is connected to her essay.
  • Focus on an idea from Dillard’s essay that is personally important to you. Write down your thoughts about this idea in a first person narrative and explain your perspective on the issue.
  • If you are uncomfortable writing a personal narrative or using “I” (you should not be), reflect on some of her ideas that seem important and meaningful in general. Why were you struck by these ideas?
  • Write a short letter to Dillard in which you speak to her about the essay. You may compliment her on some of her ideas by explaining why you like them, ask her a question related to her essay and explain why that question came to you, and genuinely start up a conversation with her.

This stage in critical thinking is important for establishing your relationship with a text. What do I mean by this “relationship,” you may ask? Simply put, it has to do with how you feel about the text. Are you amazed by how true the ideas seem to be, how wise Dillard sounds? Or are you annoyed by Dillard’s let-me-tell-you-how-to-live approach and disturbed by the impractical ideas she so easily prescribes? Do you find Dillard’s voice and style thrilling and engaging or merely confusing? No matter which of the personal response options you select, your initial reaction to the text will help shape your views about it.

Making an Academic Connection: Looking Outward

First year writing courses are designed to teach a range of writing—from the personal to the academic—so that you can learn to express advanced ideas, arguments, concepts, or theories in any discipline. While the example I have been discussing pertains mainly to college writing, the method of analysis and approach to critical thinking I have demonstrated here will serve you well in a variety of disciplines. Since critical thinking and analysis are key elements of the reading and writing you will do in college, it is important to understand how they form a part of academic writing. No matter how intimidating the term “academic writing” may seem (it is, after all, associated with advanced writing and becoming an expert in a field of study), embrace it not as a temporary college requirement but as a habit of mind.

To some, academic writing often implies impersonal writing, writing that is detached, distant, and lacking in personal meaning or relevance. However, this is often not true of the academic writing you will do in a composition class. Here your presence as a writer—your thoughts, experiences, ideas, and therefore who you are—is of much significance to the writing you produce. In fact, it would not be farfetched to say that in a writing class academic writing often begins with personal writing. Let me explain. If critical thinking begins with a personal view of the text, academic writing helps you broaden that view by going beyond the personal to a more universal point of view. In other words, academic writing often has its roots in one’s private opinion or perspective about another writer’s ideas but ultimately goes beyond this opinion to the expression of larger, more abstract ideas. Your personal vision—your core beliefs and general approach to life— will help you arrive at these “larger ideas” or universal propositions that any reader can understand and be enlightened by, if not agree with. In short, academic writing is largely about taking a critical, analytical stance toward a subject in order to arrive at some compelling conclusions.

Let us now think about how you might apply your critical thinking skills to move from a personal reaction to a more formal academic response to Annie Dillard’s essay. The second stage of critical thinking involves textual analysis and requires you to do the following:

  • Summarize the writer’s ideas the best you can in a brief paragraph. This provides the basis for extended analysis since it contains the central ideas of the piece, the building blocks, so to speak.
  • Evaluate the most important ideas of the essay by considering their merits or flaws, their worthiness or lack of worthiness. Do not merely agree or disagree with the ideas but explore and explain why you believe they are socially, politically, philosophically, or historically important and relevant, or why you need to question, challenge, or reject them.
  • Identify gaps or discrepancies in the writer’s argument. Does she contradict herself? If so, explain how this contradiction forces you to think more deeply about her ideas. Or if you are confused, explain what is confusing and why.
  • Examine the strategies the writer uses to express her ideas. Look particularly at her style, voice, use of figurative language, and the way she structures her essay and organizes her ideas. Do these strategies strengthen or weaken her argument? How?
  • Include a second text—an essay, a poem, lyrics of a song— whose ideas enhance your reading and analysis of the primary text. This text may help provide evidence by supporting a point you’re making, and further your argument.
  • Extend the writer’s ideas, develop your own perspective, and propose new ways of thinking about the subject at hand.

Crafting the Essay

Once you have taken notes and developed a thorough understanding of the text, you are on your way to writing a good essay. If you were asked to write an exploratory essay, a personal response to Dillard’s essay would probably suffice. However, an academic writing assignment requires you to be more critical. As counter-intuitive as it may sound, beginning your essay with a personal anecdote often helps to establish your relationship to the text and draw the reader into your writing. It also helps to ease you into the more complex task of textual analysis. Once you begin to analyze Dillard’s ideas, go back to the list of important ideas and quotations you created as you read the essay. After a brief summary, engage with the quotations that are most important, that get to the heart of Dillard’s ideas, and explore their meaning. Textual engagement, a seemingly slippery concept, simply means that you respond directly to some of Dillard’s ideas, examine the value of Dillard’s assertions, and explain why they are worthwhile or why they should be rejected. This should help you to transition into analysis and evaluation. Also, this part of your essay will most clearly reflect your critical thinking abilities as you are expected not only to represent Dillard’s ideas but also to weigh their significance. Your observations about the various points she makes, analysis of conflicting viewpoints or contradictions, and your understanding of her general thesis should now be synthesized into a rich new idea about how we should live our lives. Conclude by explaining this fresh point of view in clear, compelling language and by rearticulating your main argument.

Modeling Good Writing

When I teach a writing class, I often show students samples of really good writing that I’ve collected over the years. I do this for two reasons: first, to show students how another freshman writer understood and responded to an assignment that they are currently working on; and second, to encourage them to succeed as well. I explain that although they may be intimidated by strong, sophisticated writing and feel pressured to perform similarly, it is always helpful to see what it takes to get an A. It also helps to follow a writer’s imagination, to learn how the mind works when confronted with a task involving critical thinking. The following sample is a response to the Annie Dillard essay. Figure 1 includes the entire student essay and my comments are inserted into the text to guide your reading.

Though this student has not included a personal narrative in his essay, his own worldview is clear throughout. His personal point of view, while not expressed in first-person statements, is evident from the very beginning. So we could say that a personal response to the text need not always be expressed in experiential or narrative form but may be present as reflection, as it is here. The point is that the writer has traveled through the rough terrain of critical thinking by starting out with his own ruminations on the subject, then by critically analyzing and responding to Dillard’s text, and finally by developing a strong point of view of his own about our responsibility as human beings. As readers we are engaged by clear, compelling writing and riveted by critical thinking that produces a movement of ideas that gives the essay depth and meaning. The challenge Dillard set forth in her essay has been met and the baton passed along to us.

[Figure 1: Sample student essay, with the instructor’s comments inserted into the text.]

  • Write about your experiences with critical thinking assignments. What seemed to be the most difficult? What approaches did you try to overcome the difficulty?
  • Respond to the list of strategies on how to conduct textual analysis. How well do these strategies work for you? Add your own tips to the list.
  • Evaluate the student essay by noting aspects of critical thinking that are evident to you. How would you grade this essay? What other qualities (or problems) do you notice?

Works Cited

Dillard, Annie. “Living like Weasels.” One Hundred Great Essays. Ed. Robert DiYanni. New York: Longman, 2002. 217–221. Print.

  • Critical Thinking in College Writing. Authored by: Gita DasBender. Located at: http://writingspaces.org/sites/default/files/dasbender--critical-thinking.pdf. License: CC BY-NC-SA: Attribution-NonCommercial-ShareAlike

The Oxford Handbook of Thinking and Reasoning

15 Rational Argument

Department of Psychological Sciences, Birkbeck, University of London, London, England, UK

Birkbeck College, University of London, London, England, UK

Published: 21 November 2012

Argumentation is an integral part of how we negotiate life in a complex world. In many contexts, furthermore, it matters that arguments be rational, not merely convincing. Rational debate is subject both to procedural norms and to epistemic norms that allow the evaluation of argument content. This chapter outlines normative frameworks for argumentation (dialectical, logical, and Bayesian); it then summarizes psychological research on argumentation, drawn from cognitive psychology as well as a number of applied domains.

Introduction: The Realm of Argumentation

Argumentation is an integral part of many aspects of our complex social worlds: from law, politics, academia, and business, to our personal lives. Though the term “argument” often carries negative connotations in everyday life, many different types of argumentative dialog can be distinguished, such as quarrels, bargaining or negotiation, educational dialogues, and, central to the present chapter, critical discussion (see, e.g., van Eemeren & Grootendorst, 2004 ; Walton, 2006a ). Argumentation in the sense of a critical discussion is about rational debate and has been defined as

… a verbal and social activity of reason aimed at increasing (or decreasing) the acceptability of a controversial standpoint for a listener or reader, by putting forward a constellation of propositions intended to justify (or refute) the standpoint before a “rational judge.” (van Eemeren, Grootendorst, & Snoeck Henkemans, 1996 , p. 5)

It is in this sense of rational debate that psychological research on argumentation uses the terms “argumentation” and “argument,” and it is the emphasis on rationality that distinguishes it from social psychological research on persuasion. Persuasion research has identified a wide range of factors that affect the degree to which a persuasive communication will be effective (see, e.g., Eagly & Chaiken, 1993 ; Maio & Haddock, 2010 ; also see Molden & Higgins, Chapter 20 ). In contrast to argumentation, persuasion research is concerned with “what works” and why, regardless of whether it is rational. Indeed, some of the aspects identified—such as mood, likeability of the speaker, or personal involvement of the listener—do not always lead to changes in convictions in ways that one might consider rational; nevertheless, many findings within the persuasion literature are also directly relevant to argumentation and will be discussed within this chapter. Within psychology, argumentation has also been the focus of developmental and education research. Here research has focused on the way children's argumentation skills develop (e.g., Kuhn, 1989 , 1991 ; Means & Voss, 1996 ) and examined ways in which critical thinking and argument skills might be fostered (Kuhn, 1991 , 2001 ). Finally, argumentation has close connections to the study of reasoning because inference as studied in reasoning research is an integral part of argument (on that relationship see, e.g., Chater & Oaksford, Chapter 2 ; Hahn & Oaksford, 2007a ; Mercier & Sperber, 2011 ; Oaksford & Hahn, 2007 ).

Argumentation, however, has not only been of interest within psychology. Most of the extant literature on argumentation resides in philosophy, where the topic has been pursued since the Greek philosophers, in particular Aristotle ( 2004 ). Most directly, argumentation has been studied within philosophical logic; relevant research can also be found, however, within the philosophy of science and within epistemology (the philosophical study of knowledge and justified belief). In all of these contexts, philosophers have typically focused on normative theories, that is, theories of how we should behave.

Computer scientists, finally, have sought to devise novel frameworks for dealing with dialectical information, seeking to capture the structural relationships between theses, rebuttals, and supporting arguments with the degree of explicitness necessary for the design of computational argumentation systems. The uses to which argumentation has been put, in particular within artificial intelligence, are manifold (for an overview see, e.g., Bench-Capon & Dunne, 2007). For example, argumentation has been integral to attempts to build legal expert systems, that is, systems capable of providing support for legal advice (e.g., Prakken, 2008; also see Spellman & Schauer, Chapter 36). Argumentation-based frameworks have also been employed for medical decision support systems (e.g., Fox, Glasspool, Grecu, Modgil, South, & Patkar, 2007; see also Patel et al., Chapter 37) and have gained considerable attention in the context of multiagent systems (e.g., Rahwan & Moraitis, 2009). Finally, computer scientists have developed software for the visualization of complex, interconnected sequences of arguments, such as Araucaria (see, e.g., Reed & Rowe, 2004). They have also developed software foundations for large-scale, socially contributed argumentative content on the World Wide Web, which allow Web users to contribute, visualize, and analyze arguments on a particular theme (e.g., Rahwan, Zablith, & Reed, 2007).

In short, the wealth of practical contexts in which argumentation matters is matched by the interdisciplinary breadth with which the topic is studied. It should thus come as no surprise that there is also more than one theoretical framework that can be brought to bear on the issue of good argument.

Theoretical Frameworks

Statements about rationality necessarily contain an evaluative component. Argument, like any behavior, is rational because it is “good” relative to some standard, whether this standard be norms seeking to guarantee truth or consistency, or a suitable functional relationship to the goals of the agent in a particular environment (e.g., Nickerson, 2008 ; Oaksford & Chater, 2007 , 2009 ; see also Chater & Oaksford, Chapter 2 ; Griffiths et al., Chapter 3 ; Stanovich, Chapter 22 ).

Norms governing rational argument can crudely be classified into two broad categories: those aimed primarily at the content or structure of an argument, and those aimed at argumentative procedure. This distinction has its origins in two different philosophical projects aimed at characterizing what makes a good argument: so-called epistemic accounts, which are aimed at truth or justified belief, on the one hand, and so-called dialectical or procedural approaches, aimed at consensus, on the other.

For millennia, logic provided the normative framework for the evaluation of inference and, with that, argument: Logic provides an epistemic standard, enforcing consistency, and where reasoning proceeds from true premises, guaranteeing the truth of the conclusions that can be reached (see Evans, Chapter 8 ; Johnson-Laird, Chapter 9 ). However, logic is severely limited in its ability to deal with everyday informal argument (see also, e.g., Hamblin, 1970 ; Heysse, 1997 ; Johnson, 2000 ; as well as Boger, 2005 for further references). In particular, logic seems poorly equipped to deal with the uncertainty inherent in everyday reason.

One of the first to bring these limitations to prominence was Toulmin (1958). According to Toulmin, an argument can be broken down into several distinct components. Essential to all arguments are a claim (i.e., a conclusion in need of support), evidence (i.e., facts that are used to support the claim), and warrants (i.e., the reasons that are used to justify the connections between the evidence and the claim). Warrants may receive further support through additional, auxiliary statements as “backing.” Any of these components may be subject to qualifiers such as “sometimes,” “frequently,” or “almost always,” and may be challenged by rebuttals, that is, counterarguments, which themselves involve the same structural components. The identification of these kinds of relationships has, for example, been integral to the visualization of argument structure and to educational research. 1
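Because rebuttals are themselves full arguments, Toulmin's components form a recursive structure, which the computational argumentation systems mentioned earlier make explicit. A minimal sketch (the field names are illustrative choices, not from the chapter; the example is Toulmin's own well-known "Harry was born in Bermuda" case):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Argument:
    """One argument in Toulmin's scheme (illustrative field names)."""
    claim: str                         # conclusion in need of support
    evidence: List[str]                # facts used to support the claim
    warrant: str                       # reason linking evidence to claim
    backing: Optional[str] = None      # auxiliary support for the warrant
    qualifier: Optional[str] = None    # e.g., "sometimes", "almost always"
    # rebuttals are counterarguments with the same structural components
    rebuttals: List["Argument"] = field(default_factory=list)

arg = Argument(
    claim="Harry is a British subject",
    evidence=["Harry was born in Bermuda"],
    warrant="People born in Bermuda are generally British subjects",
    qualifier="presumably",
)
arg.rebuttals.append(Argument(
    claim="Harry is not a British subject",
    evidence=["Both of Harry's parents were aliens"],
    warrant="Children of aliens may not acquire citizenship by birth",
))
```

The recursion is the point: evaluating the top-level claim requires descending into each rebuttal, which is why argument-visualization tools render these structures as trees.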

In the context of argument, Toulmin was skeptical of absolute truth, and his model was inspired by the dialectical nature of the adversarial legal systems of the common law tradition, where a relative truth is established through a contest between proponents of two opposing positions. In the courtroom, the provision of evidence is constrained by procedural rules; for example, not all evidence is permissible. Hence, it is not the absolute truth about whether a defendant is guilty, for instance, that a trial seeks to establish, but a trial-relative truth, in the sense of what can be established within the constraints of those procedural norms.

Law has been taken as a leading example of argumentative dialog by many subsequent theorists (e.g., Rescher, 1977 ), and the argumentation literature is full of legally inspired concepts such as the notion of burden of proof . The most well-known burden of proof in law is the presumption of innocence in criminal trials: It is up to the prosecution to establish conclusively that a defendant is guilty, and failure to do so means the defendant goes free. Many argumentation theorists posit similar rules for rational discourse (see e.g., Walton, 1988 ; for a more critical perspective on the burden of proof in argumentation, see Hahn & Oaksford, 2007b ). 2

So-called pragma-dialectical theories of argumentation, in particular, have sought to identify the procedural norms governing rational debate (e.g., van Eemeren & Grootendorst, 1984 , 1992 , 2004 ; Walton, 1995 , 1998a ). The general flavor of these norms is well illustrated by a few rules from van Eemeren and Grootendorst's (2004) system:

Rule 2 states that the discussant who has called the standpoint of the other discussant into question in the confrontation stage is always entitled to challenge the discussant to defend this standpoint. (p. 137)

The next rule, Rule 3, governs the burden of proof, whereby it is the proponent of a claim who is obliged to provide sufficient support for it:

The discussant who is challenged by the other discussant to defend the standpoint that he has put forward in the confrontation stage is always obliged to accept this challenge, unless the other discussant is not prepared to accept any shared premises and discussion rules; the discussant remains obliged to defend the standpoint as long as he does not retract it and as long as he has not successfully defended it against the other discussant on the basis of the agreed premises and discussion rules. (p. 139)

In addition to such fundamental rules, theorists have proposed more specific rules governing, for instance, mode of presentation, and some examples of such rules will be discussed later.

The dialectical model of the courtroom, finally, has also been of influence in the development of nonclassical logics (e.g., Dung, 1995 ), which seek to overcome some of the inadequacies of classical logic, and, in particular, seek to capture reasoning under conditions of uncertainty (for a general overview of nonclassical logics, see Prakken & Vreeswijk, 2002 ).

Arguably, however, there is still something missing in procedural or dialectical approaches to argument as is readily apparent from the legal model itself: This is the “final evaluation.” In the courtroom, judges and/or juries reach a final verdict, and that overall evaluation itself should be rational. Procedural rules constrain this but do not fully determine its outcome. One may have several pieces of evidence, for example, all of which are permissible but which, individually, are more or less compelling. Furthermore, different pieces of evidence, potentially varying in individual strength, must be combined into an overall degree of support. 3

Such evaluation is crucially about specific content . Procedural rules alone are insufficient here, but so typically are the purely structural relationships between statements identified by classical and nonclassical logical systems. Knowing, in structural terms, simply that one statement “attacks” another does not allow a final evaluation of whether one is more convincing than the other, and hence outweighs it in final evaluation.

Concerning content, it has most recently been argued that Bayesian probability might provide appropriate epistemic norms for argumentation (Hahn & Oaksford, 2006a , 2006b , 2007a ; Korb, 2004 ; Oaksford & Hahn, 2004 ; also see Griffiths et al., Chapter 3 ). 4 The Bayesian approach to argumentation originated as an attempt to provide a formal treatment of the traditional catalog of fallacies of argumentation, examples of which are circular arguments such as “God exists, because the Bible says so and the Bible is the word of God” and so-called arguments from ignorance, such as “Ghosts exist because nobody has proven that they don't.” According to the Bayesian account, informal arguments such as these consist of a claim (“ghosts exist”) and evidence for that claim (“nobody has proven that they don't”). An individual's degree of belief in the claim is represented by a probability. Bayes' theorem, which follows from the fundamental axioms of probability theory, then provides a normative standard for belief revision; it thus provides a formal tool for evaluating how convinced individuals should be about the claim in light of that particular piece of evidence. There are three probabilistic quantities involved in Bayes' theorem that determine what degree of conviction should be associated with a claim once a piece of evidence has been received: prior degree of belief in the claim, how likely the evidence would be if the claim were true, and how likely it would be if the claim were false. Specifically, Bayes' theorem states that:
P(h|e) = P(e|h)P(h) / [P(e|h)P(h) + P(e|¬h)P(¬h)]     (Eq. 1)

where P ( h | e ) represents one's posterior degree of belief in a hypothesis, h , in light of the evidence, e , which can be calculated from one's initial, prior degree of belief, P ( h ), and how likely it is that the evidence one observed would have occurred if one's initial hypothesis was true, P ( e | h ), as opposed to if it was false, P ( e |¬ h ). The ratio of these latter two quantities, the likelihood ratio, provides a natural measure of the diagnosticity of the evidence, that is, its informativeness regarding the hypothesis or claim in question.
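
To make the roles of these quantities concrete, here is a minimal sketch of the posterior computation in Eq. 1; the numerical values are illustrative and not from the text:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(h|e): posterior belief in hypothesis h after observing evidence e."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Diagnostic evidence (likelihood ratio 0.8/0.2 = 4) raises a 0.5 prior to 0.8:
diagnostic = posterior(0.5, 0.8, 0.2)

# Nondiagnostic evidence (likelihood ratio 1) leaves belief unchanged:
nondiagnostic = posterior(0.5, 0.5, 0.5)
```

The likelihood ratio alone determines how far a given prior is moved, which is why it serves as a natural measure of the diagnosticity of the evidence.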

We will discuss the application of Bayes' theorem to individual arguments in more detail later, and we note here simply several general characteristics. First, what values any of these quantities take depends on what the statements in question are about, that is, the specific content of hypothesis and evidence. Second, the Bayesian framework, through its interpretation of probabilities as subjective degrees of belief, accords with the general intuition that argumentation contains an element of audience relativity (a property widely perceived to be characteristic of real-world arguments, see, e.g., Hahn & Oaksford, 2006a , 2006b ; Perelman & Olbrechts-Tyteca, 1969 ; Toulmin, 1958 ); that is, the same argument need not (rationally) have the same impact on all recipients. Nevertheless, Bayesian probability imposes real constraints on the beliefs an agent can have, both by guaranteeing probabilistic consistency between the beliefs of a given agent, and because the beliefs of different agents are, in certain cases, guaranteed to converge as these agents observe more and more evidence (see also, e.g., Hahn & Oaksford, 2006b ).

There has been (and continues to be) debate about the proper scope of the procedural and epistemic frameworks just discussed. They target different aspects of argumentation, but both in theory and practice these aspects are intertwined. Dialectical and epistemic concerns are related. For example, silencing opponents by force is undoubtedly a violation of dialectical, procedural norms for “good” argumentation; but it seems dubious even for those interested not in process, but only in truth, because the suppression of arguments in discourse means that the potentially strongest argument might not be heard (see also Goldman, 1994 ; Hahn & Oaksford, 2006b ). Likewise, pragma-dialectical theories have used discourse rules to evaluate fallacies of argumentation (e.g., van Eemeren & Grootendorst, 1992 , 2004 ; Walton, 1995 ). However, the problem that remains is that discourse rules typically do not provide enough constraints on content. It is not hard to find examples of arguments with the same structure, and in the same argumentative context, that nevertheless differ fundamentally in how intuitively compelling they seem, and this has been at the heart of recent criticisms of the pragma-dialectical approach to the fallacies (e.g., Hahn & Oaksford, 2006a ). Hence, normative theories of content and procedural theories can (as we will see) clash on occasion, but they ultimately pursue complementary goals (Goldman, 1994 ; Hahn & Oaksford, 2006b ), and both have an important role to play in shaping “rational debate.”

The Psychology of Argumentation

As detailed in the Introduction, psychological research has addressed argumentation from a number of different perspectives. In the following sections our main emphasis will be on basic research concerning argument quality; then in the remainder, we will provide brief introductions to research concerned with the development of argumentation skills, and educational attempts to foster argumentation, as well as argument in a number of specific practical contexts such as science and the courtroom.

Argument Quality

Foundational research on argument quality within psychology can itself be split naturally into research concerned with procedural aspects of argumentation, in particular pragma-dialectical norms governing rational debate, and research concerned with the epistemic quality of an argument, and hence with the substance of its content.

Procedural Aspects of Argumentation

Experimental research on procedural aspects of argumentation stems from a number of sources: cognitive psychologists, educational psychologists, communication researchers, argumentation theorists, and philosophers. This diversity of disciplines and theoretical backgrounds means that relevant psychological research is found beyond the confines of mainstream psychological outlets. In the following, we provide key examples.

A central topic for cognitive psychologists with an interest in pragma-dialectical aspects of argument has been the burden of proof (e.g., Bailenson, 2001 ; Bailenson & Rips, 1996 ; Rips, 1998 ; see Rips, Brem & Bailenson, 1999 for reviews). As noted earlier, the notion is derived from law, where burdens of proof are explicitly specified. In the context of psychological research, the notion has been operationalized by presenting participants with argumentative dialogues and asking them to indicate which proponent in the dialog “has the most to do in order to prove that he or she is right.” Experimental manipulations involve providing a proponent's challenge (“What is your evidence for that statement?”) earlier or later in the dialogue, and whether a dialogue starts with a neutral statement (“Hi, good to see you”) followed by an assertion from the second speaker, or directly with that assertion. Such manipulations are found to have an effect; however, it is debatable to what extent these tasks involve a burden of proof in any technical sense (see Hahn & Oaksford, 2007b ). It is clear that the evaluation of a series of arguments can be influenced by the order in which the component arguments are presented (see also McKenzie, Lee, & Chen, 2002 ). Crucially, order will affect the interpretation of material. For example, the order in which issues are put forward by a speaker is likely to be influenced by their perceived importance. This allows corresponding inferences on the part of the listener, for example, about what the speaker is most concerned about. Likewise, changes in the order in which arguments and counterarguments appear can alter the perceived relevance of claims, and the degree to which they seem responsive to what has preceded them.
Not seeking to refute a prior claim can be taken to signal tacit acceptance (see e.g., Clark, 1996 ), or at least the lack of a cogent counterargument, in the same way that the provision of weak (counter-) evidence can be taken to suggest that no stronger evidence is available. These latter inferences are (nonfallacious) examples of argument from ignorance, an argument form we turn to in detail later (see Harris, Corner, & Hahn, 2009 ). At the same time, “diffuse” responses may be seen to affect the perceived competence of the person supplying them (Davis & Holtgraves, 1984 ) or their credibility (O'Keefe, 1999 ). Social psychologists have studied extensively the effects of so-called one-sided versus two-sided messages; that is, the effectiveness of ignoring versus acknowledging counterarguments. O'Keefe ( 1999 ) provided a meta-analysis of the respective persuasive effects of such one- versus two-sided messages (for a review of studies on this topic, see also Allen, 1991 ). Based on a systematic evaluation of over 50 studies, O'Keefe concluded that two-sided arguments that address counterarguments are typically more persuasive than one-sided arguments; least persuasive are arguments that explicitly acknowledge counterarguments without trying to refute them. From a pragma-dialectical perspective, this ordering ties in with the fundamental procedural obligation to defend one's position when challenged (see earlier) in that the most persuasive arguments are those that at least attempt to discharge argumentative burdens (see also, O'Keefe, 2003 ); however, it is not clear that these are ultimately anything other than argument content effects (see also O'Keefe, 1999 , for discussion of this point).

As part of an ongoing project to consider the extent to which the normative quality of arguments squares with their actual persuasive effect as established within social psychological research, O'Keefe has also conducted a meta-analysis of the persuasive effects of standpoint explicitness, that is, the extent to which the degree of articulation with which a communicator presents his or her overall conclusion affects message effectiveness (O'Keefe, 1997a ). From a pragma-dialectical perspective, such explicitness can be linked to procedural obligations of the proponent of an argument to avoid “concealment, evasion, and artful dodging” and present information in such a way that allows critical scrutiny. In line with such obligations, social psychological research has found that better articulation seems to give rise to greater persuasive effect. O'Keefe's ( 1997b ) meta-analysis revealed corresponding effects for the degree of articulation in the actual argumentative support. In short, O'Keefe's findings demonstrate some correspondence between what might be considered normatively desirable and what, in persuasive terms, “actually works.”

Finally, procedural norms have been empirically investigated within the framework of “argumentational integrity” or fairness (e.g., Christmann, Mischo, & Groeben, 2000 ; Mischo, 2003 ; Schreier, Groeben, & Christmann, 1995 ). Here, studies have sought to examine people's sensitivity to the violation of procedural rules, such as, for example, that proponents within a debate should have equal opportunity to contribute, or that contributions to debate must be sincere. Research has also examined the persuasive costs of such rule violations, and it has sought to develop educational training programs to increase awareness of violations (Christmann, Mischo, & Flender, 2000 ).

Epistemic Aspects of Argument Quality

Logic and probability theory combine to provide powerful tools for the evaluation of arguments. Classical logic provides minimum standards by enforcing logical consistency and the avoidance of contradictions: A statement that is contradictory can never be true, and thus it constitutes neither a claim nor evidence worth consideration. Probability theory then constrains content beyond these minimal, logical requirements (see Chater & Oaksford, Chapter 2 ).

Both people's logical reasoning and their ability to deal appropriately with probabilities have been the focus of vast amounts of psychological research (for overviews, see e.g., Hardman, 2009 ; Manktelow, 2011 ). Broadly construed, all of this research is relevant. Psychological work on logical reasoning, for example, deals with very specific and very restricted “arguments.” The logical reasoning literature has often been scathing about people's logical abilities; however, there is reason to question the applicability of much of this research to everyday argumentation. For one, it is well documented that people's degree of conformity to logic is much affected by the exact content of materials (Manktelow, 2011 ; Wason & Johnson-Laird, 1972 ; see Evans, Chapter 8 ). It has been argued recently that people's abilities in this regard are much better when they are embedded in the kinds of argumentation contexts that logical reasoning supports in everyday life (Mercier & Sperber, 2011 ). Furthermore, many tasks that experimenters perceive to involve deduction may not necessarily do so from the perspective of the participant. In particular, it has been argued that people typically adopt a probabilistic interpretation of conditionals (if-then statements), and once this is taken into account their reasoning is far from bad (e.g., Evans & Over, 2004 ; Oaksford & Chater, 1994 , 2007 , 2009 ). This issue is discussed extensively elsewhere in this volume (Chater & Oaksford, Chapter 2 ), but we will also consider several types of conditional. Analogous points apply to the literature on probability judgment. For one, people argue primarily about things they care about and with which they have some degree of familiarity. Moreover, although some evidence, and hence argument, involves overt numbers, probabilities, and statistics (and hence limitations identified in previous research may readily apply), most everyday argument does not cite numerical quantities. 
Consequently, for most of the many different argument forms that arise in everyday discourse, experimental research has only just begun.

Much of that research has been centered around putative cases of human error, that is, fallacies in human reasoning. Hence, we will focus primarily on these fallacies in the following sections, before concluding with research addressing argument quality in a number of applied contexts.

Fallacies of Argumentation: A Litmus Test for Evaluation of Argument Quality

The fallacies have long been a focal point for philosophical research on argument quality. There is debate about how best to define the notion of fallacy (see, e.g., van Eemeren & Grootendorst, 2004 , for discussion). On an intuitive level, fallacies are simply arguments that might seem correct but aren't, that is, arguments that might persuade but really should not. Contemporary lists include more than 20 different fallacies (see e.g., Woods, Irvine, & Walton, 2004 ). The fallacies of circular argument (Walton, 1985 , 1991 ) and the argument from ignorance (Walton, 1992a ) have already been mentioned; other well-known fallacies are the slippery slope argument (“if we legalize assisted suicide, it will be euthanasia next;” Walton, 1992b ), the ad populum argument or appeal to popular opinion (“many people think this; it cannot be wrong;” Walton, 1980 , 1999 ), the ad hominem argument, which seeks to undermine the proponent of an argument instead of addressing the argument itself (e.g., Walton, 1987 , 1998b , 2000 , and references therein), the ad verecundiam argument, also known as the appeal to authority (e.g., Walton, 1997 ), equivocation (e.g., Engel, 1986 ; Kirwan, 1979 ; Walton, 1996b ; Woods & Walton, 1979 ), and the ad baculum argument, which appeals to threats or force (“if you don't like that, you might find yourself out of a job;” Walton, 2000b ; Walton & Macagno, 2007 ). These informal arguments are pervasive in everyday discourse (for real-world examples, see also, e.g., Tindale, 2007 ), and critical thinking textbooks seek to educate about them. The goal of philosophical research on the fallacies has been to provide a comprehensive treatment—ideally a formal treatment—of these fallacies that can explain exactly why they are “bad” arguments (see e.g., Hamblin, 1970 ). In other words, the fallacies are a litmus test for our theories of argument quality.

The fallacies have also been investigated experimentally from both a broadly procedural, typically pragma-dialectical perspective (e.g., van Eemeren, Garssen, & Meuffels, 2009 ; Neuman, 2003 ; Neuman, Glassner, & Weinstock, 2004 ; Neuman, Weinstock, & Glasner, 2006 ; Neuman & Weitzman, 2003 ; Ricco, 2003 ; Rips, 2002 ), and from an epistemic perspective (e.g., Hahn & Oaksford, 2007a ; Oaksford & Hahn, 2004 ; Oaksford & Chater, 2007 , 2010a , b ). Although in many ways such experimental work has only just started, the general finding has been that people seem quite competent at identifying fallacious arguments.

A more detailed examination of both theory and experimental work on the fallacies also demonstrates why procedural approaches to argument quality are not sufficient. This is well illustrated with one of the more widely studied arguments, the argument from ignorance:

Ghosts exist, because nobody has proven that they don't. (1)

Historically, one of the main stumbling blocks for theoretical treatments of the fallacies was that most of them admit seeming “exceptions,” that is, examples that do not seem so bad (see e.g., Walton, 1995 ). The following examples too are arguments from ignorance, but unlike (1) they seem acceptable:

This drug is safe because clinical trials have found no evidence of side effects. (2)

The book is in the library, because the catalog does not say that it is on loan. (3)

Clearly, the inferences in all three of these cases are uncertain or defeasible; that is, the conclusions do not follow necessarily from the evidence. However, they seem sufficiently likely to be true, in light of the reason or evidence provided, that we readily base our actions on such arguments in everyday life. Examples such as (2), in particular, are widespread in socioscientific debate about the acceptability of new technologies (e.g., genetically modified foods, nanotechnology). Uncertainty is, of course, also present in positive inferences such as:

This drug causes side effects, because some participants in the clinical trial exhibited flu-like symptoms. (4)

Pragma-dialectical theories seek to explain why textbook examples of the argument from ignorance are poor arguments by considering them in a wider dialectical context, and attributing their insufficiency to the violation of discourse rules within that wider, overall argument. Specifically, arguments such as (1) are typically assumed to violate the burden of proof (see e.g., van Eemeren, Garssen & Meuffels, 2009 ; Walton, 1992a , 1995 , 1996a ). As we saw earlier, the burden of proof is assumed to demand that whoever makes a claim has to provide reasons for this claim when challenged. Stating that no one has disproved the existence of ghosts as a reason for believing in them constitutes an illegitimate attempt to shift that burden onto the other party, instead of providing an adequate reason oneself.

However, such an explanation seems forced for two reasons (see also Hahn & Oaksford, 2006a , 2007a , 2007b ). First, example (1) seems intuitively a weaker argument than (2) or (3) even when all are presented in isolation as they are here, that is, without any wider argumentative context and any indication of parties engaged in debate. Here it is unclear to whom obligations such as burdens of proof could be ascribed. Second, violations of one's burden of proof are a consequence of providing insufficient evidence, not a cause . The judgment that an argument seeks illegitimately to “shift the burden of proof” does not explain an argument's weakness; rather, it presupposes it. Weak arguments fall short of burdens of proof; strong ones do not. Consequently, it still needs to (independently) be determined why, for example, (1) is poor, in ways that (2), (3), and (4) are not.

The identification of very abstract relations such as “claim,” “warrant,” or “backing” as found in Toulmin's and similar systems also provides no guidance here. All four examples involve a claim and a reason given in support. Rather, an epistemic standard aimed at the specific content is required here. Logic has nothing to say about these arguments; none of them are deductively valid. 5

The probabilistic Bayesian framework, however, does make appropriate distinctions here (Hahn & Oaksford, 2006a ; Hahn & Oaksford, 2007a ; Oaksford & Hahn, 2004 ). The standard version of Bayes' theorem provided in Eq. 1 earlier applies directly to positive arguments such as (4). The claim that “this drug causes side effects” takes the place of hypothesis h ; the reason “because some participants in the clinical trial exhibited flu-like symptoms” takes the role of evidence e . This evidence is convincing to the extent that these symptoms are more likely if the claim is true, P(e | h) , than if it is not, P(e |¬ h) , for example, because it is winter and participants can catch the flu. As noted earlier, it is the specific content of the argument that fixes these probabilistic quantities, and argument strength will vary with the likelihood ratio (diagnosticity of the evidence). Observing, for example, that 95% of the participants in the trial displayed side effects within hours of taking the drug will provide greater support than observing a few who fell ill over a several-week period. Crucially, this approach allows one to capture content-specific variation in the perceived strength of arguments of the same structure.

A negative argument such as (2) requires the corresponding negative form of Bayes' theorem:
P(¬h|¬e) = P(¬e|¬h)P(¬h) / [P(¬e|¬h)P(¬h) + P(¬e|h)P(h)]

Again, such negative arguments can be weaker or stronger: Failing to observe side effects in 50 studies provides far stronger evidence for the claim that the drug lacks side effects (is safe) than does observing no side effects in just a single study, and these differences are readily captured.
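
A minimal sketch of this point, assuming independent studies and illustrative likelihoods of a single null result (0.95 per study if the drug is safe, 0.6 if it is not; these numbers are our own):

```python
def posterior_safe(prior, p_null_if_safe, p_null_if_unsafe, n_studies):
    """P(safe | n independent studies, each finding no side effects)."""
    like_safe = p_null_if_safe ** n_studies
    like_unsafe = p_null_if_unsafe ** n_studies
    numerator = like_safe * prior
    return numerator / (numerator + like_unsafe * (1 - prior))

# A single null study moves a 0.5 prior only modestly (~0.61);
# fifty consistent null studies are nearly conclusive:
one_study = posterior_safe(0.5, 0.95, 0.6, 1)
fifty_studies = posterior_safe(0.5, 0.95, 0.6, 50)
```

The growing strength comes entirely from the likelihood ratio: (0.95/0.6) raised to the number of studies.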

However, the Bayesian framework also identifies important consequences of differences in structure. Formal analysis reveals that across a broad range of possible (and in everyday life plausible) numerical values for both how likely the evidence would be if the claim were true and if it were false, positive arguments are stronger than their corresponding negative counterparts based on the same set of values (Hahn & Oaksford, 2007a ; Oaksford & Hahn, 2004 ). This observation provides an explanation for why arguments from ignorance are typically less convincing than corresponding arguments from positive evidence. In other words, the framework captures both characteristics of particular argument types, and of particular instantiations of these types.
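
The asymmetry can be checked at a single illustrative point in the parameter space; the values 0.4 and 0.1 below are assumptions of ours, chosen so that the evidence is fairly rare under either hypothesis:

```python
def positive_strength(prior, a, b):
    """P(h|e) for a positive argument, with a = P(e|h), b = P(e|not-h)."""
    return a * prior / (a * prior + b * (1 - prior))

def negative_strength(prior, a, b):
    """P(not-h | not-e) for the corresponding negative argument,
    built from the same pair of likelihoods a and b."""
    return (1 - b) * (1 - prior) / ((1 - b) * (1 - prior) + (1 - a) * prior)

# Same prior, same likelihoods: the positive argument is stronger (0.8 vs. 0.6).
pos = positive_strength(0.5, 0.4, 0.1)
neg = negative_strength(0.5, 0.4, 0.1)
```

Sweeping a and b over such values shows the pattern is not an artifact of this one point, which is the formal content of the claim above.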

From a formal perspective, there is also an important difference between different types of argument from ignorance, exemplified on the one hand by the ghosts example in (1), and, on the other, by the drug safety example in (2). Whereas the just-discussed drug safety example simply involves an inference from a negative (lack of observed ill effects in clinical trials) to a negative (lack of side effects of drug), the ghosts example involves an inference from a double negation (“no-one has proven that they don't”) to a positive (ghosts exist). This so-called epistemic closure case of the argument from ignorance (see, e.g., Walton, 1996a ) requires a further distinction, because people could fail to prove that ghosts don't exist not only because they tried and actually found evidence of existence, but also because they did not try at all. Hence, Hahn and Oaksford's ( 2007a ) Bayesian formalization of this type of argument from ignorance involves three possibilities: a report of positive evidence “ e ,” an explicit report of negative evidence “¬ e ,” and the third possibility, n (as in “nothing”), indicating that there is simply no evidence either way (i.e., neither an explicit report of e nor of ¬ e ). Such an approach is familiar from artificial intelligence, where one might distinguish three possibilities in a database search regarding a proposition ( h ): the search can yield affirmative information ( e ), negative information (¬ e ), or return with nothing ( n ). Epistemic closure has been invoked here to license inferences from search failure (i.e., a query resulting in nothing) to nonexistence, given that the database is assumed to be complete.

The Bayesian formalization of epistemic closure is analogous, except that closure can be a matter of degree, ranging from complete closure, through moderate closure, to no closure at all. The corresponding version of Bayes' theorem for the ghosts example is, therefore,
P(h|¬“¬e”) = P(¬“¬e”|h)P(h) / [P(¬“¬e”|h)P(h) + P(¬“¬e”|¬h)P(¬h)]

where P(“e”|h) + P(“¬e”|h) + P(n|h) = 1.

How strong this argument is depends critically on the degree of closure. If the source is epistemically closed (i.e., the database complete), then the probability of a “nothing” response is zero, and everything reduces to the familiar binary case, where ¬“¬e” is the same as “e” (and one is effectively back to Eq. 1 ). As epistemic closure decreases, the probability of a null response increases, and the inference must become less strong (assuming that the quality of the explicit evidence we could obtain remains the same, that is, equally diagnostic; Hahn & Oaksford, 2007a , 2008 ). This fact explains why some cases of this argument, such as the library case in (3), are so much better than the ghosts example in (1). Electronic library catalogs are reasonably reliable, in that when they say a book is in the library, more often than not it actually is, because libraries try hard to keep their catalogs up to date. Likewise, when catalogs say that a book is on loan, it typically is, and epistemic closure is high because loans are (in principle) always recorded.
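
One way to sketch degree of closure, under simplifying assumptions of ours: the source issues a report at all with probability `closure` regardless of whether h is true, and any report it does issue has a fixed diagnosticity (here 0.8 vs. 0.2):

```python
def posterior_no_disproof(prior, closure,
                          p_neg_given_h=0.2, p_neg_given_not_h=0.8):
    """P(h | no report of "not-e"): the absence of a disproof is either a
    positive report "e" or nothing at all (n), so its likelihood under h
    is 1 minus the chance of an explicit "not-e" report."""
    like_h = 1 - closure * p_neg_given_h
    like_not_h = 1 - closure * p_neg_given_not_h
    numerator = like_h * prior
    return numerator / (numerator + like_not_h * (1 - prior))

library = posterior_no_disproof(0.5, closure=0.95)  # high closure: ~0.77
ghosts = posterior_no_disproof(0.5, closure=0.05)   # low closure:  ~0.51
```

With near-complete closure the absence of a disproof is strong evidence; with almost no closure it barely moves the prior, matching the contrast between the library and ghosts examples.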

Several experimental studies have examined the extent to which people's judgments about arguments from ignorance are concordant with Bayesian prescriptions (Corner & Hahn, 2009 ; Hahn, Harris, & Corner, 2009 ; Harris, Corner, & Hahn, 2009 ; Hahn, Oaksford, & Bayindir, 2005 ; Hahn & Oaksford, 2007a ; Oaksford & Hahn, 2004 ). Oaksford and Hahn ( 2004 ) provided participants with short dialogs between two characters, asking them to provide evaluative judgments of how convinced of a claim one of the characters should be in light of the evidence presented by the other. Two different scenarios were used, one involving drug safety and one involving TV violence. Participants saw both positive arguments (as in (4) above) and corresponding arguments from ignorance (such as (2) above). Also manipulated were the degree of prior belief the character in receipt of the evidence already had in the claim ( P(h) ), and the amount (and hence diagnosticity) of evidence received (e.g., one versus 50 clinical studies). Oaksford and Hahn found the expected main effects of prior belief, amount of evidence, and whether the argument was positive or an argument from ignorance. Participants saw the arguments from ignorance as acceptable but less acceptable than their positive counterparts, and that degree of acceptability was influenced by the prior beliefs of the characters in the dialog and the amount of evidence in the way predicted by a Bayesian account.

That participants distinguish clearly between what should be strong and weak versions of the argument from ignorance was confirmed in further studies. For example, Hahn and Oaksford ( 2007a , Exp. 3) presented participants with the classic textbook example of the ghosts (1) and the library example (3), both embedded in short dialogs as in Oaksford and Hahn ( 2004 ). Participants' ratings of how convinced a character in that dialog should be by the argument in question were significantly lower for the ghosts argument. Hahn et al. ( 2005 ) presented participants with positive arguments, as well as both types of argument from ignorance, and a negative argument involving explicit negative evidence. Furthermore, there were four different scenarios chosen to vary intuitively in the degree of epistemic closure. Participants' evaluations were sensitive not only to the individual argument structures but also to the variations in closure, and—as Hahn and Oaksford ( 2007a ) show—are well fit by the respective forms of Bayes' theorem.

Finally, Harris et al. (2009) examined how silence or "nothing" itself can support rational inference that takes the form of arguments from ignorance. Imagine the recipient of a reference request in the context of a job application, who might be informed merely that the applicant is "punctual and polite" and, on the basis of that, conclude that the applicant is poorly qualified for the job—the phenomenon of being "damned by faint praise." Here, "punctual and polite" constitutes a positive piece of evidence, which should (marginally) increase favorable evaluation of the candidate. What is doing the damage is what is not being said, namely, the absence of any discussion of job-relevant skills. Such an inference from nothing, n, to the conclusion that the applicant is not qualified (¬h) is governed by this version of Bayes' theorem:

P(¬h|n) = P(n|¬h)P(¬h) / [P(n|¬h)P(¬h) + P(n|h)P(h)]

and is licensed wherever P(n|h) < P(n|¬h), that is, wherever the probability of a "nothing" response is lower if the hypothesis is true than if it is false. In this case, the nonoccurrence itself is informative because it suggests that the hypothesis is false. Hence, the effect should be observed where a motivated (or positively inclined) but nonlying source is presenting an argument. By contrast, there is no reason for this negative inference in the case of a maximally uninformed source, P(n|h) ≈ P(n|¬h), who simply knows nothing on the topic.
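The inference can be sketched numerically as follows. The prior and the two likelihoods here are illustrative assumptions, not values from Harris et al.'s experiment:

```python
# Sketch of the Bayesian analysis of "damned by faint praise"; the numeric
# values are illustrative assumptions, not from the chapter.
def posterior_not_h(p_h, p_n_given_h, p_n_given_not_h):
    """P(not-h | n): belief that the hypothesis is false after a 'nothing'
    response n, by Bayes' theorem."""
    p_not_h = 1 - p_h
    numerator = p_n_given_not_h * p_not_h
    return numerator / (numerator + p_n_given_h * p_h)

# Informed, motivated source: silence about skills is unlikely if the
# applicant is qualified, so P(n|h) < P(n|not-h) and the inference goes through.
informed = posterior_not_h(p_h=0.5, p_n_given_h=0.1, p_n_given_not_h=0.7)

# Maximally uninformed source: P(n|h) ~= P(n|not-h), so silence is uninformative.
uninformed = posterior_not_h(p_h=0.5, p_n_given_h=0.6, p_n_given_not_h=0.6)

print(round(informed, 3))    # belief in not-h rises well above the prior 0.5
print(round(uninformed, 3))  # belief stays at the prior 0.5
```

The same function covers both cases: only the gap between the two likelihoods, not the evidence itself, carries the inferential weight.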

In line with this, Harris et al.'s (2009) experiment involving a fictitious academic reference found that being told that "James is punctual and polite" led to decreases in judgments of whether James should be admitted to a university course in mathematics, but only when the reference came from his math tutor (who was presumably well informed about his mathematical abilities), not when it came from his personal tutor (who had provided general guidance only). Moreover, punctuality and politeness were not perceived to be negative in themselves: when this information followed details about mathematical ability, it raised evaluations of James's suitability.

A further case of inference from "nothing," finally, is the argument from ignorance we mentioned earlier: The failure to provide counterarguments or evidence can be taken to indicate either tacit agreement or, at least, that the opponent has no counterevidence available. In this case, P(n|h) > P(n|¬h), and the lack of counterevidence suggests the claim is true. Moreover, the more salient a proponent has made a claim, thus indicating its perceived importance (for example, by introducing it early on), the more motivated the opponent should be to counter it, if possible. Hence, the more P(n|h) should be assumed to exceed P(n|¬h), and thus the stronger the overall inference will be. It is in this way that changes to presentation order, or one- versus two-sided presentation, can directly affect the perceived content of the overall argument at hand.

Conditionals

A probabilistic approach also deals naturally with arguments and fallacies that arise with the conditional (if … then) in natural language. In this area of the psychology of reasoning, an explicitly epistemic approach has emerged that is closely related to the research on argumentation we have so far reviewed (e.g., Oaksford & Chater, 2007 , 2010a , b ).

Conditional Fallacies

Many accounts of the fallacies include deductive fallacies such as those attaching to the conditional, if p (antecedent), then q (consequent). Two such fallacies are denying the antecedent and affirming the consequent. These fallacies are typically included among those that require an explanation in terms of the discourse rules involved in their use in argumentation (e.g., Godden & Walton, 2004). An example of denying the antecedent (DA) is:

If a bird is a swan, then it is white. That bird was not a swan. Therefore, that bird was not white. (5)

Godden and Walton's approach is to point out that while (5) is clearly a truth-functional fallacy (a bird can be white without being a swan), deploying this line of reasoning against someone who is using this conditional to argue that a particular bird was white may be a sound strategy. Of course, if whether "that bird was white" is the topic of an argument, then it is clear that the parties in the argument disagree, and hence there is some uncertainty about the bird's color. The party deploying the conditional originally ("Pro") argues that the bird is white via a modus ponens (MP) argument:

(Pro) If a bird is a swan, then it is white. That bird was a swan; therefore, it was white.

To refute Pro's argument, the respondent (“Con”) must deny one of the premises. Con chooses to “deny the antecedent,” that is, to deny that the bird was a swan, from which it “follows” that the bird was not white:

(Con) But that bird was not a swan; therefore, it was not white.

However, as Godden and Walton observe, in this example, the falsity of the consequent—the bird was not white—does not follow from this use of denying the antecedent in the way that it would if it were logically valid; that is, it is not truth preserving. Rather, its deployment here undermines another property of Pro's intended conclusion, what "might be called its admissibility, or that it follows or that it is established or that it may be concluded, or perhaps even that it should be believed" (Godden & Walton, 2004, p. 232). A subjective probabilistic approach to argument strength for the conditional can cash out this last intuition with respect to whether Pro's conclusion is believable (Oaksford & Hahn, 2007).

According to Hahn and Oaksford's (2006a, 2007a) account of argument strength, people are concerned with the subjective degree of belief they should have in the conclusion given their degrees of belief in the premises. Oaksford and Chater (2007; Oaksford, Chater, & Larkin, 2000) proposed just such a model for the conditional. We will not rehearse the theory in detail. It depends on people possessing prior degrees of belief in the conditional, given by the conditional probability, P0(q|p), and in the marginals, P0(p) and P0(q) (the "0" subscript denotes prior probabilities; a "1" subscript denotes posterior probabilities). These probabilities define a prior probability distribution from which, for example, the probability P0(¬q|¬p) can also be derived. People are assumed to derive their degrees of belief in the conclusions of conditional inferences by Bayesian conditionalization. So, for modus ponens, when Pro asserts that "that bird was a swan," for Bayesian conditionalization to apply, she is asking her audience (here Con) to assume that P1(p) = 1, from which it follows that one's degree of belief in the conclusion, P1(q), should be P0(q|p).

It is straightforward to derive contexts in which denying the antecedent is stronger than modus ponens. So if Con's beliefs lined up pretty closely to such a context, his counterargument by denying the antecedent could be stronger than Pro's initial argument by modus ponens. Suppose that the swans they are talking about are in a bird sanctuary containing equal numbers of white and black swans, and that most of the birds in the sanctuary are neither white nor swans. The following distribution captures these facts: P0(q|p) = .5, P0(p) = .1, P0(q) = .1. On this distribution, a bird is nine times more likely to be white given that it is a swan than given that it is not a swan. However, the probability that the bird is white given that it is a swan is only .5, that is, P1(q) = .5. That is, on this distribution, Pro's argument, while logically valid, only yields a .5 degree of belief in the conclusion. Con's argument can be characterized as noting that priors matter and that it is highly unlikely that the bird was a swan; that is, Pro's assertion that P1(p) = 1 is unlikely to be true. For Bayesian conditionalization to apply, this has to be incorporated into the argument as the assumption that P1(p) = 0, and so P1(¬p) = 1. The posterior probability is P1(¬q) = P0(¬q|¬p) = .94. That is, on this distribution, Con's DA argument is stronger than Pro's MP argument, in the sense of yielding a higher degree of belief in its conclusion.
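The two posteriors can be computed directly from the sanctuary distribution (a sketch; the variable names are ours):

```python
# The swan-sanctuary distribution from the text: P0(q|p) = .5, P0(p) = .1,
# P0(q) = .1, where p = "is a swan" and q = "is white".
p_q_given_p, p_p, p_q = 0.5, 0.1, 0.1

# P0(q|not-p) follows from the law of total probability:
# P0(q) = P0(q|p)P0(p) + P0(q|not-p)P0(not-p)
p_q_given_not_p = (p_q - p_q_given_p * p_p) / (1 - p_p)

# Modus ponens: conditionalizing on P1(p) = 1 gives P1(q) = P0(q|p).
mp_posterior = p_q_given_p              # 0.5

# Denying the antecedent: conditionalizing on P1(not-p) = 1 gives
# P1(not-q) = P0(not-q|not-p).
da_posterior = 1 - p_q_given_not_p

print(round(p_q_given_not_p, 3))  # ~0.056: white is nine times likelier for swans
print(round(da_posterior, 2))     # 0.94: DA yields the higher posterior
```
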

One may also avoid the fiction of Con (or Pro) asking his or her interlocutor to assume that the categorical premise is certain, P1(¬p) = 1. By using Jeffrey conditionalization—a generalization of Bayesian conditionalization to cases where the categorical premise is uncertain (see, e.g., Jeffrey, 2004)—one could just argue that the probability that the bird was a swan equals one's subjective degree of belief, that is, .1; then P1(¬q) = P0(¬q|¬p)P1(¬p) + P0(¬q|p)P1(p) = .94 × .9 + .5 × .1 = .896. This reformulation does not change things very much; that is, DA is still the stronger argument, and we should believe its conclusion more than the conclusion of the MP argument. So by developing an account of argument strength using subjective probability, we can generate a measure of how much the conclusion of the DA argument "should be believed." It remains for Con to persuade Pro that the distributions on which the former bases his argument map onto the way the world actually is.
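The Jeffrey calculation, using the values given in the text:

```python
# Jeffrey conditionalization for the DA argument when the categorical premise
# is uncertain: P1(not-p) = .9 rather than 1 (values from the text).
p_not_q_given_not_p = 0.94   # P0(not-q | not-p), from the swan distribution
p_not_q_given_p = 0.5        # P0(not-q | p) = 1 - P0(q|p)
p1_not_p = 0.9               # posterior belief that the bird was not a swan

# P1(not-q) = P0(not-q|not-p) P1(not-p) + P0(not-q|p) P1(p)
p1_not_q = p_not_q_given_not_p * p1_not_p + p_not_q_given_p * (1 - p1_not_p)
print(round(p1_not_q, 3))  # 0.896: DA is still stronger than MP's 0.5
```
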

We then confront an issue we have raised before (Hahn & Oaksford, 2007a ): What comes first, the assessment of the strength of the respective arguments given what Con believes, or the deployment of the DA argumentative strategy? It seems clear that deploying DA in this context is appropriate because Con believes it to be the stronger argument. The burden of proof of course then returns to Con to establish that the context is as Con believes it to be.

However, there are features of this context that suggest that one might rarely be justified in deploying DA in argument. First, ignoring the priors, Pro's MP argument is more forceful: it should lead to a greater change in people's degree of belief in the conclusion. This is because the bird is 9 times more likely to be white given that it is a swan, but only 1.88 times more likely not to be white given that it is not a swan. Oaksford and Hahn (2007) proposed the likelihood ratio as a possible prior-independent measure of argument force. Ignoring priors is a well-documented phenomenon in the psychology of judgment and decision making (Bar-Hillel, 1980; Kahneman & Tversky, 1973). Hence, it is understandable why, despite the low prior, Pro (even if he believed the prior to be low) might view MP as a "good" argument to put forward. Con's DA counterargument is in a sense a reminder that the priors do matter, and that in this case one should be more convinced that the bird is not white.
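The two likelihood ratios behind this contrast can be computed from the same sanctuary distribution (a sketch using the text's values):

```python
# Likelihood ratios as a prior-independent measure of argument force
# (Oaksford & Hahn, 2007), for the swan-sanctuary distribution.
p_q_given_p = 0.5
p_q_given_not_p = 0.05 / 0.9   # ~0.056, from P0(q) = .1 and P0(p) = .1

force_mp = p_q_given_p / p_q_given_not_p              # P(q|p) / P(q|not-p)
force_da = (1 - p_q_given_not_p) / (1 - p_q_given_p)  # P(not-q|not-p) / P(not-q|p)

print(round(force_mp, 1))  # 9.0: MP is the more forceful argument...
print(round(force_da, 2))  # ~1.9: ...even though DA yields the higher posterior
```
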

Second, however, one might question whether there are many circumstances in which the DA argument is justified because it is the stronger argument. One of the assertability conditions on the conditional, at least according to Adams (1998), is that P0(q|p) is high, certainly greater than .5. When this is the case, MP will usually be a stronger argument than DA, that is, P0(q|p) > P0(¬q|¬p). So, by introducing the conditional, Pro may be implicitly asserting that MP is stronger than DA. This situation would seem to warrant a different argumentative strategy on Con's part, that is, denying the conditional premise rather than the antecedent. Moreover, in the psychology of belief revision it has been found that, in response to a contradiction of the conclusion of an MP inference, people choose to revise their belief in the conditional rather than their belief in the categorical premise (Elio & Pelletier, 1997). In arguing that Pro—or Pro and Con's audience—should believe the opposite of the conclusion of the MP inference, Con is in a similar position and hence may normally choose to deny the conditional rather than the antecedent.

Deontic and Utility Conditionals

In the psychology of reasoning, it has been observed that conditional sentences frequently describe situations that have associated utilities, which may figure in various argumentative strategies and fallacies (Manktelow & Over, 1987 ). This observation was first made in the context of deontic reasoning (Cheng & Holyoak, 1985 ; Cosmides, 1989 ), that is, about what you should and should not do, using conditionals like,

If you are drinking beer, you must be over 18 years of age. (6)

An enforcer of such a deontic regulation will place a high utility on detecting cheaters, that is, people who are drinking beer but who are not over 18 years of age. Oaksford and Chater ( 1994 ) showed how people's behavior on the deontic version of Wason's selection task could be captured by assuming that people are attempting to maximize expected utility in their selection behavior. More recently, Perham and Oaksford ( 2005 ) showed how this calculation could be modified by emotions to explain effects when explicitly threatening words were used in the antecedent of a deontic conditional. Bonnefon ( 2009 ) has developed a classification scheme for utility conditionals where the positive or negative utility is associated with either the antecedent or consequent of a conditional, and there have been several papers investigating such effects (Evans, Neilens, Handley, & Over, 2008 ; Thompson, Evans, & Handley, 2005 ).

Evans et al. (2008) investigated a variety of conditionals expressing conditional tips, warnings, threats, and promises. For example, "If you go camping this weekend, then it will rain" is a clear warning not to go camping. The higher P(q|p) and the more negative the utility associated with the consequent, U(q), that is, rain, the more persuasive such a conditional warning is to the conclusion that action p should not be taken, ¬p, that is, you should not go camping. For warnings, Evans et al. (2008) argued that the decision about whether to perform the action p is based on the prima facie utility of the action itself, U(p), less the expected disutility of the action to which p could lead, P(q|p)U(q), that is,

U(p) + P(q|p)U(q), where U(q) is negative.

Hahn and Oaksford ( 2007a ) provided a very similar analysis of the slippery slope argument (SSA), which has often been regarded as an argumentative fallacy.
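This decision rule for warnings can be sketched as follows; the particular utility and probability values are our illustrative assumptions, not Evans et al.'s:

```python
# Sketch of the decision rule for conditional warnings: act on p only if its
# prima facie utility survives the expected disutility of the warned-of
# consequence q. All numbers are illustrative assumptions.
def expected_value_of_acting(u_p, p_q_given_p, u_q):
    """U(p) + P(q|p)U(q), with U(q) negative for a warning."""
    return u_p + p_q_given_p * u_q

# "If you go camping this weekend, then it will rain."
fun_of_camping = 5          # U(p): assumed positive utility of the trip
disutility_of_rain = -10    # U(q): assumed negative utility of rain

weak_warning = expected_value_of_acting(fun_of_camping, 0.2, disutility_of_rain)
strong_warning = expected_value_of_acting(fun_of_camping, 0.9, disutility_of_rain)

print(weak_warning)    # 3.0: still worth going
print(strong_warning)  # -4.0: the warning persuades you not to go
```

The same trade-off between P(q|p) and U(q) underlies the analysis of slippery slope arguments in the next section.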

Slippery Slope Arguments

SSAs are typically expressed in conditional form (see Corner, Hahn, & Oaksford, 2011):

If we allow gay marriage, then in the future people will want to marry their pets. (7)

If voluntary euthanasia is legalized, then in the future there will be more cases of "medical murder." (8)

If we accept voluntary ID cards in the UK, we will end up with compulsory ID cards in the future. (9)

These examples have all actually been put forward in the media by groups arguing that the antecedent actions of (7) to (9) should not be taken. As these examples show, SSAs can vary greatly in strength. Like conditional warnings, the conclusion people are invited to draw is ¬ p , for example, one should not allow gay marriage. (7) is weak because of the very low value of P ( q | p ), whatever we may think of the merits of interspecies marriage. (8) is stronger because this probability is higher but also because “medical murder” is clearly so undesirable, that is, U ( q ) is strongly negative. (9) is even stronger because P ( q | p ) seems very close to 1 and the consequent is highly undesirable (for some).

What differs between SSAs and warnings is that, whereas for warnings P ( q | p ) is assessed just by reference to prior world knowledge, for SSAs there seems to be an implied mechanism that leads to the consequent action from the antecedent action. This mechanism suggests that an act of categorizing an item a (gay couples) under a category F (can marry), that is, Fa , will lead to other items b (interspecies “couples”) also falling under the same category, Fb . Hahn and Oaksford ( 2007a ) proposed that such a “category boundary re-appraisal” mechanism may explain why people find slippery slope arguments so compelling.

Specifically, current theories of conceptual structure typically agree that encountering instances of a category at the category boundary should extend that boundary for subsequent classifications, and there is a wealth of empirical evidence to support this (see Rips et al., Chapter 11). In particular, there are numerous experimental demonstrations of so-called exemplar effects, that is, effects of exposure to particular instances and their consequences for subsequent classification behavior (e.g., Lamberts, 1995; Nosofsky, 1986, 1988a, 1988b). For example, observing that a dog that weighs 10 kg is considered underweight invites the conclusion that a dog that weighs 10.5 kg is also underweight. With only the information that a 5 kg dog is underweight and a 15 kg dog is overweight, however, one might not be so compelled to draw this conclusion. This is because of the similarity between 10 kg and 10.5 kg and their comparative dissimilarity with either 5 kg or 15 kg. Similarly, one may argue that (7) is a poor argument, and so P(q|p) is low, because of the dissimilarity between same-sex human relations and interspecies relations, and that hence there is little likelihood of slippage of the category "can marry" from one case to the other.
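The exemplar intuition can be sketched with a toy similarity model in the spirit of exemplar theories such as Nosofsky's generalized context model; the sensitivity parameter and the data are our illustrative assumptions, not the chapter's formalism:

```python
import math

# Toy exemplar model of category boundary re-appraisal. An item is classified
# by its summed similarity to stored exemplars of each category; all values
# here are illustrative assumptions.
def similarity(x, y, sensitivity=1.0):
    """Exponential similarity between two items on a single dimension (kg)."""
    return math.exp(-sensitivity * abs(x - y))

def p_underweight(x, underweight_exemplars, overweight_exemplars):
    """Probability of classifying weight x as 'underweight'."""
    s_under = sum(similarity(x, e) for e in underweight_exemplars)
    s_over = sum(similarity(x, e) for e in overweight_exemplars)
    return s_under / (s_under + s_over)

# Knowing that a 10 kg dog counts as underweight makes 10.5 kg compelling too...
with_boundary_case = p_underweight(10.5, [5, 10], [15])
# ...whereas with only the distant 5 kg and 15 kg exemplars, it is not.
without_boundary_case = p_underweight(10.5, [5], [15])

print(round(with_boundary_case, 2))     # ~0.98
print(round(without_boundary_case, 2))  # ~0.27
```

The boundary exemplar at 10 kg pulls the 10.5 kg case into the category, which is exactly the "slippage" that SSAs trade on.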

Corner, Hahn, and Oaksford ( 2011 ) have shown that people's confidence in judging that various acts fall under a certain category is directly related to their willingness to endorse a corresponding SSA, and that this relationship is moderated by the similarity between the acts. For example, participants who are told that assault in possession of a knife has been categorized as having a tariff of less than 20 years imprisonment may confidently decide that assault in possession of a gun would also be given the same tariff. These same participants also endorse the slippery slope argument to the conclusion that assault in possession of a knife should be given a tariff of greater than 20 years because giving it less than 20 years may lead to assault in possession of a gun also being given the lower tariff. In this condition in Corner et al.'s ( 2011 ) Experiment 3, decision confidence in the categorization judgment and SSA endorsement were substantially correlated, r (25) = .47. When assault without a weapon is substituted for assault in possession of a knife , that is, an offense less close to assault in possession of a gun in similarity space, decision confidence and SSA endorsement rates become less correlated. In this dissimilar condition in Corner et al.'s ( 2011 ) Experiment 3, the correlation between decision confidence in the categorization judgment and SSA endorsement fell dramatically, r (24) = .04. The moderating effect of similarity was confirmed in a moderated regression analysis (Aguinis, 2004 ).

In sum, current work on conditionals and associated arguments shows that a Bayesian account of argument strength with associated utilities (SSAs) and without them (denying the antecedent) provides a rational understanding of how these apparent fallacies of reasoning and argumentation function. The assessment of the conditional probability, P(q|p), is central to assessing argument strength in both cases. For conditional inferences and fallacies, this is a reflection of world knowledge and can be assessed using the generic Ramsey Test for conditionals (Edgington, 1995; Ramsey, 1931). The Ramsey Test runs as follows: Add the antecedent, p, to your stock of beliefs, make adjustments to accommodate this belief, and read off your degree of belief in the consequent, q; this is then P(q|p). However, the Ramsey Test is a philosophical prescription crying out for a psychological, algorithmic-level explanation (Oaksford & Chater, 2007, 2010b, c). Constraint satisfaction processes in neural networks provide one possible implementation (Oaksford & Chater, 2010b). We have argued that for many SSAs, P(q|p) is determined via category boundary reappraisal processes, which may represent a further algorithmic instantiation of the Ramsey Test (Corner, Hahn, & Oaksford, 2011; Hahn & Oaksford, 2007a).

Circular Arguments

The best-known fallacy within the catalog is circularity or "begging the question." It is also the fallacy that has attracted the most attention and that, arguably, generates the most confusion. Different types of circular argument have been distinguished. One, often termed equivalency circularity (see, e.g., Walton, 2005), takes the form "A, therefore A." This argument type has been theoretically puzzling to philosophers because it is logically valid, and thus in some ways "good," but it is unlikely to occur very often in practice. A second type is so-called dependency circularity, of which one example is so-called self-dependent justification, as in "God exists because the Bible says so and the Bible is the word of God." Here the evidence presupposes, and in that sense depends on, the very conclusion it is seeking to support; specifically, the Bible cannot be the word of God if God does not exist, which of course is the claim at stake. Though this example will strike most as a rather weak argument, it has been pointed out that many scientific inferences are also self-dependent in that they involve theory-laden observations, and that this self-dependence does not necessarily rule them out as legitimate arguments (for actual scientific examples, see Brown, 1993, 1994). This is revealed by a probabilistic analysis (e.g., Hahn & Oaksford, 2007a; Shogenji, 2000), which makes clear that self-dependent arguments as found frequently in science can be acceptable, because the conclusion can be held tentatively, and evidence diagnostic of that conclusion will, when actually observed, increase the posterior degree of belief we have in that conclusion (even though the conclusion itself is involved in the interpretation of the evidence).

There have been a number of psychological experiments examining people's awareness of circularity and their ability to distinguish stronger from weaker circular arguments (e.g., Hahn & Oaksford, 2007a ; Rips, 2002 ). While such studies have generally found that people perform competently, there is clearly a limit here. Psychological researchers themselves frequently accuse others of circularity, and closer inspection suggests that this charge is often overused (Hahn, 2011 ). Moreover, the distinctions between different types of circular arguments are not always appreciated.

In fact, circular arguments provide a key demonstration of how useful and important the formal tools of probability theory are. Bayesian prescriptions often seem intuitively obvious, and there is a view that the theory of probabilities is "at bottom nothing but common sense reduced to calculus" (Laplace, 1814/1951). However, there are cases where our everyday intuitions break down. Circularity is a case in point. To provide one more example, we might find it intuitive that a sequence or chain of supporting evidence cannot be circular. This is to say that we might find it intuitive that a chain of evidence E0, E1, …, En, where E1 supports E0, E2 supports E1, and so on (in the familiar sense of support, P(E0|E1) > P(E0|¬E1)), cannot ultimately loop back on itself and have En supported only by E0, that is, by the very piece of evidence for which support was being sought. However, Atkinson and Peijnenburg (2010) show mathematically that this is entirely possible, for both finite and infinite loops, and that the presence of such loops does not preclude the probability P(E0) from being well defined. To illustrate with one of their examples:

A: Peter read parts of the philosopher Kant's magnum opus, the "Critique of Pure Reason."

B: Peter is himself a philosopher.

C: Peter knows that Kant defended the synthetic a priori.

Here it is both possible and plausible that A is supported (i.e., made more probable) by B, which in turn is supported by C, which itself is supported by A. Although one might intuitively suspect that the circularity here means that the (unconditional) probability assigned to A must always remain "up in the air," this is mathematically not so: given an appropriate assignment of conditional probabilities (P(A|B), P(B|C), P(C|A)), a definite value for P(A) ensues.
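This can be illustrated numerically. With assumed conditional probabilities for the three-step loop (the particular values below are ours, not Atkinson and Peijnenburg's), iterating the law of total probability around the loop converges to a unique, well-defined P(A) from any starting value:

```python
# Sketch of a finite support loop A <- B <- C <- A with a well-defined P(A).
# The conditional probabilities are illustrative assumptions; each pair
# satisfies the support relation P(X|Y) > P(X|not-Y).
p_a_given_b, p_a_given_not_b = 0.9, 0.2   # B supports A
p_b_given_c, p_b_given_not_c = 0.8, 0.3   # C supports B
p_c_given_a, p_c_given_not_a = 0.7, 0.1   # A supports C

def step(p_a):
    """One pass around the loop via the law of total probability."""
    p_c = p_c_given_a * p_a + p_c_given_not_a * (1 - p_a)
    p_b = p_b_given_c * p_c + p_b_given_not_c * (1 - p_c)
    return p_a_given_b * p_b + p_a_given_not_b * (1 - p_b)

# The composed map is a contraction, so iteration converges to the unique
# fixed point regardless of the starting value.
p_a = 0.5
for _ in range(100):
    p_a = step(p_a)
print(round(p_a, 3))  # 0.563: a definite P(A), despite the circularity
```
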

The fallacies discussed so far are arguably the most prominent in the philosophical literature, but as noted earlier, they by no means exhaust the catalog of fallacies. The theoretical issues that the remaining fallacies raise parallel those discussed so far (see Hahn & Oaksford, 2006a ), but the majority have not yet been the subject of psychological research. Moreover, argumentation theorists have identified many further, nonfallacious argumentation schemes (see Walton, Reed, & Macagno, 2008 ), which have not been examined by psychologists. Reasoning research within psychology has gradually started to move beyond the very narrow set of (deductive) paradigms that have dominated it for the last decades, and this wealth of different types of argument should provide both fertile and practically important ground.

Argumentation Applied

The practical importance of argumentation to our everyday lives is reflected in the fact that there has also been a wealth of psychological research on argumentation in applied contexts, and this body of research, if anything, exceeds that of the fundamental research we have described so far. Research into the development of children's argument skills is frequently closely tied to the educational goal of fostering and improving such skills (e.g., Anderson, Chinn, Chang, Waggoner, & Yi, 1997 ; Brem & Rips, 2000 ; Genishi & DiPaolo, 1982 ; Glassner, Weinstock, & Neuman, 2005 ; Kuhn, 1989 , 1991 , 2001 ; Means & Voss, 1996 ).

A considerable amount of this research has been concerned specifically with science (Kuhn, 1993 ). Much of it has been focused on tracking the development and quality of scientific reasoning (i.e., the use of hypotheses and evidence) in children (see, e.g., Klaczynski, 2000 ; Kuhn, Cheney & Weinstock, 2000 ; Kuhn & Udell, 2003 ; also Dunbar & Klahr, Chapter 35 ) and has been linked to educational policy and programs designed to address deficits in scientific literacy (for reviews, see, e.g., Norris & Phillips, 1999 ; Sadler, 2004 ; see also Koedinger & Roll, Chapter 40 ).

A popular strategy in analyzing student use and recognition of different types of scientific argument has been to apply Toulmin's (1958) model. The ability to use (and recognize in other people) different aspects of argumentation such as data or warrants is used as an indicator of comprehension and rhetorical competence (e.g., Driver, Newton, & Osborne, 2000; Erduran, Simon, & Osborne, 2004; Jiminez-Aleixandre, 2002; Jiminez-Aleixandre, Rodriguez, & Duschl, 2000; Kortland, 1996; von Aufschaiter, Erduran, Osborne, & Simon, 2008). However, the limitations of this scheme discussed earlier have made themselves felt. In the words of Driver et al. (2000), "Toulmin's analysis … is limited as, although it can be used to assess the structure of arguments, it does not lead to judgments about their correctness …" (p. 294). Toulmin's model is a powerful tool for specifying the component parts of arguments and classifying them accordingly. It deals only, however, with the structure of different arguments, not their content.

Some researchers have developed their own, typically qualitative, evaluation schemes (e.g., Korpan, Bisanz, Bisanz, & Henderson, 1997; Kuhn, Shaw, & Felton, 1997; Norris, Phillips, & Korpan, 2003; Patronis, Potari, & Spiliotopolou, 1999; Ratcliffe, 1999; Takao & Kelly, 2003), but these schemes do not necessarily extend in applicability even to other studies.

In short, norms for the evaluation of argument content have been lacking in this area. In fact, the Bayesian framework seems particularly suited to this context, as it has been so influential in the philosophy of science, where it has been used to explain the ways in which scientists construct, test, and eliminate hypotheses, design experiments, and statistically analyze data (e.g., Bovens & Hartmann, 2003; Earman, 1992; Howson & Urbach, 1993). Corner and Hahn (2009) also demonstrate the utility of Bayesian probability as a heuristic framework for psychological research in this area. Specifically, they sought to examine whether there were systematic differences in the way participants evaluated arguments with scientific content (e.g., about genetically modified foods or climate change) and arguments about mundane, everyday topics (e.g., the availability of tickets for a concert). Although these arguments differ radically in content, they can be compared to each other, because both can be compared to Bayesian prescriptions. Are people's evaluations more normative in one of these areas, and if so, where do the systematic deviations and discrepancies reside? Once again, probability theory provides a tool for fine-grained analysis.

Finally, there has been considerable interest in argument and evidence within the courtroom (see Spellman & Schauer, Chapter 36 ). This is part of extensive psychological research into jury decision making involving mock juries. This research has often focused on the role of narratives or “stories” in legal argument. A story, in this sense, is a series of interrelated episodes that involves initiating events, goals, actions, consequences, and accompanying states (typically) in the order in which they occurred (e.g., Pennington & Hastie, 1981 ). Experimental research with mock juries has sought support for the claim that evidence is most naturally organized within such stories, and when presented in this way, is more memorable and has greater persuasive effects (see e.g., Pennington & Hastie, 1981 , 1986 , 1988 , 1992 ; Voss & van Dyke, 2001 ).

A narrative structure is, however, not the only way in which legal arguments are made (see, e.g., Schum, 1993), and, in terms of persuasive success, it is not necessarily always the most effective (see Spiecker & Worthington, 2003). Other factors that have been examined are, for example, the impact of the comprehensiveness of opening statements by prosecution and defense and the extent to which these statements foreshadow subsequent arguments (Pyszczynski & Wrightsman, 1981; see also Pyszczynski, Greenberg, Mack, & Wrightsman, 1981). Many have examined issues surrounding possible bias through prior beliefs (e.g., Giner-Sorolla, Chaiken, & Lutz, 2002; on biased assimilation more generally, see Lord & Taylor, 2009; see also Molden & Higgins, Chapter 20), and there has been research into how argument evaluation and overall decisions interact (Carlson & Russo, 2001; Holyoak & Simon, 1999; Simon, Pham, Le, & Holyoak, 2001). Finally, there have been many studies concerned with different aspects of testimony (see, e.g., Eaton, Ball, & O'Callaghan, 2001; Ivkovic & Hans, 2003; McQuiston-Surrett & Saks, 2009; Skolnick & Shaw, 2001; Weinstock & Flaton, 2004).

Normative concerns are integral to the legal process and the study of argument evaluation within law. However, law has historically tended to mistrust Bayesian probability and its appropriateness for capturing uncertainty in law (see, e.g., Tillers & Green, 1988), and psychological research evaluating testimony relative to Bayesian prescriptions has focused on quantitative, statistical evidence (in particular, probability estimates associated with biological profile evidence such as blood typing, hair fibers, and DNA; for a review, see, e.g., Kaye & Koehler, 1991). Such studies reveal difficulties for many (mock) jurors in dealing with explicit numerical information. Beyond this, the dominant experimental finding here has been that participants underweight the evidence relative to Bayes' theorem. This finding is consistent with a long line of studies within social psychology examining quantitative aspects of belief revision (e.g., Edwards, 1968; Fischhoff & Beyth-Marom, 1983; but see also Corner, Harris, & Hahn, 2010; Erev, Wallsten, & Budescu, 1994). One contributing factor to this apparent deficit could be that participants treat the expert sources providing the evidence as less reliable than the experimenters assume. Schklar and Diamond (1999) specifically examined this possibility and found that basing calculations of "normative" posteriors on participants' own assessments significantly decreased the gap between normative and descriptive responses, although it did not remove it entirely.

The issue of testimony, of course, extends beyond just the courtroom. It can be argued that the majority of our knowledge depends, in one form or other, on the testimony of others (e.g., Spellman & Tenney, 2010 ). Hence, it is fundamental to argument evaluation that not just the content of an argument but also its source be taken into account. In general, the same evidence presented by a less reliable source should lead to a smaller change in degree of belief; moreover, characteristics of the argument or evidence itself and of the reporting source should interact multiplicatively from a normative, Bayesian perspective (see, e.g., Schum, 1981 ). Initial studies involving simple day-to-day arguments find support for the position that participants have some basic sensitivity to this general relationship (Hahn, Harris, & Corner, 2009 ; see that paper also for discussion of social psychological research on this issue).
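The normative claim here, that the same report should shift belief less when its source is less reliable, can be illustrated with a simple mixture model of source reliability, a common simplification in Bayesian treatments of testimony. The model and all parameter values below are illustrative assumptions, not taken from the studies cited:

```python
# Illustrative sketch (hypothetical parameters): a source is a truth-teller
# with probability r, and otherwise a randomizer who asserts the claim with
# probability a. A positive report from a less reliable source (lower r)
# then produces a smaller change in degree of belief.

def update_on_report(prior, r, a=0.5):
    """Posterior P(H | source asserts H) under the reliability mixture model."""
    p_report_if_true = r + (1 - r) * a   # truthful report, or a lucky guess
    p_report_if_false = (1 - r) * a      # only an unreliable guess
    num = prior * p_report_if_true
    return num / (num + (1 - prior) * p_report_if_false)

prior = 0.2
for r in (0.9, 0.5, 0.1):
    print(f"reliability {r:.1f} -> posterior {update_on_report(prior, r):.3f}")
```

Note the multiplicative interaction: the diagnosticity of the report's content only reaches the posterior filtered through the reliability parameter, and a fully unreliable source (r = 0) leaves the prior untouched.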

That the utility of the Bayesian framework extends beyond overtly statistical contexts is also illustrated by recent work on the notion of coherence, that is, the extent to which multiple testimonies “fit together.” Coherence is a central notion within the story model mentioned earlier, and recent work within Bayesian epistemology by the philosophers Stephan Hartmann and Luc Bovens has sought to understand how, when, and why informational coherence impacts belief change (Bovens & Hartmann, 2003 ). A recent experimental investigation found that participants' judgments (regarding witness reports to the police on the putative location of a body) corresponded closely to Bayesian predictions, and clearly went beyond a simple averaging strategy (Harris & Hahn, 2009 ).
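A minimal sketch shows why Bayesian updating on agreeing reports goes beyond simple averaging: for witnesses who are conditionally independent given the facts, likelihoods multiply, so two concurring but individually weak reports can yield a higher posterior than either report alone. The hit and false-alarm rates below are hypothetical, not values from the experiments discussed:

```python
# Illustrative sketch (hypothetical rates): two agreeing, partially
# reliable witnesses jointly support a claim more strongly than either
# alone -- an effect no averaging strategy can reproduce.

def posterior(prior, likelihood_true, likelihood_false):
    """Bayes' theorem for a report (or conjunction of reports)."""
    num = prior * likelihood_true
    return num / (num + (1 - prior) * likelihood_false)

prior = 0.3
p_report_if_true = 0.8    # witness asserts the claim when it is true
p_report_if_false = 0.3   # witness asserts the claim when it is false

one = posterior(prior, p_report_if_true, p_report_if_false)
# Conditionally independent witnesses: their likelihoods multiply.
two = posterior(prior, p_report_if_true ** 2, p_report_if_false ** 2)

print(f"one witness:          {one:.3f}")
print(f"two agreeing witnesses: {two:.3f}")
```

The two-witness posterior exceeds the one-witness posterior, whereas any average of the single-report posteriors could not.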

Finally, as was argued earlier in the context of research on science education and communication, the Bayesian framework can provide an invaluable methodological tool. Many of the questions that have been asked about testimony, such as whether it is more or less convincing than physical evidence (e.g., Skolnick & Shaw, 2001), or whether expert witnesses are more convincing when they provide qualitative or quantitative evidence (e.g., McQuiston-Surrett & Saks, 2009; for research on qualitative versus quantitative arguments more generally, see also the reviews of Hornikx, 2005, 2007, and the studies of Hoeken & Hustinx, 2009), can only really be addressed if the specific content of the evidence is controlled for. It is in the nature of these particular research questions that it is typically not possible to exactly equate the content and vary only the dimension of interest, thus leaving results inherently confounded. However, the Bayesian framework can contribute to such research because it allows such content-specific variation to be factored out (Corner & Hahn, 2009). Priors, posteriors, and likelihoods provide a common currency within which different types of arguments can be contrasted in search of presentation- or domain-specific systematic differences.

Conclusions and Future Directions

Rather trivially, reviews of research literatures often end with calls for more research. However, in the case of argumentation, empirical research has, in many ways, genuinely only just commenced. The range of material discussed in this chapter has, to our knowledge, not previously been brought together, and there are undoubtedly aspects in which this synthesis remains incomplete. Procedural and epistemic norms for argument evaluation can now be combined in order to develop a unified, comprehensive treatment that was not previously possible. As outlined in the Introduction, argumentation is an inherently interdisciplinary field of enormous theoretical and practical interest. Psychology has a central role to play in this. Decades of social psychological research have found that argument quality seems to be the most influential factor in persuasion, but persuasion research has lacked the theoretical tools to address the question of what makes an argument “good.” This issue has been identified time and again as the most serious gap in persuasion research (see, e.g., Fishbein & Ajzen, 1981, 2009; Johnson, Maio, & Smith-McLallen, 2005; O'Keefe, 1995). As detailed in this chapter, norms for argument quality exist; it remains to be explored in detail how sensitive people are to them.

Acknowledgments

Many thanks go to Adam Corner, Adam Harris, and Nick Chater for their role in helping shape our views on argument. Many thanks also to Jos Hornikx, Hans Hoeken, Lance Rips, and Keith Holyoak for helpful comments on earlier versions of the chapter.

It should be noted that Toulmin's system is not the only, or even oldest, system like this. One of the most influential systems for representing dependencies between arguments is the Wigmore chart (for an introduction to this and other systems for displaying argument structure, see, e.g., Schum, 1994 ).

It is worth noting that there have always been connections between legal argument and other types of argumentation, in particular within the rhetorical tradition, for example, in Whately ( 1828 ), but Toulmin's work gave considerable impetus to the view that legal argument constitutes a role model for rational debate more generally.

It is not enough for parties in a debate to simply agree upon a procedure or criteria for evaluation in order for the debate to qualify as rational (but see van Eemeren & Grootendorst, 2004 , p. 151): basing argument evaluation on the reading of entrails, position of the stars, or drawing of lots, for example, would seem incompatible with rational debate even if both parties approved of these methods.

In its emphasis on probabilities, the Bayesian approach to argumentation has close links to early work on attitude change involving the subjective probability model, or “probabilogical model,” of cognitive consistency (McGuire, 1960a, 1960b, 1960c; Wyer, 1970; Wyer & Goldberg, 1970), which has also had some application in argumentation studies (e.g., Allen, Burrell, & Egan, 2000; Hample, 1977). The probabilogical model draws on the law of total probability to relate experimentally induced changes in one or more propositions to attendant changes in another (typically logically) related proposition (see also Eagly & Chaiken, 1993, Ch. 5, for an introduction).

Nor are any of the many nonclassical logics described in Prakken and Vreeswijk ( 2002 ) helpful here. Though often aimed specifically at dealing with uncertainty, the support and consequence relationships they specify are, like Toulmin's system, too undifferentiated to distinguish the four cases. In addition, desirable core properties of classical logic are typically lost.

Adams, E. W. ( 1998 ). A primer of probability logic . Stanford, CA: CLSI Publications.

Aguinis, H. ( 2004 ). Regression analysis for categorical moderators . New York: Guilford Press.

Allen, M. ( 1991 ). Meta-analysis comparing the persuasiveness of one-sided and two-sided messages.   Western Journal of Speech Communication , 55, 390–404.

Allen, M., Burrell, N., & Egan, T. ( 2000 ) Effects with multiple causes: evaluating arguments using the subjective probability model.   Argumentation and Advocacy , 37, 109–116.

Anderson, R. C., Chinn, C., Chang, J., Waggoner, M., & Yi, H. ( 1997 ). On the logical integrity of children's arguments.   Cognition and Instruction , 15, 135–167.

Aristotle. ( 2004 ). On sophistical refutations (W. A. Pickard-Cambridge, Trans.). Whitefish, MT: Kessinger Publishing Co.

Atkinson, D., & Peijnenburg, J. ( 2010 ). Justification by infinite loops.   Notre Dame Journal of Formal Logic , 51, 407–416.

von Aufschnaiter, C., Erduran, S., Osborne, J., & Simon, S. ( 2008 ) Arguing to learn and learning to argue: Case studies of how students' argumentation relates to their scientific knowledge.   Journal of Research in Science Teaching , 45, 101–131.

Bailenson, J. ( 2001 ). Contrast ratio: Shifting burden of proof in informal arguments.   Discourse Processes , 32, 29–41.

Bailenson, J. N., & Rips, L. J. ( 1996 ). Informal reasoning and burden of proof.   Applied Cognitive Psychology , 10, S3-S16.

Bar-Hillel, M. ( 1980 ). The base-rate fallacy in probability judgments.   Acta Psychologica , 44, 211–233.

Bench-Capon, T. M., & Dunne, P. E. ( 2007 ). Argumentation in artificial intelligence.   Artificial Intelligence , 171, 619–641.

Boger, G. ( 2005 ). Subordinating truth – is acceptability acceptable?   Argumentation , 19, 187–238.

Bonnefon, J. F. ( 2009 ). A theory of utility conditionals: Paralogical reasoning from decision theoretic leakage.   Psychological Review , 116, 888–907.

Bovens, L., & Hartmann, S. ( 2003 ). Bayesian epistemology . Oxford, England: Oxford University Press.

Brem, S., & Rips, L. ( 2000 ). Explanation and evidence in informal argument.   Cognitive Science , 24, 573–604.

Brown, H. ( 1993 ). A theory-laden observation can test a theory.   British Journal for the Philosophy of Science , 44, 555–559.

Brown, H. ( 1994 ). Circular justifications.   PSA , 1, 406–414.

Carlson, K. A., & Russo, J. E. ( 2001 ). Biased interpretation of evidence by mock jurors.   Journal of Experimental Psychology: Applied , 7, 91–103.

Cheng, P. W., & Holyoak, K. J. ( 1985 ). Pragmatic reasoning schemas.   Cognitive Psychology , 17, 391–416.

Christmann, U., Mischo, C., & Flender, J. ( 2000 ). Argumentational integrity: A training program for dealing with unfair argumentational contributions.   Argumentation , 14, 339–360.

Christmann, U., Mischo, C., & Groeben, N. ( 2000 ). Components of the evaluation of integrity violations in argumentative discussions: Relevant factors and their relationships.   Journal of Language and Social Psychology , 19, 315–341.

Clark, H. H. ( 1996 ). Using language . New York: Cambridge University Press.

Corner, A. J., & Hahn, U. ( 2009 ). Evaluating science arguments: Evidence, uncertainty & argument strength.   Journal of Experimental Psychology: Applied , 15, 199–212.

Corner, A., Hahn, U., & Oaksford, M. ( 2011 ). The psychological mechanism of the slippery slope argument.   Journal of Memory and Language , 64, 133–152.

Corner, A. J., Harris, A. J. L., & Hahn, U. ( 2010 ). Conservatism in belief revision and participant skepticism. In S. Ohlsson & R. Catrambone (Eds.), Proceedings of the 32nd Annual Conference of the Cognitive Science Society (pp. 1625–1630). Austin, TX: Cognitive Science Society.

Cosmides, L. ( 1989 ). The logic of social exchange: Has natural selection shaped how humans reason? Studies with the Wason selection task.   Cognition , 31, 187–276.

Davis, D., & Holtgraves, T. ( 1984 ). Perceptions of unresponsive others: Attributions, attraction, understandability and memory of their utterances.   Journal of Experimental Social Psychology , 20, 383–408.

Driver, R., Newton, P., & Osborne, J. ( 2000 ). Establishing the norms of scientific argumentation in classrooms.   Science Education , 84, 287–312.

Dung, P. M. ( 1995 ). On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games.   Artificial Intelligence , 77, 321–357.

Eagly, A. H., & Chaiken, S. ( 1993 ). The psychology of attitudes . Belmont, CA: Thompson/Wadsworth.

Earman, J. ( 1992 ). Bayes or bust ? Cambridge, MA: MIT Press.

Eaton, T. E., Ball, P., & O'Callaghan, M. G. ( 2001 ). Child-witness and defendant credibility: Child evidence presentation mode and judicial instructions.   Journal of Applied Social Psychology , 31, 1845–1858.

Edgington, D. ( 1995 ). On conditionals.   Mind , 104, 235–329.

Edwards, W. ( 1968 ). Conservatism in human information processing. In B. Kleinmuntz (Ed.), Formal representation of human judgment (pp. 17–52). New York: Wiley.

Elio, R., & Pelletier, F. J. ( 1997 ). Belief change as propositional update.   Cognitive Science , 21, 419–460.

Engel, S. M. ( 1986 ). Explaining equivocation.   Metaphilosophy , 17, 192–199.

Erev, I., Wallsten, T. S., & Budescu, D. V. ( 1994 ). Simultaneous over and under confidence: The role of error in judgement processes.   Psychological Review , 101, 519–527.

Erduran, S., Simon, S., & Osborne, J. ( 2004 ). TAPping into argumentation: Developments in the application of Toulmin's argument pattern for studying science discourse.   Science Education , 88, 915–933.

Evans, J. St. B. T., & Over, D. E. ( 2004 ). If . Oxford, England: Oxford University Press.

Evans, J. St. B. T., Neilens, H., Handley, S., & Over, D. ( 2008 ). When can we say ‘if’?   Cognition , 108, 100–116.

Fishbein, M., & Ajzen, I. ( 1981 ). Acceptance, yielding, and impact: Cognitive processes in persuasion. In R. Petty, T. Ostrom, & T. Brock (Eds.), Cognitive responses in persuasion (pp. 339–359). Hillsdale, NJ: Erlbaum.

Fishbein, M., & Ajzen, I. ( 2009 ). Predicting and changing behavior: The reasoned action approach . New York: Taylor & Francis.

Fischoff, B., & Beyth-Marom, R. ( 1983 ). Hypothesis evaluation from a Bayesian perspective.   Psychological Review , 90, 239–260.

Fox, J., Glasspool, D., Grecu, D., Modgil, S., South, M., & Patkar, V. ( 2007 ). Argumentation-based inference and decision-making.   IEEE Intelligent Systems , 22, 34–41.

Genishi, C., & DiPaolo, M. ( 1982 ). Learning through argument in a preschool. In L. C. Wilkinson (Ed.), Communicating in the classroom (pp. 49–68). New York: Academic Press.

Giner-Sorolla, R., Chaiken, S., & Lutz, S. ( 2002 ). Validity beliefs and general ideology can influence legal case judgments differently.   Law and Human Behavior , 26, 507–526.

Glassner, A., Weinstock, M., & Neuman, Y. ( 2005 ). Pupils' evaluation and generation of evidence and explanation in argumentation.   British Journal of Educational Psychology , 75, 105–118.

Godden, D. M., & Walton, D. ( 2004 ). Denying the antecedent as a legitimate argumentative strategy: A dialectical model.   Informal Logic , 24, 219–243.

Goldman, A. I. ( 1994 ) Argumentation and social epistemology.   The Journal of Philosophy , 91, 27–49.

Hahn, U. ( 2011 ). The problem of circularity in evidence, argument and explanation.   Perspectives on Psychological Science , 6, 172–182.

Hahn, U., Harris, A. J. L., & Corner, A. J. ( 2009 ). Argument content and argument source: An exploration.   Informal Logic , 29, 337–367.

Hahn, U., & Oaksford, M. ( 2006 a). A Bayesian approach to informal argument fallacies.   Synthese , 152, 207–236.

Hahn, U., & Oaksford, M. ( 2006 b). Why a normative theory of argument strength and why might one want it to be Bayesian?   Informal Logic , 26, 1–24.

Hahn, U., & Oaksford, M. ( 2007 a). The rationality of informal argumentation: A Bayesian approach to reasoning fallacies.   Psychological Review , 114, 704–732.

Hahn, U., & Oaksford, M. ( 2007 b). The burden of proof and its role in argumentation.   Argumentation , 21, 39–61.

Hahn, U., & Oaksford, M. ( 2008 ) Inference from absence in language and thought. In N. Chater & M. Oaksford (Eds.), The probabilistic mind (pp. 121–142). New York: Oxford University Press.

Hahn, U., Oaksford, M., & Bayindir, H. ( 2005 ). How convinced should we be by negative evidence? In B. Bara, L. Barsalou, & M. Bucciarelli (Eds.), Proceedings of the 27th Annual Conference of the Cognitive Science Society (pp. 887–892). Mahwah, NJ: Erlbaum.

Hamblin, C. L. ( 1970 ). Fallacies . London: Methuen.

Hample, D. ( 1977 ). Testing a model of value argument and evidence.   Communication Monographs , 44, 106–120.

Harris, A. J. L., & Hahn, U. ( 2009 ) Bayesian rationality in evaluating multiple testimonies: Incorporating the role of coherence.   Journal of Experimental Psychology: Learning, Memory and Cognition , 35, 1366–1373.

Harris, A. J., Corner, A. J., & Hahn, U. ( 2009 ) “Damned by faint praise”: A Bayesian account. In N. A. Taatgen & H. van Rijn (Eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society (pp. 292–297). Austin, TX: Cognitive Science Society.

Hardman, D. ( 2009 ). Judgement and decision making . London: John Wiley & Sons.

Heysse, T. ( 1997 ). Why logic doesn't matter in the (philosophical) study of argumentation.   Argumentation , 11, 211–224.

Hoeken, H., & Hustinx, L. ( 2009 ). When is statistical evidence superior to anecdotal evidence in supporting probability claims?   Human Communication Research , 39, 491–510.

Holyoak, K. J., & Simon, D. ( 1999 ). Bidirectional reasoning in decision making by constraint satisfaction.   Journal of Experimental Psychology: General , 128, 3–31.

Hornikx, J. ( 2005 ). A review of experimental research on the relative persuasiveness of anecdotal, statistical, causal, and expert evidence.   Studies in Communication Sciences , 5, 205–216.

Hornikx, J. ( 2007 ). Is anecdotal evidence more persuasive than statistical evidence? A comment on classic cognitive psychological studies.   Studies in Communication Sciences , 7, 151–164.

Howson, C., & Urbach, P. ( 1993 ) Scientific reasoning: The Bayesian approach . La Salle, IL: Open Court.

Ivkovic, S. K., & Hans, V. P. ( 2003 ). Jurors' evaluations of expert testimony: Judging the messenger and the message.   Law and Social Inquiry , 28, 441–482.

Jeffrey, R. ( 2004 ). Subjective probability: The real thing . Cambridge, England: Cambridge University Press.

Jimenez-Aleixandre, M. P. ( 2002 ). Knowledge producers or knowledge consumers? Argumentation and decision making about environmental management.   International Journal of Science Education , 24, 1171–1190.

Jimenez-Aleixandre, M. P., Rodriguez, A. B., & Duschl, R. A. ( 2000 ). “Doing the lesson” or “doing science”: Argument in High School genetics.   Science Education , 84, 757–792.

Johnson, R. H. ( 2000 ). Manifest rationality: A pragmatic theory of argument . Mahwah, NJ: Erlbaum.

Johnson, B. T., Maio, G. R., & Smith-McLallen, A. ( 2005 ). Communication and attitude change: Causes, processes, and effects. In D. Albarracín, B. T. Johnson, & M. P. Zanna (Eds.), The handbook of attitudes (pp. 617–669). Mahwah, NJ: Erlbaum.

Kahneman, D., & Tversky, A. ( 1973 ). On the psychology of prediction.   Psychological Review , 80, 237–257.

Kaye, D. H., & Koehler, D. J. ( 1991 ). Can jurors understand probabilistic evidence?   Journal of the Royal Statistical Society A , 154, 75–81.

Kirwan, C. ( 1979 ) Aristotle and the so-called fallacy of equivocation.   Philosophical Quarterly , 29, 33–46.

Klaczynski, P. ( 2000 ). Motivated scientific reasoning biases, epistemological biases, and theory polarisation: A two process approach to adolescent cognition.   Child Development , 71, 1347–1366.

Korb, K. ( 2004 ). Bayesian informal logic and fallacy.   Informal Logic , 23, 41–70.

Korpan, C. A., Bisanz, G. L., Bisanz, J., & Henderson, J. M. ( 1997 ). Assessing literacy in science: Evaluation of scientific news briefs.   Science Education , 81, 515–532.

Kortland, K. ( 1996 ). An STS case study about students' decision making on the waste issue.   Science Education , 80, 673–689.

Kuhn, D. ( 1989 ). Children and adults as intuitive scientists.   Psychological Review , 96, 674–689.

Kuhn, D. ( 1991 ). The skills of argument . Cambridge, England: Cambridge University Press.

Kuhn, D. ( 1993 ). Science as argument: Implications for teaching and learning scientific thinking.   Science Education , 77, 319–337.

Kuhn, D. ( 2001 ). How do people know?   Psychological Science , 12, 1–8.

Kuhn, D., & Udell, W. ( 2003 ). The development of argument skills.   Child Development , 74, 1245–1260.

Kuhn, D., Cheney, R., & Weinstock, M. ( 2000 ). The development of epistemological understanding.   Cognitive Development , 15, 309–328.

Kuhn, D., Shaw, V., & Felton, M. ( 1997 ). Effects of dyadic interaction on argumentative reasoning.   Cognition and Instruction , 15, 287–315.

Lamberts, K. ( 1995 ). Categorization under time pressure.   Journal of Experimental Psychology: General , 124, 161–180.

Laplace, P. S. ( 1951 ). A philosophical essay on probabilities (F. W. Truscott & F. L. Emory, Trans.). New York: Dover Publications. (Original work published 1814).

Lord, C. G., & Taylor, C. A. ( 2009 ). Biased assimilation: Effects of assumptions and expectations on the interpretation of new evidence.   Social and Personality Psychology Compass , 3, 827–841.

Maio, G. R., & Haddock, G. G. ( 2010 ). The psychology of attitudes and attitude change . London: Sage.

Manktelow, K. ( 2011 ). Reasoning and thinking . Hove, England: Taylor & Francis.

Manktelow, K. I., & Over, D. E. ( 1987 ). Reasoning and rationality.   Mind and Language , 2, 199–219.

Means, M. L., & Voss, J. F. ( 1996 ). Who reasons well? Two studies of informal reasoning among children of different grade, ability, and knowledge levels.   Cognition and Instruction , 14, 139–179.

McGuire, W. J. ( 1960 a). Cognitive consistency and attitude change.   Journal of Abnormal and Social Psychology , 60, 345–353.

McGuire, W. J. ( 1960 b). Direct and indirect persuasive effects of dissonance-producing messages.   Journal of Abnormal and Social Psychology , 60, 354–358.

McGuire, W. J. ( 1960 c). A syllogistic analysis of cognitive relationships. In C. L. Hovland & M. J. Rosenberg (Eds.), Attitude organization and change: An analysis of consistency among attitude components (pp. 65–111). New Haven, CT: Yale University Press.

Mercier, H., & Sperber, D. ( 2011 ). Why do humans reason? Arguments for an argumentative theory.   Behavioral and Brain Sciences , 34, 57–74.

McKenzie, C. R. M., Lee, S. M., & Chen, K. K. ( 2002 ). When negative evidence increases confidence: Change in belief after hearing two sides of a dispute.   Journal of Behavioral Decision Making , 15, 1–18.

McQuiston-Surrett, D., & Saks, M. J. ( 2009 ). The testimony of forensic identification science: What expert witnesses say and what fact finders hear.   Law and Human Behavior , 33, 436–453.

Mischo, C. ( 2003 ). Cognitive, emotional and verbal response in unfair everyday discourse.   Journal of Language and Social Psychology , 22 (1), 119–131.

Neuman, Y. ( 2003 ). Go ahead, prove that God does not exist!   Learning and Instruction , 13, 367–380.

Neuman, Y., Glassner, A., & Weinstock, M. ( 2004 ). The effect of a reason's truth-value on the judgment of a fallacious argument.   Acta Psychologica , 116, 173–184.

Neuman, Y., Weinstock, M. P., & Glassner, A. ( 2006 ). The effect of contextual factors on the judgment of informal reasoning fallacies.   Quarterly Journal of Experimental Psychology: Human Experimental Psychology , 59 (A), 411–425.

Neuman, Y., & Weitzman, E. ( 2003 ). The role of text representation in students' ability to identify fallacious arguments.   Quarterly Journal of Experimental Psychology: Human Experimental Psychology , 56 (A), 849–864.

Nickerson, R. S. ( 2008 ). Aspects of rationality: Reflections on what it means to be rational and whether we are . Hove, UK: Psychology Press.

Norris, S. P., & Phillips, L. M. ( 1999 ). How literacy in its fundamental sense is central to scientific literacy.   Science Education , 87, 224–240.

Norris, S. P., Phillips, L. M., & Korpan, C. A. ( 2003 ). University students' interpretation of media reports of science and its relationship to background knowledge, interest and reading difficulty.   Public Understanding of Science , 12, 123–145.

Nosofsky, R. M. ( 1986 ). Attention, similarity, and the identification-categorization relationship.   Journal of Experimental Psychology: General , 115, 39–57.

Nosofsky, R. M. ( 1988 a). Exemplar-based accounts of the relations between classification, recognition, and typicality.   Journal of Experimental Psychology: Learning, Memory and Cognition , 14, 700–708.

Nosofsky, R. M. ( 1988 b). Similarity, frequency and category representation.   Journal of Experimental Psychology: Learning, Memory, and Cognition , 14, 54–65.

Oaksford, M., & Chater, N. ( 1994 ). A rational analysis of the selection task as optimal data selection.   Psychological Review , 101, 608–631.

Oaksford, M., & Chater, N. ( 2007 ). Bayesian rationality: The probabilistic approach to human reasoning . Oxford, England: Oxford University Press.

Oaksford, M., & Chater, N. ( 2009 ). Precis of “Bayesian rationality: The probabilistic approach to human reasoning.”   Behavioral and Brain Sciences , 32, 69–120.

Oaksford, M., & Chater, N. (Eds.). ( 2010 a). Cognition and conditionals : Probability and logic in human thinking . Oxford, England: Oxford University Press.

Oaksford, M. , & Chater, N. ( 2010 b). Conditionals and constraint satisfaction: Reconciling mental models and the probabilistic approach? In M. Oaksford & N. Chater (Eds.), Cognition and conditionals: Probability and logic in human thinking (pp. 309–334). Oxford, England: Oxford University Press.

Oaksford, M., & Chater, N. ( 2010 c). Causation and conditionals in the cognitive science of human reasoning. [Special issue. J. C. Perales, & D. R. Shanks, Eds. Causal learning beyond causal judgment ]. Open Psychology Journal , 3, 105–118.

Oaksford, M., Chater, N., & Larkin, J. ( 2000 ). Probabilities and polarity biases in conditional inference.   Journal of Experimental Psychology: Learning, Memory, and Cognition , 26, 883–899.

Oaksford, M., & Hahn, U. ( 2004 ). A Bayesian approach to the argument from ignorance.   Canadian Journal of Experimental Psychology , 58, 75–85.

Oaksford, M., & Hahn, U. ( 2007 ). Induction, deduction and argument strength in human reasoning and argumentation. In A. Feeney & E. Heit (Eds.), Inductive reasoning (pp. 269–301). Cambridge, England: Cambridge University Press.

O'Keefe, D. J. ( 1995 ). Argumentation studies and dual-process models of persuasion. In F. H. van Eemeren, R. Grootendorst, J. A. Blair, & C. A. Willard (Eds.), Proceedings of the Third ISSA Conference on Argumentation. Vol. 1: Perspectives and approaches (pp. 3–17). Amsterdam, Netherlands: Sic Sat.

O'Keefe, D. J. ( 1997 a). Standpoint explicitness and persuasive effect: A meta-analytic review of the effects of varying conclusion articulation in persuasive messages.   Argumentation and Advocacy , 34, 1–12.

O'Keefe, D. J. ( 1997 b). Justification explicitness and persuasive effect: A meta-analytic review of the effects of varying support articulation in persuasive messages.   Argumentation and Advocacy , 35, 61–75.

O'Keefe, D. J. ( 1999 ). How to handle opposing arguments in persuasive messages: A meta-analytic review of the effects of one-sided and two-sided messages.   Communication Yearbook , 22, 209–256.

O'Keefe, D. J. ( 2003 ). The potential conflict between normatively good argumentative practice and persuasive success. In F. H. van Eemeren, J. A. Blair, C. A. Willard, & A. F. Snoeck Henkemans (Eds.), Anyone who has a view: Theoretical contributions to the study of argumentation (pp. 309–318). Dordrecht, Netherlands: Kluwer Academic Publishers.

Patronis, T., Potari, D., & Spiliotopolou, V. ( 1999 ). Students' argumentation in decision-making on a socio-scientific issue: Implications for teaching.   International Journal of Science Education , 21, 745–754.

Pennington, N., & Hastie, R. ( 1981 ). Juror decision-making models: The generalization gap.   Psychological Bulletin , 89, 246–287.

Pennington, N., & Hastie, R. ( 1986 ). Evidence evaluation in complex decision making.   Journal of Personality and Social Psychology , 51, 242–258.

Pennington, N., & Hastie, R. ( 1988 ). Explanation-based decision making: Effects of memory structure on judgment.   Journal of Experimental Psychology: Learning, Memory, and Cognition , 14, 521–533.

Pennington, N., & Hastie, R. ( 1992 ). Explaining the evidence: Tests of the story model for juror decision making.   Journal of Personality and Social Psychology , 62 (2), 189–206.

Perelman, C., & Olbrechts-Tyteca, L. ( 1969 ). The new rhetoric: A treatise on argumentation . Notre Dame, IN: University of Notre Dame Press.

Perham, N. R., & Oaksford, M. ( 2005 ). Deontic reasoning with emotional content: Evolutionary psychology or decision theory?   Cognitive Science , 29, 681–718.

Prakken, H. ( 2008 ). AI & law on legal argument: Research trends and application prospects.   SCRIPTed , 5, 449–454. doi: 10.2966/scrip.050308.449.

Prakken, H., & Vreeswijk, G. A.W. ( 2002 ). Logics for defeasible argumentation. In D. M. Gabbay & F. Guenthner (Eds.), Handbook of philosophical logic (2nd ed., Vol 4, pp. 219–318). Dordrecht/Boston/London: Kluwer Academic Publishers.

Pyszczynski, T., & Wrightsman, L. S. ( 1981 ). The effects of opening statements on mock jurors' verdicts in a simulated criminal trial.   Journal of Applied Social Psychology , 11, 301–313.

Pyszczynski, T., Greenberg, J., Mack, D., & Wrightsman, L. ( 1981 ). Opening statements in a jury trial: The effect of promising more than the evidence can show.   Journal of Applied Social Psychology , 11, 434–444.

Rahwan, I., & Moraitis, P. (Eds.). ( 2009 ). Argumentation in multi-agent systems. Fifth International Workshop, ArgMAS 2008. In Lecture Notes in Artificial Intelligence (Vol. 5384). Heidelberg, Germany: Springer.

Rahwan, I., Zablith, F., & Reed, C. ( 2007 ). Laying the foundations for a world wide argument web.   Artificial Intelligence , 171, 897–921.

Ramsey, F. P. ( 1931 ). The foundations of mathematics and other logical essays . London: Routledge and Kegan Paul.

Ratcliffe, M. ( 1999 ). Evaluation of abilities in interpreting media reports of scientific research.   International Journal of Science Education , 21, 1085–1099.

Reed, C., & Rowe, G. ( 2004 ). Araucaria: Software for argument analysis, diagramming and representation.   International Journal of Artificial Intelligence Tools , 13, 961–980.

Rescher, N. ( 1977 ). Dialectics: A controversy oriented approach to the theory of knowledge . Albany, NY: SUNY Press.

Ricco, R. B. ( 2003 ). The macrostructure of informal arguments: A proposed model and analysis.   Quarterly Journal of Experimental Psychology: Human Experimental Psychology , 56 (A), 1021–1051.

Rips, L. J. ( 1998 ). Reasoning and conversation.   Psychological Review , 105, 411–441.

Rips, L. J. ( 2002 ). Circular reasoning.   Cognitive Science , 26, 767–795.

Rips, L. J., Brem, S. K., & Bailenson, J. N. ( 1999 ). Reasoning dialogues.   Current Directions in Psychological Science , 8, 172–177.

Sadler, T. D. ( 2004 ). Informal reasoning regarding socioscientific issues: A critical review of research.   Journal of Research in Science Teaching , 41, 513–536.

Schklar, J., & Diamond, S. S. ( 1999 ). Juror reactions to DNA evidence: Errors and expectancies.   Law and Human Behavior , 23, 159–184.

Schreier, M., Groeben, N., & Christmann, U. ( 1995 ). “That's not fair!” Argumentational integrity as an ethics of argumentative communication.   Argumentation , 9, 267–289.

Schum, D. A. ( 1981 ). Sorting out the effects of witness sensitivity and response-criterion placement upon the inferential value of testimonial evidence.   Organizational Behavior and Human Performance , 27, 153–196.

Schum, D. A. ( 1993 ). Argument structuring and evidence evaluation. In R. Hastie (Ed.), Inside the juror: The psychology of juror decision making (pp. 175–191). Cambridge, England: Cambridge University Press.

Schum, D. A. ( 1994 ). The evidential foundations of probabilistic reasoning . Evanston, IL: Northwestern University Press.

Shogenji, T. ( 2000 ). Self-dependent justification without circularity.   British Journal for the Philosophy of Science , 51, 287–298.

Simon, D., Pham, L. B., Le, Q. A., & Holyoak, K. J. ( 2001 ). The emergence of coherence over the course of decision making.   Journal of Experimental Psychology: Learning, Memory, and Cognition , 27, 1250–1260.

Skolnick, P., & Shaw, J. I. ( 2001 ). A comparison of eyewitness and physical evidence on mock-juror decision making.   Criminal Justice and Behavior , 28, 614–630.

Spellman, B. A., & Tenney, J. R. ( 2010 ). Credible testimony in and out of court.   Psychonomic Bulletin and Review , 17, 168–173.

Spiecker, S. C., & Worthington, D. L. ( 2003 ). The influence of opening statement/closing argument organizational strategy on juror verdict and damage awards.   Law and Human Behavior , 27, 437–456.

Takao, A.Y., & Kelly, G. J. ( 2003 ). Assessment of evidence in university students' scientific writing.   Science and Education , 12, 341–363.

Thompson, V. A., Evans J. St. B. T., & Handley, S. J. ( 2005 ). Persuading and dissuading by conditional argument.   Journal of Memory and Language , 53, 238–257.

Tillers, P., & Green, E. (Eds.). ( 1988 ). Probability and inference in the law of evidence: The uses and limits of Bayesianism . Dordrecht, Netherlands: Kluwer Academic Publishers.

Tindale, C. W. ( 2007 ). Fallacies and argument appraisal . New York: Cambridge University Press.

Toulmin, S. E. ( 1958 ). The uses of argument . Cambridge, England: Cambridge University Press.

van Eemeren, F. H., Garssen, B., & Meuffels, B. ( 2009 ). Fallacies and judgments of reasonableness: Empirical research concerning pragmadialectical discussion rules . Dordrecht, Netherlands: Springer.

van Eemeren, F. H., & Grootendorst, R. ( 1984 ). Speech acts in argumentative discussions. A theoretical model for the analysis of discussions directed towards solving conflicts of opinion . Berlin, Germany: De Gruyter.

van Eemeren, F. H., & Grootendorst, R. ( 1992 ). Argumentation, communication, and fallacies . Hillsdale, NJ: Erlbaum.

van Eemeren, F. H., & Grootendorst, R. ( 1987 ). Fallacies in pragma-dialectical perspective.   Argumentation , 1, 283–301.

van Eemeren, F. H., & Grootendorst, R. ( 2004 ). A systematic theory of argumentation. The pragma-dialectical approach . Cambridge, England: Cambridge University Press.

van Eemeren, F. H., Grootendorst, R., & Snoeck Henkemans, F. ( 1996 ). Fundamentals of argumentation theory . Mahwah, NJ: Erblaum.

Voss, J. F., & Van Dyke, J. A. ( 2001 ). Narrative structure, information certainty, emotional, content, and gender as factors in a pseudo jury decision-making task.   Discourse Processes , 32, 215–243.

Walton, D. N. ( 1980 ). Why is the ad populum a fallacy?   Philosophy and Rhetoric , 13, 264–278.

Walton, D. N. ( 1985 ). Are circular arguments necessarily vicious?   American Philosophical Quarterly , 22, 263–274.

Walton, D. N. ( 1987 ). The ad hominem argument as an informal fallacy.   Argumentation , 1, 317–331.

Walton, D. N. ( 1988 ). The burden of proof.   Argumentation , 2, 233–254.

Walton, D. N. ( 1991 ). Begging the question: Circular reasoning as a tactic in argumentation . New York: Greenwood Press.

Walton, D. N. ( 1992 a). Nonfallacious arguments from ignorance.   American Philosophical Quarterly , 29, 381–387.

Walton, D. N. ( 1992 b). Slippery slope arguments . Oxford, England: Oxford University Press.

Walton, D. N. ( 1995 ). A pragmatic theory of fallacy . Tuscaloosa: The University of Alabama Press.

Walton, D. N. ( 1996 a). Arguments from ignorance . Philadelphia: Pennsylvania State University Press.

Walton, D. N. ( 1996 b). Fallacies arising from ambiguity . Dordrecht, Netherlands: Kluwer Academic Publishers.

Walton, D. N. ( 1997 ). Appeal to expert opinion: Arguments from authority . University Park: Pennsylvania State University Press.

Walton, D. N. ( 1998 a). The new dialectic: Conversational contexts of argument . Toronto, ON: University of Toronto Press.

Walton, D. N. ( 1998 b). Ad hominem arguments . Tuscaloosa: University of Alabama Press.

Walton, D. N. ( 1999 ). Appeal to popular opinion . University Park: Pennsylvania State University Press.

Walton, D. N. ( 2000 ). Case study of the use of a circumstantial ad hominem in political argumentation.   Philosophy and Rhetoric , 33, 101–115.

Walton, D. N. ( 2005 ). Begging the question in arguments based on testimony.   Argumentation , 19, 85–113.

Walton, D. N. ( 2006 a). Fundamentals of critical argumentation . Cambridge, England: Cambridge University Press.

Walton, D. N. ( 2000 b). Scare tactics: Arguments that appeal to fear and threats (Argumentation Library Series). Dordrecht, Netherlands: Kluwer Academic Publishers.

Walton, D. N., & Macagno, F. ( 2007 ). The fallaciousness of threats: Character and ad baculum.   Argumentation , 21, 63–81.

Walton, D. N., Reed, C., & Macagno, F. ( 2008 ). Argumentation schemes . Cambridge, England: Cambridge University Press.

Wason, P. C., & Johnson-Laird, P. N. ( 1972 ). Psychology of reasoning: Structure and content . Cambridge, MA: Harvard University Press.

Weinstock, M. P., & Flaton, R. A. ( 2004 ). Evidence coverage and argument skills: Cognitive factors in a juror's verdict choice.   Journal of Behavioral Decision Making , 17, 191–212.

Whately, R. ( 1828 /1963). Elements of rhetoric (D. Ehninger, Ed.). Carbondale: University of Southern Illinois Press.

Woods, J., & Walton, D. ( 1979 ). Equivocation and practical logic.   Ratio , 21, 31–43.

Woods, J., Irvine, A., & Walton, D. N. ( 2004 ). Argument: Critical thinking, logic and the fallacies (Rev. ed.). Toronto, ON: Prentice Hall.

Wyer, R. S., Jr. ( 1970 ). Quantitative prediction of belief and opinion change: A further test of a subjective probability model.   Journal of Personality and Social Psychology , 16, 559–570.

Wyer, R. S., Jr., & Goldberg, L. ( 1970 ). A probabilistic analysis of the relationships among beliefs and attitudes.   Psychological Review , 77, 100–120.


How to Train Yourself to Be a More Rational Thinker

By now, nearly everyone — or at least everyone who’s taken a psychology or business course — is familiar with human foibles like the better-than-average effect (at many tasks, most of us think we’re better than most people) or illusory correlation (we easily read relationships into randomness). The psychologists Daniel Kahneman and Amos Tversky popularized the ideas of biases and heuristics in the 1970s; more recently, psychologist Dan Ariely put the spotlight back on the concept of human irrationality with his 2008 book, Predictably Irrational . I myself have gainfully contributed to the cottage industry of looking smart by saying we’re dumb.

And yet somehow, despite such faulty brains, we are a species that has landed on the moon, and that sometimes even manages to get along. Apparently, under the right circumstances, we can pay attention to facts, straighten our slanted beliefs, and make prudent decisions. Just what are these elusive circumstances, and where can I get some?

Consider this something like a tool chest for rationality. It’s far from comprehensive, and it focuses more on epistemic matters like avoiding bias than instrumental ones like avoiding procrastination, but if you can master even one implement — and use it regularly — you’ll be ahead of most people.

When our judgment misses the mark, it often means we’ve aimed too high. In realms from dating to business, we’re overconfident and overly optimistic. We believe what we want to believe. Discussions with other people can sometimes bring us back to earth, but there are also ways to tap multiple perspectives inside ourselves. Whenever you have a surefire idea that you know will work, try this: Think of reasons it won’t. (Or, alternatively, if you’re sure it won’t, think of reasons it will.) For any belief, argue against it.

For example, in one study, managers were asked to guess whether the liabilities of a particular company were greater than $1.9 billion, and to rate their confidence. About 54 percent were correct, but the average confidence was 72 percent. Other managers were asked to give an answer, then think of a reason they might be wrong, and guess again. This time, 62 percent were correct, while their average confidence stayed about the same, meaning their overconfidence dropped: the gap between confidence and accuracy shrank from roughly eighteen points to ten.

Another way to use multiple perspectives is to imagine yourself not as yourself, but as an onlooker. In 1985, when Andy Grove was the president of Intel, he faced a choice: The company made its money off memory, but Japanese companies were gobbling up market share. Should Intel persist in memory, or focus more energy on processors, another area they’d been dabbling in? In his memoir, Grove recounts a conversation he had with Intel’s CEO, Gordon Moore:

I looked out the window at the Ferris Wheel of the Great America amusement park revolving in the distance, then I turned back to Gordon and I asked, “If we got kicked out and the board brought in a new CEO, what do you think he would do?” Gordon answered without hesitation, “He would get us out of memories.” I stared at him, numb, then said, “Why shouldn’t you and I walk out the door, come back in, and do it ourselves?”

Grove called it the revolving-door test. And if you know Intel, you know the rest.

Supporting the revolving-door test, research shows that if we step outside of ourselves and look at our situation from a distance, we can avoid some of our biases. In one set of studies published in 2012, people made more accurate estimates of how long it would take to complete certain tasks, like writing a letter or painting a room, if they pictured themselves doing it as an onlooker would. You could also call the revolving-door or third-person test the advice test: What would you tell someone in your situation?

Or, another strategy: What would a whole group of people tell you? When you tap the “wisdom of the crowd,” most people will be wrong — but, critically, they’ll likely be wrong in different ways. If you average their responses, you’ll get something closer to the truth than most of the individual guesses.

And if you’re on your own with no group to turn to, you can tap the “wisdom of the inner crowd.” “People don’t use everything they know every time they make a decision or form a judgment,” says Jack Soll, a management professor at Duke University. In one study from 2008, participants were asked to guess at figures, like the percentage of the world’s airports in the U.S.; when they were later asked to guess again, the average of their two answers bested either on its own. Performance improved even more when the second guess came three weeks later.

A 2009 study, meanwhile, combines self-arguing with the internal crowd, for even better results. Some people estimated historical dates, then were asked to assume they were wrong, offer reasons why, and give a different estimate, which was averaged with the first. Others just gave two estimates, which were averaged. Members of the first group ended up with more accurate answers than members of the second (though neither strategy was as effective as averaging two people’s guesses).
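The averaging logic behind the crowd and inner-crowd results can be sketched in a few lines of Python. The numbers below are invented for illustration; they are not data from the studies cited.

```python
def average_estimate(guesses):
    """Average independent guesses; individual errors tend to cancel out."""
    return sum(guesses) / len(guesses)

true_value = 30                       # hypothetical quantity being estimated
crowd = [18, 44, 25, 39, 22, 35]      # six people's guesses (invented)
inner = [22, 36]                      # one person's first and second guesses

# The crowd average (30.5) lands closer to 30 than any single guess above,
# and the inner-crowd average (29.0) beats either of the two guesses alone.
print(average_estimate(crowd))  # 30.5
print(average_estimate(inner))  # 29.0
```

Averaging helps because the guesses err in different directions; if everyone shares the same bias, the average preserves it rather than cancelling it.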

Sometimes, you want to prepare for a range of possible scenarios, but overconfidence in your predictions narrows the range you actually consider. One study looked at 13,300 stock-market estimates over a decade, and found that the market’s real performance fell within executives’ 80 percent confidence intervals (the range they felt 80 percent certain the returns would fall within) only 36 percent of the time. In a book chapter and Harvard Business Review article on “debiasing,” Soll and his co-authors suggested conjuring three separate estimates instead of a range: your most likely estimate, plus high and low estimates you think are unlikely, but not unrealistic. This technique tends to widen the outcomes people consider, allowing them to prepare for both the best and the worst.
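The calibration failure described above can be made concrete with a toy check: count how often realized outcomes actually land inside the 80 percent intervals people stated. The intervals and outcomes here are invented, not the study’s data.

```python
# Toy calibration check with invented numbers.
# Each tuple is the (low, high) bound of a stated 80% confidence interval,
# paired by position with the realized outcome.
intervals = [(2.0, 6.0), (-1.0, 3.0), (4.0, 9.0), (0.0, 5.0), (3.0, 7.0)]
outcomes = [7.5, 1.0, 3.0, 8.0, 2.0]

# Count how many outcomes fall inside their stated interval.
hits = sum(low <= x <= high for (low, high), x in zip(intervals, outcomes))
coverage = hits / len(outcomes)
print(coverage)  # 0.2 here: far below the stated 0.8, i.e. overconfidence
```

A well-calibrated forecaster’s 80 percent intervals would capture the outcome about 80 percent of the time; the executives in the study managed 36 percent.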

If you’re deciding among a set of concrete options, such as which restaurant to go to or which candidate to hire, compare them side by side, rather than one at a time, Soll says. This gives you a reference point on variables that may be hard to evaluate on their own — an employee who brought in a million dollars in sales, for example, might seem like a high performer until you see that someone else on the team has $10 million in sales. And when making comparisons, forming a gestalt of each is not always best, as irrelevant factors might seep in without your permission. For example, if you’re interviewing multiple people, it’s better to conduct structured interviews, in which you ask everyone the same questions and score each answer, instead of conducting freewheeling conversations and forming an overall impression at the end. It’s too easy to be swayed by factors that we don’t think we should be swayed by, like shared hobbies. In fact, a recent study found that unstructured interviews didn’t just provide useless information — they also diluted good information, making them worse than no interview at all.

Another way to avoid being swayed by factors we don’t want to sway us is to consider how options are framed. “It’s well-known that people don’t make decisions about outcomes; they make decisions about descriptions of outcomes,” says Spencer Greenberg, an applied mathematician who studies rationality. A raft of studies show that judgment can be swayed by incidental variables, like how hungry we are or how a question is phrased. In a classic example of framing, people are more willing to flip a switch that diverts a trolley from five people toward one when it’s described as saving five lives versus killing one. The point isn’t that phrasing it one way or the other leads to a less rational choice — there isn’t one objectively right answer. The irrationality lies in the fact that people’s choices depend on things that, upon reflection, they would tell you shouldn’t matter.

One way to reduce framing effects is to consider two versions of the same option side by side. In the trolley problem, when you realize that killing one and saving five are the same thing, language might play a smaller role. In your daily life, perhaps you’re completing an unexciting project simply to avoid the pain of “wasting” the already expended effort, a mistake known as the sunk-cost fallacy. But try what we might call mirror-framing: By quitting, you’d “waste” the time you’ve put in (killing one), but you’d also “free” the time for other projects (saving five). Same act, two mirrored perspectives.

Or maybe you haven’t even imagined all the possible options. Maybe instead of killing one or killing five, there’s a third track covered in shrubbery. Try imagining that the options you’re deciding between are no longer available, and you might come up with something even better. “When it comes to decisions, we need to expand our horizon,” Ariely, who’s also a professor at Duke, tells me. “We need to think about what other things don’t come to mind naturally.” Greenberg has created a site called ClearerThinking.org that offers tools for — you guessed it — clearer thinking. One tool helps users learn to avoid “explanation freeze,” or the lazy tendency to stick with the first explanation we come up with: The site provides examples and downsides (unnecessary catastrophizing, dangerous complacence), then offers practice by asking readers to list not one but three plausible explanations for a scenario. It takes effort, but it’s a good habit.

Step one to knowing the truth is wanting to know the truth. We often don’t — motivated reasoning leads us to see the world in the way most amenable to our current aims, and many researchers see this fun-house reality as a feature, not a bug. For example, the cognitive scientist Dan Sperber has put forth the argumentative theory of reasoning, which holds that reasoning did not evolve to refine beliefs, but to advance them and to defend against others’. That’s because we’re highly social, and it often pays more to convince others of a reality that benefits us — I’m the best candidate; I deserve the last cookie — than it does to know who really is the best candidate, or who really does deserve the last cookie. Similarly, we use reasoning to defend against others’ arguments by picking them apart.

This is why reasoning actually works pretty well collectively, as the strongest argument emerges in battle-hardened form from group discussion, but not so well individually, when we have no whetstone to hone our own assertions. “When people talk with each other, and argue with each other, they reach better beliefs and better decisions,” says Hugo Mercier, a cognitive scientist who has worked with Sperber on the theory of argumentative reasoning. “We suck quite a bit at doing that on our own, because that’s not what we evolved for.”

Arguing, then, is a great way to reach the truth — much better than huddling with like-minded teammates, which tends to lead to polarization. And there may be ways to amplify what you gain from arguing. Psychologist Anatol Rapoport diverted people from straw-man arguments for their own good. Daniel Dennett summarized Rapoport’s advice in his own book, Intuition Pumps and Other Tools for Thinking :

  • You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”
  • You should list any points of agreement (especially if they are not matters of general or widespread agreement).
  • You should mention anything you have learned from your target.
  • Only then are you permitted to say so much as a word of rebuttal or criticism.

Not only will you conscript a more willing accomplice in your search for truth, but the exercise in itself will help you extract valuable material from the other side’s beliefs. Julia Galef, a writer who co-founded the nonprofit Center for Applied Rationality, calls this the steel-man argument: Be generous and argue against the best version of your opponent’s beliefs that you can forge. Galef also tries to shift her motives during an argument from winning at all costs to wresting the most value. She tells herself that if she “loses,” there’s a consolation prize: She gets to take home a copy of her opponent’s weapon and use it to win the next round against someone else.

Stopping yourself in the heat of debate and redefining your aims does not come naturally. So, Galef says she recommends developing “mindfulness, the ability to detect the subtle emotional texture of your thinking — for example, that feeling of vindication when you read an article arguing for something you already believe. Or that feeling of scorn when you read something that contradicts your views.” Awareness, in turn, might lead to action: “Once you start noticing the emotional drives shaping your reasoning,” she says, “it’s much easier to accept that you’re not being totally objective most of the time. But that isn’t automatic. It’s something you cultivate.”

Greenberg also notes “how important being able to deal with negative emotions is when it comes to being more rational. If you’re trying to figure out the truth, that means when someone points out a flaw in your reasoning, you need to be able to admit that.” Short-term loss, long-term win.

We often prefer to retain our biases, even when they’re called out. No one I talked to had much hope for increasing rationality in political debate, because we have little incentive to find political truth. “Many of the beliefs we have about politics have absolutely no practical importance for us,” Mercier says. “It’s not going to affect our lives one way or the other if we believe that global warming is real, if we believe that Hillary should go to jail.” But while a single vote rarely matters, vocal support for one side buys you important allegiances. “There are plenty of incentives to believe something flattering to your own views, something that means your political ‘tribe’ was right and virtuous all along,” Galef says. “But what incentives do we have to figure out the truth? Figuring out the truth is effortful, it requires self-control, and it gets in the way of your ability to cheer for your ‘side.’”

Arguably, in such cases, rationality would be detrimental to our well-being (if not to the health of the democracy). And there’s the paradox: If irrationality helps us, is it not, then, rational? The argumentative theory of reasoning holds that our ancestors benefited from bias, or else we wouldn’t be so biased today. And in 1976, the evolutionary biologist Robert Trivers suggested that self-deception evolved for the sake of other-deception: The better you convince yourself you deserve that cookie, the more believably you can convince others. (Recent studies have supported his idea.) Research has shown that overconfidence also enhances social status — even when it’s revealed as overconfidence. And Ariely tells me that “it’s important to realize that we don’t always want to be more rational. Think about something like emotions. Yes, there’s some emotions we don’t want, but there are other emotions — like love and compassion and caring about other people — that we certainly don’t want to eliminate.” Even unpleasant emotions serve a purpose.

When I noted the cottage industry of “looking smart by saying we’re dumb,” I was half-kidding about the dumb part. Kahneman and Tversky described our flaws as the result of mental shortcuts, ones taken by a cleverly efficient brain. And my own book on magical thinking is subtitled “How Irrational Beliefs Keep Us Happy, Healthy, and Sane.” So do we really want to be more rational, in the sense of knowing the truth about things?

Yes. Sometimes. Epistemic rationality (clear thinking) can get in the way of instrumental rationality (efficacious thinking), but usually it helps it. Seeing the real lay of the land often — not always, but often — gets you to your destination, whether it’s a job or a spouse or a cookie. So knowing neat tricks for clarifying thought is essential. But first, you have to know when to use them, and to have the guts to do so.



Rationalism vs. Empiricism

In its most general terms, the dispute between rationalism and empiricism has been taken to concern the extent to which we are dependent upon experience in our effort to gain knowledge of the external world. It is common to think of experience itself as being of two kinds: sense experience, involving our five world-oriented senses, and reflective experience, including conscious awareness of our mental operations. The distinction between the two is drawn primarily by reference to their objects: sense experience allows us to acquire knowledge of external objects, whereas our awareness of our mental operations is responsible for the acquisition of knowledge of our minds. In the dispute between rationalism and empiricism, this distinction is often neglected; rationalist critiques of empiricism usually contend that the latter claims that all our ideas originate with sense experience.

It is generally agreed that most rationalists claim that there are significant ways in which our concepts and knowledge are gained independently of sense experience. To be a rationalist, however, does not require one to claim that our knowledge is acquired independently of any experience: at its core, the Cartesian Cogito depends on our reflective, intuitive awareness of the existence of occurrent thought. Rationalists generally develop their view in two steps. First, they argue that there are cases where the content of our concepts or knowledge outstrips the information that sense experience can provide. Second, they construct accounts of how reason, in some form or other, provides that additional information about the external world.

Most empiricists present complementary lines of thought. First, they develop accounts of how experience alone (sense experience, reflective experience, or a combination of the two) provides the information that rationalists cite, insofar as we have it in the first place. Second, while empiricists attack the rationalists’ accounts of how reason is a primary source of concepts or knowledge, they show that reflective understanding can and usually does supply some of the missing links (famously, Locke believed that our idea of substance, in general, is a composite idea, incorporating elements derived from both sensation and reflection; e.g., Essay, 2.23.2).

The distinction between rationalism and empiricism is not without problems. One of the main issues is that almost no author falls neatly into one camp or another: it has been argued that Descartes, for instance, who is commonly regarded as a representative rationalist (at least with regard to metaphysics), had clear empiricist leanings (primarily with regard to natural philosophy, where sense experience plays a crucial role, according to Clarke 1982). Conversely, Locke, who is thought to be a paradigmatic empiricist, argued that reason is on equal footing with experience when it comes to the knowledge of certain things, most famously of moral truths (Essay, 4.3.18). In what follows, we clarify what this distinction has traditionally been taken to apply to, as well as point out its by now widely recognized shortcomings.

1. Introduction

The dispute between rationalism and empiricism takes place primarily within epistemology, the branch of philosophy devoted to studying the nature, sources, and limits of knowledge. Knowledge itself can be of many different things and is usually divided among three main categories: knowledge of the external world, knowledge of the internal world or self-knowledge, and knowledge of moral and/or aesthetic values. We may find that there are category-specific conditions that must be satisfied for knowledge to occur, and that it is easier or more difficult to frame certain questions and answers depending on whether we focus on the external world or on values. However, some of the defining questions of general epistemology include the following.

What is the nature of propositional knowledge, knowledge that a particular proposition about the world, ourselves, morality, or beauty is true?

To know a proposition, we must believe it and it must be true, but something more is required, something that distinguishes knowledge from a lucky guess. Let’s call this additional element ‘warrant’. A good deal of philosophical work has been invested in trying to determine the nature of warrant.

How can we gain knowledge?

We can form true beliefs just by making lucky guesses. How to gain warranted beliefs is less clear. Moreover, to know the external world or anything about beauty, for instance, we must be able to think about the external world or about beauty, and it is unclear how we gain the concepts we use in thought or what assurance, if any, we have that the ways in which we divide up the world using our concepts correspond to divisions that actually exist.

What are the limits of our knowledge?

Some aspects of the external world, ourselves, or moral and aesthetic values may be within the limits of our thought but beyond the limits of our knowledge; faced with competing descriptions of them, we cannot know which description is true. Some aspects may even be beyond the limits of our thought, so that we cannot form intelligible descriptions of them, let alone know that a particular description is true.

The disagreement between rationalism and empiricism primarily concerns the second question, regarding the sources of our concepts and knowledge. In some instances, the disagreement on this topic results in conflicting responses to the other questions as well. The disagreement may extend to incorporate the nature of warrant or where the limits of our thought and knowledge are. Our focus here will be on the competing rationalist and empiricist responses to the second question.

There are three main theses that are usually seen as relevant for drawing the distinction between rationalism and empiricism, with a focus on the second question. While the first thesis has traditionally been seen as distinguishing between rationalism and empiricism, scholars now mostly agree that most rationalists and empiricists abide by the so-called Intuition/Deduction thesis, concerning the ways in which we become warranted in believing propositions in a particular subject area.

The Intuition/Deduction Thesis : Some propositions in a particular subject area, S, are knowable by us by intuition alone; still others are knowable by being deduced from intuited propositions.

Intuition is a form of direct, immediate insight. Intuition has been likened to (a sort of internal) perception by most rationalists and empiricists alike. Intellectually grasping a proposition, we just “see” it to be true in such a way as to form a true, warranted belief in it. (As discussed in Section 2 below, the nature of this intellectual “seeing” needs explanation.) Deduction is a process in which we derive conclusions from intuited premises through valid arguments, ones in which the conclusion must be true if the premises are true. We intuit, for example, that the number three is prime and that it is greater than two. We then deduce from this knowledge that there is a prime number greater than two. Intuition and deduction thus provide us with knowledge that is independent, for its justification, of experience. This type of knowledge, since Kant, is commonly called “a priori”.
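The example deduction above can be schematized (in our notation, offered as an illustration rather than anything in the entry itself) as an existential generalization from two intuited premises:

```latex
\[
\underbrace{\mathrm{Prime}(3)}_{\text{intuited}} \qquad
\underbrace{3 > 2}_{\text{intuited}}
\qquad\therefore\qquad
\underbrace{\exists n\,\bigl(\mathrm{Prime}(n) \wedge n > 2\bigr)}_{\text{deduced}}
\]
```

If the premises are true, the conclusion cannot be false, which is what makes this a valid deduction rather than a further intuition.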

We can generate different versions of the Intuition/Deduction thesis by substituting different subject areas for the variable ‘S’. Several rationalists and empiricists take mathematics to be knowable by intuition and deduction. Some place ethical truths in this category. Some include metaphysical claims, such as that God exists, we have free will, and our mind and body are distinct substances.

The second thesis that is relevant to the distinction between rationalism and empiricism is the Innate Knowledge thesis.

The Innate Knowledge Thesis : We have knowledge of some truths in a particular subject area, S, as part of our nature.

The Innate Knowledge thesis asserts the existence of knowledge whose source is our own nature: we are born with this knowledge; it doesn’t depend, for its justification, on our accessing it via particular experiences. Our innate knowledge is not learned through either experience or intuition/deduction. It is just part of our nature. Experiences may trigger a process by which we bring this knowledge to consciousness, but these experiences do not provide us with the knowledge itself. It has in some way been with us all along. According to some rationalists, we gained the knowledge in an earlier existence. According to others, God provided us with it at creation. Still others say it is part of our nature through natural selection.

We get different versions of the Innate Knowledge thesis by substituting different subject areas for the variable ‘S’. The more subjects included within the range of the thesis, or the more controversial the claim to have knowledge in them, the more radical the form of rationalism. Stronger and weaker understandings of warrant yield stronger and weaker versions of the thesis as well. Empiricists reject this thesis: Locke, for instance, dedicates the whole first book of the Essay to showing that such knowledge, even if it existed, would be of little use to us.

The third important thesis that is relevant to the distinction between rationalism and empiricism is the Innate Concept thesis.

The Innate Concept Thesis: We have some of the concepts we employ in a particular subject area, S, as part of our rational nature.

According to the Innate Concept thesis, some of our concepts are not gained from experience. They are part of our rational nature in such a way that, while sense experiences may trigger a process by which they are brought to consciousness, experience does not provide the concepts or determine the information they contain. Some claim that the Innate Concept thesis is entailed by the Innate Knowledge thesis; a particular instance of knowledge can only be innate if the concepts that are contained in the known proposition are also innate. This is Locke’s position (Essay, 1.4.1). Others, such as Carruthers, argue against this connection (1992, pp. 53–54). The content and strength of the Innate Concept thesis varies with the concepts claimed to be innate. The more a concept seems removed from experience and the mental operations we can perform on experience, the more plausibly it may be claimed to be innate. Since we do not experience perfect triangles but do experience pains, our concept of the former is a more promising candidate for being innate than our concept of the latter.

The Intuition/Deduction thesis, the Innate Knowledge thesis, and the Innate Concept thesis are essential to rationalism. Since the Intuition/Deduction thesis is equally important to empiricism, the focus in what follows will be on the other two theses. To be a rationalist is to adopt at least one of them: either the Innate Knowledge thesis, regarding our presumed innate propositional knowledge, or the Innate Concept thesis, regarding our supposedly innate concepts.

Rationalists vary the strength of their view by adjusting their understanding of warrant. Some take warranted beliefs to be beyond even the slightest doubt and claim that intuition provides beliefs of this high epistemic status. Others interpret warrant more conservatively, say as belief beyond a reasonable doubt, and claim that intuition provides beliefs of that caliber. Still another dimension of rationalism depends on how its proponents understand the connection between intuition, on the one hand, and truth, on the other. Some take intuition to be infallible, claiming that whatever we intuit must be true. Others allow for the possibility of false intuited propositions.

Two other closely related theses are generally adopted by rationalists, although one can certainly be a rationalist without adopting either of them. The first is that sense experience cannot provide what we gain from reason.

The Indispensability of Reason Thesis: The knowledge we gain in subject area S by intuition and deduction, as well as the ideas and instances of knowledge in S that are innate to us, could not have been gained by us through sense experience.

The second is that reason is superior to sense experience as a source of knowledge.

The Superiority of Reason Thesis: The knowledge we gain in subject area S by intuition and deduction or have innately is superior to any knowledge gained by sense experience.

How reason is superior needs explanation, and rationalists have offered different accounts. One view, generally associated with Descartes (Rules, Rule II and Rule III, pp. 1–4), is that what we know by intuition is certain, beyond even the slightest doubt, while what we believe, or even know, on the basis of sense experience is at least somewhat uncertain. Another view, generally associated with Plato (Republic 479e–484c), locates the superiority of a priori knowledge in the objects known. What we know by reason alone, a Platonic form, say, is superior in an important metaphysical way (e.g., it is unchanging, eternal, perfect, of a higher degree of being) to what we are aware of through sense experience.

Most forms of rationalism involve notable commitments to other philosophical positions. One is a commitment to the denial of scepticism for at least some area of knowledge. If we claim to know some truths by intuition or deduction or to have some innate knowledge, we obviously reject scepticism with regard to those truths. Rationalism in the form of the Intuition/Deduction thesis is also committed to epistemic foundationalism, the view that we know some truths without basing our belief in them on any others and that we then use this foundational knowledge to know more truths.

Empiricists also endorse the Intuition/Deduction thesis, but in a more restricted sense than the rationalists: for them, the thesis applies only to the relations among the contents of our minds, not to empirical facts learned from the external world. By contrast, empiricists reject the Innate Knowledge and Innate Concept theses. Insofar as we have knowledge in a subject, our knowledge is gained, not merely triggered, by our experiences, be they sensory or reflective. Experience is thus our only source of ideas. Moreover, empiricists reject the corresponding version of the Superiority of Reason thesis: since reason alone does not give us any knowledge, it certainly does not give us superior knowledge. Empiricists need not reject the Indispensability of Reason thesis, but most of them do.

The main characteristic of empiricism, however, is that it endorses a version of the following claim for some subject area:

The Empiricism Thesis: We have no source of knowledge in S or for the concepts we use in S other than experience.

To be clear, the Empiricism thesis does not entail that we have empirical knowledge. It entails that knowledge can be gained, if at all, only by experience. Empiricists may assert, as some do for some subjects, that the rationalists are correct to claim that experience cannot give us knowledge. The conclusion they draw from this rationalist lesson is that we do not know at all. This is, indeed, Hume’s position with regard to causation, which, he argues, is not actually known but only presupposed to hold, in virtue of a particular habit of our minds.

We have stated the basic claims of rationalism and empiricism so that each is relative to a particular subject area. Rationalism and empiricism, so relativized, need not conflict. We can be rationalists in mathematics or a particular area of mathematics and empiricists in all or some of the physical sciences. Rationalism and empiricism only conflict when formulated to cover the same subject. Then the debate, Rationalism vs. Empiricism, is joined.

The fact that philosophers can be both rationalists and empiricists has implications for the classification schemes often employed in the history of philosophy, especially the one traditionally used to describe the Early Modern Period of the seventeenth and eighteenth centuries leading up to Kant. It is standard practice to group the philosophers of this period as either rationalists or empiricists and to suggest that those under one heading share a common agenda in opposition to those under the other. Thus, Descartes, Spinoza and Leibniz are the Continental Rationalists in opposition to Locke, Hume, and Reid, the British Empiricists. Such general classification schemes should only be adopted with great caution. The views of the individual philosophers are a lot more subtle and complex than the simple-minded classification suggests. (See Loeb (1981) and Kenny (1986) for important discussions of this point.) Locke rejects rationalism in the form of any version of the Innate Knowledge or Innate Concept theses, but he nonetheless adopts the Intuition/Deduction thesis with regard to our knowledge of God’s existence, in addition to our knowledge of mathematics and morality. Descartes and Locke have remarkably similar views on the nature of our ideas, even though Descartes takes many to be innate, while Locke ties them all to experience. The rationalist/empiricist classification also encourages us to expect the philosophers on each side of the divide to have common research programs in areas beyond epistemology.
Thus, Descartes, Spinoza and Leibniz are mistakenly seen as applying a reason-centered epistemology to a common metaphysical agenda, with each trying to improve on the efforts of the one before, while Locke, Hume, and Reid are mistakenly seen as gradually rejecting those metaphysical claims, with each consciously trying to improve on the efforts of his predecessors. It is also important to note that the rationalist/empiricist distinction is not exhaustive of the possible sources of knowledge. One might claim, for example, that we can gain knowledge in a particular area by a form of Divine revelation or insight that is a product of neither reason nor sense experience. In short, when used carelessly, the labels ‘rationalist’ and ‘empiricist,’ as well as the slogan that is the title of this essay, ‘Rationalism vs. Empiricism,’ can impede rather than advance our understanding.

An important wrinkle for using this classification scheme in the history of philosophy is that it leaves out discussions of philosophical figures who did not focus their efforts on understanding whether innate knowledge is possible or even fruitful to have. Philosophy in the early modern period, in particular, is a lot richer than this artificial, simplifying distinction makes it sound. There is no clear way of grouping Hobbes with either camp, let alone Elizabeth of Bohemia, Anne Conway, George Berkeley, Émilie du Châtelet, or Mary Shepherd. This distinction, initially applied by Kant, is responsible for giving us a very restrictive philosophical canon, which does not take into account developments in the philosophy of emotions, philosophy of education, and even disputes in areas of philosophy considered more mainstream, like ethics and aesthetics.

Unless restricted to debates regarding the possibility of innate knowledge, this distinction is best left unused. The most interesting form of the debate occurs when we take the relevant subject to be truths about the external world, the world beyond our own minds. A full-fledged rationalist with regard to our knowledge of the external world holds that some external world truths are and must be innate and that this knowledge is superior to any that sense experience could ever provide. The full-fledged empiricist about our knowledge of the external world replies that, when it comes to the nature of the world beyond our own minds, experience is our sole source of information. Reason might inform us of the relations among our ideas, but those ideas themselves can only be gained, and any truths about the external reality they represent can only be known, on the basis of experience. This debate concerning our knowledge of the external world will generally be our main focus in what follows.

Historically, the rationalist/empiricist dispute in epistemology has extended into the area of metaphysics, where philosophers are concerned with the basic nature of reality, including the existence of God and such aspects of our nature as free will and the relation between the mind and body. Several rationalists (e.g., Descartes, Meditations) have presented metaphysical theories, which they have claimed to know by intuition and/or deduction alone. Empiricists (e.g., Hume, Treatise) have rejected the theories as either speculation, beyond what we can learn from experience, or nonsensical attempts to describe aspects of the world beyond the concepts experience can provide. The debate raises the issue of metaphysics as an area of knowledge. Kant puts the driving assumption clearly:

The very concept of metaphysics ensures that the sources of metaphysics can’t be empirical. If something could be known through the senses, that would automatically show that it doesn’t belong to metaphysics; that’s an upshot of the meaning of the word ‘metaphysics.’ Its basic principles can never be taken from experience, nor can its basic concepts; for it is not to be physical but metaphysical knowledge, so it must be beyond experience. (Prolegomena, Preamble, I, p. 7)

The possibility then of metaphysics so understood, as an area of human knowledge, hinges on how we resolve the rationalist/empiricist debate. The debate also extends into ethics. Some moral objectivists (e.g., Ross 1930 and Huemer 2005) take us to know some fundamental objective moral truths by intuition, while some moral skeptics, who reject such knowledge (e.g., Mackie 1977), find the appeal to a faculty of moral intuition utterly implausible. More recently, the rationalist/empiricist debate has extended to discussions (e.g., Bealer 1999 and Alexander & Weinberg 2007) of the very nature of philosophical inquiry: to what extent are philosophical questions to be answered by appeals to reason or experience?

The Intuition/Deduction thesis claims that we can know some propositions by intuition and still more by deduction. Since traditionally this thesis was thought to be rejected by empiricists and adopted only by rationalists, it is useful to become more familiar with it. In a very narrow sense, only rationalists seem to adopt it. However, the current consensus is that most empiricists (e.g., Locke, Hume, Reid) have been willing to accept a version of the thesis, namely one restricted to propositions solely about the relations among our own concepts. We can, they agree, know by intuition that our concept of God includes our concept of omniscience. Just by examining the concepts, we can intellectually grasp that the one includes the other. The debate between rationalists and empiricists is joined when the former assert, and the latter deny, the Intuition/Deduction thesis with regard to propositions that contain substantive information about the external world. Rationalists, such as Descartes, have claimed that we can know by intuition and deduction that God exists and created the world, that our mind and body are distinct substances, and that the angles of a triangle equal two right angles, where all of these claims are truths about an external reality independent of our thought. Such substantive versions of the Intuition/Deduction thesis are our concern in this section.

One defense of the Intuition/Deduction thesis assumes that we know some substantive external world truths, adds an analysis of what knowledge requires, and concludes that our knowledge must result from intuition and deduction. Rationalists and empiricists alike claim that certainty is required for scientia (a type of absolute knowledge of the necessary connections that would explain why certain things are a certain way) and that certainty about the external world is beyond what empirical evidence can provide. Empiricists seem happy to conclude that the type of knowledge of the external world that we can acquire does not have this high degree of certainty and is thus not scientia. This is because we can never be sure our sensory impressions are not part of a dream or a massive, demon-orchestrated deception. A rationalist like the Descartes of the Meditations claims that only intuition can provide the certainty needed for such knowledge. This comes after his arguing in the Rules that, when we “review all the actions of the intellect by means of which we are able to arrive at a knowledge of things with no fear of being mistaken,” we “recognize only two: intuition and deduction” (Rules, Rule III, p. 3).

This line of argument is one of the least compelling in the rationalist arsenal. First, the assumption that knowledge requires certainty comes at a heavy cost, as it rules out so much of what we commonly take ourselves to know. Second, as many contemporary rationalists accept, intuition is not always a source of certain knowledge. The possibility of a deceiver gives us a reason to doubt our intuitions as well as our empirical beliefs. For all we know, a deceiver might cause us to intuit false propositions, just as one might cause us to have perceptions of nonexistent objects. Descartes’s classic way of meeting this challenge in the Meditations is to argue that we can know with certainty that no such deceiver interferes with our intuitions and deductions. They are infallible, as God guarantees their truth. The problem, known as the Cartesian Circle, is that Descartes’s account of how we gain this knowledge begs the question, by attempting to deduce the conclusion that all our intuitions are true from intuited premises. Moreover, his account does not touch a remaining problem that he himself notes (Rules, Rule VII, p. 7): deductions of any appreciable length rely on our fallible memory.

A more plausible argument for the Intuition/Deduction thesis again assumes that we know some particular, external world truths, and then appeals to the nature of what we know, rather than to the nature of knowledge itself, to argue that our knowledge must result from intuition and deduction. Leibniz, in New Essays, tells us the following:

The senses, although they are necessary for all our actual knowledge, are not sufficient to give us the whole of it, since the senses never give anything but instances, that is to say particular or individual truths. Now all the instances which confirm a general truth, however numerous they may be, are not sufficient to establish the universal necessity of this same truth, for it does not follow that what happened before will happen in the same way again. … From which it appears that necessary truths, such as we find in pure mathematics, and particularly in arithmetic and geometry, must have principles whose proof does not depend on instances, nor consequently on the testimony of the senses, although without the senses it would never have occurred to us to think of them… (New Essays, Preface, pp. 150–151)

Leibniz goes on to describe our mathematical knowledge as “innate,” and his argument is more commonly directed to support the Innate Knowledge thesis rather than the Intuition/Deduction thesis. For our purposes here, we can relate it to the latter, however: We have substantive knowledge about the external world in mathematics, and what we know in that area, we know to be necessarily true. Experience cannot warrant beliefs about what is necessarily the case. Hence, experience cannot be the source of our knowledge. The best explanation of our knowledge is that we gain it by intuition and deduction. Leibniz mentions logic, metaphysics, and morals as other areas in which our knowledge similarly outstrips what experience can provide. Judgments in logic and metaphysics involve forms of necessity beyond what experience can support. Judgments in morals involve a form of obligation or value that lies beyond experience, which only informs us about what is the case rather than about what ought to be.

The strength of this argument varies with its examples of purported knowledge. Insofar as we focus on controversial claims in metaphysics, e.g., that God exists, that our mind is a distinct substance from our body, the initial premise that we know the claims is less than compelling. Taken with regard to other areas, however, the argument clearly has legs. We know a great deal of mathematics, and what we know, we know to be necessarily true. None of our experiences warrants a belief in such necessity, and we do not seem to base our knowledge on any experiences. The warrant that provides us with knowledge arises from an intellectual grasp of the propositions which is clearly part of our learning. Similarly, we seem to have such moral knowledge as that, all other things being equal, it is wrong to break a promise and that pleasure is intrinsically good. No empirical lesson about how things are can warrant such knowledge of how they ought to be.

This argument for the Intuition/Deduction thesis raises additional questions which rationalists must answer. Insofar as they maintain that our knowledge of necessary truths in mathematics or elsewhere by intuition and deduction is substantive knowledge of the external world, they owe us an account of this form of necessity. Many empiricists stand ready to argue that “necessity resides in the way we talk about things, not in the things we talk about” (Quine 1966, p. 174). Similarly, if rationalists claim that our knowledge in morals is knowledge of an objective form of obligation, they owe us an account of how objective values are part of a world of apparently valueless facts.

Perhaps most of all, any defenders of the Intuition/Deduction thesis owe us an account of what intuition is and how it provides warranted true beliefs about the external world. What is it to intuit a proposition and how does that act of intuition support a warranted belief? Their argument presents intuition and deduction as an explanation of assumed knowledge that can’t—they say—be explained by experience, but such an explanation by intuition and deduction requires that we have a clear understanding of intuition and how it supports warranted beliefs. Metaphorical characterizations of intuition as intellectual “grasping” or “seeing” are not enough, and if intuition is some form of intellectual “grasping,” it appears that all that is grasped is relations among our concepts, rather than facts about the external world, as the empiricist defenders of intuition and deduction argue. One current approach to the issue involves an appeal to Phenomenal Conservatism (Huemer 2001), the principle that if it seems to one as if something is the case, then one is prima facie justified in believing that it is so. Intuitions are then taken to be a particular sort of seeming or appearance: “[A]n intuition that p is a state of its seeming to one that p that is not dependent on inference from other beliefs and that results from thinking about p, as opposed to perceiving, remembering, or introspecting” (Huemer 2005, p. 102). Just as it can visually seem or appear to one as if there’s a tree outside the window, it can intellectually seem or appear to one as if nothing can be both entirely red and entirely green. This approach aims to demystify intuitions; they are but one more form of seeming-state along with ones we gain from sense perception, memory, and introspection. It does not, however, tell us all we need to know. Any intellectual faculty, whether it be sense perception, memory, introspection or intuition, provides us with warranted beliefs only if it is generally reliable.
The reliability of sense perception stems from the causal connection between how external objects are and how we experience them. What accounts for the reliability of our intuitions regarding the external world? Is our intuition of a particular true proposition the outcome of some causal interaction between ourselves and some aspect of the world? What aspect? What is the nature of this causal interaction? That the number three is prime does not appear to cause anything, let alone our intuition that it is prime. As Michael Huemer (2005, p. 123) points out in mounting his own defense of moral intuitionism, “The challenge for the moral realist, then, is to explain how it would be anything more than chance if my moral beliefs were true, given that I do not interact with moral properties.”

These issues are made all the more pressing by the classic empiricist response to the argument. The reply is generally credited to Hume and begins with a division of all true propositions into two categories.

All the objects of human reason or inquiry may naturally be divided into two kinds, to wit, “Relations of Ideas,” and “Matters of Fact.” Of the first are the sciences of Geometry, Algebra, and Arithmetic, and, in short, every affirmation which is either intuitively or demonstratively certain. That the square of the hypotenuse is equal to the square of the two sides is a proposition which expresses a relation between these figures. That three times five is equal to half of thirty expresses a relation between these numbers. Propositions of this kind are discoverable by the mere operation of thought, without dependence on what is anywhere existent in the universe. Though there never were a circle or triangle in nature, the truths demonstrated by Euclid would forever retain their certainty and evidence. Matters of fact, which are the second objects of human reason, are not ascertained in the same manner, nor is our evidence of their truth, however great, of a like nature with the foregoing. The contrary of every matter of fact is still possible, because it can never imply a contradiction and is conceived by the mind with the same facility and distinctness as if ever so conformable to reality. (Enquiry, 4.1, p. 24)

Intuition and deduction can provide us with knowledge of necessary truths such as those found in mathematics and logic, but such knowledge is not substantive knowledge of the external world. It is only knowledge of the relations of our own ideas. If the rationalist shifts the argument so it appeals to knowledge in morals, Hume’s reply is to offer an analysis of our moral concepts by which such knowledge is empirically gained knowledge of matters of fact.

Morals and criticism are not so properly objects of the understanding as of taste and sentiment. Beauty, whether moral or natural, is felt more properly than perceived. Or if we reason concerning it and endeavor to fix the standard, we regard a new fact, to wit, the general taste of mankind, or some other fact which may be the object of reasoning and inquiry. (Enquiry, 12.3, p. 122)

If the rationalist appeals to our knowledge in metaphysics to support the argument, Hume denies that we have such knowledge.

If we take in our hand any volume—of divinity or school metaphysics, for instance—let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames, for it can contain nothing but sophistry and illusion. (Enquiry, 12.3, p. 123)

An updated version of this general empiricist reply, with an increased emphasis on language and the nature of meaning, is given in the twentieth century by A. J. Ayer’s version of logical positivism. Adopting positivism’s verification theory of meaning, Ayer assigns every cognitively meaningful sentence to one of two categories: either it is a tautology, and so true solely by virtue of the meaning of its terms and provides no substantive information about the world, or it is open to empirical verification. There is, then, no room for knowledge about the external world by intuition or deduction.

There can be no a priori knowledge of reality. For … the truths of pure reason, the propositions which we know to be valid independently of all experience, are so only in virtue of their lack of factual content … [By contrast] empirical propositions are one and all hypotheses which may be confirmed or discredited in actual sense experience. (Ayer 1952, pp. 86; 93–94)

The rationalists’ argument for the Intuition/Deduction thesis goes wrong at the start, according to empiricists, by assuming that we can have substantive knowledge of the external world that outstrips what experience can warrant. We cannot.

This empiricist reply faces challenges of its own. Our knowledge of mathematics seems to be about something more than our own concepts. Our knowledge of moral judgments seems to concern not just how we feel or act but how we ought to behave. The general principles that provide a basis for the empiricist view, e.g. Hume’s overall account of our ideas, the Verification Principle of Meaning, are problematic in their own right.

In all, rationalists have an argument for the Intuition/Deduction thesis relative to our substantive knowledge of the external world, but its success rests on how well they can answer questions about the nature and epistemic force of intuition made all the more pressing by the classic empiricist reply.

The Innate Knowledge thesis asserts that we have a priori knowledge, that is, knowledge that is independent, for its justification, of sense experience, as part of our rational nature. Experience may trigger our awareness of this knowledge, but it does not provide us with it. The knowledge is already there.

Plato presents an early version of the Innate Knowledge thesis in the Meno as the doctrine of knowledge by recollection. The doctrine is motivated in part by a paradox that arises when we attempt to explain the nature of inquiry. How do we gain knowledge of a theorem in geometry? We inquire into the matter. Yet, knowledge by inquiry seems impossible (Meno, 80d–e). We either already know the theorem at the start of our investigation or we do not. If we already have the knowledge, there is no place for inquiry. If we lack the knowledge, we don’t know what we are seeking and cannot recognize it when we find it. Either way we cannot gain knowledge of the theorem by inquiry. Yet, we do know some theorems.

The doctrine of knowledge by recollection offers a solution. When we inquire into the truth of a theorem, we both do and do not already know it. We have knowledge in the form of a memory gained from our soul’s knowledge of the theorem prior to its union with our body. We also lack some knowledge because, in our soul’s unification with the body, it has forgotten the knowledge and now needs to recollect it. Thus, learning the theorem allows us, in effect, to recall what we already know.

Plato famously illustrates the doctrine with an exchange between Socrates and a young slave, in which Socrates guides the slave from ignorance to mathematical knowledge. The slave’s experiences, in the form of Socrates’ questions and illustrations, are the occasion for his recollection of what he learned previously. Plato’s metaphysics provides additional support for the Innate Knowledge Thesis. Since our knowledge is of abstract, eternal Forms, which clearly lie beyond our sensory experience, it is independent, for its justification, of experience.

Contemporary supporters of Plato’s position are scarce. The initial paradox, which Plato describes as a “trick argument” (Meno, 80e), rings sophistical. The metaphysical assumptions in the solution need justification. The solution does not answer the basic question: Just how did the slave’s soul learn the theorem? The Intuition/Deduction thesis offers an equally, if not more, plausible account of how the slave gains this type of knowledge that is independent of experience. Nonetheless, Plato’s position illustrates the kind of reasoning that has caused many philosophers to adopt some form of the Innate Knowledge thesis. We are confident that we know certain propositions about the external world, but there seems to be no adequate explanation of how we gained this knowledge short of saying that it is innate. Its content is beyond what we directly gain in experience, as well as what we can gain by performing mental operations on what experience provides. It does not seem to be based on an intuition or deduction. That it is innate in us appears to be the best explanation.

Noam Chomsky argues along similar lines in presenting what he describes as a “rationalist conception of the nature of language” (1975, p. 129). Chomsky argues that the experiences available to language learners are far too sparse to account for their knowledge of their language. To explain language acquisition, we must assume that learners have an innate knowledge of a universal grammar capturing the common deep structure of natural languages. It is important to note that Chomsky’s language learners do not know particular propositions describing a universal grammar. They have a set of innate capacities or dispositions which enable and determine their language development. Chomsky gives us a theory of innate learning capacities or structures rather than a theory of innate knowledge. His view does not support the Innate Knowledge thesis as rationalists have traditionally understood it. As one commentator puts it, “Chomsky’s principles … are innate neither in the sense that we are explicitly aware of them, nor in the sense that we have a disposition to recognize their truth as obvious under appropriate circumstances. And hence it is by no means clear that Chomsky is correct in seeing his theory as following the traditional rationalist account of the acquisition of knowledge” (Cottingham 1984, p. 124). Indeed, such a theory, which places nativism at the level of mental capacities or structures enabling us to acquire certain types of knowledge, rather than at the level of knowledge we already possess, is akin to an empiricist take on the issue. Locke and Reid, for instance, believe that the human mind is endowed with certain abilities that, when developed in the usual course of nature, will lead us to acquire useful knowledge of the external world. The analogy runs as follows: it is part of our biology to have a digestive system that, when fed the right kind of food, processes the nutrients required to keep us alive. Similarly, it is part of our biology to have a mental architecture that, when fed the right kind of information and experiences, allows us to process that information and transform it into knowledge. The knowledge itself is no more innate than the processed nutrients are. On a view like this, no knowledge is innate; however, we are born with certain capabilities and dispositions that enable us to acquire knowledge, just as we are equipped with certain organs that allow our bodies to function well while we are alive.

Peter Carruthers (1992) argues that we have innate knowledge of the principles of folk-psychology. Folk-psychology is a network of common-sense generalizations that hold independently of context or culture and concern the relationships of mental states to one another, to the environment and states of the body and to behavior (1992, p. 115). It includes such beliefs as that pains tend to be caused by injury, that pains tend to prevent us from concentrating on tasks, and that perceptions are generally caused by the appropriate state of the environment. Carruthers notes the complexity of folk-psychology, along with its success in explaining our behavior and the fact that its explanations appeal to such unobservables as beliefs, desires, feelings, and thoughts. He argues that the complexity, universality, and depth of folk-psychological principles outstrip what experience can provide, especially to young children who, by their fifth year, already know a great many of them. This knowledge is also not the result of intuition or deduction; folk-psychological generalizations are not seen to be true in an act of intellectual insight. Carruthers concludes, “[The problem] concerning the child’s acquisition of psychological generalizations cannot be solved, unless we suppose that much of folk-psychology is already innate, triggered locally by the child’s experience of itself and others, rather than learned” (1992, p. 121).

Empiricists, and some rationalists, attack the Innate Knowledge thesis in two main ways. First, they offer accounts of how sense experience or intuition and deduction provide the knowledge that is claimed to be innate. Second, they directly criticize the Innate Knowledge thesis itself. The classic statement of this second line of attack is presented in Locke’s Essay. Locke raises the issue of just what innate knowledge is. Particular instances of knowledge are supposed to be in our minds as part of our rational make-up, but how are they “in our minds”? If the implication is that we all consciously have this knowledge, it is plainly false. Propositions often given as examples of innate knowledge, even such plausible candidates as the principle that the same thing cannot both be and not be, are not consciously accepted by children and those with severe cognitive limitations. If the point of calling such principles “innate” is not to imply that they are or have been consciously accepted by all rational beings, then it is hard to see what the point is. “No proposition can be said to be in the mind, which it never yet knew, which it never yet was conscious of” (Essay, 1.2.5). Proponents of innate knowledge might respond that some knowledge is innate in that we have the capacity to have it. That claim, while true, is of little interest, however. “If the capacity of knowing, be the natural impression contended for, all the truths a man ever comes to know, will, by this account, be every one of them, innate; and this great point will amount to no more, but only an improper way of speaking; which whilst it pretends to assert the contrary, says nothing different from those, who deny innate principles. For nobody, I think, ever denied, that the mind was capable of knowing several truths” (Essay, 1.2.5). Locke thus challenges defenders of the Innate Knowledge thesis to present an account of innate knowledge that allows their position to be both true and interesting. A narrow interpretation of innateness faces counterexamples of rational individuals who do not meet its conditions. A generous interpretation implies that all our knowledge, even that clearly provided by experience, is innate.

Defenders of innate knowledge take up Locke’s challenge. Leibniz responds in New Essays by appealing to an account of innateness in terms of natural potential to avoid Locke’s dilemma. Consider Peter Carruthers’ similar reply.

We have noted that while one form of nativism claims (somewhat implausibly) that knowledge is innate in the sense of being present as such (or at least in propositional form) from birth, it might also be maintained that knowledge is innate in the sense of being innately determined to make its appearance at some stage in childhood. This latter thesis is surely the most plausible version of nativism. (1992, p. 51)

Carruthers claims that our innate knowledge is determined through evolutionary selection (p. 111). Evolution has resulted in our being determined to know certain things (e.g. principles of folk-psychology) at particular stages of our life, as part of our natural development. Experiences provide the occasion for our consciously believing the known propositions but not the basis for our knowledge of them (p. 52). Carruthers thus has a ready reply to Locke’s counterexamples of children and cognitively limited persons who do not believe propositions claimed to be instances of innate knowledge. The former have not yet reached the proper stage of development; the latter are persons in whom natural development has broken down (pp. 49–50).

A serious problem for the Innate Knowledge thesis remains, however. We know a proposition only if it is true, we believe it and our belief is warranted. Rationalists who assert the existence of innate knowledge are not just claiming that, as a matter of human evolution, God’s design or some other factor, at a particular point in our development, certain sorts of experiences trigger our belief in particular propositions in a way that does not involve our learning them from the experiences. Their claim is even bolder: In at least some of these cases, our empirically triggered, but not empirically warranted, belief is nonetheless warranted and so known. How can these beliefs be warranted if they do not gain their warrant from the experiences that cause us to have them or from intuition and deduction?

Some rationalists think that a reliabilist account of warrant provides the answer. According to Reliabilism, beliefs are warranted if they are formed by a process that generally produces true beliefs rather than false ones. The true beliefs that constitute our innate knowledge are warranted, then, because they are formed as the result of a reliable belief-forming process. Carruthers maintains that “Innate beliefs will count as known provided that the process through which they come to be innate is a reliable one (provided, that is, that the process tends to generate beliefs that are true)” (1992, p. 77). He argues that natural selection results in the formation of some beliefs and is a truth-reliable process.

An appeal to Reliabilism, or a similar causal theory of warrant, may well be the best way to develop the Innate Knowledge thesis. Even so, some difficulties remain. First, reliabilist accounts of warrant are themselves quite controversial. Second, rationalists must give an account of innate knowledge that maintains and explains the distinction between innate knowledge and non-innate knowledge, and it is not clear that they will be able to do so within such an account of warrant. Suppose for the sake of argument that we have innate knowledge of some proposition, P. What makes our knowledge that P innate? To sharpen the question, what difference between our knowledge that P and a clear case of non-innate knowledge, say our knowledge that something is red based on our current visual experience of a red table, makes the former innate and the latter not innate? In each case, we have a true, warranted belief. In each case, presumably, our belief gains its warrant from the fact that it meets a particular causal condition, e.g., it is produced by a reliable process. In each case, the causal process is one in which an experience causes us to believe the proposition at hand (that P; that something is red), for, as defenders of innate knowledge admit, our belief that P is “triggered” by an experience, as is our belief that something is red. The insight behind the Innate Knowledge thesis seems to be that the difference between our innate and non-innate knowledge lies in the relation between our experience and our belief in each case. The experience that causes our belief that P does not “contain” the information that P, while our visual experience of a red table does “contain” the information that something is red. Yet, exactly what is the nature of this containment relation between our experiences, on the one hand, and what we believe, on the other, that is missing in the one case but present in the other? The nature of the experience-belief relation seems quite similar in each. The causal relation between the experience that triggers our belief that P and our belief that P is contingent, as is the fact that the belief-forming process is reliable. The same is true of our experience of a red table and our belief that something is red. The causal relation between the experience and our belief is again contingent. We might have been so constructed that the experience we describe as “being appeared to redly” caused us to believe, not that something is red, but that something is hot. The process that takes us from the experience to our belief is also only contingently reliable. Moreover, if our experience of a red table “contains” the information that something is red, then that fact, not the existence of a reliable belief-forming process between the two, should be the reason why the experience warrants our belief. By appealing to Reliabilism, or some other causal theory of warrant, rationalists may obtain a way to explain how innate knowledge can be warranted. They still need to show how their explanation supports an account of the difference between innate knowledge and non-innate knowledge. So Locke’s criticism, that rationalists cannot draw a genuine distinction between innate and non-innate knowledge, still stands in the face of the best rationalist defense of the Innate Knowledge thesis.

According to the Innate Concept thesis, some of our concepts have not been gained from experience. They are instead part of our rational make-up, and experience simply triggers a process by which we consciously grasp them. The main concern motivating the rationalist should be familiar by now: the content of some concepts seems to outstrip anything we could have gained from experience. An example of this reasoning is presented by Descartes in the Meditations. Although he sometimes seems committed to the view that all our ideas are innate (Adams 1975 and Gorham 2002), he there classifies our ideas as adventitious, invented by us, and innate. Adventitious ideas, such as a sensation of heat, are gained directly through sense experience. Ideas invented by us, such as our idea of a hippogriff, are created by us from other ideas we possess. Innate ideas, such as our ideas of God, of extended matter, of substance, and of a perfect triangle, are placed in our minds by God at creation. Consider Descartes’s argument that our concept of God, as an infinitely perfect being, is innate. Our concept of God is not directly gained in experience, as particular tastes, sensations, and mental images might be. Its content is beyond what we could ever construct by applying available mental operations to what experience directly provides. From experience, we can gain the concept of a being with finite amounts of various perfections, one, for example, that is finitely knowledgeable, powerful and good. We cannot, however, move from these empirical concepts to the concept of a being of infinite perfection. (“I must not think that, just as my conceptions of rest and darkness are arrived at by negating movement and light, so my perception of the infinite is arrived at not by means of a true idea but by merely negating the finite,” Third Meditation, p. 94.) Descartes supplements this argument with another. Not only is the content of our concept of God beyond what experience can provide; the concept is also a prerequisite for our employment of the concept of finite perfection gained from experience. (“My perception of the infinite, that is God, is in some way prior to my perception of the finite, that is myself. For how could I understand that I doubted or desired—that is lacked something—and that I was not wholly perfect, unless there were in me some idea of a more perfect being which enabled me to recognize my own defects by comparison,” Third Meditation, p. 94.)

An empiricist response to this general line of argument is given by Locke (Essay, 1.4.1–25). First, there is the problem of explaining what it is for someone to have an innate concept. If having an innate concept entails consciously entertaining it at present or in the past, then Descartes’s position is open to obvious counterexamples. Young children and people from other cultures do not consciously entertain the concept of God and have not done so. Second, there is the objection that we have no need to appeal to innate concepts in the first place. Contrary to Descartes’s argument, we can explain how experience provides all our ideas, including those the rationalists take to be innate, and with just the content that the rationalists attribute to them.

Leibniz’s New Essays offers a rationalist reply to the first concern. Where Locke puts forth the image of the mind as a blank slate on which experience writes, Leibniz offers us the image of a block of marble, the veins of which determine what sculpted figures it will accept (New Essays, Preface, p. 153). Leibniz’s metaphor contains an insight that Locke misses. The mind plays a role in determining the nature of its contents. This point does not, however, require the adoption of the Innate Concept thesis. Locke might still point out that we are not required to have, innately, the concepts themselves or the ability to use them. In contemporary terms, what we are required to have is the right hardware to run the relevant software optimally. For Locke, there are no constraints here; for Leibniz, only a particular type of software can be supported by the extant hardware. Put differently, for a Leibnizian, the hardware itself determines what software can be run.

Rationalists have responded to the second part of the empiricist attack on the Innate Concept thesis—the empiricists’ claim that the thesis is without basis, as all our ideas can be explained as derived from experience—by focusing on difficulties in the empiricists’ attempts to give such an explanation. The difficulties are illustrated by Locke’s account. According to Locke, experience consists in external sensation and inner reflection. All our ideas are either simple or complex, with the former being received by us passively in sensation or reflection and the latter being built by the mind from simple materials through various mental operations. Right at the start, the account of how simple ideas are gained is open to an obvious counterexample acknowledged, but then set aside, by Hume in presenting his own empiricist theory. Consider the mental image of a particular shade of blue. If Locke is right, the idea is a simple one and should be passively received by the mind through experience. Hume points out otherwise:

Suppose therefore a person to have enjoyed his sight for thirty years and to have become perfectly acquainted with colors of all kinds, except one particular shade of blue, for instance, which it never has been his fortune to meet with; let all the different shades of that color, except that single one, be placed before him, descending gradually from the deepest to the lightest, it is plain that he will perceive a blank where that shade is wanting and will be sensible that there is a greater distance in that place between the contiguous colors than in any other. Now I ask whether it be possible for him, from his own imagination, to supply this deficiency and raise up to himself the idea of that particular shade, though it had never been conveyed to him by his senses? I believe there are but few will be of the opinion that he can… (Enquiry, 2, pp. 15–16)

Even when it comes to such simple ideas as the image of a particular shade of blue, the mind seems to be more than a blank slate on which experience writes. The main question is whether the veins in Leibniz’s metaphor should count as part of our knowledge or just as part of our biological mental architecture: all the knowledge we can ever acquire is constrained by the kind of beings we are. On the latter reading, we need not posit that concepts themselves are part of the mind’s inner workings from the beginning of our lives.

On the other hand, consider, too, our concept of a particular color, say red. Critics of Locke’s account have pointed out the weaknesses in his explanation of how we gain such a concept by the mental operation of abstraction on individual cases. For one thing, it makes the incorrect assumption that various instances of a particular concept share a common feature. Carruthers puts the objection as follows:

In fact problems arise for empiricists even in connection with the very simplest concepts, such as those of colour. For it is false that all instances of a given colour share some common feature. In which case we cannot acquire the concept of that colour by abstracting the common feature of our experience. Thus consider the concept red. Do all shades of red have something in common? If so, what? It is surely false that individual shades of red consist, as it were, of two distinguishable elements: a general redness together with a particular shade. Rather, redness consists in a continuous range of shades, each of which is only just distinguishable from its neighbors. Acquiring the concept red is a matter of learning the extent of the range. (1992, p. 59)

For another thing, Locke’s account of concept acquisition from particular experiences seems circular: “For noticing or attending to a common feature of various things presupposes that you already possess the concept of the feature in question.” (Carruthers 1992, p. 55)

Consider in this regard Locke’s account of how we gain our concept of causation.

In the notice that our senses take of the constant vicissitude of things, we cannot but observe, that several particulars, both qualities and substances; begin to exist; and that they receive this their existence from the due application and operation of some other being. From this observation, we get our ideas of cause and effect. (Essay, 2.26.1)

We get our concept of causation from our observation that some things receive their existence from the application and operation of some other things. Yet, to be able to make this observation, we must have our minds primed to do so. Rationalists argue that we cannot make this observation unless we already have the concept of causation. Empiricists, on the other hand, argue that our minds are constituted in a certain way, so that we can gain our ideas of causation and of power in a non-circular manner.

Rationalists would argue that Locke’s account of how we gain our idea of power displays a similar circularity.

The mind being every day informed, by the senses, of the alteration of those simple ideas, it observes in things without; and taking notice how one comes to an end, and ceases to be, and another begins to exist which was not before; reflecting also on what passes within itself, and observing a constant change of its ideas, sometimes by the impression of outward objects on the senses, and sometimes by the determination of its own choice; and concluding from what it has so constantly observed to have been, that the like changes will for the future be made in the same things, by like agents, and by the like ways, considers in one thing the possibility of having any of its simple ideas changed, and in another the possibility of making that change; and so comes by that idea which we call power. (Essay, 2.21.1)

We come by the idea of power through considering the possibility of changes in our ideas made by experiences and our own choices. Yet, to consider this possibility—of some things making a change in others—we must already have a concept of power, rationalists would say. Empiricists, on the other hand, would point out, again, that what we actually need is for our minds to be able to recognize such changes, by having the correct abilities and faculties. Just as we do not need a concept telling us how binocular vision works in order to see in depth, recognizing change would be cashed out in terms of our having the requisite faculty for doing so.

Another way to meet at least some of these challenges to an empiricist account of the origin of our concepts is to revise our understanding of the content of our concepts so as to bring them more in line with what experience will clearly provide. Hume famously takes this approach. Beginning in a way reminiscent of Locke, he distinguishes between two forms of mental contents or “perceptions,” as he calls them: impressions and ideas. Impressions are the contents of our current experiences: our sensations, feelings, emotions, desires, and so on. Ideas are mental contents derived from impressions. Simple ideas are copies of impressions; complex ideas are derived from impressions by “compounding, transposing, augmenting or diminishing” them. Given that all our ideas are thus gained from experience, Hume offers us the following method for determining the content of any idea and thereby the meaning of any term taken to express it.

When we entertain, therefore, any suspicion that a philosophical term is employed without any meaning or idea (as is but too frequent), we need but inquire from what impression is that supposed idea derived? And if it be impossible to assign any, this will confirm our suspicion. (Enquiry, 2, p. 16)

Using this test, Hume draws out one of the most important implications of the empiricists’ denial of the Innate Concept thesis. If experience is indeed the source of all ideas, then our experiences also determine the content of our ideas. Our ideas of causation, of substance, of right and wrong have their content determined by the experiences that provide them. Those experiences, Hume argues, are unable to support the content that many rationalists and some empiricists, such as Locke, attribute to the corresponding ideas. Our inability to explain how some concepts, with the contents the rationalists attribute to them, are gained from experience should not lead us to adopt the Innate Concept thesis. It should lead us to accept a more limited view of the contents for those concepts, and thereby a more limited view of our ability to describe and understand the world.

Consider, for example, our idea of causation. Descartes takes it to be innate. Hume’s empiricist account severely limits its content. Our idea of causation is derived from a feeling of expectation rooted in our experiences of the constant conjunction of similar causes and effects.

It appears, then, that this idea of a necessary connection among events arises from a number of similar instances which occur, of the constant conjunction of these events; nor can that idea ever be suggested by any one of these instances surveyed in all possible lights and positions. But there is nothing in a number of instances, different from every single instance, which is supposed to be exactly similar, except only that after a repetition of similar instances the mind is carried by habit, upon the appearance of one event, to expect its usual attendant and to believe that it will exist. This connection, therefore, which we feel in the mind, this customary transition of the imagination from one object to its usual attendant, is the sentiment or impression from which we form the idea of power or necessary connection. (Enquiry, 7.2, p. 59)

The source of our idea in experience determines its content.

Suitably to this experience, therefore, we may define a cause to be an object followed by another, and where all the objects, similar to the first are followed by objects similar to the second… We may, therefore, suitably to this experience, form another definition of cause and call it an object followed by another, and whose appearance always conveys the thought of the other. (Enquiry, 7.2, p. 60)

Our claims, and any knowledge we may have, about causal connections in the world turn out, given the limited content of our empirically based concept of causation, to be claims and knowledge about the constant conjunction of events and our own feelings of expectation. Thus, the initial disagreement between rationalists and empiricists about the source of our ideas leads to one about their content and thereby the content of our descriptions and knowledge of the world.

Like philosophical debates generally, the rationalist/empiricist debate ultimately concerns our position in the world, in this case our position as rational inquirers. To what extent do our faculties of reason and experience support our attempts to know and understand our situation?

  • Adams, R., 1975, “Where Do Our Ideas Come From? Descartes vs Locke”, reprinted in S. Stich (ed.), Innate Ideas, Berkeley, CA: University of California Press.
  • Alexander, J. and Weinberg, J., 2007, “Analytic Epistemology and Experimental Philosophy,” Philosophy Compass, 2(1): 56–80.
  • Aune, B., 1970, Rationalism, Empiricism and Pragmatism: An Introduction, New York: Random House.
  • Ayer, A. J., 1952, Language, Truth and Logic, New York: Dover Publications.
  • Bealer, G., 1999, “A Theory of the A priori,” Noûs, 33: 29–55.
  • Bealer, G. and Strawson, P. F., 1992, “The Incoherence of Empiricism,” Proceedings of the Aristotelian Society (Supplementary Volume), 66: 99–143.
  • Block, N., 1981, Essays in Philosophy of Psychology (Volume II), London: Methuen, Part Four.
  • Bonjour, L., 1998, In Defense of Pure Reason, Cambridge: Cambridge University Press.
  • Boyle, D., 2009, Descartes on Innate Ideas, London: Continuum.
  • Carruthers, P., 1992, Human Knowledge and Human Nature, Oxford: Oxford University Press.
  • Casullo, A., 2003, A priori Knowledge and Justification, New York: Oxford University Press.
  • Casullo, A. (ed.), 2012, Essays on A priori Knowledge and Justification, New York: Oxford University Press.
  • Chomsky, N., 1975, “Recent Contributions to the Theory of Innate Ideas”, reprinted in S. Stich (ed.), Innate Ideas, Berkeley, CA: University of California Press.
  • –––, 1988, Language and Problems of Knowledge, Cambridge, MA: MIT Press.
  • Clarke, D., 1982, Descartes’ Philosophy of Science, Manchester: Manchester University Press.
  • Cottingham, J., 1984, Rationalism, London: Paladin Books.
  • De Paul, M. and W. Ramsey (eds.), 1998, Rethinking Intuition: The Psychology of Intuition and Its Role in Philosophical Inquiry, Lanham, MD: Rowman and Littlefield.
  • De Rosa, R., 2000, “On Fodor’s Claim That Classical Empiricists and Rationalists Agree on the Innateness of Ideas”, ProtoSociology, 14: 240–269.
  • –––, 2004, “Locke’s Essay, Book I: The Question-Begging Status of the Anti-Nativist Arguments”, Philosophy and Phenomenological Research, 69: 37–64.
  • Descartes, R., 1628, Rules for the Direction of our Native Intelligence, in Descartes: Selected Philosophical Writings, John Cottingham, Robert Stoothoff and Dugald Murdoch (trans.), Cambridge: Cambridge University Press, 1988 [abbreviated as Rules].
  • –––, 1641, Meditations, in Descartes: Selected Philosophical Writings, John Cottingham, Robert Stoothoff and Dugald Murdoch (trans.), Cambridge: Cambridge University Press, 1988 [abbreviated as Meditations].
  • –––, 1644, Principles of Philosophy, in Descartes: Selected Philosophical Writings, John Cottingham, Robert Stoothoff and Dugald Murdoch (trans.), Cambridge: Cambridge University Press, 1988.
  • Falkenstein, L., 2004, “Nativism and the Nature of Thought in Reid’s Account of Our Knowledge of the External World”, in Terence Cuneo and Rene Van Woudenberg (eds.), The Cambridge Companion to Reid, Cambridge: Cambridge University Press, pp. 156–179.
  • Fodor, J., 1975, The Language of Thought, Cambridge, MA: Harvard University Press.
  • –––, 1981, Representations, Brighton: Harvester.
  • Gorham, G., 2002, “Descartes on the Innateness of All Ideas,” Canadian Journal of Philosophy, 32(3): 355–388.
  • Huemer, M., 2001, Skepticism and the Veil of Perception, Lanham, MD: Rowman and Littlefield.
  • –––, 2005, Ethical Intuitionism, Hampshire: Palgrave MacMillan.
  • Hume, D., 1739–40, A Treatise of Human Nature, David Fate Norton and Mary J. Norton (eds.), The Clarendon Edition of the Works of David Hume, Oxford: Oxford University Press, 2011 [abbreviated as Treatise].
  • –––, 1748, An Enquiry Concerning Human Understanding, Tom L. Beauchamp (ed.), The Clarendon Edition of the Works of David Hume, Oxford: Oxford University Press, 2000 [abbreviated as Enquiry].
  • Kant, I., 1783, Prolegomena to Any Future Metaphysic, Jonathan Bennett (trans.), PDF available online at Early Modern Texts [abbreviated as Prolegomena].
  • Kenny, A., 1986, Rationalism, Empiricism and Idealism, Oxford: Oxford University Press.
  • Kripke, S., 1980, Naming and Necessity, Oxford: Blackwell.
  • Leibniz, G., c1704, New Essays on Human Understanding, in Leibniz: Philosophical Writings, G.H.R. Parkinson (ed.), Mary Morris and G.H.R. Parkinson (trans.), London: J.M. Dent & Sons, 1973 [abbreviated as New Essays].
  • Locke, J., 1690, An Essay Concerning Human Understanding, Peter H. Nidditch (ed.), Oxford: Clarendon Press, 1975 [abbreviated as Essay].
  • Loeb, L., 1981, From Descartes to Hume: Continental Metaphysics and the Development of Modern Philosophy, Ithaca, NY: Cornell University Press.
  • Mackie, J. L., 1977, Ethics: Inventing Right and Wrong, London: Penguin Books.
  • Nadler, S., 2006, “The Doctrine of Ideas”, in S. Gaukroger (ed.), The Blackwell Guide to Descartes’ Meditations, Oxford: Blackwell Publishing.
  • Plato, Meno, W. K. C. Guthrie (trans.), in Plato: Collected Dialogues, Edith Hamilton and Huntington Cairns (eds.), Princeton: Princeton University Press, 1973.
  • Quine, W. V. O., 1951, “Two Dogmas of Empiricism,” reprinted in W. V. O. Quine, From a Logical Point of View, Cambridge, MA: Harvard University Press, 1953.
  • –––, 1966, Ways of Paradox and Other Essays, New York: Random House.
  • Reid, T., 1785, Essays on the Intellectual Powers of Man, Derek Brookes and Knud Haakonssen (eds.), Edinburgh: Edinburgh University Press, 2002 [abbreviated as Intellectual Powers].
  • Ross, W. D., 1930, The Right and the Good, Indianapolis, IN: Hackett Publishing, 1988.
  • Stich, S. (ed.), 1975, Innate Ideas, Berkeley, CA: University of California Press.
  • Van Cleve, J., 2015, Problems from Reid, Oxford: Oxford University Press.
  • Weinberg, S., 2016, Consciousness in Locke, Oxford: Oxford University Press.

Susi Ferrarello Ph.D.

Being Rational Versus Being Reasonable

When rationality can become a cage that keeps us from knowing what is true.

Posted October 4, 2022 | Reviewed by Ekua Hagan

  • Emotional reactions are as important as rational thoughts.
  • Both the "emotional" and the "rational" can arrive at correct conclusions even if the procedure to get there is not logically articulated.
  • Being reasonable requires a flexible midpoint between rational thoughts and emotions.

Rationality is what keeps us sane in the messiness of life. Being able to stay rational through a difficult time, or just for as long as a simple argument, is a sort of safety vest we can wear to not drown in our fears and anxieties.

Yet, despite its appearance, rationality brings with it two big problems:

  • Rarely do we know what rationality is.
  • Rationality can become a cage for insanity.

In my office, I have often noticed how rationality ends up being associated with the voice of those who are more assertive, bossy, or even bullying in a conversation. Those who are emotionally wounded are considered irrational because their answers consist of emotional reactions, such as tears or broken sentences.

For this reason, let’s examine these two points more closely to see if we can find a healthier alternative to rationality.

The habit of being rational

In this short reflection, I will play the devil’s advocate in considering the shadows of rationality. In philosophy, rationality seems to be a self-evident truth with which we come in contact while exploring a certain topic or observing a given aspect of reality.

This means that per se rationality is a procedural activity and not necessarily a quality that belongs to one interlocutor. It might happen that in a conversation between two individuals, both the "emotional" and the "rational" can arrive at rational conclusions even if the procedure to get there was not necessarily logically articulated.

We tend to associate rationality with a quality belonging to someone ("that person is rational," "I consider myself a rational person") because rationality points to the procedural way of thinking that, as Dewey explained, is connected to habits (How We Think, 1910). As Dewey noticed, thinking is the process in which we fill the gaps and put into words what is in front of us but does not have words yet. To think, we need to leverage habits; that is, we need to use what we already know and associate it with what we do not know yet. In doing so, habits are our necessary allies in figuring out what is not yet understandable to us.

For this reason, it happens that rationality becomes the quality of someone more than the property of what we observe and try to explain. Those who habitually think through a certain process and are used to winning more arguments become, with time, the rational ones. They are those who understood the process and what habits to leverage to get there.

This does not mean that they are the unique holders of rationality—maybe applying the categories of rationality and irrationality to people is not so farsighted. I believe that rationality is more the character of what we try to grasp than the persistent quality of an individual who either makes sense or does not.

Three forms of rationality

James—another pragmatist philosopher—defines at least three forms of rationality: absolutist, structural, and calculative. Believing that it is possible to achieve an absolutely rational decision or believing that there is a rational structure that underlies the universe is, according to James, irrational and a counterproductive simplification of facts:

"Our passional nature not only lawfully may, but must, decide an option between propositions, whenever it is a genuine option that cannot by its nature be decided on intellectual grounds; for to say under such circumstances, 'Do not decide, but leave the question open' is itself a passional decision—just like deciding yes or no—and is attended with the same risk of losing the truth." (William James, "The Will to Believe," in The Will to Believe and Other Essays in Popular Philosophy (New York, 1979), p. 11)


Rationality for James seems to be a calculation or a procedure that we set in motion because we believe it is fit to achieve something:

" ... to develop a thought's meaning, we need only determine what conduct it is fit to produce: that conduct is for us its sole significance. And the tangible fact at the root of all of our thought distinctions, however subtle, is that there is no one of them so fine as to consist in anything but a possible difference of practice" (William James, "Pragmatism," as published in William James: Writings 1902–1910 (The Library of America, 1987), p. 506)

Being rational in an argument versus being reasonable

Hence, being rational in an argument does not mean that you need to be good at articulating your thoughts or able to make more sense than others. If things were that simple, people who do not master the language or have problems expressing their ideas would always be wrong and irrational. Being rational means being reasonable, that is, being able to use your reason in a way that fits the process you are undertaking at that moment.

Let’s use the example of a partnership where the husband tends to react to an argument with tears while the wife tends to react with detached and well-organized sentences. Common sense would say that the wife is the rational one in the couple because she can still reason. It is true that she can reason, but her way of reasoning is not "reasonable" because she—like her husband—is not capable of finding a language that is appropriate for the conversation they are having.

They both probably need to make small changes in their attitudes so they can meet each other in the middle and devise reasoning that better fits the argument they are having. She might have to understand how her husband is feeling, and he might need to make an effort to put his emotions into words.

Being rational can be just as unreasonable as being emotional.

The cage of rationality

In his beautiful book, Reason and Existenz , the existential philosopher and psychiatrist Karl Jaspers warns us about the risks of rationality and its impulsive nature.

Rationality often points to an ethically charged way of making sense of the world, one that confronts us with dichotomies that do not help our spirit.

To give an example, a woman might think that it is rational to make peace with the idea that she can have either her career or her family, and that trying to have both is irrational. Or a person who grew up in a hostile environment might develop the rational dichotomy according to which people who are nice to her are probably trying to fool her, because there are no gratuitously kind people in life.

These are just a few of the worldviews that underlie the way we think of life. According to Jaspers, these dichotomies arise from moments in which we lived through limit situations and had to form an idea of the world that makes sense to us. With time, this habitual idea becomes the rational framework we use to explain events occurring in life.

Yet, as we can easily see from these two examples, remaining faithful to these dichotomies would feed a gloomy view of life that is all but rational. Sometimes what we take for granted as rational in our mind is strongly fed by emotions and passions that we need to bring under control. The dichotomy of either/or arises exactly from our fear of falling back into chaos. To use the example above: I am a woman. I don’t want to be miserable. I must choose whether I want a career or a child.

This "rational" statement might come from the sense of anguish felt when looking at the model set by the family or the clichés presented by the media. According to Jaspers, to achieve a healthy life we should be able to harmonize the dichotomy as much as possible into a fluid way of thinking.

To conclude, I would say that being reasonable, more than being rational, is what matters in life: being open to the complexity of reality as it presents itself in the moment, and choosing to keep your habits as open and flexible as possible. In doing so, we need to overcome our fear of drowning in the messiness of life and losing control of what we think we know.


Susi Ferrarello, Ph.D., is an associate professor at California State University, East Bay, and a philosophical counselor.


How to encourage and train students to think rationally


We all like to think of ourselves as rational actors, but are we really? Much of what we do is guided by habit, emotion, and cognitive biases that encourage us to take mental shortcuts. For students, this can result in faulty conclusions and ineffective learning strategies. Thus, training students’ rational thinking skills — the ability to draw measured conclusions from data, rules, and logic — can have some very real benefits.

Promoting rational thinking can improve students’ problem-solving skills, making them more capable learners across subject areas. Competent rational thinkers have extra tools to help them focus and manage their emotions, benefits that extend well beyond the classroom. While there’s no single quick fix for developing skilled rational thinkers, there are some broad techniques you can use to help students cultivate these abilities over time.

Five techniques to encourage students to think rationally

1. Welcome questions from students — no matter how big or small

It might be obvious, but encouraging and welcoming student questions is an essential step towards making them better, clearer thinkers. Rational thinkers are those who take the time to think about things from different angles and explore gaps in knowledge where they find them. Impress upon students that there are no “stupid questions,” and create a space where it is safe for them to think aloud as they come up with new ideas or question existing ones.

Use tricky questions as an opportunity to find the answers together, either through discussion or research. While it’s not always possible to diverge from the lesson plan, student curiosity should be rewarded with time and attention as much as possible!

2. Focus on systematic problem-solving to train the trial-and-error process


One of the key principles of rational thinking is that many problems can be solved through thinking, with enough time and effort. The inverse of this is the binary attitude that students either know or don’t know how to do something (until they are shown).

Prompt students to work things out on their own through trial and error before testing their conclusions against those of their classmates. Remind students that making errors is an important part of the learning process, as it provides a chance for them to learn from their mistakes.

If you want to try this out in your classroom, consider assigning a Kialo Edu discussion as an essay alternative . Students can use the comment section to help their classmates sharpen their arguments through multiple rounds of peer feedback.

Rather than assigning a grade yourself, tell students they can choose their own grade once they are happy with their work. Activities like this encourage students to actively seek out constructive criticism as a way of strengthening their own ideas.

3. Allow students to explore the range of potential solutions to a problem


Jonathan Baron, a notable psychologist on the subject of thinking, identifies “insufficient search” as a major obstacle to thinking effectively. 1 He defines this as the failure to consider more than one possible approach or answer to a given problem.

To train students to think “outside of the box,” give them lots of opportunities to tackle complex topics for which there are no easy — or singularly correct — answers. Open-ended classroom discussions on social and philosophical problems can stimulate this kind of thinking, which some studies have even linked to improvements across foundational skills like reading and math.

If you’re looking for ideas on where to start, Kialo Edu is designed to facilitate this type of wide-ranging discussion, with countless ideas in our library of debate topics to help spark inspiration.

Educators should also take steps to create a classroom culture where students can discuss their ideas freely and won’t feel personally attacked when those ideas are challenged. Emphasize that winning is not the goal of classroom debates, but rather learning something new together.

4. Talk about thinking with your students

For students to develop their rational faculties, they need to be aware of their own thinking. By being more aware, students can train themselves to recognize — and avoid — careless thinking. Reflective practices like learning journals prompt students to visualize how their understanding has progressed with practice and contribute to a growth mindset .

“Showing your work” is also universally applicable, whether students are tackling math problems or defending their position on an ethical question.

5. Practice what you teach by modeling rational thinking

It’s essential that educators model the kind of thinking practices they want students to develop. Be honest about the gaps in your knowledge when they come up, and be willing to change your mind when faced with new evidence. Students should learn that a sign of true intellectual strength is not having been right all along, but having the curiosity and perseverance to work towards the correct answer.

Here at Kialo Edu, we’re passionate about helping to build the next generation of accomplished rational thinkers. That’s why we designed our platform to encourage civil discourse, collaborative learning , and systematic thinking as students work together to build out the different aspects of a debate.

How do you teach your students to think rationally? If you have a technique that works, feel free to reach out and tell us about it on social media, or directly at [email protected]

Looking for more inspiration on how to teach critical thinking in your classroom? We’ve got lots of other resources!

  • Baron, J. (2006). Thinking and Deciding (4th ed., p. 6). Cambridge: Cambridge University Press. doi:10.1017/CBO9780511840265




Daniel Kahneman, Who Plumbed the Psychology of Economics, Dies at 90

He helped pioneer a branch of the field that exposed hard-wired mental biases in people’s economic behavior. The work led to a Nobel.

Daniel Kahneman, a balding man with glasses wearing a blue blazer and a tie, stands in front of a red brick building and smiles.

By Robert D. Hershey Jr.

Daniel Kahneman, who never took an economics course but who pioneered a psychologically based branch of that field that led to a Nobel in economic science in 2002, died on Wednesday. He was 90.

His death was confirmed by his partner, Barbara Tversky. She declined to say where he died.

Professor Kahneman, who was long associated with Princeton University and lived in Manhattan, employed his training as a psychologist to advance what came to be called behavioral economics. The work, done largely in the 1970s, led to a rethinking of issues as far-flung as medical malpractice, international political negotiations and the evaluation of baseball talent, all of which he analyzed, mostly in collaboration with Amos Tversky, a Stanford cognitive psychologist who did groundbreaking work on human judgment and decision-making. (Ms. Tversky, also a professor of psychology at Stanford, had been married to Professor Tversky, who died in 1996. She and Professor Kahneman became partners several years ago.)

As opposed to traditional economics, which assumes that human beings generally act in fully rational ways and that any exceptions tend to disappear as the stakes are raised, the behavioral school is based on exposing hard-wired mental biases that can warp judgment, often with counterintuitive results.

“His central message could not be more important,” the Harvard psychologist and author Steven Pinker told The Guardian in 2014, “namely, that human reason left to its own devices is apt to engage in a number of fallacies and systematic errors, so if we want to make better decisions in our personal lives and as a society, we ought to be aware of these biases and seek workarounds. That’s a powerful and important discovery.”

Professor Kahneman delighted in pointing out and explaining what he called universal brain “kinks.” The most important of these, the behaviorists hold, is loss aversion: why, for example, does losing $100 hurt about twice as much as gaining $100 gives pleasure?

Among its myriad implications, loss-aversion theory suggests that it is foolish to check one’s stock portfolio frequently, since the predominance of pain experienced in the stock market will most likely lead to excessive and possibly self-defeating caution.

Loss-aversion also explains why golfers have been found to putt better when going for par on a given hole than for a stroke-gaining birdie. They try harder on a par putt because they dearly want to avoid a bogey, or a loss of a stroke.
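That two-to-one asymmetry can be written down as a toy prospect-theory value function. This is a minimal sketch, assuming a linear function and an illustrative loss-aversion coefficient of 2 (the full theory also curves the function near the reference point):

```python
# Toy prospect-theory value function: losses are weighted about twice
# as heavily as equivalent gains. LOSS_AVERSION = 2.0 is illustrative.
LOSS_AVERSION = 2.0

def felt_value(change: float) -> float:
    """Subjective value of a monetary change (linear sketch)."""
    if change >= 0:
        return change
    return LOSS_AVERSION * change  # losses loom larger

print(felt_value(100.0))   # gaining $100 feels like +100.0
print(felt_value(-100.0))  # losing $100 feels like -200.0
```

On this sketch, a fair coin flip for ±$100 has a negative felt value, (100 − 200) / 2 = −50, which is one way to see why people refuse small even-odds gambles — and why a frequently checked portfolio feels worse than it performs.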

Mild-mannered and self-effacing, Professor Kahneman not only welcomed debate on his ideas; he also enlisted the help of adversaries as well as colleagues to perfect them. When asked who should be considered the “father” of behavioral economics, Professor Kahneman pointed to the University of Chicago economist Richard H. Thaler , a younger scholar (by 11 years) whom he described in his Nobel autobiography as his second most important professional friend, after Professor Tversky.

“I’m the grandfather of behavioral economics,” Professor Kahneman allowed in a 2016 interview for this obituary, in a restaurant near his home in Lower Manhattan.

This new school of thought did not get its first major public airing until 1985, in a conference at the University of Chicago Graduate School of Business, a bastion of traditional economics.

Professor Kahneman’s public reputation rested heavily on his 2011 book “Thinking, Fast and Slow,” which appeared on best-seller lists in science and business. One commentator, the essayist, mathematical statistician and former option trader Nassim Nicholas Taleb, author of the influential book on improbability “The Black Swan,” placed “Thinking” in the same league as Adam Smith’s “The Wealth of Nations” and Sigmund Freud’s “The Interpretation of Dreams.”

The author Jim Holt, writing in The New York Times Book Review , called “Thinking” “an astonishingly rich book: lucid, profound, full of intellectual surprises and self-help value.”

Shane Frederick, a professor at the Yale School of Management and a Kahneman protégé, said by email in 2016 that Professor Kahneman had “helped transform economics into a true behavioral science rather than a mere mathematical exercise in outlining the logical entailments of a set of often wildly untenable assumptions.”

An Accessible Writer

Professor Kahneman propagated his findings with an appealing writing style, using illustrative vignettes with which even lay readers could engage.

Professor Kahneman wrote, for example, that Professor Thaler had inspired him to study, as an experiment, the so-called mental accounting of someone who arrives at the theater and realizes that he has lost either his ticket or the cash equivalent. Professor Kahneman found that people who lost the cash would still buy a ticket by some means, while those who lost an already purchased ticket would more likely go home.

Professor Thaler won the 2017 Nobel in economic science — officially the Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel. Professor Kahneman shared his 2002 Nobel with Vernon L. Smith of George Mason University in Virginia. “Had Tversky lived, he would certainly have shared the Nobel with Kahneman, his longtime collaborator and dear friend,” Professor Holt wrote in his 2011 Times review . Professor Tversky died in 1996 at 59.

Much of Professor Kahneman’s work is grounded in the notion — which he did not originate but organized and advanced — that the mind operates in two modes: fast and intuitive (mental activities that we’re more or less born with, called System One), or slow and analytical, a more complex mode involving experience and requiring effort (System Two).

Others have personified these mental modes as Econs (rational, analytical people) and Humans (emotional, impulsive and prone to exhibit unconscious mental biases and an unwise reliance on dubious rules of thumb). Professor Kahneman and Professor Tversky used the word “heuristics” to describe these rules of thumb. One is the “halo effect,” where in observing a positive attribute of another person one perceives other strengths that aren’t really there.

“Before Kahneman and Tversky, people who thought about social problems and human behavior tended to assume that we are mostly rational agents,” the Times columnist David Brooks wrote in 2011 . “They assumed that people have control over the most important parts of their own thinking. They assumed that people are basically sensible utility-maximizers, and that when they depart from reason it’s because some passion like fear or love has distorted their judgment.”

But Professors Kahneman and Tversky, he went on, “yielded a different vision of human nature.”

As Mr. Brooks described it: “We are players in a game we don’t understand. Most of our own thinking is below awareness.” He added: “Our biases frequently cause us to want the wrong things. Our perceptions and memories are slippery, especially about our own mental states. Our free will is bounded. We have much less control over ourselves than we thought.”

The work of Professor Kahneman and Professor Tversky, he concluded, “will be remembered hundreds of years from now.”

In the Shadow of Nazis

Daniel Kahneman was born on March 5, 1934, into a family of Lithuanian Jews who had emigrated to France in the early 1920s. After France fell to Nazi Germany in World War II, Daniel, like other Jews, was forced to wear a Star of David on the outside of his clothing. His father, the research chief in a chemical factory, was seized and interned at a way station before deportation to an extermination camp, but he was then released under mysterious circumstances. The family escaped to the Riviera and then to central France, where they lived in a converted chicken coop.

Daniel’s father died just before D-Day, in June 1944, and Daniel, by then an eighth-grader, and his sister, Ruth, wound up in British-controlled Palestine with their mother, Rachel. (Daniel had been born in Tel Aviv during an extended visit with relatives by his mother.)

He graduated from the Hebrew University of Jerusalem with a major in psychology, completing his college studies in two years. In 1954, after the founding of the state of Israel, he was drafted into the Israel Defense Forces as a second lieutenant.

After a year as a platoon leader, he was transferred to the psychology branch, where he was given occasional assignments to assess candidates for officer training.

The unit’s ability to predict performance, however, was so poor that he coined the term “illusion of validity,” meaning a cognitive bias in which one displays overconfidence in the accuracy of one’s judgments. Two decades later this “illusion” became one of the most frequently cited elements in psychology literature.

He married Irah Kahan in Israel, and they soon set off for the University of California, Berkeley, where he had been granted a fellowship. He earned his Ph.D. in psychology there. He returned to Israel to teach at Hebrew University from 1961 to 1977. The marriage ended in divorce. (Professor Kahneman held dual citizenships, in the United States and Israel.)

In 1978, Professor Kahneman married Anne Treisman, a noted British psychologist who shared his interest in the study of attention, which was the chief subject of his early work. The two of them ran a lab and wrote papers together. In 2013 she received the National Medal of Science from President Barack Obama. She died in 2018. He and Ms. Treisman had long been friends with the Tverskys.

In addition to Ms. Tversky, he is survived by a son and daughter from his first marriage, Michael Kahneman and Lenore Shoham; two stepdaughters from his second marriage, Jessica and Deborah Treisman; two stepsons from the same marriage, Daniel and Stephen Treisman; three grandchildren; and four step-granddaughters. He lived in Greenwich Village for many years.

It was in Jerusalem, while developing a training course for Air Force flight instructors, that Professor Kahneman had “the most satisfying Eureka experience of my career,” as he wrote in an autobiographical sketch for the Nobel committee.

He had started to preach the traditional view that to promote learning, praise is more effective than punishment. But a seasoned colleague insisted otherwise, telling him, as Professor Kahneman recalled:

“On many occasions I have praised flight cadets for clean execution of some aerobatic maneuver, and in general when they try it again, they do worse. On the other hand, I have often screamed at cadets for bad execution, and in general they do better the next time. So please don’t tell us that reinforcement works and punishment does not, because the opposite is the case.”

The colleague had insisted — and convinced Professor Kahneman — that the pattern had nothing to do with feedback: a performance that is unusually good or unusually bad in one instance tends, on the next attempt, to regress to the mean, or average.

“This was a joyous moment, in which I learned an important truth about the world,” Professor Kahneman wrote. “Because we tend to reward others when they do well and punish them when they do badly, and because there is regression to the mean, it is part of the human condition that we are statistically punished for rewarding others and rewarded for punishing them.”
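The statistical truth behind the anecdote can be made concrete with a small simulation (the model and numbers here are illustrative, not drawn from the obituary): if each cadet’s score is a fixed skill plus random luck, the best first attempts are disproportionately the lucky ones, so second attempts sag back toward average — and the worst improve — with no praise or punishment modeled at all.

```python
import random

random.seed(0)

# Illustrative model: each cadet has a fixed skill; every maneuver
# score is that skill plus independent luck. Praise and punishment
# are not modeled anywhere.
cadets = []
for _ in range(10_000):
    skill = random.gauss(0, 1)
    cadets.append({
        "first": skill + random.gauss(0, 1),   # first attempt
        "second": skill + random.gauss(0, 1),  # second attempt
    })

def avg(xs):
    return sum(xs) / len(xs)

# Cadets who flew best the first time (the ones an instructor would
# have praised) tend to do worse the second time; the worst tend to
# improve -- regression to the mean, with no feedback in the model.
best = [c for c in cadets if c["first"] > 1.5]
worst = [c for c in cadets if c["first"] < -1.5]

print(f"top group, avg change:    {avg([c['second'] - c['first'] for c in best]):+.2f}")
print(f"bottom group, avg change: {avg([c['second'] - c['first'] for c in worst]):+.2f}")
```

The top group’s average change comes out negative and the bottom group’s positive, which is exactly the pattern the flight instructor misread as proof that screaming works and praise backfires.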

His collaboration with Professor Tversky — their peak productive years were 1971 to 1981 — was exceptionally close, so much so that it inspired the author Michael Lewis to write a book about them, “The Undoing Project: A Friendship That Changed Our Minds” (2016).

“Amos and I shared the wonder of together owning a goose that could lay golden eggs — a joint mind that was better than our separate minds,” Professor Kahneman wrote in his Nobel autobiography. Later, in “Thinking,” he wrote, “The pleasure we found in working together made us exceptionally patient; it is much easier to strive for perfection when you are never bored.”

Mr. Lewis reported that the two men worked on a single typewriter, often amid uproarious laughter and shouts in Hebrew and English, and that they had sometimes flipped a coin to determine whose name would be listed first on a paper.

But they also feuded, particularly when Professor Kahneman thought he was being denied proper credit. One falling-out lasted years, ending finally with a reconciliation. Professor Kahneman was solicitous during his colleague’s final illness (he died of metastatic melanoma) and was his main eulogist at his funeral in 1996.

One product of their collaboration was a finding that overconfidence in conjunction with optimism is an extremely common bias, which leads people to think that wars are quickly winnable and that building projects will be completed on budget. But Professor Kahneman and Professor Tversky considered such bias necessary in the end for capitalism to function.

Professor Kahneman’s North American career included teaching posts at the University of British Columbia and Berkeley before he joined the Princeton University faculty in 1993.

His most recent book was “Noise: A Flaw in Human Judgment” (2021), written with Cass Sunstein and Olivier Sibony. In The Times Book Review, Steven Brill called it a “tour de force of scholarship and clear writing.”

The book looks at how human judgment often varies wildly even among specialists, as reflected in judicial decisions, insurance premiums, medical diagnoses and corporate decisions, as well as in many other aspects of life.

And it distinguishes between predictable biases — a judge, for example, who consistently sentences Black defendants more harshly — and what the authors call “noise”: less explainable decisions resulting from what they define as “unwanted variability in judgments.” In one example, the authors report that doctors are more likely to order cancer screenings for patients they see early in the morning than late in the afternoon.

The book, like his others, was an outgrowth of Professor Kahneman’s lifelong quest to understand how the human mind works — what thought processes lead people to make the kinds of decisions and judgments they do as they navigate a complex world. And toward the end of his life he acknowledged that so much more was to be known.

In an interview with Kara Swisher on her Times podcast “Sway” in 2021, he said, “If I were starting my career now, I would be choosing between artificial intelligence and neuroscience, because those are now particularly exciting ways of looking at human nature.”

Robert D. Hershey Jr., a longtime reporter who wrote about finance and economics for The Times, died in January. Alex Traub contributed reporting.
