
The Sapir-Whorf Hypothesis: How Language Influences How We Express Ourselves




The Sapir-Whorf Hypothesis, also known as linguistic relativity, refers to the idea that the language a person speaks can influence their worldview, thought, and even how they experience and understand the world.

While more extreme versions of the hypothesis have largely been discredited, a growing body of research has demonstrated that language can meaningfully shape how we understand the world around us and even ourselves.

Keep reading to learn more about linguistic relativity, including some real-world examples of how it shapes thoughts, emotions, and behavior.  

The hypothesis is named after anthropologist and linguist Edward Sapir and his student, Benjamin Lee Whorf. While the hypothesis bears both their names, the two never actually co-authored a formal statement of it.

This Hypothesis Aims to Figure Out How Language and Culture Are Connected

Sapir was interested in charting the difference in language and cultural worldviews, including how language and culture influence each other. Whorf took this work on how language and culture shape each other a step further to explore how different languages might shape thought and behavior.

Since then, the concept has evolved into multiple variations, some more credible than others.

Linguistic Determinism Is an Extreme Version of the Hypothesis

Linguistic determinism, for example, is a more extreme version suggesting that a person’s perception and thought are limited to the language they speak. An early example of linguistic determinism comes from Whorf himself, who argued that the Hopi language spoken in Arizona doesn’t conjugate verbs into past, present, and future tenses as English does, and that its words for units of time (like “day” or “hour”) are verbs rather than nouns.

From this, he concluded that the Hopi don’t view time as a physical object that can be counted out in minutes and hours the way English speakers do. Instead, Whorf argued, the Hopi view time as a formless process.

This was then taken by others to mean that the Hopi don’t have any concept of time—an extreme view that has since been repeatedly disproven.

There is some evidence for a more nuanced version of linguistic relativity, which suggests that the structure and vocabulary of the language you speak can influence how you understand the world around you. To understand this better, it helps to look at real-world examples of the effects language can have on thought and behavior.

Different Languages Express Colors Differently

Color is one of the most common examples of linguistic relativity. Most known languages have somewhere between two and twelve color terms, and the way colors are categorized varies widely. In English, for example, there are distinct categories for blue and green.


But in Korean, there is one word that encompasses both. This doesn’t mean Korean speakers can’t see blue; it just means blue is understood as a variant of green rather than as a distinct color category all its own.

In Russian, meanwhile, the colors that English speakers would lump under the umbrella term of “blue” are further subdivided into two distinct color categories, “siniy” and “goluboy.” They roughly correspond to light blue and dark blue in English. But to Russian speakers, they are as distinct as orange and brown.

In one study comparing English and Russian speakers, participants were shown a color square and then asked to choose which of the two color squares below it was the closest in shade to the first square.

The test specifically focused on varying shades of blue ranging from “siniy” to “goluboy.” Russian speakers were not only faster at selecting the matching color square but were more accurate in their selections.

The Way Location Is Expressed Varies Across Languages

This same variation occurs in other areas of language. For example, in Guugu Yimithirr, a language spoken by Aboriginal Australians, spatial orientation is always described in absolute terms of cardinal directions. While an English speaker would say the laptop is “in front of” you, a Guugu Yimithirr speaker would say it was north, south, west, or east of you.

As a result, Guugu Yimithirr speakers have to be constantly attuned to cardinal directions because their language requires it (just as Russian speakers develop a more instinctive ability to discern between shades of what English speakers call blue because their language requires it).

So when you ask a Guugu Yimithirr speaker to tell you which way south is, they can point in the right direction without a moment’s hesitation. Meanwhile, most English speakers would struggle to accurately identify south without the help of a compass or without taking a moment to recall grade school lessons about how to find it.

The concept of these cardinal directions exists in English, but English speakers aren’t required to think about or use them on a daily basis so it’s not as intuitive or ingrained in how they orient themselves in space.

Just as with other aspects of thought and perception, the vocabulary and grammatical structure we have for thinking about or talking about what we feel doesn’t create our feelings, but it does shape how we understand them and, to an extent, how we experience them.

Words Help Us Put a Name to Our Emotions

For example, the ability to detect displeasure from a person’s face is universal. But in a language that has the words “angry” and “sad,” you can further distinguish what kind of displeasure you observe in their facial expression. This doesn’t mean humans never experienced anger or sadness before words for them emerged. But they may have struggled to understand or explain the subtle differences between different dimensions of displeasure.

In one study of English speakers, toddlers were shown a picture of a person with an angry facial expression. Then, they were given a set of pictures of people displaying different expressions including happy, sad, surprised, scared, disgusted, or angry. Researchers asked them to put all the pictures that matched the first angry face picture into a box.

The two-year-olds in the experiment tended to place all faces except happy faces into the box. But four-year-olds were more selective, often leaving out sad or fearful faces as well as happy faces. This suggests that as our vocabulary for talking about emotions expands, so does our ability to understand and distinguish those emotions.

But some research suggests the influence is not limited to just developing a wider vocabulary for categorizing emotions. Language may “also help constitute emotion by cohering sensations into specific perceptions of ‘anger,’ ‘disgust,’ ‘fear,’ etc.,” said Dr. Harold Hong, a board-certified psychiatrist at New Waters Recovery in North Carolina.


Words for emotions, like words for colors, are an attempt to categorize a spectrum of sensations into a handful of distinct categories. And, like color, there’s no objective or hard rule on where the boundaries between emotions should be, which can lead to variation across languages in how emotions are categorized.

Emotions Are Categorized Differently in Different Languages

Just as different languages categorize color a little differently, researchers have also found differences in how emotions are categorized. In German, for example, there’s an emotion called “Gemütlichkeit.”

While it’s usually translated as “cozy” or “friendly” in English, there really isn’t a direct translation. It refers to a particular kind of peace and sense of belonging that a person feels when surrounded by the people they love or feel connected to in a place they feel comfortable and free to be who they are.


You may have felt Gemütlichkeit when staying up with your friends to joke and play games at a sleepover. You may feel it when you visit home for the holidays and spend your time eating, laughing, and reminiscing with your family in the house you grew up in.

In Japanese, the word “amae” is just as difficult to translate into English. Usually, it’s translated as “spoiled child” or “presumed indulgence,” as in making a request and assuming it will be indulged. But both of those have strong negative connotations in English and amae is a positive emotion.

Instead of being spoiled or coddled, it refers to that particular kind of trust and assurance that comes with being nurtured by someone and knowing that you can ask for what you want without worrying whether the other person might feel resentful or burdened by your request.

You might have felt amae when your car broke down and you immediately called your mom to pick you up, without having to worry for even a second whether or not she would drop everything to help you.

Regardless of which languages you speak, though, you’re capable of feeling both of these emotions. “The lack of a word for an emotion in a language does not mean that its speakers don't experience that emotion,” Dr. Hong explained.

What This Means For You

“While having the words to describe emotions can help us better understand and regulate them, it is possible to experience and express those emotions without specific labels for them,” Dr. Hong said. Without the words for these feelings, you can still feel them; you just might not be able to identify them as readily or clearly as someone who does have those words.

Rhee S. Lexicalization patterns in color naming in Korean. In: Raffaelli I, Katunar D, Kerovec B, eds. Studies in Functional and Structural Linguistics. Vol 78. John Benjamins Publishing Company; 2019:109-128. doi:10.1075/sfsl.78.06rhe

Winawer J, Witthoft N, Frank MC, Wu L, Wade AR, Boroditsky L. Russian blues reveal effects of language on color discrimination. Proc Natl Acad Sci USA. 2007;104(19):7780-7785. doi:10.1073/pnas.0701644104

Lindquist KA, MacCormack JK, Shablack H. The role of language in emotion: predictions from psychological constructionism. Front Psychol. 2015;6. doi:10.3389/fpsyg.2015.00444

By Rachael Green. Rachael Green is a New York-based freelance writer for Verywell Mind, where she leverages her decades of personal experience with and research on mental illness—particularly ADHD and depression—to help readers better understand how their minds work and how to manage their mental health.


The Language of Thought Hypothesis

The Language of Thought Hypothesis (LOTH) postulates that thought and thinking take place in a mental language. This language consists of a system of representations that is physically realized in the brain of thinkers and has a combinatorial syntax (and semantics) such that operations on representations are causally sensitive only to the syntactic properties of representations. According to LOTH, thought is, roughly, the tokening of a representation that has a syntactic (constituent) structure with an appropriate semantics. Thinking thus consists in syntactic operations defined over such representations. Most of the arguments for LOTH derive their strength from their ability to explain certain empirical phenomena like productivity and systematicity of thought and thinking.

1. What is the Language of Thought Hypothesis?


LOTH is an empirical thesis about the nature of thought and thinking. According to LOTH, thought and thinking are done in a mental language, i.e., in a symbolic system physically realized in the brain of the relevant organisms. In formulating LOTH, philosophers have in mind primarily the variety of thoughts known as ‘propositional attitudes’. Propositional attitudes are the thoughts described by such sentence forms as ‘S believes that P’, ‘S hopes that P’, ‘S desires that P’, etc., where ‘S’ refers to the subject of the attitude, ‘P’ is any sentence, and ‘that P’ refers to the proposition that is the object of the attitude. If we let ‘A’ stand for such attitude verbs as ‘believe’, ‘desire’, ‘hope’, ‘intend’, ‘think’, etc., then the propositional attitude statements all have the form: S As that P.

LOTH can now be formulated more exactly as a hypothesis about the nature of propositional attitudes and the way we entertain them. It can be characterized as the conjunction of the following three theses (A), (B) and (C):

(A) Representational Theory of Mind (RTM):

(A1) Representational Theory of Thought: For each propositional attitude A, there is a unique and distinct (i.e. dedicated) [ 1 ] psychological relation R, and for all propositions P and subjects S, S As that P if and only if there is a mental representation #P# such that

  • (a) S bears R to #P#, and
  • (b) #P# means that P.

(A2) Representational Theory of Thinking: Mental processes, thinking in particular, consist of causal sequences of tokenings of mental representations.

(B) Mental representations, which, as per (A1), constitute the direct “objects” of propositional attitudes, belong to a representational or symbolic system which is such that (cf. Fodor and Pylyshyn 1988:12–3)

(B1) representations of the system have a combinatorial syntax and semantics: structurally complex (molecular) representations are systematically built up out of structurally simple (atomic) constituents, and the semantic content of a molecular representation is a function of the semantic content of its atomic constituents together with its syntactic/formal structure, and

(B2) the operations on representations (constituting, as per (A2), the domain of mental processes, thinking) are causally sensitive to the syntactic/formal structure of representations defined by this combinatorial syntax.

(C) Functionalist Materialism. Mental representations so characterized are, at some suitable level, functionally characterizable entities that are (possibly, multiply) realized by the physical properties of the subject having propositional attitudes (if the subject is an organism, then the realizing properties are presumably the neurophysiological properties of the brain).

The relation R in (A1), when RTM is combined with (B), is meant to be understood as a computational/functional relation. The idea is that each attitude is identified with a characteristic computational/functional role played by the mental sentence that is the direct “object” of that kind of attitude. (Scare quotes are necessary because it is more appropriate to reserve ‘object’ for a proposition as we have done above, but as long as we keep this in mind, it is harmless to use it in this way for LOT sentences.) For instance, what makes a certain mental sentence an (occurrent) belief might be that it is characteristically the output of perceptual systems and input to an inferential system that interacts decision-theoretically with desires to produce further sentences or action commands. Or equivalently, we may think of belief sentences as those that are accessible only to certain sorts of computational operations appropriate for beliefs, but not to others. Similarly, desire-sentences (and sentences for other attitudes) may be characterized by a different set of operations that define a characteristic computational role for them. In the literature it is customary to use the metaphor of a “belief-box” (cf. Schiffer 1981) as a blanket term to cover whatever specific computational role belief sentences turn out to have in the mental economy of their possessors. (Similarly for “desire-box”, etc.)
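To make the box metaphor a bit more concrete, here is a minimal illustrative sketch in Python (not drawn from the entry; the class names Mentalese and Agent, and the choice to model a box as a set, are assumptions made only for illustration). The point it tries to capture is that an attitude is not a primitive relation to a proposition but a functional role played by a structured token:

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass(frozen=True)
    class Mentalese:
        """A structured mental symbol: a predicate plus its constituent terms."""
        predicate: str
        arguments: Tuple[str, ...]

        def content(self) -> str:
            # Stand-in for the proposition the symbol expresses.
            return f"{self.predicate}({', '.join(self.arguments)})"

    class Agent:
        """Attitudes modeled as computational roles: a belief-box and a desire-box."""
        def __init__(self) -> None:
            self.belief_box = set()   # tokens playing the characteristic belief role
            self.desire_box = set()   # tokens playing the characteristic desire role

        def believes_that(self, p: Mentalese) -> None:
            # "S believes that P" iff S bears the belief-relation to a token #P#.
            self.belief_box.add(p)

        def desires_that(self, p: Mentalese) -> None:
            self.desire_box.add(p)

    s = Agent()
    s.believes_that(Mentalese("Loves", ("John", "the girl")))
    print([t.content() for t in s.belief_box])   # ['Loves(John, the girl)']

On this toy picture, being a belief rather than a desire is entirely a matter of which box (that is, which computational role) the token occupies, which is all the box metaphor is meant to convey.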

The Language of Thought Hypothesis is so-called because of (B): token mental representations are like sentences in a language in that they have a syntactically and semantically regimented constituent structure. Put differently, the mental representations that are the direct “objects” of attitudes are structurally complex symbols whose complexity lends itself to a syntactic and semantic analysis. This is also why the LOT is sometimes called Mentalese.

It is (B2) that makes LOTH a species of the so-called Computational Theory of Mind (CTM). This is why LOTH is sometimes called the Computational/Representational Theory of Mind or Thought (CRTM/CRTT) (cf. Rey 1991, 1997). Indeed, LOTH seems to be the most natural product when RTM is combined with a view that treats mental processes or thinking as computational when computation is understood traditionally or classically (this is a recent term emphasizing the contrast with connectionist processing, which we will discuss later).

According to LOTH, when someone believes that P, there is a sense in which the immediate “object” of one's belief can be said to be a complex symbol, a sentence in one's LOT physically realized in the neurophysiology of one's brain, that has both syntactic structure and a semantic content, namely the proposition that P. So, contrary to the orthodox view that takes the belief relation as a dyadic relation between an agent and a proposition, LOTH takes it to be a triadic relation among an agent, a Mentalese sentence, and a proposition. The Mentalese sentence can then be said to have the proposition as its semantic/intentional content. Within the framework of LOTH, it is only in this sense that it can be said that what is believed is a proposition, and thus the proper object of the attitude.

This triadic view seems to have several advantages over the orthodox dyadic view. It is a puzzle for the dyadic view how intentional organisms can stand in direct relation to abstract objects like propositions in such a way as to influence their causal powers. According to folk psychology (ordinary commonsense psychology that we rely on daily in our dealings with others), it is because those states have the propositional content they do that they have the causal powers they do. LOTH makes this relatively non-mysterious by introducing a physical intermediary that is capable of having the relevant causal powers in virtue of its syntactic structure that encodes its semantic content. Another advantage of this is that the thought processes can be causally guided by the syntactic forms of the sentences in a way that respects their semantic contents. This is the virtue of (B) to which we'll come back below. Mainly because of these features, LOTH is said to be poised to scientifically vindicate folk psychology if it turns out to be true.

2. Status of LOTH

LOTH has primarily been advanced as an empirical thesis (although some have argued for the truth of LOTH on a priori or conceptual grounds following the natural conceptual contours of folk psychology—see Davies 1989, 1991; Lycan 1993; Rey 1995; Jacob 1997; Markic 2001 argues against Jacob. Harman 1973 develops and defends LOTH on both empirical and conceptual grounds). It is not meant to be taken as an analysis of what the folk mean (or, for that matter, what the scientists ought to mean) when they talk about various propositional attitudes and their role in thinking. In this regard, LOT theorists typically view themselves as engaged in some sort of a proto-science, or at least in some empirical research program continuous with scientific psychology. Indeed, as we will see in more detail below, when Jerry Fodor first explicitly articulated and elaborated LOTH in some considerable detail in his (1975), he basically defended it on the ground that it was assumed by our best scientific theories or models in cognitive psychology and psycholinguistics. This empirical status generally accorded to LOTH should be kept firmly in mind when assessing its plausibility and especially its prospects in the light of new evidence and developments in scientific psychology. Nevertheless, it would be more appropriate to see LOTH as a foundational thesis rather than as an ongoing research project guided by a set of concrete empirical methods, specific theses and principles. In this regard, LOTH stands to specific scientific theories of the (various aspects of the) mind somewhat like the “Atomic Hypothesis” stands to a whole bunch of specific scientific theories about the particulate nature of the world (some of which may be—and certainly historically, have been—incompatible with each other).

When viewed this way, scientific theories advanced within the LOTH framework are not, strictly speaking, committed to preserving the folk taxonomy of the mental states in any very exact way. Notions like belief, desire, hope, fear, etc. are folk notions and, as such, it may not be utterly plausible to expect (eliminativist arguments aside) that a scientific psychology will preserve the exact contours of these concepts. On the contrary, there is every reason to believe that scientific counterparts of these notions will carve the mental space somewhat differently. For instance, it has been noted that the folk notion of belief harbors many distinctions. For example, it has both a dispositional and an occurrent sense. In the occurrent sense, it seems to mean something like consciously entertaining and accepting a thought (proposition) as true. There is quite a bit of literature and controversy on the dispositional sense. [ 2 ] Beliefs are also capable of being explicitly stored in long term memory as opposed to being merely dispositional or tacit. Compare, for instance: I believe that there was a big surprise party for my 24th birthday vs. I have always believed that lions don't eat their food with forks and knives, or that 13652/4=3413, even though until now these latter two thoughts had never occurred to me. There is furthermore the issue of degree of belief: while I may believe that George will come to dinner with his new girlfriend even though I wouldn't bet on it, you, thinking that you know him better than I do, may nevertheless go to the wall for it. It is unlikely that there will be one single construct of scientific psychology that will exactly correspond to the folk notion of belief in all these ways.

For LOTH to vindicate folk psychology it is sufficient that a scientific psychology with a LOT architecture come up with scientifically grounded psychological states that are recognizably like the propositional attitudes of folk psychology, and that play more or less similar roles in psychological explanations. [ 3 ]

3. Scope of LOTH

LOTH is an hypothesis about the nature of thought and thinking with propositional content. As such, it may or may not be applicable to other aspects of mental life. Officially, it is silent about the nature of some mental phenomena such as experience, qualia, [ 4 ] sensory processes, mental images, visual and auditory imagination, sensory memory, perceptual pattern-recognition capacities, dreaming, hallucinating, etc. To be sure, many LOT theorists hold views about these aspects of mental life that sometimes make it seem that they are also to be explained by something similar to LOTH. [ 5 ]

For instance, Fodor (1983) seems to think that many modular input systems have their own LOT to the extent to which they can be explained in representational and computational terms. Indeed, many contemporary psychological models treat perceptual input systems in just these terms. [ 6 ] There is indeed some evidence that this kind of treatment might be appropriate for many perceptual processes. But it is to be kept in mind that a system may employ representations and be computational without necessarily satisfying any or both of the clauses in (B) above in any full-fledged way. Just think of finite automata theory where there are plenty of examples of a computational process defined over states or symbols which lack full-blown syntactic and/or semantic structural complexity. (For a useful discussion of varieties of computational processes and their classification, see Piccinini 2008.) Whether sensory or perceptual processes are to be treated within the framework of full-blown LOTH is again an open empirical question. It might be that the answer to this question is affirmative. If so, there may be more than one LOT realized in different subsystems or mechanisms in the mind/brain. So LOTH is not committed to there being a single representational system realized in the brain, nor is it committed to the claim that all mental representations are complex or language-like, nor would it be falsified if it turns out that most aspects of mental life other than the ones involving propositional attitudes don't require a LOT.

Similarly, there is strong evidence that the mind also exploits an image-like representational medium for certain kinds of mental tasks. [ 7 ] LOTH is non-committal about the existence of an image-like representational system for many mental tasks other than the ones involving propositional attitudes. But it is committed to the claim that propositional thought and thinking cannot be successfully accounted for in its entirety in purely imagistic terms. It claims that a combinatorial sentential syntax is necessary for propositional attitudes and a purely imagistic medium is not adequate for capturing that. [ 8 ]

There are in fact some interesting and difficult issues surrounding these claims. The adequacy of an imagistic system seems to turn on the nature of syntax at the sentential level. For instance, Fodor, in Chapter 4 of his (1975) book, allows that many lexical items in one's LOT may be image-like; he introduces the notion of a mental image/picture under description to avoid some obvious inadequacies of pictures (e.g., what makes a picture a picture of an overweight woman rather than a pregnant one, or vice versa, etc.). This is an attempt to combine discursive and imagistic representational elements at the lexical level. There may even be a well defined sense in which pictures can be combined to produce structurally complex pictures (as in British Empiricism: image-like simple ideas are combined to produce complex ideas, e.g., the idea of a unicorn—see also Prinz 2002). But what is absolutely essential for LOTH, and what Fodor insists on, is the claim that there is no adequate way in which a purely image-like system can capture what is involved in making judgments , i.e., in judging propositions to be true. This seems to require a discursive syntactic approach at the sentential level. The general problem here is the inadequacy of pictures or image-like representations to express propositions. I can judge that the blue box is on top of the red one without judging that the red box is under the blue one. I can judge that Mary kisses John without judging that John kisses Mary, and so on for indefinitely many such cases. It is hard to see how images or pictures can do that without using any syntactic structure or discursive elements, to say nothing of judging, e.g., conditionals, disjunctive or negative propositions, quantifications, negative existentials, etc. [ 9 ]

Moreover, there are difficulties with imagistic representations arising from demands on processing representations. As we will see below, (B2) turns out to provide the foundations for one of the most important arguments for LOTH: it makes it possible to mechanize thinking understood as a semantically coherent thought process, which, as per (A2), consists of a causal sequence of tokenings of mental representations. It is not clear, however, how an equivalent of (B2) could be provided for images or pictures in order to accommodate operations defined over them, even if something like an equivalent of (B1) could be given. On the other hand, there are truly promising attempts to integrate discursive symbolic theorem-proving with reasoning with image-like symbols. They achieve impressive efficiency in theorem-proving or in any deductive process defined over the expressions of such an integrated system. Such attempts, if they prove to be generalizable to psychological theorizing, are by no means threats to LOTH; on the contrary, such systems have every feature to make them a species of a LOT system: they satisfy (B). [ 10 ]

4. Nativism and LOTH

In the book (1975) in which Fodor introduced the LOTH, he also argued that all concepts are innate. As a result, the connection between LOTH and an implausibly strong version of conceptual nativism looked very much internal. This historical coincidence has led some people to think that LOTH is essentially committed to a very strong form of nativism, so strong in fact that it seems to make a reductio of itself (see, for instance, P.S. Churchland 1986, H. Putnam 1988, A. Clark 1994). The gist of his argument was that since learning concepts is a form of hypothesis formation and confirmation, it requires a system of mental representations in which formation and confirmation of hypotheses are to be carried out, but then there is a non-trivial sense in which one already has (albeit potentially) the resources to express the extension of the concepts to be learned.

In his LOT 2 (2008), Fodor continues to claim that concepts cannot be learned and that the very idea of concept learning is “confused”:

Now, according to HF [the Hypothesis Formation and Confirmation model], the process by which one learns C must include the inductive evaluation of some such hypothesis as ‘The C things are the ones that are green or triangular’. But the inductive evaluation of that hypothesis itself requires (inter alia) bringing the property green or triangular before the mind as such. ... Quite generally, you can't represent anything as such and such unless you already have the concept such and such. All that being so, it follows, on pain of circularity, that ‘concept learning’ as HF understands it can't be a way of acquiring concept C. ... Conclusion: If concept learning is as HF understands it, there can be no such thing. This conclusion is entirely general; it doesn't matter whether the target concept is primitive (like GREEN) or complex (like GREEN OR TRIANGULAR). (LOT 2, 2008: 139)

Note that this argument and the predecessors Fodor articulated in his previous writings and especially in his (1975) are entirely general, applicable to any hypothesis that identifies concepts with mental representations whether or not these representations belong to a LOT.

The crux of the issue seems to be that learning concepts is a rational process. There seem to be non-arbitrary semantic and epistemic liaisons between the target concept to be acquired and its “evidence” base. This evidence base needs to be represented and rationally tied to the target concept. This target concept needs also to be expressed in terms of representations one already possesses. Fodor thinks that any model of concept learning understood in this sense will have to be a form of hypothesis formation and confirmation. But not every form of concept acquisition is learning. There are non-rational ways of acquiring concepts whose explanation need not be at the cognitive level (e.g., brute triggering mechanisms that can be activated in sorts of ways that can presumably be explained at the sub-cognitive or neurophysiological levels). If concepts cannot be learned, then they are either innate or non-rationally acquired. Whereas early Fodor used to think that concepts must therefore be innate (maybe he thought that non-learning concept acquisition forms are limited to sensory or certain classes of perceptual concepts), he now thinks that they may be acquired but the explanation of this is not the business of cognitive psychology.

Whatever one may think of the merits of Fodor's arguments for concept nativism or of his recent anti-learning stance, it should be emphasized that LOTH per se has very little to do with it. LOTH is not committed to such a strong version of nativism, especially about concepts. It also need not be committed to any anti-learning stance about concepts. It is certainly plausible to assume that LOTH will turn out to have some empirically (as well as theoretically/a priori) motivated nativist commitments about the structural organization and dynamic management of the entire representational system. But this much is to be expected especially in the light of recent empirical findings and trends. This, however, does not constitute a reductio. It is an open empirical question how much nativism is true about concepts, and LOTH should be so taken as to be capable of accommodating whatever turns out to be true in this matter. LOTH, therefore, when properly conceived, is independent of any specific proposal about conceptual nativism. [ 11 ]

5. Naturalism and LOTH

One of the most attractive features of LOTH is that it is a central component of an ongoing research program in philosophy of psychology to naturalize the mind, that is, to give a theoretical framework in which the mind could naturally be seen as part of the physical world without postulating irreducibly psychic entities, events, processes or properties. Fodor, historically the most important defender of LOTH, once identified the major mysteries in philosophy of mind thus:

How could anything material have conscious states? How could anything material have semantical properties? How could anything material be rational? (where this means something like: how could the state transitions of a physical system preserve semantical properties?). (1991: 285, Reply to Devitt)

LOTH is a full-blown attempt to give a naturalist answer to the third question, an attempt to solve at least part of the problem underlying the second one, and is almost completely silent about the first. [ 12 ]

5.1 The Problem of Thinking

According to RTM, propositional attitudes are relations to meaningful mental representations whose causally sequenced tokenings constitute the process of thinking. This much can, in principle, be granted by an intentional realist who might nevertheless reject LOTH. Indeed, there are plenty of theorists who accept RTM in some suitable form (and also happily accept (C) in many cases) but reject LOTH either by explicitly rejecting (B) or simply by remaining neutral about it. Among some of the prominent philosophers who choose the former option are Searle (1984, 1990, 1992), Stalnaker (1984), Lewis (1972), Barwise and Perry (1983). [ 13 ] Some who want to remain neutral include Loar (1982a, 1982b), Dretske (1981), Armstrong (1980), and many contemporary functionalists including some connectionists. [ 14 ]

But RTM per se doesn't so much propose a naturalistic solution to intentionality and mechanization of thinking as simply assert a framework to emphasize intentional realism and, perhaps, with (C), a declaration of a commitment to naturalism or physicalism at best. How, then, is the addition of (B) supposed to help? Let us first try to see in a bit more detail what the problem is supposed to be in the first place to which (B) is proposed as a solution. Let us start by reflecting on thinking and see what it is about thinking that makes it a mystery in Fodor's list. This will give rise to one of the most powerful (albeit still nondemonstrative) arguments for LOTH.

RTM's second clause (A2), in effect, says that thinking is at least the tokenings of states that are (a) intentional (i.e. have representational/propositional content) and (b) causally connected. But, surely, thinking is more. There could be a causally connected series of intentional states that makes no sense at all. Thinking, therefore, is causally proceeding from states to states that makes semantic sense: the transitions among states must preserve some of their semantic properties to count as thinking. In the ideal case, this property would be the truth value of the states. But in most cases, any interesting intentional or epistemic property would do (e.g., warrantedness, degree of confirmation, semantic coherence given a certain practical context like satisfaction of goals in a specific context, etc.). In general, it is hard to spell out what this requirement of “making sense” comes to. The intuitive idea, however, should be clear. Thinking is not proceeding from thoughts to thoughts in arbitrary fashion: thoughts that are causally connected are in some fashion semantically (rationally, epistemically) connected too. If this were not so, there would be little point in thinking—thinking couldn't serve any useful purpose. Call this general phenomenon, then, the semantic coherence of causally connected thought processes. LOTH is offered as a solution to this puzzle: how is thinking, conceived this way, physically possible? This is the problem of thinking, thus the problem of mechanization of rationality in Fodor's version. How does LOTH propose to solve this problem and bring us one big step closer to the naturalization of the mind?

5.2 Syntactic Engine Driving a Semantic Engine: Computation

The two most important achievements of the 20th century that are at the foundations of LOTH as well as most of modern Artificial Intelligence (AI) research and most of the so-called information processing approaches to cognition are (i) the developments in modern symbolic (formal) logic, and (ii) Alan Turing's idea of a Turing Machine and Turing computability. It is putting these two ideas together that gives LOTH its enormous explanatory power within a naturalistic framework. Modern logic showed that most of deductive reasoning can be formalized, i.e. most semantic relations among symbols can be entirely captured by the symbols' formal/syntactic properties and the relations among them. And Turing showed, roughly, that if a process has a formally specifiable character then it can be mechanized. So we can appreciate the implications of (i) and (ii) for the philosophy of psychology in this way: if thinking consists in processing representations physically realized in the brain (in the way the internal data structures are realized in a computer) and these representations form a formal system, i.e., a language with its proper combinatorial syntax (and semantics) and a set of derivation rules formally defined over the syntactic features of those representations (allowing for specific but powerful programs to be written in terms of them), then the problem of thinking, as described above, can in principle be solved in completely naturalistic terms, thus the mystery surrounding how a physical device can ever have semantically coherent state transitions (processes) can be removed. Thus, given the commitment to naturalism, the hypothesis that the brain is a kind of computer trafficking in representations in virtue of their syntactic properties is the basic idea of LOTH (and the AI vision of cognition).

Computers are environments in which symbols are manipulated in virtue of their formal features, but what is thus preserved are their semantic properties, hence the semantic coherence of symbolic processes. Slightly paraphrasing Haugeland (cf. 1985: 106), who puts the same point nicely in the form of a motto:

The Formalist Motto : If you take care of the syntax of a representational system, its semantics will take care of itself.

This is in virtue of the mimicry or mirroring relation between the semantic and formal properties of symbols. As Dennett once put it in describing LOTH, we can view the thinking brain as a syntactically driven engine preserving semantic properties of its processes, i.e. driving a semantic engine. What is so nice about this picture is that if LOTH is true we have a naturalistically adequate causal treatment of thinking that respects the semantic properties of the thoughts involved: it is in virtue of the physically coded syntactic/formal features that thoughts cause each other while the coherence of their semantic properties is preserved precisely in virtue of this.
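A toy example may help to see how an operation can be causally sensitive only to syntax and yet preserve a semantic property. The following Python sketch is illustrative only (the tuple encoding and the function are my own choices, not anything proposed in the literature discussed here): the inference rule inspects nothing but the shape of the symbols, yet any interpretation that makes its premises true makes its conclusion true.

    from typing import Optional, Tuple, Union

    Formula = Union[str, Tuple]   # an atom like "P", or a structure like ("IF", "P", "Q")

    def modus_ponens(conditional: Formula, minor: Formula) -> Optional[Formula]:
        """A purely formal rule: from ('IF', X, Y) together with X, derive Y."""
        if (isinstance(conditional, tuple) and len(conditional) == 3
                and conditional[0] == "IF" and conditional[1] == minor):
            return conditional[2]
        return None   # the rule simply does not apply

    # The operation consults only the syntactic form of its inputs...
    print(modus_ponens(("IF", "P", "Q"), "P"))   # Q
    print(modus_ponens(("IF", "P", "Q"), "R"))   # None
    # ...but it mirrors a semantic fact: whenever both premises are true under an
    # interpretation, so is the derived formula. Taking care of the syntax lets
    # the semantics take care of itself.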

Whether or not LOTH actually turns out to be empirically true in the details or in its entire vision of rational thinking, this picture of a syntactic engine driving a semantic one can at least be taken to be an important philosophical demonstration of how Descartes' challenge can be met (cf. Rey 1997: chp.8). Descartes claimed that rationality in the sense of having the power “to act in all the contingencies of life in the way in which our reason makes us act” cannot possibly be possessed by a purely physical device: “The rational soul … could not be in any way extracted from the power of matter … but must … be expressly created” (1637/1970: 117–18). Descartes was completely puzzled by just this rational character and semantic coherence of thought processes, so much so that he failed even to imagine a possible mechanistic explication of it. He thus was forced to appeal to Divine creation. But we can now see/imagine at least a possible mechanistic/naturalistic scenario. [ 15 ]

5.3 Intentionality and LOTH

But where do the semantic properties of the mental representations come from in the first place? How can they mean anything? This is Brentano's challenge to a naturalist. Brentano's bafflement was with the intentionality of the human mind, its apparently mysterious power to represent things, events, properties in the world. He thought that nothing physical can have this property: “The reference to something as an object is a distinguishing characteristic of all mental phenomena. No physical phenomenon exhibits anything similar” (Brentano 1874/1973: 97). This problem of intentionality is the second problem or mystery in Fodor's list quoted above. I said that LOTH officially offers only a partial solution to it and perhaps proposes a framework within which the remainder of the solution can be couched and elaborated in a naturalistically acceptable way.

Recall that RTM contains a clause (A1b) that says that the immediate “object” of a propositional attitude that P is a mental representation #P# that means that P. Again, (B1) attributes a compositional semantics to the syntactically complex symbols belonging to one's LOT that are, as per (C), realized by the physical properties of a thinking system. According to LOTH, the semantic content of propositional attitudes is inherited from the semantic content of the mental symbols. So Brentano's question for a LOT theorist becomes: how do the symbols in one's LOT get their meanings in the first place? There are two levels or stages at which this question can be raised and answered:

(1) At the level of atomic symbols (non-logical primitives): how do the atomic symbols represent what they do?

(2) At the level of molecular symbols (phrasal complexes or sentences): how do molecular symbols represent what they do?

There have been at least two major lines LOT theorists have taken regarding these questions. The one that is least committal might perhaps be usefully described as the official position regarding LOTH's treatment of intentionality. Most LOT theorists seem to have taken this line. The official line doesn't propose any theory about the first stage, but simply assumes that the first question can be answered in a naturalistically acceptable way. In other words, officially LOTH simply assumes that the atomic symbols/expressions in one's LOT have whatever meanings they have. [ 16 ]

But, the official line continues, LOTH has a lot to say about the second stage, the stage where the semantic contents are computed or assigned to complex (molecular) symbols on the basis of their combinatorial syntax or grammar together with whatever meanings atomic symbols are assumed to have in the first stage. This procedure is familiar from a Tarski-style [ 17 ] definition of truth conditions of sentences. The truth-values of complex sentences in propositional logic are completely determined by the truth-values of the atomic sentences they contain together with the rules fixed by the truth-tables of the connectives occurring in the complex sentences. Example: ‘P and Q’ is true just in case both ‘P’ and ‘Q’ are true, but false otherwise. This process is similar but more complex in first-order languages, and even more so for natural languages—in fact, we don't have a completely working compositional semantics for the latter at the moment. So, if we have a semantic interpretation of atomic symbols (if we have symbols whose reference and extension are fixed at the first stage by whatever naturalistic mechanism turns out to govern it), then the combinatorial syntax will take over and effectively determine the semantic interpretation (truth-conditions) of the complex sentences they are constituents of. So officially LOTH would only contribute to a complete naturalization project if there is a naturalistic story at the atomic level.
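For illustration, here is a minimal recursive evaluator for propositional logic in Python. It is only a sketch of the familiar textbook idea the paragraph describes (the tuple encoding and the function name evaluate are assumptions introduced for the example): the truth-value of a molecular sentence is computed from the values assigned to its atomic constituents plus its syntactic structure.

    from typing import Dict, Tuple, Union

    Sentence = Union[str, Tuple]   # an atom "P", or ("NOT", S), ("AND", S1, S2), ("OR", S1, S2)

    def evaluate(sentence: Sentence, valuation: Dict[str, bool]) -> bool:
        """Compositional truth conditions: value of the whole from values of the parts."""
        if isinstance(sentence, str):          # atomic symbol: value fixed at the first stage
            return valuation[sentence]
        connective = sentence[0]
        if connective == "NOT":
            return not evaluate(sentence[1], valuation)
        if connective == "AND":
            return evaluate(sentence[1], valuation) and evaluate(sentence[2], valuation)
        if connective == "OR":
            return evaluate(sentence[1], valuation) or evaluate(sentence[2], valuation)
        raise ValueError(f"unknown connective: {connective}")

    # 'P and Q' is true just in case both 'P' and 'Q' are true, false otherwise:
    print(evaluate(("AND", "P", "Q"), {"P": True, "Q": True}))    # True
    print(evaluate(("AND", "P", "Q"), {"P": True, "Q": False}))   # False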

Early Fodor (1975, 1978, 1978a, 1980), for instance, envisaged a science of psychology which, among other things, would reasonably set for itself the goal of discovering the combinatorial syntactic principles of LOT and the computational rules governing its operations, without worrying much about semantic matters, especially about how to fix the semantics of atomic symbols (he probably thought that this was not a job for LOTH). Similarly, Field (1978) is very explicit about the combinatorial rules for assigning truth-conditions to the sentences of the internal code. In fact, Field's major argument for LOTH is that, given a naturalistic causal theory of reference for atomic symbols, about which he is optimistic (Field 1972), it is the only naturalistic theory that has a chance of solving Brentano's puzzle. For the moment, this is not much more than a hope, but, according to the LOT theorist, it is a well-founded hope based on a number of theoretical and empirical assumptions and data. Furthermore, it is a framework defining a naturalistic research program in which there have been promising successes. [ 18 ]

As I said, this official and, in a way, least committal line has been the more standard way of conceiving LOTH's role in the project of naturalizing intentionality. But some have gone beyond it and explored the ways in which the resources of LOTH can be exploited even in answering the first question (1) about the semantics of atomic symbols.

Now, there is a weak version of an answer to (1) on the part of LOTH and a strong version. On the weak version, LOTH may be untendentiously viewed as inevitably providing some of the resources needed for the ultimate naturalistic theory of the meaning of atomic symbols. The basic idea is that whatever ultimate naturalistic theory turns out to be true of atomic expressions, computation as conceived by LOTH will be part of it. For instance, it may be that, as with nomic covariation theories of meaning (Fodor 1987, 1990a; Dretske 1981), the meaning of an atomic predicate may consist in its potential to get tokened in the presence of (or, in causal response to) something that instantiates the property the predicate is said to express. A natural way of explicating this potential may partly but ultimately rely on certain computational principles the symbol may be subjected to within a LOT framework, or principles that in some sense govern the “behavior” of the symbol. Insofar as computation is naturalistically understood in the way LOTH proposes, a complete answer to the first question about the semantics of atomic symbols may plausibly involve an explicatory appeal to computation within a system of symbols. This is the weak version because it doesn't see LOTH as proposing a complete solution to the first question (1) above, but only helping it.

A strong version would have it that LOTH provides a complete naturalistic solution to both questions: given the resources of LOTH we don't need to look any further to meet Brentano's challenge. The basic idea lies in so-called functional or conceptual role semantics, according to which a concept is the concept it is precisely in virtue of the particular causal/functional potential it has in interacting with other concepts. Each concept may be thought of as having a certain distinctive set of epistemic/semantic relations or liaisons to other concepts. We can conceive of this set as determining a certain “conceptual role” for each concept. We can then take these roles to determine the semantic identity of concepts: concepts are the concepts they are because they have the conceptual roles they have; that is to say, among other things, concepts represent whatever they do precisely in virtue of these roles. The idea then is to reduce each conceptual role to causal/functional role of atomic symbols (now conceived as primitive terms in LOTH), and then use the resources of LOTH to reduce it in turn to computational role. Since computation is naturalistically well-defined, the argument goes, and since causal interactions between thoughts and concepts can be understood completely in terms of computation, we can completely naturalize intentionality if we can successfully treat meanings as arising out of thoughts/concepts' internal interactions with each other. In other words, the strong version of LOTH would claim that atomic symbols in LOT have the content they do in virtue of their potential for causal interactions with other tokens, and cashing out this potential in mechanical/naturalistic terms is what, among other things, LOTH is for. LOTH then comes as a naturalistic rescuer for conceptual role semantics.

It is not clear whether anyone holds this strong version of LOTH in this rather naive form. But certainly some people have elaborated the basic idea in quite subtle ways, for which Cummins (1989: chp.8) is perhaps the best example. (But also see Block 1986 and Field 1978.) But even in the best hands, the proposal turns out to be very problematic and full of difficulties nobody seems to know how to straighten out. In fact, some of the most ardent critics of taking LOTH as incorporating a functional role semantics turn out to be some of the most ardent defenders of LOTH understood in the weak, non-committal sense we have explored above—see Fodor (1987: chp.3) and Fodor and Lepore (1991); Fodor's attack (1978b) on AI's way of doing procedural semantics is also relevant here. Haugeland (1981), Searle (1980, 1984), and Putnam (1988) quite explicitly take LOTH to involve a program for providing a complete semantic account of mental symbols, which they then attack accordingly. [ 19 ]

It is also possible, in fact, quite natural, to combine conceptual role semantics (internalist) with causal/informational psychosemantics (externalist). The result is sometimes known as two-factor theories. If this turns out to be the right way to naturalize intentionality, then, given what is said above about the potential resources of LOTH in contributing to both factors, it is easy to see why many theorists who worry about naturalizing intentionality are attracted to LOTH.

As indicated previously, LOTH is almost completely silent about consciousness and the problem of qualia, the third mystery in Fodor's list in the quote above. But the naturalist's hope is that this problem too will be solved, if not by LOTH, then by something else. On the other hand, it is important to emphasize that LOTH is neutral about the naturalizability of consciousness/qualia. If it turns out that qualia cannot be naturalized, this would by no means show that LOTH is false or defective in some way. In fact, there are people who seem to think that LOTH may well turn out to be true even though qualia can perhaps not be naturalized (e.g., Block 1980, Chalmers 1996, McGinn 1991).

Finally, it should be emphasized that LOTH has no particular commitment to every symbolic activity's being conscious. Conscious thoughts and thinking may be the tip of a computational iceberg. Nevertheless, there are ways in which LOTH can be helpful for an account of state consciousness that seeks to explain a thought's being conscious in terms of a higher order thought which is about the first order thought. So, to the extent to which thought and thinking are conscious, to that extent LOTH can perhaps be viewed as providing some of the necessary resources for a naturalistic account of state consciousness—for elaboration see Rosenthal (1997) and Lycan (1997).

6. Arguments for LOTH

We have already seen two major arguments, perhaps historically the most important ones, for LOTH: First, we have noted that if LOTH is true then all the essential features of the common sense conception of propositional attitudes will be explicated in a naturalistic framework which is likely to be co-opted by scientific cognitive psychology, thus vindicating folk psychology. Second, we have discussed that, if true, LOTH would solve one of the mysteries about thinking minds: how is thinking (as characterized above) possible? How is rationality mechanically possible? Then we have also seen a third argument that LOTH would partially contribute to the project of naturalizing intentionality by offering an account of how the semantic properties of whole attitudes are fixed on the basis of their atomic constituents. But there have been many other arguments for LOTH. In this section, I will describe only those arguments that have been historically more influential and controversial.

6.1 Argument from Contemporary Cognitive Psychology

When Fodor first formulated LOTH with significant elaboration in his (1975), he introduced his major argument for it along with its initial formulation in the first chapter. It was basically this: our best scientific theories and models of different aspects of higher cognition assume a framework that requires a computational/representational medium for them to be true. More specifically, he analyzed the basic form of the information processing models developed to account for three types of cognitive phenomena: perception as the fixation of perceptual beliefs, concept learning as hypothesis formation and confirmation, and decision making as a form of representing and evaluating the consequences of possible actions carried out in a situation with a preordered set of preferences. He rightly pointed out that all these psychological models treated mental processes as computational processes defined over representations. Then he drew what seems to be the obvious conclusion: if these models are right in at least treating mental processes as computational, even if not in detail, then there must be a LOT over which they are defined, hence LOTH.

In Fodor's (1975), the arguments for different aspects of LOTH are diffused and the emphasis, with the book's slogan “no computation without representation”, is put on the RTM rather than on (B) or (C). But all the elements are surely there.

6.2 Argument from the Productivity of Thought

People seem to be capable of entertaining an infinite number of thoughts, at least in principle, although they in fact entertain only a finite number of them. Indeed, adults who speak a natural language are capable of understanding sentences they have never heard uttered before. Here is one: there is a big lake of melted gold on the dark side of the moon. I bet that you have never heard this sentence before, and yet, you have no difficulty in understanding it: it is one you in fact likely believe false. But this sentence was arbitrary; there are infinitely many such sentences I can in principle utter and you can in principle understand. But to understand a sentence is to entertain the thought/proposition it expresses. So there are in principle infinitely many thoughts you are capable of entertaining. This is sometimes expressed by saying that we have an unbounded competence in entertaining different thoughts, even though we have a bounded performance. But this unbounded capacity is to be achieved by finite means. For instance, storing an infinite number of representations in our heads is out of the question: we are finite beings. If human cognitive capacities (capacities to entertain an unbounded number of thoughts, or to have attitudes towards an unbounded number of propositions) are productive in this sense, how is this to be explained on the basis of finitary resources?

The explanation LOTH offers is straightforward: postulate a representational system that satisfies at least (B1). Indeed, recursion is the only known way to produce an infinite number of symbols from a finite base. In fact, given LOTH, the productivity of thought as a competence mechanism seems to be guaranteed. [20]
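For illustration only, here is a minimal Python sketch of the point about recursion (the lexicon and the embedding rule are invented for the example, not drawn from Fodor): a finite base plus one recursive rule already yields an unbounded set of distinct well-formed expressions.

```python
# A toy illustration of productivity: a finite lexicon plus one recursive
# rule ("Mary believes that S" is a sentence whenever S is) generates
# arbitrarily many distinct sentences from finite means.

LEXICON = ["John is at the beach", "the girl loves John"]  # finite base

def sentence(depth: int) -> str:
    """Return a well-formed sentence with `depth` levels of embedding."""
    if depth == 0:
        return LEXICON[0]
    return "Mary believes that " + sentence(depth - 1)

# The generator is finitely specified, yet for every n there is a distinct sentence:
for n in range(4):
    print(sentence(n))
```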

Systematicity of thought consists in the empirical fact that the ability to entertain certain thoughts is intrinsically connected to the ability to entertain certain others. Which ones? Thoughts that are related in a certain way. In what way? There is a certain initial difficulty in answering such questions. Partly because of this, I think, Fodor (1987) and Fodor and Pylyshyn (1988), the original defenders of this kind of argument, first argue for the systematicity of language production and understanding: the ability to produce/understand certain sentences is intrinsically connected to the ability to produce/understand certain others. Given that a mature speaker is able to produce/understand a certain sentence in her native language, by psychological law there always appears to be a cluster of other sentences that she is able to produce/understand. For instance, we don't find speakers who know how to express in their native language the fact that John loves the girl but not the fact that the girl loves John. This is apparently so, moreover, for expressions of any n-place relation.

Fodor and Pylyshyn bring out the force of this psychological fact by comparing learning languages the way we actually do with learning a language by memorizing a huge phrase book. On the phrase book model, there is nothing to prevent someone from learning how to say ‘John loves the girl’ without learning how to say ‘the girl loves John.’ In fact, that is exactly the way some information booklets prepared for tourists help them cope with their new social environment. You might, for example, learn from a phrase book how to say ‘I'd like to have a cup of coffee with sugar and milk’ in Turkish without knowing how to say/understand absolutely anything else in Turkish. In other words, the phrase book model of learning a language allows arbitrarily punctate linguistic capabilities. In contrast, a speaker's knowledge of her native language is not punctate; it is systematic. Accordingly, we do not find, as a matter of nomological necessity, native speakers whose linguistic capacities are punctate.

Now, how is this empirical truth (in fact, a law-like generalization) to be explained? Obviously, if this is a general nomological fact, then learning one's native language cannot be modeled on the phrase book model. What is the alternative? The alternative is well known. Native speakers master the grammar and vocabulary of their language. But this is just to say that sentences are not atomic, but have syntactic constituent structure. If you have a vocabulary, the grammar tells you how to combine the words systematically into sentences. Hence, if you know how to construct a particular sentence out of certain words, you automatically know how to construct many others. If you view all sentences as atomic, then, as Fodor and Pylyshyn say, the systematicity of language production/understanding is a mystery; but if you acknowledge that sentences have syntactic constituent structure, the systematicity of linguistic capacities is what you automatically get: it is guaranteed. This is the orthodox explanation of linguistic systematicity.
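A toy sketch may help to bring out the contrast (the lexicon and formation rule below are invented for illustration): a phrase book is just an arbitrary finite list, whereas a combinatorial rule that generates ‘John loves the girl’ thereby generates ‘the girl loves John’ as well.

```python
# Phrase-book model: an arbitrary, punctate list of memorized sentences.
phrase_book = {"John loves the girl"}   # nothing forces inclusion of the converse

# Combinatorial model: a finite lexicon plus a single formation rule.
names = ["John", "the girl"]
verbs = ["loves", "sees"]

def sentences():
    """All sentences the rule 'Name Verb Name' generates."""
    return {f"{a} {v} {b}" for a in names for v in verbs for b in names}

generated = sentences()
# Whoever can construct 'John loves the girl' by the rule can thereby
# construct 'the girl loves John' as well: both fall out of the same rule.
assert "John loves the girl" in generated and "the girl loves John" in generated
```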

From here, according to Fodor and Pylyshyn, establishing the systematicity of thought as a nomological fact is one step away. If it is a law that the ability to understand a sentence is systematically connected to the ability to understand many others, then it is similarly a law that the ability to think a thought is systematically connected to the ability to think many others. For to understand a sentence is just to think the thought/proposition it expresses. Since, according to RTM, to think a certain thought is just to token a representation in the head that expresses the relevant proposition, the ability to token certain representations is systematically connected to the ability to token certain others. But then this fact needs an adequate explanation too. The classical explanation LOTH offers is to postulate a system of representations with a combinatorial syntax, exactly as in the explanation of linguistic systematicity. This is what (B1) offers. [21] This seems to be the only explanation that does not make the systematicity of thought a miracle, and it thereby constitutes an argument for LOTH.

However, thought is not only systematic but also compositional: systematically connected thoughts are always semantically related in such a way that the thoughts so related seem to be composed out of the same semantic elements. For instance, the ability to think ‘John loves the girl’ is connected to the ability to think ‘the girl loves John’ but not to, say, ‘protons are made up of quarks’ or ‘2+2=4.’ Why is this so? The answer LOTH gives is to postulate a combinatorial semantics in addition to a combinatorial syntax, where an atomic constituent of a mental sentence makes (approximately) the same semantic contribution to any complex mental expression in which it occurs. This is what Fodor and Pylyshyn call ‘the principle of compositionality’. [22]
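Purely as an illustration of the idea (the toy model and its denotations are invented), a compositional semantics assigns each atom a fixed semantic value and computes the value of a complex expression from the values of its constituents, so that, e.g., ‘loves’ contributes the same relation wherever it occurs.

```python
# Toy compositional semantics: each atom makes the same semantic
# contribution to every complex expression in which it occurs.

ENTITIES = {"john": "John", "girl": "the girl"}
LOVES = {("John", "the girl")}   # the extension of 'loves' in the toy model

def denote(expr):
    """Evaluate a structured representation like ('loves', 'john', 'girl')."""
    pred, a, b = expr
    assert pred == "loves"
    return (ENTITIES[a], ENTITIES[b]) in LOVES

print(denote(("loves", "john", "girl")))   # True in the toy model
print(denote(("loves", "girl", "john")))   # False, yet built from the same atoms
```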

In brief, it is an argument for LOTH that it offers a cogent and principled explanation of the systematicity and compositionality of cognitive capacities by postulating a system of representations that has a combinatorial syntax and semantics, i.e., a system of representations that satisfies at least (B1).

Systematicity of thought does not seem to be restricted to the systematic ability to entertain certain thoughts. If the system of mental representations does have a combinatorial syntax, then there is a set of rules, psychosyntactic formation rules, so to speak, that govern the construction of well-formed expressions in the system. It is this fact, (B1), that guarantees that if you can form a mental sentence on the basis of certain rules, then you can also form many others on the basis of the same rules. The rules of combinatorial syntax determine the syntactic or formal structure of complex mental representations. This is the formative (or formational) aspect of systematicity. But inferential thought processes (i.e., thinking) seem to be systematic too: the ability to make certain inferences is intrinsically connected to the ability to make certain others. For instance, you do not find minds that can infer ‘A’ from ‘A&B’ but cannot infer ‘C’ from ‘A&B&C.’ It seems to be a psychological fact that inferential capacities come in clusters that are homogeneous in certain respects. How is this fact (i.e., inferential or transformational systematicity) to be explained?

As we have seen, the explanation LOTH offers exploits the notion of logical form or syntactic structure determined by the combinatorial syntax postulated for the representational system. The combinatorial syntax not only gives us a criterion of well-formedness for mental expressions, but also defines the logical form or syntactic structure of each well-formed expression. The classical solution to inferential systematicity is to make the mental operations on representations sensitive to their form or structure, i.e., to insist on (B2). Since, from a syntactic viewpoint, similarly formed expressions have similar forms, it is possible to define a single operation that applies only to expressions with a certain form, say, only to conjunctions, or only to conditionals. This allows the LOT theorist to give homogeneous explanations of what appear to be homogeneous classes of inferential capacities. This is one of the greatest virtues of LOTH, and hence provides an argument for it.
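The following sketch (with an invented tuple format for mental sentences) illustrates what a structure-sensitive operation looks like: a single conjunction-elimination rule that inspects only the form of a representation, and so applies uniformly to ‘A&B’ and ‘A&B&C’ alike.

```python
# One structure-sensitive rule: conjunction elimination. It inspects only
# the *form* of a representation (is it a conjunction?), never its content.

def eliminate_conjunct(expr, i):
    """From ('and', p, q, ...) infer the i-th conjunct."""
    if isinstance(expr, tuple) and expr and expr[0] == "and":
        return expr[i + 1]
    raise ValueError("rule applies only to conjunctions")

# The same single operation licenses inferring A from (A & B) and C from (A & B & C):
print(eliminate_conjunct(("and", "A", "B"), 0))        # 'A'
print(eliminate_conjunct(("and", "A", "B", "C"), 2))   # 'C'
```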

The solution LOTH offers for what I called, above, the problem of thinking is connected to the argument here because the two phenomena are connected in a deep way. Thinking requires that the logico-semantic properties of a particular thought process be somehow causally implicated in the process (say, inferring that John is happy from knowing that if John is at the beach then John is happy, and coming to realize that John is indeed at the beach). The systematicity of inferential thought processes is then based on the observation that if the agent is capable of making that particular inference, then she is capable of making many other, somehow similarly organized, inferences. But the idea of similar organization in this context seems to demand some sort of classification of thoughts independent of their particular content. What can the basis of such a classification be? The only basis seems to be the logico-syntactic properties of thoughts, their form. Although it may feel a little odd to talk about the syntactic properties of thoughts common-sensically understood, such properties seem to be forced upon us by the very attempt to understand their semantic properties: how, for instance, could we explain the semantic content of the thought that if John is at the beach then he is happy without somehow appealing to its being a conditional? This is the point of contact between the two phenomena. Especially when the demands of naturalism are added to this picture, inferring a LOT (= a representational system satisfying (B)) realized in the brain becomes almost irresistible. Indeed, Rey (1995) doesn't resist, and claims that, given the above observations, LOTH can be established on the basis of arguments that are not “merely empirical”. I leave it to the reader to evaluate whether mere critical reflection on our concepts of thought and thinking (along with certain mundane empirical observations about them) can be sufficient to establish LOTH. [23]

7. Objections to LOTH

There have been numerous arguments against LOTH. Some of them are directed more specifically against the Representational Theory of Mind (A), some against functionalist materialism (C). Here I will concentrate only on those arguments specifically targeting (B)—the most controversial component of LOTH.

One family of objections, the so-called regress arguments, relies on the explanations offered by LOTH defenders for certain aspects of natural languages. In particular, many LOT theorists advert to LOTH to explain (1) how natural languages are learned, (2) how natural languages are understood, and (3) how utterances in such languages can be meaningful. For instance, according to Fodor (1975), natural languages are learned by forming and confirming hypotheses about the translation of natural language sentences into Mentalese, such as: ‘Snow is white’ is true in English if and only if P, where ‘P’ is a sentence in one's LOT. But to be able to do that, one needs a representational medium in which to form and confirm hypotheses—at least to represent the truth-conditions of natural language sentences. The LOT is such a medium. Again, natural languages are understood because, roughly, such understanding consists in translating their sentences into one's Mentalese. Similarly, natural language utterances are meaningful in virtue of the meanings of the corresponding Mentalese sentences.

The basic complaint is that, in each of these cases, either the explanations generate a regress, because the same sort of explanation ought to be given for how the LOT is learned, understood, or can be meaningful; or else they are gratuitous, because if a successful explanation can be given for the LOT that does not generate a regress, then it could and ought to be given for the natural language phenomena directly, without introducing a LOT (see, e.g., Blackburn 1984). Fodor's response in (1975) is (1) that LOT is not learned, it's innate; (2) that it's understood in a different sense than the sense involved in natural language comprehension; and (3) that LOT sentences acquire their meanings not in virtue of another meaningful language but in a completely different way, perhaps by standing in some sort of causal relation to what they represent or by having certain computational profiles (see above, §5.3). For many with a Wittgensteinian bent, these replies are not likely to be convincing. But here the issues tend to concern RTM rather than (B).

Laurence and Margolis (1997) point out that the regress arguments depend on the assumption that LOTH is introduced only to explain (1)–(3). If it can be shown that there are lots of other empirical phenomena for which LOTH provides good explanations, then the regress arguments fail, because LOTH would then not be gratuitous. In fact, as we have seen above, there are plenty of such phenomena. Still, it is important to realize that the sort of explanations proposed for the understanding of one's LOT (computational use/activity of LOT sentences with certain meanings) and for how LOT sentences can be meaningful (computational roles and/or nomic relations with the world) cannot be given for (1)–(3): it's unclear, for example, what it would be like to give a computational role and/or nomic relation account of the meanings of natural language utterances. (See Knowles 1998 for a reply to Laurence & Margolis 1997; Margolis & Laurence 1999 counter-reply to Knowles.)

Dennett, in his review of Fodor's (1975), raised the following objection (cf. Fodor 1987: 21–3 for a similar discussion):

In a recent conversation with the designer of a chess-playing program I heard the following criticism of a rival program: “it thinks it should get its queen out early.” This ascribes a propositional attitude to the program in a very useful and predictive way, for as the designer went on to say, one can usefully count on chasing that queen around the board. But for all the many levels of explicit representation to be found in that program, nowhere is anything roughly synonymous with “I should get my queen out early” explicitly tokened. The level of analysis to which the designer's remark belongs describes features of the program that are, in an entirely innocent way, emergent properties of the computational processes that have “engineering reality.” I see no reason to believe that the relation between belief-talk and psychological talk will be any more direct. (Dennett 1981: 107)

The objection, as Fodor (1987: 22) points out, isn't that the program has merely a dispositional, or potential, belief that it should get its queen out early; rather, the program actually operates on this belief. There appear to be lots of other examples: e.g., in reasoning we often follow certain inference rules like modus ponens, disjunctive syllogism, etc., without necessarily explicitly representing them.

The standard reply to such objections is to draw a distinction between the rules on the basis of which Mentalese data-structures are manipulated and the data-structures themselves (intuitively, the program/data distinction). LOTH is not committed to every rule's being explicitly represented. In fact, as a matter of nomological fact, in a computational device not every rule can be explicitly represented: some have to be hard-wired and, thus, implicit in this sense. In other words, LOTH permits, but doesn't require, that rules be explicitly represented. Data structures, on the other hand, have to be explicitly represented: it is these that are formally manipulated by the rules, and no causal manipulation is possible without explicit tokening of these structures. According to Fodor, if a propositional attitude is an actual episode in one's reasoning that plays a causal role, then LOTH is committed to the explicit representation of its content, which, as per (A2) and (B2), is causally implicated in the physical process realizing that reasoning. Dispositional propositional attitudes can then be accounted for in terms of an appropriate principle of inferential closure over explicitly represented propositional attitudes (cf. Lycan 1986).
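The program/data distinction can be illustrated with a toy sketch (the ‘belief box’ and the sentences in it are invented for the example): the sentences are explicitly tokened data structures, while the rule that manipulates them, here modus ponens, is hard-wired in the code rather than represented as a further data item.

```python
# Explicitly tokened data structures: Mentalese-like sentences in a "belief box".
belief_box = [
    ("if", "John is at the beach", "John is happy"),
    "John is at the beach",
]

def apply_modus_ponens(beliefs):
    """Modus ponens is implemented by this code, not stored as a belief:
    the rule is implicit (hard-wired); only the sentences are explicit."""
    derived = []
    for b in beliefs:
        if isinstance(b, tuple) and b[0] == "if" and b[1] in beliefs:
            derived.append(b[2])
    return derived

belief_box += apply_modus_ponens(belief_box)
print(belief_box[-1])   # 'John is happy' is now explicitly tokened
```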

Dennett's chess program certainly involves explicit representations of the chess board, the pieces, etc., and perhaps of some of the rules. Which rules are implicit and which are explicit depends on the empirical details of the program. Pointing to the fact that some rules may be emergent out of the implementation of explicit rules and data-structures does not suffice to undermine LOTH.

In any sufficiently complex computational system, there are bound to be many symbol manipulations with no obviously corresponding description at the level of propositional attitudes. For instance, when a multiplication program is run on a standard conventional computer, the steps of the program are translated into the computer's machine language and executed there, but at this level the operations apply to 1's and 0's, with no obvious way to map them onto the original numbers to be multiplied or onto the multiplication operation. So it seems that at those levels that, according to Dennett, have engineering reality, there are plenty of explicit tokenings of symbols, with appropriate operations over them, that don't correspond to anything like the propositional attitudes of folk psychology. In other words, there is plenty of symbolic activity that it would be wrong to say the person engages in; rather, it is carried out by the person's subpersonal computational components, as opposed to the person herself. How are such cases to be ruled out? (Cf. Fodor 1987: 23–6 for a similar discussion.)
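A toy example (invented for illustration) makes the point vivid: in a shift-and-add multiplication routine, every step is an explicit manipulation of bits, yet no individual step corresponds to anything like a person-level attitude about the numbers being multiplied.

```python
def multiply(x: int, y: int) -> int:
    """Shift-and-add multiplication: each step operates on bits (1's and 0's)."""
    result = 0
    while y:
        if y & 1:            # lowest bit of y is 1
            result += x      # add the shifted multiplicand
        x <<= 1              # shift the multiplicand left
        y >>= 1              # drop the lowest bit of the multiplier
        # No single bit-level step here maps onto "the" multiplication,
        # nor onto a propositional attitude about the original numbers.
    return result

print(multiply(6, 7))   # 42
```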

They are ruled out by an appropriate reading of (A1) and (B1): (A1) says that the person herself must stand in an appropriate computational relation to a Mentalese sentence, which, as per (B1), has a suitable syntax and semantics. Only then will the sentence constitute the person's having a propositional attitude. Not all explicit symbols in one's LOT will satisfy this. In other words, not every computational routine will correspond to a process appropriately described as, e.g., storage in the “belief-box”. Furthermore, as Fodor (1987) points out, LOTH would vindicate the common-sense view of propositional attitudes if they turn out to be computational relations to Mentalese sentences; it need not be further required that every explicit representation correspond to a propositional attitude.

There have been many other objections to LOTH in recent years, raised especially by connectionists: that LOT systems cannot handle certain cognitive tasks like perceptual pattern recognition, that they are too brittle and not sufficiently damage-resistant, that they don't exhibit graceful degradation when physically damaged or in response to noisy or degraded input, that they are too rigid and deterministic and so not well-suited for modeling humans' capacity to satisfy multiple soft constraints so gracefully, that they are not biologically realistic, and so on. (For useful discussions of these and many similar objections, see Rumelhart, McClelland and the PDP Research Group (1986), Fodor and Pylyshyn (1988), Horgan and Tienson (1996), Horgan (1997), McLaughlin and Warfield (1994), Bechtel and Abrahamsen (2002), Marcus (2002).)

When Jerry Fodor published his influential book, The Language of Thought, in 1975, he called LOTH “the only game in town.” As we have seen, it was the philosophical articulation of the assumptions that underlay the new developments in the “cognitive sciences” after the demise of behaviorism. Fodor argued for the truth of LOTH on the basis of the successes of the best scientific theories we then had. Indeed, most of the scientific work in cognitive psychology, psycholinguistics, and AI assumed the framework of LOTH.

In the early 1980s, however, Fodor's claim that LOTH was the only game in town began to be challenged by researchers working on so-called connectionist networks. They claimed that connectionism offered a new and radically different alternative to classicism in modeling cognitive phenomena. (The name ‘classicism’ has since come to be applied to the LOTH framework.) On the other hand, many classicists like Fodor thought that connectionism was nothing but a slightly more sophisticated revival of the old and long-dead associationism, whose roots could be traced back to the early British empiricists. In 1988 Fodor and Pylyshyn (F&P) published a long article, “Connectionism and Cognitive Architecture: A Critical Analysis”, in which they launched a formidable attack on connectionism, and which largely set the terms for the ensuing debate between connectionists and classicists.

F&P's forceful criticism consists in posing a dilemma for connectionists: either connectionist models fail to explain law-like cognitive regularities such as systematicity and productivity in an adequate way, or they are nothing but implementation models of classical architectures and hence fail to provide the radically new paradigm connectionists claim. This conclusion was also meant to be a challenge: explain the cognitive regularities in question without postulating a LOT architecture.

First, let me present F&P's argument against connectionism in a somewhat reconstructed fashion. It will be helpful to characterize the debate by locating the issues according to the reactions many connectionists had to the premises of the argument.

F&P's Argument against Connectionism in their (1988) article:

(i) Cognition essentially involves representational states and causal operations whose domain and range are these states; consequently, any scientifically adequate account of cognition should acknowledge such states and processes.

(ii) Higher cognition (specifically, thought and thinking with propositional content), conceived in this way, has certain empirically interesting properties: in particular, it is a law of nature that cognitive capacities are productive, systematic, and inferentially coherent.

(iii) Accordingly, the architecture of any proposed cognitive model is scientifically adequate only if it guarantees that cognitive capacities are productive, systematic, etc. This would amount to explaining, in the scientifically relevant and required sense, how it could be a law that cognition has these properties.

(iv) The only way (i.e., necessary condition) for a cognitive architecture to guarantee systematicity (etc.) is for it to involve a representational system for which (B) is true (see above). (Classical architectures necessarily satisfy (B).)

(v) Either the architecture of connectionist models does satisfy (B), or it does not.

(vi) If it does, then connectionist models are implementations of the classical LOT architecture and have little new to offer (i.e., they fail to compete with classicism, and thus connectionism does not constitute a radically new way of modeling cognition).

(vii) If it does not, then (since connectionism does not then guarantee systematicity, etc., in the required sense) connectionism is empirically false as a theory of the cognitive architecture.

Therefore, connectionism is either true as an implementation theory, or empirically false as a theory of cognitive architecture.

The notion of cognitive architecture assumes special importance in this debate. F&P's characterization of the notion goes as follows:

The architecture of the cognitive system consists of the set of basic operations, resources, functions, principles, etc. (generally the sorts of properties that would be described in a “user's manual” for that architecture if it were available on a computer) whose domain and range are the representational states of the organism. (1988: 10)

Also, note that (B1) and (B2) are meta-architectural properties in that they are themselves conditions upon any specific architecture's being classical. They define classicism per se, but not any particular way of being classical. Classicism as such simply claims that whatever the particular cognitive architecture of the brain might turn out to be (whatever the specific grammar of Mentalese turns out to be), (B) must be true of it. F&P claim that this is the only way an architecture can be said to guarantee the nomological necessity of cognitive regularities like systematicity, etc. This seems to be the relevant and required sense in which a scientific explanation of cognition is required to guarantee the regularities—hence premise (iii) of their argument.

Connectionist responses have fallen into four classes:

Deny premise (i). The rejection of (i) commits connectionists to what is sometimes called radical or eliminativist connectionism. Premise (i), as F&P point out, draws a general line between eliminativism and representationalism (or intentional realism). There has been some controversy as to whether connectionism constitutes a serious challenge to the fundamental tenets of folk psychology. [24] Although it may still be too early for assessment, [25] the connectionist research program has been overwhelmingly cognitivist: most connectionists do in fact advance their models as having causally efficacious representational states, and explicitly endorse F&P's first premise. So they seem to accept intentional realism. [26]

Accept the conclusion. This group may be seen as more or less accepting the cogency of the entire argument, and characterizes itself as implementationalist: its members hold that connectionist networks will implement a classical architecture or language of thought. According to this group, the appropriate niche for neural networks is closer to neuroscience than to cognitive psychology. They seem to view the importance of the program in terms of its prospects for closing the gap between the neurosciences and high-level cognitive theorizing. In this, many seem content to admit premise (vi). (See Marcus 2001 for a discussion of the virtues of placing connectionist models closer to the implementational level.)

Deny premise (ii) or (iv). Some connectionists reject (ii) or (iv), [27] holding that there are no law-like cognitive regularities such as systematicity (etc.) to be explained, or that such regularities do not require a (B)-like architecture for their explanation. Those who question (ii) often question the empirical evidence for systematicity (etc.) and tend to ignore the challenge put forward by F&P. Those who question (iv) also often question (ii), or they argue that there can be very different sorts of explanations for systematicity and the like (e.g. evolutionary explanations, see Braddon-Mitchell and Fitzpatrick 1990), or they question the very notion of explanation involved (e.g. Matthews 1994). There are indeed quite a number of different kinds of arguments in the literature against these premises. [28] For a sampling, see Aydede (1995) and McLaughlin (1993b), who partitions the debate similarly.

Deny premise (vi). The group of connectionists who have taken F&P's challenge most seriously has tended to reject premise (vi) of the argument, while accepting, on the face of it, the previous five premises (sometimes with reservations on the issue of productivity). They think that it is possible for connectionist representations to be syntactically structured in some sense without being classical. Prominent in this group are Smolensky (1990a, 1990b, 1995), van Gelder (1989, 1990, 1991), and Chalmers (1990, 1993). [29] Some connectionists whose models give support to this line include Elman (1989), Hinton (1990), Touretzky (1990), Pollack (1990), Barnden and Srinivas (1991), Shastri and Ajjanagadde (1993), Plate (1998), Hummel et al. (2004), Van Der Velde and De Kamps (2006), Barrett et al. (2008), and Sanjeevi and Bhattacharyya (2010).

Much of the recent debate between connectionists and classicists has focused on this option. How is it possible to reject premise (vi), which seems true by the very definition of classicism? The connectionists' answer, roughly put, is that when you devise a representational system whose satisfaction of (B) relies on a non-concatenative realization of the structural/syntactic complexity of representations, you have a non-classical system. (See especially Smolensky 1990a and van Gelder 1990.) Interestingly, some classicists like Fodor and McLaughlin (1990) (F&M) seem to agree. F&M stipulate that you have a classical system only if the syntactic complexity of representations is realized concatenatively, or, as it is sometimes put, explicitly:

We … stipulate that for a pair of expression types E1, E2, the first is a Classical constituent of the second only if the first is tokened whenever the second is tokened. (F&M 1990: 186)

The issues about how connectionists propose to obtain constituent structure non-concatenatively tend to be complex and technical. But the basic proposal is to exploit so-called distributed representations in certain novel ways. The essential idea behind most of these proposals is to use vector (and tensor) algebra (involving superimposition, multiplication, etc., of vectors) in composing and decomposing connectionist representations, which consist in patterns of activity across neuron-like units and can be modeled as vectors. The result of such techniques is the production of representations that have, in some interesting sense, a complexity whose constituent structure is largely implicit: the constituents are not tokened explicitly when the representations are tokened, but can be recovered by further operations upon them. The interested reader should consult some of the pioneering work by Elman (1989), Hinton (1990), Smolensky (1990a, 1990b, 1995), Touretzky (1990), and Pollack (1990).
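To give a flavor of these techniques, here is a minimal NumPy sketch of Smolensky-style tensor product binding (the particular role and filler vectors are invented, and real models are far more elaborate): role-filler bindings are superimposed into a single tensor, so the constituents are not explicitly tokened in the composite, yet they can be recovered by unbinding with the role vectors.

```python
import numpy as np

# Orthonormal role vectors (agent, patient) and filler vectors (John, the girl).
agent, patient = np.array([1.0, 0.0]), np.array([0.0, 1.0])
john, girl = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])

# 'John loves the girl': bind fillers to roles by outer product and superimpose.
# The constituents are not concatenated parts of the composite tensor...
composite = np.outer(john, agent) + np.outer(girl, patient)

# ...but they can be recovered by unbinding with the role vectors.
recovered_agent = composite @ agent      # approximately john
recovered_patient = composite @ patient  # approximately girl

print(np.allclose(recovered_agent, john), np.allclose(recovered_patient, girl))
```

With orthonormal role vectors the recovery is exact; with merely similar (non-orthogonal) vectors it is only approximate.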

More specifically stated, however, F&M's criticism is this: with such techniques connectionists satisfy (B1) only in some “extended sense”, and they are incapable of satisfying (B2), precisely because their way of satisfying (B1) is committed to a non-concatenative realization of syntactic structures.

Some connectionists disagree (e.g., Chalmers 1993, Niklasson and van Gelder 1994; see also Browne 1998 and Browne and Sun 2001 for discussion and an overview of models): they claim that you can have structure-sensitive transformations or operations defined over representations whose syntactic structure is non-concatenatively realized. So, given the apparent agreement that non-concatenative realization is what makes a system non-classical, these connectionists claim that they can and do satisfy (B) in its entirety with their connectionist models, without thereby implementing classical models.

The debate still continues, and there is a growing literature built around the many issues it raises. Aydede (1997a) offers an extensive analysis of the debate between classicists and this group of connectionists, with special attention to its conceptual underpinnings. (See also Roth 2005, who argues that to the extent that connectionist models can successfully transform representations according to an algorithmic function, they count as executing a program in the sense relevant to classical program execution.) Aydede argues that both parties are wrong in assuming that concatenative realization is relevant to the characterization of LOTH. Part of the argument is that a concatenative realization of (B) is just that: a realization. The attentive reader might have noticed that there is nothing in the characterization of (B) that requires concatenative realization. Indeed, when we look at all the major arguments for LOTH focused on the need for (B), none of them requires concatenation or explicit realization of syntactic structure. In fact, it borders on confusion to tie LOTH necessarily to such an implementation-level issue. If anything, this class of connectionist networks, if successful and generalizable across all higher cognition, contributes to our understanding of how radically differently a LOTH architecture could be implemented in neural networks. Indeed, if these models prove adequate for explaining the full range of human cognitive capacities, they would show how syntactically structured representations and structure-sensitive processes could be implemented in a radically new way. So research programs in this niche are by no means trivial or insignificant. But we need to be clear and careful about what minimally needs to be the case for LOTH to be true, and why.

On the other hand, it is by no means clear that these connectionist models are successful and generalizable (scalable). They have all proved to have serious limitations that seem to be tied to their particular ways of implementing variable binding (syntactic structure) and structure-sensitive processing. For critical discussion, see Marcus (2001), Hadley (2009), and Browne and Sun (2001). Marcus in particular makes a strong and largely empirical case for why classical symbol systems are needed to explain human capacities for variable binding and generalization, and why existing connectionist models aren't up to the job of matching human capacities while remaining non-classical. Indeed, the trend over the last fifteen years seems to be towards developing hybrid systems combining connectionist and classical symbol-processing models; see, for instance, the articles in Wermter and Sun (2000). [30]

  • Aizawa, K. (1994). “Representations without Rules, Connectionism and the Syntactic Argument.” Synthese 101(3): 465–492.
  • –––. (1997a). “Explaining Systematicity.” Mind and Language 12(2): 115–136.
  • –––. (1997b). “Exhibiting versus Explaining Systematicity: A Reply to Hadley and Hayward.” Minds and Machines 7(1): 39–55.
  • –––. (2003). The Systematicity Arguments , Kluwer Academic Publishers.
  • Aydede, Murat. (1995). “Connectionism and Language of Thought”, CSLI Technical Report, Stanford: CSLI, 95–195. (This is an early version of Aydede 1997a but contains quite a lot of expository material not contained in the 1997 version.)
  • –––. (1997a). “Language of Thought: The Connectionist Contribution,” Minds and Machines , Vol. 7, No. 1, pp. 57–101.
  • –––. (1997b). “Has Fodor Really Changed His Mind on Narrow Content?”, Mind and Language , 12(3–4): 422–458.
  • –––. (1998). “Fodor On Concepts and Frege Puzzles,” Pacific Philosophical Quarterly , 79(4): 289–294.
  • –––. (2000). “On the Type/Token Relation of Mental Representations,” Facta Philosophica: International Journal for Contemporary Philosophy , 2(1): 23–49.
  • –––. (2005). “Computation and Functionalism: Syntactic Theory of Mind Revisited” in Gürol Irzik and G. Güzeldere (eds.), Boston Studies in the History and Philosophy of Science , Dordrecht: Kluwer Academic Publishers.
  • Aydede, Murat, and Güven Güzeldere (2005). “Cognitive Architecture, Concepts, and Introspection: An Information-Theoretic Solution to the Problem of Phenomenal Consciousness”, Noûs , 39(2): 197–255.
  • Armstrong, D.M. (1973). Belief, Truth and Knowledge , Cambridge: Cambridge University Press.
  • –––. (1980). The Nature of Mind , Ithaca, NY: Cornell University Press.
  • Bader, S. and B. Hitzler (2005). “Dimensions of neural-symbolic integration—a structured survey” in We Will Show Them: Essays in Honour of Dov Gabbay , edited by S. Artemov and H. Barringer and A. S. d'Avila Garcez and L.C. Lamb and J. Woods, King's College Publications.
  • Barnden, J. and K. Srinivas (1991). “Encoding techniques for complex information structures in connectionist systems,” Connection Science, 3(3): 269–315.
  • Barrett, L., J Feldman, and L. Mac Dermed (2008). “A (somewhat) new solution to the variable binding problem,” Neural Computation , Vol. 20, pp. 2361–2378.
  • Barsalou, L. W. (1993). “Flexibility, Structure, and Linguistic Vagary in Concepts: Manifestations of a Compositional System of Perceptual Symbols” in Theories of Memory , edited by A. Collins, S. Gathercole, M. Conway and P. Morris, Hillsdale, NJ: Lawrence Erlbaum Associates.
  • –––. (1999). “Perceptual Symbol Systems.” Behavioral and Brain Sciences 22(4).
  • Barsalou, L. W., W. Yeh, B. J. Luka, K. L. Olseth, K. S. Mix, and L.-L. Wu. (1993). “Concepts and Meaning”, Chicago Linguistics Society 29.
  • Barsalou, L. W., and J. J. Prinz. (1997). “Mundane Creativity in Perceptual Symbol Systems” in Creative Thought: An Investigation of Conceptual Structures and Processes , edited by T. B. Ward, S. M. Smith and J. Vaid, Washington, DC: American Psychological Association.
  • Barwise, Jon and John Etchemendy (1995). Hyperproof , Stanford, Palo Alto: CSLI Publications.
  • Barwise, J. and J. Perry (1983). Situations and Attitudes , Cambridge, Massachusetts: MIT Press.
  • Bechtel, W. and A. Abrahamsen (2002). Connectionism and the Mind: An Introduction to Parallel Processing in Networks , 2nd Edition, Oxford, UK: Basil Blackwell.
  • Blackburn, S. (1984). Spreading the Word , Oxford, UK: Oxford University Press.
  • Block, Ned. (1980). “Troubles with Functionalism” in Readings in Philosophy of Psychology , N. Block (ed.), Vol.1, Cambridge, Massachusetts: Harvard University Press, 1980. (Originally appeared in Perception and Cognition: Issues in the Foundations of Psychology, Minnesota Studies in the Philosophy of Science , C.W. Savage (ed.), Minneapolis: The University of Minnesota Press, 1978.)
  • –––. (ed.) (1981). Imagery . Cambridge, Massachusetts: MIT Press.
  • –––. (1983a). “Mental Pictures and Cognitive Science,” Philosophical Review 93: 499–542. (Reprinted in Mind and Cognition , W.G. Lycan (ed.), Oxford, UK: Basil Blackwell, 1990.)
  • –––. (1983b). “The Photographic Fallacy in the Debate about Mental Imagery”, Nous 17: 651–62.
  • ––– (1986). “Advertisement for a Semantics for Psychology” in Studies in the Philosophy of Mind: Midwest Studies in Philosophy , Vol.10, P. French, T. Euhling and H. Wettstein (eds.), Minneapolis: University of Minnesota Press.
  • Braddon-Mitchell, David and John Fitzpatrick (1990). “Explanation and the Language of Thought,” Synthese 83: 3–29.
  • Braddon-Mitchell, D. and F. Jackson (2007). Philosophy of Mind and Cognition: An Introduction , Blackwell.
  • Browne, A. (1998). “Performing a symbolic inference step on distributed representations”, Neurocomputing , 19(1–3): 23–34.
  • Browne, A., and R. Sun (1999). “Connectionist variable binding”, Expert Systems , 16(3): 189–207.
  • –––. (2001). “Connectionist inference models”, Neural Networks , 14(10): 1331–1355.
  • Brentano, Franz (1874/1973). Psychology from an Empirical Standpoint , A. Rancurello, D. Terrell and L. McAlister (trans.), London: Routledge and Kegan Paul.
  • Butler, Keith (1991). “Towards a Connectionist Cognitive Architecture,” Mind and Language , Vol. 6, No. 3, pp. 252–72.
  • Chalmers, David J. (1990). “Syntactic Transformations on Distributed Representations,” Connection Science , Vol. 2.
  • –––. (1993). “Connectionism and Compositionality: Why Fodor and Pylyshyn Were Wrong”, Philosophical Psychology 6: 305–319.
  • –––. (1996). The Conscious Mind: In Search of a Fundamental Theory , Oxford, UK: Oxford University Press.
  • Churchland, Patricia Smith (1986). Neurophilosophy: Toward a Unified Science of Mind-Brain , Cambridge, Massachusetts: MIT Press.
  • –––. (1987). “Epistemology in the Age of Neuroscience,” Journal of Philosophy , Vol. 84, No. 10, pp. 544–553.
  • Churchland, Patricia S. and Terrence J. Sejnowski (1989). “Neural Representation and Neural Computation” in Neural Connections, Neural Computation , L. Nadel, L.A. Cooper, P. Culicover and R.M. Harnish (eds.), Cambridge, Massachusetts: MIT Press, 1989.
  • Churchland, Paul M. (1990). A Neurocomputational Perspective: The Nature of Mind and the Structure of Science , Cambridge, Massachusetts: MIT Press.
  • –––. (1981). “Eliminative Materialism and the Propositional Attitudes,” Journal of Philosophy 78: 67–90.
  • Churchland, Paul M. and P.S. Churchland (1990). “Could a Machine Think?,” Scientific American , Vol. 262, No. 1, pp. 32–37.
  • Clark, Andy (1988). “Thoughts, Sentences and Cognitive Science,” Philosophical Psychology , Vol. 1, No. 3, pp. 263–278.
  • –––. (1989a). “Beyond Eliminativism,” Mind and Language , Vol. 4, No. 4, pp. 251–279.
  • –––. (1989b). Microcognition: Philosophy, Cognitive Science, and Parallel Distributed Processing , Cambridge, Massachusetts: MIT Press.
  • –––. (1990). “Connectionism, Competence, and Explanation,” British Journal for Philosophy of Science , 41: 195–222.
  • –––. (1991). “Systematicity, Structured Representations and Cognitive Architecture: A Reply to Fodor and Pylyshyn” in Connectionism and the Philosophy of Mind , Terence Horgan and John Tienson (eds.), Studies in Cognitive Systems (Volume 9), Dordrecht: Kluwer Academic Publishers, 1991.
  • –––. (1994). “Language of Thought (2)” in A Companion to the Philosophy of Mind edited by S. Guttenplan, Oxford, UK: Basil Blackwell, 1994.
  • Cowie, F. (1998). What's Within? Nativism Reconsidered . Oxford, UK, Oxford University Press.
  • Cummins, Robert. (1986). “Inexplicit Information” in The Representation of Knowledge and Belief, M. Brand and R.M. Harnish (eds.), Tucson, Arizona: Arizona University Press, 1986.
  • –––. (1989). Meaning and Mental Representation , Cambridge, Massachusetts: MIT Press.
  • –––. (1996). Representations, Targets, and Attitudes , Cambridge, Massachusetts: MIT Press.
  • Cummins, Robert and Georg Schwarz (1987). “Radical Connectionism,” The Southern Journal of Philosophy , Vol. XXVI, Supplement.
  • Davidson, Donald (1984). Inquiries into Truth and Interpretation , Oxford: Clarendon Press.
  • Davies, Martin (1989). “Connectionism, Modularity, and Tacit Knowledge,” British Journal for the Philosophy of Science 40: 541–555.
  • –––. (1991). “Concepts, Connectionism, and the Language of Thought,” in Philosophy and Connectionist Theory , W. Ramsey, S.P. Stich and D.E. Rumelhart (eds.), Hillsdale, NJ: Lawrence Erlbaum, 1991.
  • –––. (1995). “Two Notions of Implicit Rules,” Philosophical Perspectives 9: 153–83.
  • Dennett, D.C. (1978). “Two Approaches to Mental Images” in Brainstorms: Philosophical Essays on Mind and Psychology , Cambridge, Massachusetts: MIT Press, 1981.
  • –––. (1981). “Cure for the Common Code” in Brainstorms: Philosophical Essays on Mind and Psychology , Cambridge, Massachusetts: MIT Press, 1981. (Originally appeared in Mind , April 1977.)
  • –––. (1986). “The Logical Geography of Computational Approaches: A View from the East Pole” in The Representation of Knowledge and Belief , Myles Brand and Robert M. Harnish (eds.), Tucson: The University of Arizona Press, 1986.
  • –––. (1991a). “Real Patterns,” Journal of Philosophy , Vol. LXXXVIII, No. 1, pp. 27–51.
  • –––. (1991b). “Mother Nature Versus the Walking Encyclopedia: A Western Drama” in Philosophy and Connectionist Theory , W. Ramsey, S.P. Stich and D.E. Rumelhart (eds.), Lawrence Erlbaum Associates.
  • Descartes, R. (1637/1970). “Discourse on the Method” in The Philosophical Works of Descartes , Vol.I, E.S. Haldane and G.R.T. Ross (trans.), Cambridge, UK: Cambridge University Press.
  • Devitt, Michael (1990). “A Narrow Representational Theory of the Mind,” Mind and Cognition , W.G. Lycan (ed.), Oxford, UK: Basil Blackwell, 1990.
  • –––. (1996). Coming to our Senses: A Naturalistic Program for Semantic Localism , Cambridge, UK: Cambridge University Press.
  • Devitt, Michael and Sterelny, Kim (1987). Language and Reality: An Introduction to the Philosophy of Language , Cambridge, Massachusetts: MIT Press.
  • Dretske, Fred (1981). Knowledge and the Flow of Information , Cambridge, Massachusetts: MIT Press.
  • –––. (1988). Explaining Behavior , Cambridge, Massachusetts: MIT Press.
  • Elman, Jeffrey L. (1989). “Structured Representations and Connectionist Models”, Proceedings of the Eleventh Annual Meeting of the Cognitive Science Society , Ann Arbor, Michigan, pp.17–23.
  • Field, Hartry H. (1972). “Tarski's Theory of Truth”, Journal of Philosophy, 69: 347–75.
  • –––. (1978). “Mental Representation”, Erkenntnis 13, 1, pp.9–61. (Also in Mental Representation: A Reader , S.P. Stich and T.A. Warfield (eds.), Oxford, UK: Basil Blackwell, 1994. References in the text are to this edition.)
  • Fodor, Jerry A. (1975). The Language of Thought , Cambridge, Massachusetts: Harvard University Press.
  • –––. (1978). “Propositional Attitudes” in RePresentations: Philosophical Essays on the Foundations of Cognitive Science , J.A. Fodor, Cambridge, Massachusetts: MIT Press, 1981. (Originally appeared in The Monist 64, No.4, 1978.)
  • –––. (1978a). “Computation and Reduction” in RePresentations: Philosophical Essays on the Foundations of Cognitive Science , J.A. Fodor, Cambridge, MA: MIT Press. (Originally appeared in Minnesota Studies in the Philosophy of Science: Perception and Cognition , Vol. 9, W. Savage (ed.), 1978.)
  • –––. (1978b). “Tom Swift and His Procedural Grandmother,” Cognition , Vol. 6. (Also in RePresentations: Philosophical Essays on the Foundations of Cognitive Science , J.A. Fodor, Cambridge, Massachusetts: MIT Press, 1981.)
  • –––. (1980). “Methodological Solipsism Considered as a Research Strategy in Cognitive Psychology”, Behavioral and Brain Sciences 3, 1, 1980. (Also in RePresentations: Philosophical Essays on the Foundations of Cognitive Science , J.A. Fodor, Cambridge, MA: MIT Press, 1981. References in the text are to this edition.)
  • –––. (1981a). RePresentations: Philosophical Essays on the Foundations of Cognitive Science , Cambridge, Massachusetts: MIT Press.
  • –––. (1981b), “Introduction: Something on the State of the Art” in RePresentations: Philosophical Essays on the Foundations of Cognitive Science , J.A. Fodor, Cambridge, Massachusetts: MIT Press, 1981.
  • –––. (1983). The Modularity of Mind , Cambridge, Massachusetts: MIT Press.
  • –––. (1985). “Fodor's Guide to Mental Representation: The Intelligent Auntie's Vade-Mecum”, Mind 94, 1985, pp.76–100. (Also in A Theory of Content and Other Essays , J.A. Fodor, Cambridge, Massachusetts: MIT Press. References in the text are to this edition.)
  • –––. (1986). “Banish DisContent” in Language, Mind, and Logic , J. Butterfield (ed.), Cambridge, UK: Cambridge University Press, 1986. (Also in Mind and Cognition , William Lycan (ed.), Oxford, UK: Basil Blackwell, 1990.)
  • –––. (1987). Psychosemantics: The Problem of Meaning in the Philosophy of Mind , Cambridge, Massachusetts: MIT Press.
  • –––. (1989). “Substitution Arguments and the Individuation of Belief” in A Theory of Content and Other Essays , J. Fodor, Cambridge, Massachusetts: MIT Press, 1990. (Originally appeared in Method, Reason and Language , G. Boolos (ed.), Cambridge, UK: Cambridge University Press, 1989.)
  • –––. (1990). A Theory of Content and Other Essays , Cambridge, Massachusetts: MIT Press.
  • –––. (1991). “Replies” (Ch.15) in Meaning in Mind: Fodor and his Critics , B. Loewer and G. Rey (eds.), Oxford, UK: Basil Blackwell, 1991.
  • –––. (2001). “Doing without What's Within: Fiona Cowie's Critique of Nativism.” Mind : 110(437) 99–148.
  • –––. (2008). LOT 2: The Language of Thought Revisited , Oxford: Oxford University Press.
  • Fodor, Jerry A. and Ernest Lepore (1991). “Why Meaning (Probably) Isn't Conceptual Role?”, Mind and Language , Vol. 6, No. 4, pp. 328–43.
  • Fodor, Jerry A. and B. McLaughlin (1990). “Connectionism and the Problem of Systematicity: Why Smolensky's Solution Doesn't Work,” Cognition 35: 183–204.
  • Fodor, Jerry A. and Zenon W. Pylyshyn (1988). “Connectionism and Cognitive Architecture: A Critical Analysis” in S. Pinker and J. Mehler, eds., Connections and Symbols , Cambridge, Massachusetts: MIT Press (A Cognition Special Issue).
  • Grice, H.P. (1957). “Meaning”, Philosophical Review , 66: 377–88.
  • Hadley, R. F. (1995). “The ‘Explicit-Implicit’ Distinction.” Minds and Machines 5(2): 219–242.
  • –––. (1997). “Cognition, Systematicity and Nomic Necessity.” Mind and Language 12(2): 137–153.
  • –––. (1997). “Explaining Systematicity: A Reply to Kenneth Aizawa.” Minds and Machines 7(4): 571–579.
  • –––. (1999). “Connectionism and Novel Combinations of Skills: Implications for Cognitive Architecture.” Minds and Machines 9(2): 197–221.
  • –––. (2009). “The problem of rapid variable creation,” Neural Computation , 21: 510–32.
  • Hadley, R. F. and M. B. Hayward (1997). “Strong Semantic Systematicity from Hebbian Connectionist Learning.” Minds and Machines 7(1): 1–37.
  • Harman, Gilbert (1973). Thought , Princeton University Press.
  • Haugeland, John (1981). “The Nature and Plausibility of Cognitivism,” Behavioral and Brain Sciences I, 2: 215–60 (with peer commentary and replies).
  • –––. (1985). Artificial Intelligence: The Very Idea , Cambridge, Massachusetts: MIT Press.
  • Hinton, Geoffrey (1990). “Mapping Part-Whole Hierarchies into Connectionist Networks,” Artificial Intelligence , Vol. 46, Nos. 1–2, (Special Issue on Connectionist Symbol Processing).
  • Horgan, T. E. and J. Tienson (1996). Connectionism and the Philosophy of Psychology , Cambridge, Massachusetts: MIT Press.
  • Horgan, T. (1997). “Connectionism and the Philosophical Foundations of Cognitive Science.” Metaphilosophy 28(1–2): 1–30.
  • Hummel, J. E., Holyoak, K. J., Green, C., Doumas, L. A. A., Devnich, D., Kittur, A., & Kalar, D.J. (2004). A Solution to the Binding Problem for Compositional Connectionism. In S.D. Levy & R. Gayler: Compositional Connectionism in Cognitive Science: Papers from the AAAI Fall Symposium (pp. 31–34). Menlo Park, CA: AAAI Press.
  • Jacob, P. (1997). What Minds Can Do: Intentionality in a Non-Intentional World . Cambridge, UK, Cambridge University Press.
  • Kirsh, D. (1990). “When Is Information Explicitly Represented?” in Information, Language and Cognition . P. Hanson (ed.), University of British Columbia Press.
  • Knowles, J. (1998). “The Language of Thought and Natural Language Understanding.” Analysis 58(4): 264–272.
  • Kosslyn, S.M. (1980). Image and Mind . Cambridge, Massachusetts: Harvard University Press.
  • –––. (1981). “The Medium and the Message in Mental Imagery: A Theory” in Imagery, N. Block (ed.), Cambridge, Massachusetts: MIT Press, 1981.
  • –––. (1994). Image and Brain , Cambridge, Massachusetts: MIT Press.
  • Kulvicki, J. (2004). “Isomorphism in information-carrying systems”, Pacific Philosophical Quarterly 85(4): 380–395.
  • –––. (2006). On Images: Their Structure and Content , Oxford: Clarendon Press.
  • Laurence, Stephen and Eric Margolis (1997). “Regress Arguments Against the Language of Thought”, Analysis , Vol. 57, No. 1.
  • –––. (2002). “Radical Concept Nativism.” Cognition 86: 22–55.
  • Leeds, S. (2002). “Perception, Transparency, and the Language of Thought.” Noûs 36(1): 104–129.
  • Lewis, David (1972). “Psychophysical and Theoretical Identifications,” Australasian Journal of Philosophy, 50(3): 249–58. (Also in Readings in Philosophy of Psychology, Ned Block (ed.), Vol. 1, Cambridge, Massachusetts: Harvard University Press, 1980.)
  • –––. (1994). “Reduction of Mind” in A Companion to the Philosophy of Mind , edited by Samuel Guttenplan, Oxford: Blackwell, pp. 412–31.
  • Loar, Brian F. (1982a). Mind and Meaning , Cambridge, UK: Cambridge University Press.
  • –––. (1982b). “Must Beliefs Be Sentences?” in Proceedings of the Philosophy of Science Association for 1982 , Asquith, P. and T. Nickles (eds.), East Lansing, Michigan, 1983.
  • Lycan, William G. (1981). “Toward a Homuncular Theory of Believing,” Cognition and Brain Theory 4(2): 139–159.
  • –––. (1986). “Tacit Belief” in Belief: Form, Content, and Function, R. Bogdan (ed.), Oxford, UK: Oxford University Press.
  • –––. (1993). “A Deductive Argument for the Representational Theory of Thinking,” Mind and Language , Vol. 8, No. 3, pp. 404–22.
  • –––. (1997). “Consciousness as Internal Monitoring” in The Nature of Consciousness: Philosophical Debates , edited by N. Block, O. Flanagan and G. Güzeldere, Cambridge, Massachusetts: MIT Press.
  • Marcus, G. F. (1998). “Can connectionism save constructivism?” Cognition 66: 153–182.
  • –––. (1998). “Rethinking Eliminative Connectionism.” Cognitive Psychology 37: 243–282.
  • –––. (2001). The Algebraic Mind: Integrating Connectionism and Cognitive Science . Cambridge, MA, MIT Press.
  • Margolis, Eric (1998). “How to Acquire a Concept?”, Mind and Language .
  • Margolis, E. and S. Laurence (1999). “Where the Regress Argument Still Goes Wrong: Reply to Knowles.” Analysis 59(4): 321–327.
  • –––. (2001). “The Poverty of the Stimulus Argument.” British Journal for the Philosophy of Science 52: 217–276.
  • ––– (forthcoming-a). “Learning Matters: The Role of Learning in Concept Acquisition.”
  • –––. (forthcoming-b). “The Nativist Manifesto.”
  • Markic, O. (2001). “Is Language of Thought a Conceptual Necessity?” Acta Analytica 16(26): 53–60.
  • Marr, David (1982). Vision , San Francisco: W. H. Freeman.
  • Martinez, F. and J. Ezquerro Martinez (1998). “Explicitness with Psychological Ground.” Minds and Machines 8(3): 353–374.
  • Matthews, Robert J. (1994). “Three-Concept Monte: Explanation, Implementation and Systematicity”, Synthese, Vol. 101, No. 3, pp. 347–63.
  • McGinn, Colin (1989). Mental Content , Oxford: Blackwell.
  • –––. (1991). The Problem of Consciousness , Oxford, UK: Basil Blackwell.
  • McLaughlin, B.P. (1993a). “The Connectionism/Classicism Battle to Win Souls,” Philosophical Studies 71: 163–90.
  • –––. (1993b). “Systematicity, Conceptual Truth, and Evolution,” in Philosophy and Cognitive Science, C. Hookway and D. Peterson (eds.), Royal Institute of Philosophy, Supplement No. 34.
  • McLaughlin, B.P. and Ted Warfield (1994). “The Allures of Connectionism Reexamined”, Synthese 101, pp. 365–400
  • Millikan, Ruth Garrett (1984). Language, Thought, and Other Biological Categories: New Foundations for Realism , Cambridge, Massachusetts: MIT Press.
  • –––. (1993). White Queen Psychology and Other Essays for Alice , Cambridge, Massachusetts: MIT Press.
  • Niklasson, L. and T. van Gelder (1994). “On Being Systematically Connectionist,” Mind and Language , 9(3): 288–302
  • Papineau, D. (1987). Reality and Representation , Oxford, UK: Basil Blackwell.
  • Perry, John and David Israel (1991). “Fodor and Psychological Explanations” in Meaning in Mind: Fodor and his Critics , B. Loewer and G. Rey (eds.), Oxford, UK: Basil Blackwell, 1991.
  • Phillips, S. (2002). “Does Classicism Explain Universality?” Minds and Machines 12(3): 423–434.
  • Piccinini, G. (2008). “Computers,” Pacific Philosophical Quarterly , 89:32 –73.
  • Pinker, S., and A. Prince (1988). “On language and connectionism: Analysis of a parallel distributed processing model of language acquisition,” Cognition (special issue on Connections and Symbols) 28: 73–193.
  • Plate, Tony A. (1998). “Structured operations with distributed vector representations” in Keith Holyoak, Dedre Gentner, and Boicho Kokinov (eds.), Advances in Analogy Research: Integration of Theory and Data from the Cognitive, Computational, and Neural Sciences, NBU Series in Cognitive Science, Sofia: New Bulgarian University.
  • Pollack, J.B. (1990). “Recursive Distributed Representations,” Artificial Intelligence , Vol.46, Nos.1–2, (Special Issue on Connectionist Symbol Processing).
  • Prinz, J. (2002). Furnishing the Mind: Concepts and Their Perceptual Basis . Cambridge, MA, MIT Press.
  • Putnam, Hilary (1988), Representation and Reality , Cambridge, Massachusetts: MIT Press.
  • Pylyshyn, Z.W. (1978). “Imagery and Artificial Intelligence” in Perception and Cognition . W. Savage (ed.), University of Minnesota Press. (Reprinted in Readings in the Philosophy of Psychology , N. Block (ed.), Cambridge, Massachusetts: MIT Press, 1980.)
  • Pylyshyn, Z. W. (1984). Computation and Cognition: Toward a Foundation for Cognitive Science , Cambridge, Massachusetts: MIT Press.
  • Ramsey, F.P. (1931). “General Propositions and Causality” in The Foundations of Mathematics , New York: Harcourt Brace, pp. 237–55.
  • Ramsey, W., S. Stich and J. Garon (1991). “Connectionism, Eliminativism and the Future of Folk Psychology,” in Philosophy and Connectionist Theory , W. Ramsey, D. Rumelhart and Stephen Stich (eds.), Hillsdale, NJ: Lawrence Erlbaum.
  • Rescorla, M. (2009a). “Cognitive maps and the language of thought,” The British Journal for the Philosophy of Science , 60 (2): 377–407.
  • –––. (2009b). “Predication and cartographic representation,” Synthese, 169:175–200.
  • Rey, Georges (1981). “What are Mental Images?” in Readings in the Philosophy of Psychology , N. Block (ed.), Vol. 2, Cambridge, Massachusetts: Harvard University Press, 1981.
  • –––. (1991). “An Explanatory Budget for Connectionism and Eliminativism” in Connectionism and the Philosophy of Mind , Terence Horgan and John Tienson (eds.), Studies in Cognitive Systems (Volume 9), Dordrecht: Kluwer Academic Publishers.
  • –––. (1992). “Sensational Sentences Switched”, Philosophical Studies 67: 73–103.
  • –––. (1993). “Sensational Sentences” in Consciousness, M. Davies and G. Humphrey (eds.), Oxford, UK: Basil Blackwell, pp. 240–57.
  • –––. (1995). “A Not ‘Merely Empirical’ Argument for a Language of Thought,” in Philosophical Perspectives 9, J. Tomberlin (ed.), pp. 201–222.
  • –––. (1997). Contemporary Philosophy of Mind: A Contentiously Classical Approach , Oxford, UK: Basil Blackwell.
  • Rosenthal, D.M. (1997). “A Theory of Consciousness” in The Nature of Consciousness: Philosophical Debates , edited by N. Block, O. Flanagan and G. Güzeldere, Cambridge, Massachusetts: MIT Press.
  • Roth, M. (2005). “Program Execution in Connectionist Networks,” Mind & Language , 20(4): 448–467.
  • Rumelhart, D.E. and J.L. McClelland (1986). “PDP Models and General Issues in Cognitive Science,” in Parallel Distributed Processing , Vol.1, D.E. Rumelhart, J.L. McClelland, and the PDP Research Group, Cambridge, Massachusetts: MIT Press, 1986.
  • Rumelhart, D.E., J.L. McClelland, and the PDP Research Group (1986). Parallel Distributed Processing , (Vols. 1&2), Cambridge, Massachusetts: MIT Press.
  • Rupert, R. D. (1999). “On the Relationship between Naturalistic Semantics and Individuation Criteria for Terms in a Language of Thought,” Synthese , 117: 95–131.
  • –––. (2008). “Frege's puzzle and Frege cases: Defending a quasi-syntactic solution,” Cognitive Systems Research , 9: 76–91.
  • Sanjeevi, S. and P. Bhattacharyya (2010). “Connectionist predicate logic model with parallel execution of rule chain” in Proceedings of the International Conference and Workshop on Emerging Trends in Technology (ICWET 2010) TCET, Mumbai, India (2010).
  • Schiffer, Stephen (1981). “Truth and the Theory of Content” in Meaning and Understanding, H. Parret and J. Bouveresse (eds.), Berlin: Walter de Gruyter, 1981.
  • Searle, John R. (1980). “Minds, Brains, and Programs” Behavioral and Brain Sciences III, 3: 417–24.
  • –––. (1984). Minds, Brains and Science , Cambridge, Massachusetts: Harvard University Press.
  • –––. (1990). “Is the Brain a Digital Computer?”, Proceedings and Addresses of the APA, Vol. 64, No. 3, November 1990.
  • –––. (1992). The Rediscovery of Mind , Cambridge, Massachusetts: MIT Press.
  • Sehon, S. (1998). “Connectionism and the Causal Theory of Action Explanation.” Philosophical Psychology 11(4): 511–532.
  • Shastri, L. (2006). “Comparing the neural blackboard and the temporal synchrony-based SHRUTI architecture,” Behavioral and Brain Science , 29: 84–86.
  • Shastri, L. and A. Ajjanagadde (1993). “From simple associations to systematic reasoning: A connectionist representation of rules, variables and dynamic bindings using temporal synchrony,” Behavioral and Brain Sciences , Vol. 16, pp. 417–94
  • Shepard, R. and Cooper, L. (1982). Mental Images and their Transformations . Cambridge, Massachusetts: MIT Press.
  • Smolensky, Paul (1988). “On the Proper Treatment of Connectionism,” Behavioral and Brain Sciences 11: 1–23.
  • –––. (1990a). “Connectionism, Constituency, and the Language of Thought” in Meaning in Mind: Fodor and His Critics , B. Loewer and G. Rey (eds.), : Oxford, UK: Basil Blackwell, 1991.
  • –––. (1990b). “Tensor Product Variable Binding and the Representation of Symbolic Structures in Connectionist Systems,” Artificial Intelligence , Vol. 46, Nos. 1–2, (Special Issue on Connectionist Symbol Processing), November 1990.
  • –––. (1995). “Constituent Structure and Explanation in an Integrated Connectionist/Symbolic Cognitive Architecture” in Connectionism: Debates on Psychological Explanation , C. Macdonald and G. Macdonald (eds.), Oxford, UK: Basil Blackwell, 1995.
  • Schneider, S. (2009). “The Nature of Symbols in the Language of Thought,” Mind and Language , 24(5): 523–553.
  • Stalnaker, Robert C. (1984). Inquiry , Cambridge, Massachusetts: MIT Press.
  • Sterelny, K. (1986). “The Imagery Debate”, Philosophy of Science 53: 560–83. (Reprinted in Mind and Cognition, W. Lycan (ed.), Oxford, UK: Basil Blackwell, 1990.)
  • –––. (1990). The Representational Theory of Mind , Cambridge, Massachusetts: MIT Press.
  • Stich, Stephen (1983). From Folk Psychology to Cognitive Science: The Case against Belief , Cambridge, Massachusetts: MIT Press.
  • Tarski, Alfred (1956). “The Concept of truth in Formalized Languages” in Logic, Semantics and Metamathematics , J.Woodger (trans.), Oxford, UK: Oxford University Press.
  • Touretzky, D.S. (1990). “BoltzCONS: Dynamic Symbol Structures in a Connectionist Network,” Artificial Intelligence , Vol. 46, Nos. 1–2, (Special Issue on Connectionist Symbol Processing).
  • Tye, M. (1984). “The Debate about Mental Imagery”, Journal of Philosophy 81: 678–91.
  • –––. (1991). The Imagery Debate , Cambridge, Massachusetts: MIT Press.
  • Van Der Velde, F. and Marc De Kamps (2006). “Neural blackboard architectures of combinatorial structures in cognition,” Behavioral and Brain Sciences , Vol. 29 (01), pp. 37–70.
  • van Gelder, Timothy (1989). “Compositionality and the Explanation of Cognitive Processes”, Proceedings of the Eleventh Annual Meeting of the Cognitive Science Society , Ann Arbor, Michigan, pp. 34–41.
  • –––. (1990). “Compositionality: A Connectionist Variation on a Classical Theme,” Cognitive Science , Vol. 14.
  • –––. (1991). “Classical Questions, Radical Answers: Connectionism and the Structure of Mental Representations” in Connectionism and the Philosophy of Mind , Terence Horgan and John Tienson (eds.), Studies in Cognitive Systems (Volume 9), Dordrecht: Kluwer Academic Publishers.
  • Vinueza, A. (2000). “Sensations and the Language of Thought.” Philosophical Psychology 13(3): 373–392.
  • Wermter, S. and Ron Sun (eds.) (2000). Hybrid Neural Systems , Heidelberg: Springer.
How to cite this entry . Preview the PDF version of this entry at the Friends of the SEP Society . Look up this entry topic at the Indiana Philosophy Ontology Project (InPhO). Enhanced bibliography for this entry at PhilPapers , with links to its database.
  • Bibliography on the language of thought , in PhilPapers.org.
  • Bibliography on the philosophy of artificial intelligence , curated by Eric Dietrich, in PhilPapers.org.

-->artificial intelligence --> | belief | Church-Turing Thesis | cognitive science | computation: in physical systems | concepts | connectionism | consciousness: representational theories of | folk psychology: as a theory | functionalism | intentionality | mental content: causal theories of | mental imagery | mental representation | mind: computational theory of | naturalism | physicalism | propositional attitude reports | qualia | reasoning: automated | Turing, Alan | Turing machines

Copyright © 2010 by Murat Aydede < maydede @ mail . ubc . ca >

Support SEP

Mirror sites.

View this site from another server:

  • Info about mirror sites

Stanford Center for the Study of Language and Information

The Stanford Encyclopedia of Philosophy is copyright © 2016 by The Metaphysics Research Lab , Center for the Study of Language and Information (CSLI), Stanford University

Library of Congress Catalog Data: ISSN 1095-5054

Hypothesis Language


  • Hendrik Blockeel  


Synonyms: representation language

The hypothesis language used by a machine learning system is the language in which the hypotheses (also referred to as patterns or models) it outputs are described.

Motivation and Background

Most machine learning algorithms can be seen as a procedure for deriving one or more hypotheses from a set of observations. Both the input (the observations) and the output (the hypotheses) need to be described in some particular language. This language is respectively called the Observation Language or the hypothesis language. These terms are mostly used in the context of symbolic learning, where these languages are often more complex than in subsymbolic or statistical learning. For instance, hypothesis languages have received a lot of attention in the field of Inductive Logic Programming , where systems typically take as one of their input parameters a declarative specification of the hypothesis language they are supposed to use (which is typically a...
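To make this concrete, here is a small illustrative Python sketch (not taken from the entry itself; the class names and toy relations are invented) showing a learned pattern expressed in two different hypothesis languages: a propositional attribute-value rule, and a first-order clause of the kind an inductive logic programming system might output.

```python
# Two toy "hypothesis languages" for learned patterns.
# Everything here is illustrative; the entry itself does not define these classes.

from dataclasses import dataclass

@dataclass
class PropositionalRule:
    """Hypothesis language 1: attribute-value rules over a single table."""
    condition: str   # e.g. "temperature > 38.0"
    conclusion: str  # e.g. "diagnosis = flu"

    def describe(self) -> str:
        return f"IF {self.condition} THEN {self.conclusion}"

@dataclass
class Clause:
    """Hypothesis language 2: first-order clauses, as used in ILP systems."""
    head: str        # e.g. "grandparent(X, Z)"
    body: list       # e.g. ["parent(X, Y)", "parent(Y, Z)"]

    def describe(self) -> str:
        return f"{self.head} :- {', '.join(self.body)}."

h1 = PropositionalRule("temperature > 38.0", "diagnosis = flu")
h2 = Clause("grandparent(X, Z)", ["parent(X, Y)", "parent(Y, Z)"])

for h in (h1, h2):
    print(h.describe())
```

The point of the sketch is only that the choice of hypothesis language fixes which hypotheses the learner can even express, which is why ILP systems accept a declarative specification of that language as an input parameter.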




The Sapir-Whorf Hypothesis: Linguistic Theory


The Sapir-Whorf hypothesis is the linguistic theory that the semantic structure of a language shapes or limits the ways in which a speaker forms conceptions of the world. It emerged around 1929 and is named after the American anthropological linguist Edward Sapir (1884–1939) and his student Benjamin Whorf (1897–1941). It is also known as the theory of linguistic relativity, linguistic relativism, linguistic determinism, the Whorfian hypothesis, and Whorfianism.

History of the Theory

The idea that a person's native language determines how he or she thinks was popular among behaviorists of the 1930s and on until cognitive psychology theories came about, beginning in the 1950s and increasing in influence in the 1960s. (Behaviorism taught that behavior is a result of external conditioning and doesn't take feelings, emotions, and thoughts into account as affecting behavior. Cognitive psychology studies mental processes such as creative thinking, problem-solving, and attention.)

Author Lera Boroditsky gave some background on ideas about the connections between languages and thought:

"The question of whether languages shape the way we think goes back centuries; Charlemagne proclaimed that 'to have a second language is to have a second soul.' But the idea went out of favor with scientists when  Noam Chomsky 's theories of language gained popularity in the 1960s and '70s. Dr. Chomsky proposed that there is a  universal grammar  for all human languages—essentially, that languages don't really differ from one another in significant ways...." ("Lost in Translation." "The Wall Street Journal," July 30, 2010)

The Sapir-Whorf hypothesis was taught in courses through the early 1970s and had become widely accepted as truth, but then it fell out of favor. By the 1990s, the Sapir-Whorf hypothesis was left for dead, author Steven Pinker wrote. "The cognitive revolution in psychology, which made the study of pure thought possible, and a number of studies showing meager effects of language on concepts, appeared to kill the concept in the 1990s... But recently it has been resurrected, and 'neo-Whorfianism' is now an active research topic in  psycholinguistics ." ("The Stuff of Thought," Viking, 2007)

Neo-Whorfianism is essentially a weaker version of the Sapir-Whorf hypothesis and says that language  influences  a speaker's view of the world but does not inescapably determine it.

The Theory's Flaws

One big problem with the original Sapir-Whorf hypothesis stems from the idea that if a person's language has no word for a particular concept, then that person would not be able to understand that concept, which is untrue. Language doesn't necessarily control humans' ability to reason or have an emotional response to something or some idea. For example, take the German word  sturmfrei , which essentially is the feeling when you have the whole house to yourself because your parents or roommates are away. Just because English doesn't have a single word for the idea doesn't mean that Americans can't understand the concept.

There's also the "chicken and egg" problem with the theory. "Languages, of course, are human creations, tools we invent and hone to suit our needs," Boroditsky continued. "Simply showing that speakers of different languages think differently doesn't tell us whether it's language that shapes thought or the other way around."


Definition of hypothesis

Did you know?

The Difference Between Hypothesis and Theory

A hypothesis is an assumption, an idea that is proposed for the sake of argument so that it can be tested to see if it might be true.

In the scientific method, the hypothesis is constructed before any applicable research has been done, apart from a basic background review. You ask a question, read up on what has been studied before, and then form a hypothesis.

A hypothesis is usually tentative; it's an assumption or suggestion made strictly for the objective of being tested.

A theory , in contrast, is a principle that has been formed as an attempt to explain things that have already been substantiated by data. It is used in the names of a number of principles accepted in the scientific community, such as the Big Bang Theory . Because of the rigors of experimentation and control, it is understood to be more likely to be true than a hypothesis is.

In non-scientific use, however, hypothesis and theory are often used interchangeably to mean simply an idea, speculation, or hunch, with theory being the more common choice.

Since this casual use does away with the distinctions upheld by the scientific community, hypothesis and theory are prone to being wrongly interpreted even when they are encountered in scientific contexts—or at least, contexts that allude to scientific study without making the critical distinction that scientists employ when weighing hypotheses and theories.

The most common occurrence is when theory is interpreted—and sometimes even gleefully seized upon—to mean something having less truth value than other scientific principles. (The word law applies to principles so firmly established that they are almost never questioned, such as the law of gravity.)

This mistake is one of projection: since we use theory in general to mean something lightly speculated, then it's implied that scientists must be talking about the same level of uncertainty when they use theory to refer to their well-tested and reasoned principles.

The distinction has come to the forefront particularly on occasions when the content of science curricula in schools has been challenged—notably, when a school board in Georgia put stickers on textbooks stating that evolution was "a theory, not a fact, regarding the origin of living things." As Kenneth R. Miller, a cell biologist at Brown University, has said, a theory "doesn't mean a hunch or a guess. A theory is a system of explanations that ties together a whole bunch of facts. It not only explains those facts, but predicts what you ought to find from other observations and experiments."

While theories are never completely infallible, they form the basis of scientific reasoning because, as Miller said, "to the best of our ability, we've tested them, and they've held up."

Synonyms:

  • proposition
  • supposition

hypothesis , theory , law mean a formula derived by inference from scientific data that explains a principle operating in nature.

hypothesis implies insufficient evidence to provide more than a tentative explanation.

theory implies a greater range of evidence and greater likelihood of truth.

law implies a statement of order and relation in nature that has been found to be invariable under the same conditions.


Word History

Greek, from hypotithenai to put under, suppose, from hypo- + tithenai to put — more at do

First known use: 1641, in the meaning defined at sense 1a.

Phrases Containing hypothesis

  • counter-hypothesis
  • nebular hypothesis
  • null hypothesis
  • planetesimal hypothesis
  • Whorfian hypothesis



What Is A Research (Scientific) Hypothesis? A plain-language explainer + examples

By:  Derek Jansen (MBA)  | Reviewed By: Dr Eunice Rautenbach | June 2020

If you’re new to the world of research, or it’s your first time writing a dissertation or thesis, you’re probably noticing that the words “research hypothesis” and “scientific hypothesis” are used quite a bit, and you’re wondering what they mean in a research context .

“Hypothesis” is one of those words that people use loosely, thinking they understand what it means. However, it has a very specific meaning within academic research. So, it’s important to understand the exact meaning before you start hypothesizing. 

Research Hypothesis 101

  • What is a hypothesis ?
  • What is a research hypothesis (scientific hypothesis)?
  • Requirements for a research hypothesis
  • Definition of a research hypothesis
  • The null hypothesis

What is a hypothesis?

Let’s start with the general definition of a hypothesis (not a research hypothesis or scientific hypothesis), according to the Cambridge Dictionary:

Hypothesis: an idea or explanation for something that is based on known facts but has not yet been proved.

In other words, it’s a statement that provides an explanation for why or how something works, based on facts (or some reasonable assumptions), but that has not yet been specifically tested . For example, a hypothesis might look something like this:

Hypothesis: sleep impacts academic performance.

This statement predicts that academic performance will be influenced by the amount and/or quality of sleep a student engages in – sounds reasonable, right? It’s based on reasonable assumptions , underpinned by what we currently know about sleep and health (from the existing literature). So, loosely speaking, we could call it a hypothesis, at least by the dictionary definition.

But that’s not good enough…

Unfortunately, that’s not quite sophisticated enough to describe a research hypothesis (also sometimes called a scientific hypothesis), and it wouldn’t be acceptable in a dissertation, thesis or research paper . In the world of academic research, a statement needs a few more criteria to constitute a true research hypothesis .

What is a research hypothesis?

A research hypothesis (also called a scientific hypothesis) is a statement about the expected outcome of a study (for example, a dissertation or thesis). To constitute a quality hypothesis, the statement needs to have three attributes: specificity, clarity, and testability.

Let’s take a look at these more closely.


Hypothesis Essential #1: Specificity & Clarity

A good research hypothesis needs to be extremely clear and articulate about both what's being assessed (who or what variables are involved) and the expected outcome (for example, a difference between groups, a relationship between variables, etc.).

Let’s stick with our sleepy students example and look at how this statement could be more specific and clear.

Hypothesis: Students who sleep at least 8 hours per night will, on average, achieve higher grades in standardised tests than students who sleep less than 8 hours a night.

As you can see, the statement is very specific as it identifies the variables involved (sleep hours and test grades), the parties involved (two groups of students), as well as the predicted relationship type (a positive relationship). There’s no ambiguity or uncertainty about who or what is involved in the statement, and the expected outcome is clear.

Contrast that to the original hypothesis we looked at – “Sleep impacts academic performance” – and you can see the difference. “Sleep” and “academic performance” are both comparatively vague , and there’s no indication of what the expected relationship direction is (more sleep or less sleep). As you can see, specificity and clarity are key.

A good research hypothesis needs to be very clear about what’s being assessed and very specific about the expected outcome.

Hypothesis Essential #2: Testability (Provability)

A statement must be testable to qualify as a research hypothesis. In other words, there needs to be a way to prove (or disprove) the statement. If it’s not testable, it’s not a hypothesis – simple as that.

For example, consider the hypothesis we mentioned earlier:

Hypothesis: Students who sleep at least 8 hours per night will, on average, achieve higher grades in standardised tests than students who sleep less than 8 hours a night.  

We could test this statement by undertaking a quantitative study involving two groups of students, one that gets 8 or more hours of sleep per night for a fixed period, and one that gets less. We could then compare the standardised test results for both groups to see if there’s a statistically significant difference. 
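To make the idea of "testing" tangible, here is a minimal Python sketch of how such a comparison might be analysed, using SciPy's independent-samples t-test on simulated data. The group sizes, score distributions, and the 0.05 significance threshold are all assumptions made purely for illustration; they are not part of the study described above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical standardised test scores (out of 100) for two groups of students.
# These numbers are simulated purely for illustration.
well_rested = rng.normal(loc=72, scale=8, size=40)     # slept >= 8 hours per night
sleep_deprived = rng.normal(loc=67, scale=8, size=40)  # slept < 8 hours per night

# Independent-samples t-test.
# Null hypothesis: no difference in mean grades between the two groups.
result = stats.ttest_ind(well_rested, sleep_deprived, equal_var=False)

print(f"t-statistic: {result.statistic:.2f}")
print(f"p-value:     {result.pvalue:.4f}")

alpha = 0.05  # conventional significance threshold (an assumption, not a rule)
if result.pvalue < alpha:
    print("Statistically significant difference -> data support the hypothesis.")
else:
    print("No significant difference -> fail to reject the null hypothesis.")
```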

Again, if you compare this to the original hypothesis we looked at – “Sleep impacts academic performance” – you can see that it would be quite difficult to test that statement, primarily because it isn’t specific enough. How much sleep? By who? What type of academic performance?

So, remember the mantra – if you can’t test it, it’s not a hypothesis 🙂

A good research hypothesis must be testable. In other words, you must be able to collect observable data in a scientifically rigorous fashion to test it.

Defining A Research Hypothesis

You’re still with us? Great! Let’s recap and pin down a clear definition of a hypothesis.

A research hypothesis (or scientific hypothesis) is a statement about an expected relationship between variables, or explanation of an occurrence, that is clear, specific and testable.

So, when you write up hypotheses for your dissertation or thesis, make sure that they meet all these criteria. If you do, you’ll not only have rock-solid hypotheses but you’ll also ensure a clear focus for your entire research project.

What about the null hypothesis?

You may have also heard the terms null hypothesis , alternative hypothesis, or H-zero thrown around. At a simple level, the null hypothesis is the counter-proposal to the original hypothesis.

For example, if the hypothesis predicts that there is a relationship between two variables (for example, sleep and academic performance), the null hypothesis would predict that there is no relationship between those variables.

At a more technical level, the null hypothesis proposes that no statistical significance exists in a set of given observations and that any differences are due to chance alone.

And there you have it – hypotheses in a nutshell. 

If you have any questions, be sure to leave a comment below and we’ll do our best to help you. If you need hands-on help developing and testing your hypotheses, consider our private coaching service , where we hold your hand through the research journey.



Stephen Krashen’s Five Hypotheses of Second Language Acquisition


Interested in learning more about linguistics and linguists? Read on.

What is linguistics? Linguistics is the scientific study of language that involves the analysis of language rules, language meaning, and language context. In other words, linguistics is the study of how a language is formed and how it works.

A person who studies linguistics is called a linguist . A linguist doesn't necessarily have to learn different languages because they’re more interested in learning the structures of languages. Noam Chomsky and Dr. Stephen Krashen are two of the world’s most famous linguists.

Dr. Stephen D. Krashen has conducted research in second-language acquisition, bilingual education, and reading. He believes that language acquisition requires "meaningful interaction with the target language."

Dr. Krashen also proposed five hypotheses of second language acquisition, which have been very influential in the field of second language research and teaching.

Let’s take a look at these hypotheses. Who knows, maybe you’ve applied one or all of them in your language learning journey!

1. Acquisition-Learning Hypothesis

The Acquisition-Learning Hypothesis states that there is a distinction between language acquisition and language learning. In language acquisition, the student acquires language unconsciously . This is similar to when a child picks up their first language. On the other hand, language learning happens when the student is consciously discovering and learning the rules and grammatical structures of the language.

2. Monitor Hypothesis

The Monitor Hypothesis states that the learner's consciously learned knowledge of the grammar rules and functions of a language serves to monitor and correct their output; the focus is on the correctness of the language rather than its meaning. To use the monitor properly, three conditions must be met:

  • The acquirer must know the rules of the language.
  • The acquirer must concentrate on the exact form of the language.
  • The acquirer must set aside some time to review and apply the language rules in a conversation. This last condition is tricky, because in ordinary conversation there is rarely enough time to monitor for correctness.

3. Natural Order Hypothesis

The Natural Order Hypothesis is based on the finding that language learners acquire grammatical structures in a fixed and universal order. There is a sense of predictability to this kind of learning, which is similar to how a speaker learns their first language.

4. Input Hypothesis

The Input Hypothesis places more emphasis on the acquisition of the second language. This theory is more concerned with how the language is acquired rather than learned.

Moreover, the Input Hypothesis states that the learner naturally develops language as soon as the student receives interesting and fun information .

5. Affective Filter Hypothesis

According to the Affective Filter Hypothesis, language acquisition can be affected by emotional factors. If the affective filter is high, the student is less likely to acquire the language. The learning environment must therefore be positive and stress-free so that the student is open to input.


Language acquisition is a subconscious process. Usually, language acquirers are aware that they’re using the language for communication but are unaware that they are acquiring the language.

Language acquirers also are unaware of the rules of the language they are acquiring. Instead, language acquirers feel a sense of correctness, when the sentence sounds and feels right. Strange right? But it is also quite fascinating.

Acquiring a language can be a tedious process. It can feel more like a chore, a daily game of "should I study today, or should I just do something else?"

But Dr. Krashen’s language acquisition theories might be onto something, don’t you think? Learning a language should be fun and in some way it should happen naturally. Try to engage in meaningful interactions like reading exciting stories and relevant news articles, even talking with friends and family in a different language. Indulge in interesting and easy to understand language activities, and by then you might already have slowly started acquiring your target language!


Sapir–Whorf hypothesis (Linguistic Relativity Hypothesis)

Mia Belle Frothingham


There are about seven thousand languages heard around the world – they all have different sounds, vocabularies, and structures. As you know, language plays a significant role in our lives.

But one intriguing question is – can it actually affect how we think?


It is widely assumed that the reality a person perceives is expressed directly in their spoken words: that perception and expression are essentially the same thing, and that speech simply reports thought. On this view, what one says depends on how the world is encoded and decoded in the mind.

However, many believe the opposite: that what one perceives depends on the spoken word. In other words, thought depends on language, not the other way around.

What Is The Sapir-Whorf Hypothesis?

Twentieth-century linguists Edward Sapir and Benjamin Lee Whorf are known for this very principle and its popularization. Their joint theory, known as the Sapir-Whorf Hypothesis or, more commonly, the Theory of Linguistic Relativity, holds great significance in all scopes of communication theories.

The Sapir-Whorf hypothesis states that the grammatical and verbal structure of a person’s language influences how they perceive the world. It emphasizes that language either determines or influences one’s thoughts.

The Sapir-Whorf hypothesis states that people experience the world based on the structure of their language, and that linguistic categories shape and limit cognitive processes. It proposes that differences in language affect thought, perception, and behavior, so speakers of different languages think and act differently.

For example, different words mean various things in other languages. Not every word in all languages has an exact one-to-one translation in a foreign language.

Because of these small but crucial differences, using the wrong word within a particular language can have significant consequences.

The Sapir-Whorf hypothesis is sometimes called “linguistic relativity” or the “principle of linguistic relativity.” So while they have slightly different names, they refer to the same basic proposal about the relationship between language and thought.

How Language Influences Culture

Culture is defined by the values, norms, and beliefs of a society. Our culture can be considered a lens through which we undergo the world and develop a shared meaning of what occurs around us.

The language that we create and use is in response to the cultural and societal needs that arose. In other words, there is an apparent relationship between how we talk and how we perceive the world.

One crucial question that many intellectuals have asked is how our society’s language influences its culture.

Linguist and anthropologist Edward Sapir and his then-student Benjamin Whorf were interested in answering this question.

Together, they created the Sapir-Whorf hypothesis, which states that our thought processes predominantly determine how we look at the world.

Our language restricts our thought processes – our language shapes our reality. Simply, the language that we use shapes the way we think and how we see the world.

Since the Sapir-Whorf hypothesis theorizes that our language use shapes our perspective of the world, people who speak different languages have different views of the world.

In the 1920s, Benjamin Whorf was a Yale University graduate student studying with linguist Edward Sapir, who was considered the father of American linguistic anthropology.

Sapir was responsible for documenting and recording the cultures and languages of many Native American tribes disappearing at an alarming rate. He and his predecessors were well aware of the close relationship between language and culture.

Anthropologists like Sapir need to learn the language of the culture they are studying to understand the worldview of its speakers truly. Whorf believed that the opposite is also true, that language affects culture by influencing how its speakers think.

His hypothesis proposed that the words and structures of a language influence how its speaker behaves and feels about the world and, ultimately, the culture itself.

Simply put, Whorf believed that you see the world differently from another person who speaks another language due to the specific language you speak.

Human beings do not live in the matter-of-fact world alone, nor alone in the world of social action as traditionally understood, but are very much at the mercy of the particular language which has become the medium of communication and expression for their society.

To a large extent, the real world is unconsciously built on the language habits of the group. We hear and see and otherwise experience largely as we do because the language habits of our community predispose certain choices of interpretation.

Studies & Examples

The lexicon, or vocabulary, is the inventory of the articles a culture speaks about and has classified to understand the world around them and deal with it effectively.

For example, our modern life is dictated for many by the need to travel by some vehicle – cars, buses, trucks, SUVs, trains, etc. We, therefore, have thousands of words to talk about and mention, including types of models, vehicles, parts, or brands.

The most influential aspects of each culture are similarly reflected in the dictionary of its language. Among the societies living on the islands in the Pacific, fish have significant economic and cultural importance.

Therefore, this is reflected in the rich vocabulary that describes all aspects of the fish and the environments that islanders depend on for survival.

For example, there are over 1,000 fish species in Palau, and Palauan fishers knew, even long before biologists existed, details about the anatomy, behavior, growth patterns, and habitat of most of them – far more than modern biologists know today.

Whorf’s studies at Yale involved working with many Native American languages, including Hopi. He discovered that the Hopi language is quite different from English in many ways, especially regarding time.

Western cultures and languages view times as a flowing river that carries us continuously through the present, away from the past, and to the future.

Our grammar and system of verbs reflect this concept with particular tenses for past, present, and future.

We perceive this concept of time as universal in that all humans see it in the same way.

Although a speaker of Hopi has very different ideas, their language’s structure both reflects and shapes the way they think about time. Seemingly, the Hopi language has no present, past, or future tense; instead, they divide the world into manifested and unmanifest domains.

The manifested domain consists of the physical universe, including the present, the immediate past, and the future; the unmanifest domain consists of the remote past and the future and the world of dreams, thoughts, desires, and life forces.

Also, there are no words for minutes, hours, or days of the week. Native Hopi speakers often had great difficulty adapting to life in the English-speaking world when it came to being on time for their jobs or other affairs.

It is due to the simple fact that this was not how they had been conditioned to behave concerning time in their Hopi world, which followed the phases of the moon and the movements of the sun.

Today, it is widely believed that some aspects of perception are affected by language.

One big problem with the original Sapir-Whorf hypothesis derives from the idea that if a person’s language has no word for a specific concept, then that person would not understand that concept.

Honestly, the idea that a mother tongue can restrict one's understanding has been largely rejected. For example, German has a term, Schadenfreude, that means taking pleasure in another person's unhappiness.

While there is no translatable equivalent in English, it just would not be accurate to say that English speakers have never experienced or would not be able to comprehend this emotion.

Just because there is no word for this in the English language does not mean English speakers are less equipped to feel or experience the meaning of the word.

Not to mention a “chicken and egg” problem with the theory.

Of course, languages are human creations, very much tools we invented and honed to suit our needs. Merely showing that speakers of diverse languages think differently does not tell us whether it is the language that shapes belief or the other way around.

Supporting Evidence

On the other hand, there is hard evidence that the language-associated habits we acquire play a role in how we view the world. And indeed, this is especially true for languages that attach genders to inanimate objects.

There was a study done that looked at how German and Spanish speakers view different things based on their given gender association in each respective language.

The results demonstrated that in describing things that are referred to as masculine in Spanish, speakers of the language marked them as having more male characteristics like "strong" and "long." Similarly, these same items, which use feminine phrasings in German, were described by German speakers in more feminine terms, like "beautiful" and "elegant."

The findings imply that speakers of each language have developed preconceived notions of something being feminine or masculine, not due to the objects' characteristics or appearances but because of how they are categorized in their native language.

It is important to remember that the Theory of Linguistic Relativity (the Sapir-Whorf Hypothesis) is also open-ended: it is offered as a window onto cognitive processes, not as an absolute. It invites us to look at a phenomenon differently than we usually would. Furthermore, the Sapir-Whorf Hypothesis is simple and logically sound. Understandably, one's environment and culture will affect how one decodes the world.

Likewise, in studies done by the authors of the theory, many Native American tribes do not have a word for particular things because those things do not exist in their lives. The logical simplicity of this idea of relativism gives the theory parsimony.

Truly, the Sapir-Whorf Hypothesis makes sense. It can be utilized in describing great numerous misunderstandings in everyday life. When a Pennsylvanian says “yuns,” it does not make any sense to a Californian, but when examined, it is just another word for “you all.”

The Linguistic Relativity Theory addresses this and suggests that it is all relative. This concept of relativity passes outside dialect boundaries and delves into the world of language – from different countries and, consequently, from mind to mind.

Does language arise from thought, or does thought arise from language? The Sapir-Whorf Hypothesis very transparently presents a view of reality being expressed in language and thus forming in thought.

The principles it lays out offer a reasonable and even simple picture of how one perceives the world, but the question remains open: thought then language, or language then thought?

Modern Relevance

Regardless of its age, the Sapir-Whorf hypothesis, or the Linguistic Relativity Theory, has continued to force itself into linguistic conversations, even including pop culture.

The idea was recently revisited in the movie "Arrival," a science fiction film that engagingly explores the ways in which an alien language can affect and alter human thinking.

And even if some of the most drastic claims of the theory have been debunked or argued against, the idea has continued its relevance, and that does say something about its importance.

Hypotheses, thoughts, and intellectual musings do not need to be totally accurate to remain in the public eye as long as they make us think and question the world – and the Sapir-Whorf Hypothesis does precisely that.

The theory does not only make us question linguistic theory and our own language but also our very existence and how our perceptions might shape what exists in this world.

There are generalities that we can expect every person to encounter in their day-to-day life – in relationships, love, work, sadness, and so on. But thinking about the more granular disparities experienced by those in diverse circumstances, linguistic or otherwise, helps us realize that there is more to the story than ours.

And beautifully, at the same time, the Sapir-Whorf Hypothesis reiterates the fact that we are more alike than we are different, regardless of the language we speak.

Isn’t it just amazing that linguistic diversity just reveals to us how ingenious and flexible the human mind is – human minds have invented not one cognitive universe but, indeed, seven thousand!

Kay, P., & Kempton, W. (1984). What is the Sapir-Whorf hypothesis? American Anthropologist, 86(1), 65-79.

Whorf, B. L. (1952). Language, mind, and reality. ETC: A Review of General Semantics, 167-188.

Whorf, B. L. (1997). The relation of habitual thought and behavior to language. In Sociolinguistics (pp. 443-463). Palgrave, London.

Whorf, B. L. (2012). Language, thought, and reality: Selected writings of Benjamin Lee Whorf. MIT Press.




Hypothesis in Machine Learning


The concept of a hypothesis is fundamental in Machine Learning and data science endeavours. In the realm of machine learning, a hypothesis serves as an initial assumption made by data scientists and ML professionals when attempting to address a problem. Machine learning involves conducting experiments based on past experiences, and these hypotheses are crucial in formulating potential solutions.

It’s important to note that in machine learning discussions, the terms “hypothesis” and “model” are sometimes used interchangeably. However, a hypothesis represents an assumption, while a model is a mathematical representation employed to test that hypothesis. This section on “Hypothesis in Machine Learning” explores key aspects related to hypotheses in machine learning and their significance.

Table of Contents

  • How does a Hypothesis work?
  • Hypothesis Space and Representation in Machine Learning
  • Hypothesis in Statistics
  • FAQs on Hypothesis in Machine Learning

How does a Hypothesis work?

A hypothesis in machine learning is the model’s presumption regarding the connection between the input features and the result. It is an illustration of the mapping function that the algorithm is attempting to discover using the training set. To minimize the discrepancy between the expected and actual outputs, the learning process involves modifying the weights that parameterize the hypothesis. The objective is to optimize the model’s parameters to achieve the best predictive performance on new, unseen data, and a cost function is used to assess the hypothesis’ accuracy.

In most supervised machine learning algorithms, our main goal is to find a possible hypothesis from the hypothesis space that could map out the inputs to the proper outputs. The following figure shows the common method to find out the possible hypothesis from the Hypothesis space:

[Figure: the learning algorithm searches the hypothesis space H and selects a single hypothesis h that maps inputs to outputs.]

Hypothesis Space (H)

The hypothesis space is the set of all possible legal hypotheses. This is the set from which the machine learning algorithm determines the single best hypothesis that describes the target function or the outputs.

Hypothesis (h)

A hypothesis is a function that best describes the target in supervised machine learning. The hypothesis that an algorithm comes up with depends on the data and also on the restrictions and bias that we have imposed on the data.

For a simple one-dimensional case, the hypothesis can be written as

y = mx + b

where:

  • m = slope of the line
  • b = intercept

(A minimal code sketch of this linear hypothesis follows below.)
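As a minimal sketch (the data points and candidate slopes and intercepts are invented for illustration), the linear hypothesis and a mean-squared-error cost can be written as:

```python
import numpy as np

def hypothesis(x, m, b):
    """Linear hypothesis h(x) = m*x + b."""
    return m * x + b

def mse_cost(x, y, m, b):
    """Mean squared error between predictions and targets."""
    predictions = hypothesis(x, m, b)
    return np.mean((predictions - y) ** 2)

# Toy training data (invented for illustration).
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 4.0, 6.2, 7.9])  # roughly y = 2x

# Two candidate hypotheses from the hypothesis space of all lines.
print(mse_cost(x, y, m=2.0, b=0.0))  # low cost: a good hypothesis
print(mse_cost(x, y, m=0.5, b=1.0))  # high cost: a poor hypothesis
```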

To better understand the Hypothesis Space and Hypothesis consider the following coordinate that shows the distribution of some data:

[Figure: scatter plot of labeled training points on a coordinate plane.]

Suppose we have test data for which we have to determine the outputs or results. [Figure: unlabeled test points plotted on the same coordinate plane.]

We can predict the outcomes by dividing the coordinate plane with a decision boundary. [Figure: one possible boundary separating the two classes.]

The test data would then yield the corresponding predicted labels. [Figure: the test points labeled according to that boundary.]

But note that we could have divided the coordinate plane differently. [Figure: an alternative, equally legal decision boundary.]

The way in which the coordinate plane is divided depends on the data, the algorithm, and the constraints.

  • All these legal possible ways in which we can divide the coordinate plane to predict the outcome of the test data composes of the Hypothesis Space.
  • Each individual possible way is known as the hypothesis.

Hence, in this example the hypothesis space would be like:

[Figure: several candidate decision boundaries, each one a different hypothesis from the hypothesis space.]

The hypothesis space comprises all possible legal hypotheses that a machine learning algorithm can consider. Hypotheses are formulated based on various algorithms and techniques, including linear regression, decision trees, and neural networks. These hypotheses capture the mapping function transforming input data into predictions.
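The sketch below mirrors the coordinate-plane example: it fits two different kinds of classifiers (scikit-learn estimators, assumed available; the 2-D data is synthetic) to the same points. Each fitted model is one hypothesis, each divides the plane in its own way, and so they will not always agree on a new test point:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic 2-D points: label 1 if the point lies above the line x2 = x1.
    rng = np.random.default_rng(2)
    X = rng.uniform(-1, 1, size=(200, 2))
    y = (X[:, 1] > X[:, 0]).astype(int)

    # Two hypotheses drawn from two different hypothesis spaces:
    linear_h = LogisticRegression().fit(X, y)                # divides the plane with a straight line
    tree_h = DecisionTreeClassifier(max_depth=2).fit(X, y)   # divides it with axis-aligned splits

    test_point = np.array([[0.05, 0.0]])
    print("linear hypothesis predicts:", linear_h.predict(test_point)[0])
    print("tree hypothesis predicts:  ", tree_h.predict(test_point)[0])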

Hypothesis Formulation and Representation in Machine Learning

Hypotheses in machine learning are formulated based on various algorithms and techniques, each with its own representation. For example:

  • Linear Regression : [Tex] h(X) = \theta_0 + \theta_1 X_1 + \theta_2 X_2 + … + \theta_n X_n[/Tex]
  • Decision Trees : [Tex]h(X) = \text{Tree}(X)[/Tex]
  • Neural Networks : [Tex]h(X) = \text{NN}(X)[/Tex]

In the case of complex models like neural networks, the hypothesis may involve multiple layers of interconnected nodes, each performing a specific computation.
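As a rough sketch of those three kinds of representation (using scikit-learn estimators as stand-ins; the dataset and parameter choices here are arbitrary), the same regression data can be fit by a linear hypothesis, a tree hypothesis, and a small neural-network hypothesis:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(3)
    X = rng.uniform(-3, 3, size=(300, 1))
    y = np.sin(X).ravel() + rng.normal(0, 0.1, size=300)

    hypotheses = {
        "linear:  h(X) = theta0 + theta1*X": LinearRegression(),
        "tree:    h(X) = Tree(X)":           DecisionTreeRegressor(max_depth=4),
        "network: h(X) = NN(X)":             MLPRegressor(hidden_layer_sizes=(16, 16),
                                                          max_iter=2000, random_state=0),
    }

    # Each estimator searches a different hypothesis space for its best h.
    for name, model in hypotheses.items():
        model.fit(X, y)
        print(f"{name}  ->  training R^2 = {model.score(X, y):.3f}")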

Hypothesis Evaluation:

The process of machine learning involves not only formulating hypotheses but also evaluating their performance. This evaluation is typically done using a loss function or an evaluation metric that quantifies the disparity between predicted outputs and ground truth labels. Common evaluation metrics include mean squared error (MSE), accuracy, precision, recall, F1-score, and others. By comparing the predictions of the hypothesis with the actual outcomes on a validation or test dataset, one can assess the effectiveness of the model.
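The short sketch below (hypothetical labels and predictions, with scikit-learn's metrics module assumed) shows how a few of these metrics quantify the gap between what a hypothesis predicts and the ground truth:

    from sklearn.metrics import (accuracy_score, f1_score, mean_squared_error,
                                 precision_score, recall_score)

    # Classification: ground-truth labels vs. a hypothesis's predictions.
    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
    print("accuracy :", accuracy_score(y_true, y_pred))
    print("precision:", precision_score(y_true, y_pred))
    print("recall   :", recall_score(y_true, y_pred))
    print("F1-score :", f1_score(y_true, y_pred))

    # Regression: mean squared error between predicted and actual values.
    y_true_reg = [2.5, 0.0, 2.1, 7.8]
    y_pred_reg = [3.0, -0.1, 2.0, 7.1]
    print("MSE      :", mean_squared_error(y_true_reg, y_pred_reg))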

Hypothesis Testing and Generalization:

Once a hypothesis is formulated and evaluated, the next step is to test its generalization capabilities. Generalization refers to the ability of a model to make accurate predictions on unseen data. A hypothesis that performs well on the training dataset but fails to generalize to new instances is said to suffer from overfitting. Conversely, a hypothesis that generalizes well to unseen data is deemed robust and reliable.
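A quick way to see this in practice is to hold out part of the data and compare training error with error on the unseen portion. The sketch below (synthetic data, scikit-learn assumed) contrasts a deliberately over-flexible tree with a depth-limited one:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(4)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X).ravel() + rng.normal(0, 0.3, size=200)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    # An unconstrained tree can memorize the training set (overfitting) ...
    overfit_h = DecisionTreeRegressor(random_state=0).fit(X_train, y_train)
    # ... while a depth-limited tree usually generalizes better.
    simpler_h = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X_train, y_train)

    for name, h in [("unconstrained tree", overfit_h), ("depth-3 tree", simpler_h)]:
        print(f"{name}: train R^2 = {h.score(X_train, y_train):.2f}, "
              f"test R^2 = {h.score(X_test, y_test):.2f}")

A large gap between the training and test scores is the usual symptom of overfitting.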

The process of hypothesis formulation, evaluation, testing, and generalization is often iterative in nature. It involves refining the hypothesis based on insights gained from model performance, feature importance, and domain knowledge. Techniques such as hyperparameter tuning, feature engineering, and model selection play a crucial role in this iterative refinement process.
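One common form of that iterative refinement is a hyperparameter search. The sketch below (scikit-learn's GridSearchCV with an arbitrary small grid; the data is synthetic) re-evaluates a family of candidate hypotheses by cross-validation and keeps the settings that generalize best:

    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(5)
    X = rng.uniform(-3, 3, size=(300, 1))
    y = np.sin(X).ravel() + rng.normal(0, 0.2, size=300)

    # Each point on this grid defines a differently constrained hypothesis to try.
    param_grid = {"max_depth": [2, 3, 4, 5, 6], "min_samples_leaf": [1, 5, 10]}

    search = GridSearchCV(DecisionTreeRegressor(random_state=0), param_grid, cv=5)
    search.fit(X, y)

    print("best hyperparameters:", search.best_params_)
    print("best cross-validated R^2:", round(search.best_score_, 3))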

Hypothesis in Statistics

In statistics, a hypothesis is a statement or assumption about a population parameter. It is a proposition or educated guess that guides statistical analysis. There are two types of hypotheses, the null hypothesis (H0) and the alternative hypothesis (H1 or Ha), both illustrated in the short sketch that follows the list below.

  • Null Hypothesis (H0): This hypothesis suggests that there is no significant difference or effect, and that any observed results are due to chance. It often represents the status quo or a baseline assumption.
  • Alternative Hypothesis (H1 or Ha): This hypothesis contradicts the null hypothesis, proposing that there is a significant difference or effect in the population. It is what researchers aim to support with evidence.
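As a small illustration (a two-sample t-test with SciPy on made-up measurements), the p-value indicates whether to reject the null hypothesis that the two groups share the same mean:

    from scipy import stats

    # Hypothetical measurements from two groups (e.g., a control and a treatment).
    group_a = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2]
    group_b = [12.9, 13.1, 12.7, 13.0, 12.8, 13.2, 12.6]

    # H0: the two group means are equal; H1: the means differ.
    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

    if p_value < 0.05:
        print("Reject H0: the observed difference is unlikely to be due to chance alone.")
    else:
        print("Fail to reject H0: the data are consistent with no real difference.")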

FAQs on Hypothesis in Machine Learning

Q. How does the training process use the hypothesis?

The learning algorithm uses the hypothesis as a guide to minimise the discrepancy between expected and actual outputs by adjusting its parameters during training.

Q. How is the hypothesis’s accuracy assessed?

Usually, a cost function that calculates the difference between expected and actual values is used to assess accuracy. The aim is to optimise the model so as to reduce this cost.

Q. What is Hypothesis testing?

Hypothesis testing is a statistical method for deciding whether the observed data provide enough evidence to reject a hypothesis. The hypothesis can concern the relationship between two variables in a dataset, a difference between two groups, or the value of a population parameter.

Q. What distinguishes the null hypothesis from the alternative hypothesis in machine learning experiments?

The null hypothesis (H0) assumes no significant effect, while the alternative hypothesis (H1 or Ha) contradicts H0, suggesting a meaningful impact. Statistical testing is employed to decide between these hypotheses.


New York State Senator John C. Liu

Chairman of New York City Education Committee

(D) 16th Senate District

Legislators announce language defining squatter in state housing law included in FY25 state budget


FOR IMMEDIATE RELEASE: Monday, April 22, 2024
Contact: Soojin Choi | 347-556-6335 | [email protected]

Albany, NY –  State Senator John Liu, Assembly Member Ron Kim and Queens legislators today announced that the final FY2025 state budget includes language that defines squatter in state housing law. The language was derived from legislation introduced by Senator Liu and Assembly Member Ron Kim,  S8996 /  A9772 , following recent reports of squatters who take over private property.

The definition updates New York State real property law to read that “a tenant shall not include a squatter,” and further defines squatter as “a person who enters or intrudes upon real property without the permission of the person entitled to possession, and continues to occupy the property without title, right or permission of the owner or owner’s agent or a person entitled to possession.”

Including this definition in real property law will help distinguish legal renters from those who unlawfully intrude or take over property. It establishes that squatters are not tenants and therefore not subject to tenant rights or protections after 30 days.

State Senator John Liu stated, “It was important that we acted with urgency to send a strong message to squatters who take over private homes that they are not welcome in our community. Scam artists who intrude on others’ homes should not have rights as tenants in state housing law, and this inclusion in the budget codifies that in simple, straightforward language. Defining squatter is an important step forward, and we will continue examining even stronger measures to protect homeowners without inadvertently putting renters at risk.”

State Assembly Member Ron Kim stated, "Our state needed stronger protections for law-abiding property owners who are being victimized by squatters. Our new law defines these terms more precisely. Any occupant who unlawfully resides in a property owner's home will be more easily removed. I want to thank colleagues in the legislature for moving expeditiously on this new law."

State Assembly Member Ed Braunstein stated, “New York State homeowners have been put on edge by the recent reports of people returning to their properties and finding squatters, only to discover that the squatters cannot be removed without a lengthy and costly eviction proceeding. Trespassing individuals in these situations are abusing a law meant to protect lawful tenants and they absolutely should not be afforded the same rights and protections. I was proud to co-sponsor this bill, which sought to close this loophole and protect New York homeowners from these unlawful opportunists. I am pleased that this clarifying language was included in this year’s state budget.”

State Assembly Member Nily Rozic  stated, “ Today signifies an important step forward as we include language defining 'squatters' in the state budget. This measure will help safeguard property rights and ensure the well-being of our communities. Thank you to my colleagues for their collective efforts to address squatting issues across New York."

State Senator Toby Ann Stavisky  stated, “I want to thank my colleagues in the legislature for supporting this bill, which I cosponsored. A problem was brought to our attention, and we got positive results.”

State Senator Leroy Comrie  stated, "The state legislature took a hard stance against squatters who twist existing loopholes through acts that would, by any other circumstance, constitute theft. This change to the law is desperately needed, amongst our Queens residents, especially our seniors and homeowners, who have been living in fear and confusion as to how the law could possibly allow for such abuses. I am proud to stand with my colleagues to resolve this matter and hope for this bill’s swift passage.”

State Senator Roxanne J Persaud  stated,  “Squatters have jeopardized the livelihood of homeowners far too long. By passing this legislation in the budget, the rights and protections of legal renters will be clearly defined and those who take advantage will be legally held accountable. This law seeks to protect the hard earned assets of property owners and allows them to further secure their economic stability."

State Assembly Member Grace Lee  stated, “In this year’s budget, we are making it clear that if you are illegally occupying someone’s property, you are a squatter, not a tenant. Squatters do not have the same rights and protections as lawful tenants and by clarifying this distinction, we can better protect small landlords in our communities. I was proud to be a prime co-sponsor for this bill and to fight with my colleagues to get it passed in this year’s budget.”

                                                               ###


Fact Sheet on FTC’s Proposed Final Noncompete Rule


The following outline provides a high-level overview of the FTC’s proposed final rule:

  • Specifically, the final rule provides that it is an unfair method of competition—and therefore a violation of Section 5 of the FTC Act—for employers to enter into noncompetes with workers after the effective date.
  • Fewer than 1% of workers are estimated to be senior executives under the final rule.
  • Specifically, the final rule defines the term “senior executive” to refer to workers earning more than $151,164 annually who are in a “policy-making position.”
  • Reduced health care costs: $74-$194 billion in reduced spending on physician services over the next decade.
  • New business formation: 2.7% increase in the rate of new firm formation, resulting in over 8,500 additional new businesses created each year.
  • This reflects an estimated increase of about 3,000 to 5,000 new patents in the first year noncompetes are banned, rising to about 30,000-53,000 in the tenth year.
  • This represents an estimated increase of 11-19% annually over a ten-year period.
  • The average worker’s earnings will rise an estimated extra $524 per year. 



[Photo: a lawn surrounded on three sides by buildings, covered with dozens of tents and handmade protest signs in red and green]

Why we need to stop using ‘pro-Palestine’ and ‘pro-Israel’

The safety and security of Palestinians and Jews are interdependent, so we should use language carefully

In reporting on the encampments springing up on college campuses across the US, the media seem to have convened a terminology confab and agreed on two descriptions: “pro-Palestinian” and “anti-Israel”. These labels oversimplify Americans’ opinions on Israel’s onslaught against Gaza, which marked its 200th day on Tuesday with no end in sight. But the error is worse than semantic.

“Universities Struggle as Pro-Palestinian Demonstrations Grow,” says the New York Times . “Colleges Struggle to Contain Intensifying Pro-Palestinian Protests,” reports the Wall Street Journal . In Minneapolis, the Star Tribune has the local news that the “University of Minnesota police arrest nine after pro-Palestinian encampment set up on campus”. Some publications less shy about displaying their political biases take the opposite tack. A headline in the right-leaning New York Post , for instance, exaggerated the literally incendiary nature of the demonstrators’ tactics: “Anti-Israel protesters carry flares to march on NYPD HQ after over 130 arrested at NYU.” The accompanying video is cast in red. Ever evenhanded, CBS does both: “Pro-Palestinian, pro-Israel protesters gather outside Columbia University.”

Yes, for some, the phrase “from the river to the sea” signals a wish to exterminate the other side, whether that means Palestinians, Jews or the state of Israel. At demonstrations aflutter with Palestinian flags, chants may be heard calling for repeat performances of the atrocities of 7 October.

For most people, particularly Jews, in the movement to end the annihilation of Gaza , the feelings are complex, even when the moral stance is uncompromising and the demands straightforward: stop funding genocide, let Gaza live. There are ways to describe where people stand that more accurately represent these complexities.

First, support for Palestinian liberation is not synonymous with support for Hamas. “The contemporary left-wing slide into Hamas apologism is not only abhorrent, but not aligned with the goals of Palestinian liberation,” wrote Ahmed Fouad Alkhatib, a Gaza native and US citizen, in the Forward in March. “If contemporary activists truly grappled with the horror Hamas inflicted on October 7 and understood Hamas’s history of corruption and exploitation of the Gazan people, they would see that Hamas must be abandoned entirely for pro-Palestine activism to actually progress.”

While one term seems to refer to people and the other to the state, the terms pro-Palestinian and anti- (or pro-) Israel blur the distinction between governments and people. To be for Palestinian liberation is not necessarily to endorse Palestinian nationalism or a future Arab-supremacist nation. As the feminist legal scholar Aya Gruber noted on X : “During Vietnam there were ‘antiwar’ and ‘peace’ protesters, not ‘pro-Vietnam’ & ‘anti-US’ protesters.” She adds that the “irresponsible” media do not refer to elected officials who vote to fund the bombs that are killing tens of thousands of people and decimating homes, hospitals and schools as “anti-Palestinian”.

Nor does opposition to Israeli policy mean indifference to the Jewish residents of Israel. The journalist (and seriously observant Jew) Peter Beinart, formerly a prominent spokesperson for liberal Zionism, has since renounced his support for a Jewish ethno-state in the Middle East and now advocates for a single, secular, multinational state, with equal rights for all. While consistently foregrounding the cataclysm in Gaza, Beinart rarely fails to mention the hostages still being held by Hamas. Yet, as he recently told the Harvard Crimson, his condemnation of Israel does not “reflect a lack of concern for the welfare of Jews in Israel and Jews around the world, but are actually my best effort to take positions that I believe will lead to greater safety for us”. He frequently points to data showing that escalations of Israeli violence against Palestinians are correlated with increased antisemitic acts elsewhere in the world.

The left is increasingly anti-Zionist. At the “emergency seder in the streets” in Brooklyn on the second night of Passover, the Canadian socialist and climate justice activist Naomi Klein called Zionism “a false idol that takes our most profound biblical stories of justice and emancipation from slavery – the story of Passover itself – and turns them into brutalist weapons of colonial land theft, roadmaps for ethnic cleansing and genocide”.

But if you see Zionism as a movement of refuge, not of genocide, you can be Zionist and oppose the violence perpetrated by Israeli authorities against Palestinian civilians. The Jewish anti-occupation and antiwar organization IfNotNow comprises “Zionists, anti-Zionists, non-Zionists, post-Zionists, and many people who don’t know what they’d call themselves”, wrote Alex Langer, a New York member of the group, in Haaretz , in 2018. “The Zionists within IfNotNow have shown that not everyone who believes in a Jewish nation-state in Israel seeks a system of endless bloodshed and oppression, that there are Zionists who are willing to put their voices and sometimes bodies on the line for freedom and dignity for all.”

Of course, the most noxious – and incorrect – characterization of a political stance toward Israel-Palestine is the conflation of “anti-Israel” with antisemitism. The useful cynicism of that maneuver is currently on view at the hearings run by Elise Stefanik, the New York Republican representative, whose goal is not to eradicate antisemitism but rather to undermine academic freedom and the credibility of intellectuals generally. “Two groups conflate Zionism and Judaism,” said Yaakov Shapiro, the anti-Zionist Orthodox rabbi. “Zionists, who want to legitimize Zionism by pretending it is Judaism; and antisemites, who want to de-legitimize Judaism by pretending it is Zionism.”

Most people in the movement to end Israeli apartheid have come to understand that whatever the solution – one state or two – the safety and security of Jews and Palestinians are interdependent. That makes the misrepresentation of the spectrum of beliefs more than an insult to language. The terms “pro-Palestinian” and “pro-Israel” – and their implicit mutual exclusion – reproduce and perpetuate the nationalist antagonisms that fuel the forever war between Jews and Palestinians.

If ceasing to use them will not magically produce a solution, it would help create the atmosphere necessary to imagine a peaceful future for Palestinians and Jews in the Middle East and the diaspora. In fact, the college encampments are rehearsals of that future. On MSNBC , Isra Hirsi, daughter of the progressive Democratic US representative Ilhan Omar, told an interviewer that far from being a threat to public safety, the Columbia University encampment was a “beautiful” embodiment “of solidarity”. Before the police broke up the camp and arrested students including Hirsi, participants of all faiths and none sang, prayed and celebrated Shabbat together. On the first night of Passover at Yale and the University of Michigan , students held seders amid the tents. The outdoor rituals demonstrated that a person can be openly, fearlessly Jewish on these campuses.

At the seder in the streets, a friend and I looked around at the many attendants in keffiyehs and noted that the black-and-white Palestinian scarf could be easily interchanged with a tallit, or Jewish prayer shawl. A few minutes later, a Palestinian-American speaker just returned from the West Bank called for liberation for everyone “between every river and every sea”.

Judith Levine is a Brooklyn journalist and essayist, a contributing writer to the Intercept and the author of five books

  • Palestinian territories
  • US universities

