oral communication

Quick reference

Human interaction through the use of speech, or spoken messages. In common usage loosely referred to as verbal communication, particularly face-to-face interaction, but more strictly including mediated use of the spoken word (e.g. a telephone conversation), where, in addition to spoken words, there are still also vocal cues.

From: oral communication, in A Dictionary of Media and Communication

Subjects: Media studies


ENCYCLOPEDIC ENTRY

Oral Language

Composed of syntax, pragmatics, morphology, and phonology, oral language is how we verbally communicate with one another.

Social Studies, Anthropology, World History, English Language Arts

While all languages are built on the concepts of syntax, pragmatics, morphology, and phonology, they are all based on words. (Photograph by gradyreese)

Oral communication is more than just speech. It involves expressing ideas, feelings, information, and other things that employ the voice, like poetry or music. Because so much of human life is dominated by speech and verbal communication, it would be difficult to fully express oneself without an oral language. Language involves words, their pronunciations, and the various ways of combining them to communicate. The building blocks of an oral language are the words people speak.

Children begin learning to speak extremely early in life. They begin by babbling, an attempt to mimic the speech they hear from older people. As they get older, they develop more language skills and start forming sentences. They continue building their vocabularies throughout their lives.

Vocabulary is just one of the components of oral language. Other components include syntax, pragmatics, morphology, and phonology. Syntax refers to how words are arranged into sentences. How people use oral language to communicate is known as pragmatics. Morphology refers to how words are structured and formed in different languages. The study of the sound of speech is called phonology.

The history of oral language as a whole is difficult to trace to its beginning; however, there is a wealth of information on the histories of specific languages. The group of languages known as Indo-European languages, which account for almost half of the languages spoken throughout the world today, likely originated in Europe and Asia. Indo-European languages are thought to stem from a single language, which nomads spoke thousands of years ago. Recent evidence has shown that the origin of oral language may go back even further. The discovery of a Neanderthal hyoid bone in 1989, as well as the FOXP2 gene—thought to be essential for spoken language—in Neanderthal DNA, is evidence that Neanderthals may have communicated with speech sounds, possibly even language.

Although many animal species make sounds—ones that may even sound like speech—to communicate, oral language is unique to humans, as far as we know. It involves using a finite set of words and rules in an infinite number of comprehensible combinations. Today, the people of the world speak over 7,000 different languages. Through oral language, people learn to understand the meanings of words, to read, and, of course, to express themselves. As the world changes, oral language changes along with it to reflect the needs, ideas, and evolution of the human race.
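The point about a finite set of words and rules producing an enormous range of comprehensible combinations can be sketched computationally. The toy lexicon below is purely illustrative (the words and the single subject-verb-object rule are my own stand-ins, not drawn from any linguistic dataset):

```python
import itertools

# Illustrative toy lexicon: 9 words and one combination rule
# (subject + verb + object) already yield 27 distinct sentences;
# adding one item to each category would yield 4 * 4 * 4 = 64.
subjects = ["the child", "the teacher", "the speaker"]
verbs = ["hears", "repeats", "understands"]
objects_ = ["the word", "the story", "the question"]

sentences = [f"{s} {v} {o}"
             for s, v, o in itertools.product(subjects, verbs, objects_)]

print(len(sentences))   # 27
print(sentences[0])     # the child hears the word
```

Real grammars also allow recursion (clauses embedded inside clauses), which is what makes the set of possible sentences unbounded rather than merely large.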


Last updated: October 19, 2023


How to prepare and deliver an effective oral presentation

  • Lucia Hartigan, registrar1
  • Fionnuala Mone, fellow in maternal fetal medicine1
  • Mary Higgins, consultant obstetrician2
  • 1National Maternity Hospital, Dublin, Ireland
  • 2National Maternity Hospital, Dublin; Obstetrics and Gynaecology, Medicine and Medical Sciences, University College Dublin
  • luciahartigan{at}hotmail.com

The success of an oral presentation lies in the speaker’s ability to transmit information to the audience. Lucia Hartigan and colleagues describe what they have learnt about delivering an effective scientific oral presentation from their own experiences, and their mistakes

The objective of an oral presentation is to convey large amounts of often complex information in a clear, bite-sized fashion. Although some of the success lies in the content, the rest lies in the speaker’s skills in transmitting the information to the audience.1

Preparation

It is important to be as well prepared as possible. Look at the venue in person, and find out the time allowed for your presentation and for questions, and the size of the audience and their backgrounds, which will allow the presentation to be pitched at the appropriate level.

See what the ambience and temperature are like and check that the format of your presentation is compatible with the available computer. This is particularly important when embedding videos. Before you begin, look at the video on stand-by and make sure the lights are dimmed and the speakers are functioning.

For visual aids, Microsoft PowerPoint or Apple Mac Keynote programmes are usual, although Prezi is increasing in popularity. Save the presentation on a USB stick, with email or cloud storage backup to avoid last minute disasters.

When preparing the presentation, start with an opening slide containing the title of the study, your name, and the date. Begin by addressing and thanking the audience and the organisation that has invited you to speak. Typically, the format includes background, study aims, methodology, results, strengths and weaknesses of the study, and conclusions.

If the study takes a lecturing format, consider including “any questions?” on a slide before you conclude, which will allow the audience to remember the take-home messages. Ideally, the audience should remember three of the main points from the presentation.2

Have a maximum of four short points per slide. If you can display something as a diagram, video, or a graph, use this instead of text and talk around it.

Animation is available in both Microsoft PowerPoint and the Apple Mac Keynote programme, and its use in presentations has been demonstrated to assist in the retention and recall of facts.3 Do not overuse it, though, as it could make you appear unprofessional. If you show a video or diagram don’t just sit back—use a laser pointer to explain what is happening.

Rehearse your presentation in front of at least one person. Request feedback and amend accordingly. If possible, practise in the venue itself so things will not be unfamiliar on the day. If you appear comfortable, the audience will feel comfortable. Ask colleagues and seniors what questions they would ask and prepare responses to these questions.

It is important to dress appropriately, stand up straight, and project your voice towards the back of the room. Practise using a microphone, or any other presentation aids, in advance. If you don’t have your own presenting style, think of the style of inspirational scientific speakers you have seen and imitate it.

Try to present slides at the rate of around one slide a minute. If you talk too much, you will lose your audience’s attention. The slides or videos should be an adjunct to your presentation, so do not hide behind them, and be proud of the work you are presenting. You should avoid reading the wording on the slides, but instead talk around the content on them.

Maintain eye contact with the audience and remember to smile and pause after each comment, giving your nerves time to settle. Speak slowly and concisely, highlighting key points.

Do not assume that the audience is completely familiar with the topic you are passionate about, but don’t patronise them either. Use every presentation as an opportunity to teach, even your seniors. The information you are presenting may be new to them, but it is always important to know your audience’s background. You can then ensure you do not patronise world experts.

To maintain the audience’s attention, vary the tone and inflection of your voice. If appropriate, use humour, though you should run any comments or jokes past others beforehand and make sure they are culturally appropriate. Check every now and again that the audience is following and offer them the opportunity to ask questions.

Finishing up is the most important part, as this is when you deliver your take-home message to the audience. Slow down, even though time is important at this stage. Conclude with the three key points from the study and leave the slide up for a further few seconds. Do not ramble on. Give the audience a chance to digest the presentation. Conclude by acknowledging those who assisted you in the study, and thank the audience and organisation. If you are presenting in North America, it is usual practice to conclude with an image of the team. If you wish to show references, insert a text box on the appropriate slide with the primary author, year, and paper, although this is not always required.

Answering questions can often feel like the most daunting part, but don’t look upon this as negative. Assume that the audience has listened and is interested in your research. Listen carefully, and if you are unsure about what someone is saying, ask for the question to be rephrased. Thank the audience member for asking the question and keep responses brief and concise. If you are unsure of the answer you can say that the questioner has raised an interesting point that you will have to investigate further. Have someone in the audience who will write down the questions for you, and remember that this is effectively free peer review.

Be proud of your achievements and try to do justice to the work that you and the rest of your group have done. You deserve to be up on that stage, so show off what you have achieved.

Competing interests: We have read and understood the BMJ Group policy on declaration of interests and declare the following interests: None.

  • 1. Rovira A, Auger C, Naidich TP. How to prepare an oral presentation and a conference. Radiologia 2013;55(suppl 1):2-7S.
  • 2. Bourne PE. Ten simple rules for making good oral presentations. PLoS Comput Biol 2007;3:e77.
  • 3. Naqvi SH, Mobasher F, Afzal MA, Umair M, Kohli AN, Bukhari MH. Effectiveness of teaching methods in a medical institute: perceptions of medical students to teaching aids. J Pak Med Assoc 2013;63:859-64.


Orality: Definition and Examples


Orality is the use of speech rather than writing as a means of communication, especially in communities where the tools of literacy are unfamiliar to the majority of the population.

Modern interdisciplinary studies in the history and nature of orality were initiated by theorists in the "Toronto school," among them Harold Innis, Marshall McLuhan, Eric Havelock, and Walter J. Ong.  

In Orality and Literacy (Methuen, 1982), Walter J. Ong identified some of the distinctive ways in which people in a "primary oral culture" [see the definition below] think and express themselves through narrative discourse:

  • Expression is coordinate and polysyndetic (" . . . and . . . and . . . and . . .") rather than subordinate and hypotactic.
  • Expression is aggregative (that is, speakers rely on epithets and on parallel and antithetical phrases) rather than analytic .
  • Expression tends to be redundant and copious.
  • Out of necessity, thought is conceptualized and then expressed with relatively close reference to the human world; that is, with a preference for the concrete rather than the abstract.
  • Expression is agonistically toned (that is, competitive rather than cooperative).
  • Finally, in predominantly oral cultures, proverbs (also known as maxims ) are convenient vehicles for conveying simple beliefs and cultural attitudes.

From the Latin oralis, "mouth"

Examples and Observations

  • James A. Maxey: What is the relationship of orality to literacy? Though disputed, all sides agree that orality is the predominant mode of communication in the world and that literacy is a relatively recent technological development in human history.
  • Pieter J.J. Botha: Orality as a condition exists by virtue of communication that is not dependent on modern media processes and techniques. It is negatively formed by the lack of technology and positively created by specific forms of education and cultural activities. . . . Orality refers to the experience of words (and speech) in the habitat of sound.

Ong on Primary Orality and Secondary Orality

  • Walter J. Ong: I style the orality of a culture totally untouched by any knowledge of writing or print, 'primary orality.' It is 'primary' by contrast with the 'secondary orality' of present-day high-technology culture, in which a new orality is sustained by telephone, radio, television, and other electronic devices that depend for their existence and functioning on writing and print. Today primary oral culture in the strict sense hardly exists, since every culture knows of writing and has some experience of its effects. Still, to varying degrees many cultures and subcultures, even in a high-technology ambiance, preserve much of the mind-set of primary orality.

Ong on Oral Cultures

  • Walter J. Ong: Oral cultures indeed produce powerful and beautiful verbal performances of high artistic and human worth, which are no longer even possible once writing has taken possession of the psyche. Nevertheless, without writing, human consciousness cannot achieve its fuller potentials, cannot produce other beautiful and powerful creations. In this sense, orality needs to produce and is destined to produce writing. Literacy . . . is absolutely necessary for the development not only of science but also of history, philosophy, explicative understanding of literature and of any art, and indeed for the explanation of language (including oral speech) itself. There is hardly an oral culture or a predominantly oral culture left in the world today that is not somehow aware of the vast complex of powers forever inaccessible without literacy. This awareness is agony for persons rooted in primary orality, who want literacy passionately but who also know very well that moving into the exciting world of literacy means leaving behind much that is exciting and deeply loved in the earlier oral world. We have to die to continue living.

Orality and Writing

  • Rosalind Thomas: Writing is not necessarily the mirror-image and destroyer of orality, but reacts or interacts with oral communication in a variety of ways. Sometimes the line between written and oral even in a single activity cannot actually be drawn very clearly, as in the characteristic Athenian contract which involved witnesses and an often rather slight written document, or the relation between the performance of a play and the written and published text.

Clarifications

  • Joyce Irene Middleton: Many misreadings, misinterpretations, and misconceptions about orality theory are due, in part, to [Walter J.] Ong's rather slippery use of seemingly interchangeable terms that very diverse audiences of readers interpret in various ways. For example, orality is not the opposite of literacy, and yet many debates about orality are rooted in oppositional values. . . . In addition, orality was not 'replaced' by literacy: Orality is permanent--we have always and will continue to always use human speech arts in our various forms of communication, even as we now witness changes in our personal and professional uses of alphabetic forms of literacy in a number of ways.

Pronunciation: o-RAH-li-tee


Definition of speech

  • declamation


Word History

Middle English speche, from Old English sprǣc, spǣc; akin to Old English sprecan to speak — more at speak

before the 12th century, in the meaning defined at sense 1a

Phrases Containing speech

  • acceptance speech
  • figure of speech
  • freedom of speech
  • free speech
  • hate speech
  • part of speech
  • polite speech

  • speech community

  • speech form
  • speech impediment
  • speech therapy
  • stump speech
  • visible speech


Cite this entry.

“Speech.” Merriam-Webster.com Dictionary, Merriam-Webster, https://www.merriam-webster.com/dictionary/speech. Accessed 6 Jun. 2024.


[ awr-uhl ]

  • oral testimony
  • oral methods of language teaching; oral traditions
  • the oral cavity
  • an oral dose of medicine
  • Phonetics. articulated with none of the voice issuing through the nose, as the normal English vowels and the consonants b and v.
  • of or relating to the earliest phase of infantile psychosexual development, lasting from birth to one year of age or longer, during which pleasure is obtained from eating, sucking, and biting.
  • oral anxiety
  • of or relating to gratification by stimulation of the lips or membranes of the mouth, as in sucking, eating, or talking
  • Zoology. pertaining to that surface of polyps and marine animals that contains the mouth and tentacles.
  • an oral examination in a school, college, or university, given especially to a candidate for an advanced degree.

/ ˈɒrəl; ˈɔːrəl /

  • an oral agreement
  • an oral thermometer
  • of or relating to the surface of an animal, such as a jellyfish, on which the mouth is situated
  • an oral contraceptive
  • of, relating to, or using spoken words
  • phonetics pronounced with the soft palate in a raised position completely closing the nasal cavity and allowing air to pass out only through the mouth
  • relating to a stage of psychosexual development during which the child's interest is concentrated on the mouth
  • denoting personality traits, such as dependence, selfishness, and aggression, resulting from fixation at the oral stage. Compare anal, genital, phallic.
  • an examination in which the questions and answers are spoken rather than written

/ ôr′əl /

  • Relating to or involving the mouth.

Derived Forms

  • ˈorally, adverb

Other Words From

  • o·ral·i·ty noun
  • o·ral·ly adverb
  • non·o·ral adjective
  • post·o·ral adjective
  • sub·o·ral adjective
  • un·o·ral adjective

Word History and Origins

Origin of oral 1

Example Sentences

The Supreme Court’s first oral arguments in its new term will be held by remote teleconference because of the continued threat posed by the coronavirus pandemic, the justices announced Wednesday.

Sabin’s so-called attenuated strains became the famous oral polio vaccine given on a sugar cube to billions of children.

There’s little doubt that in this process, the virus also spreads to the body’s oral cavity.

In a typical LAMP assay, a patient’s nasal or oral swab sample is mixed with enzymes and specially designed DNA fragments, then heated to 65° Celsius to copy the viral RNA to DNA and produce many more DNA copies.

It’s transmitted through the fecal-oral route, meaning that humans consume food that has contaminated feces on it.

My doctor put me on oral contraceptives to induce a period, figuring it would help build bone.

The second major split between the capital and the court occurred over oral care.

“He was attempting to force me into oral sex,” Ruehli told Philadelphia Magazine.

Also, when Nelson died and Hugh Morrow did his own oral history project and talked to about 75 Rockefeller associates.

The papers report that J.W. was too afraid to resist his command for her to perform oral sex on him.

These oral inanities only served to make Lyn give me the benefit of a look of amused wonder.

But none of the orders given were more than oral, for the governor did not want them set on the records.

Oral evidence may be admitted to establish the location of monuments, and even hearsay evidence may be used for the purpose.

Oral evidence is admissible to prove the fraud or mistake; it must, however, be clear before a court will grant relief.

The improvement of land by the purchaser under an oral contract is an act which enables him to enforce the contract in equity.


What Is Oral Language?


In today’s linguistically diverse elementary classrooms, research suggests that a universal approach to building academic vocabulary and conceptual knowledge holds huge promise for closing the opportunity gaps among English learners. 

Today's blog is adapted from Cultivating Knowledge, Building Language , wherein Nonie Lesaux and Julie Harris present a knowledge-based approach to literacy instruction that supports young English learners’ (ELs) development of academic content and vocabulary knowledge and sets them up for reading success.

Download a sample chapter of Cultivating Knowledge, Building Language


Oral language is the system through which we use spoken words to express knowledge, ideas, and feelings. Developing ELs’ oral language, then, means developing the skills and knowledge that go into listening and speaking—all of which have a strong relationship to reading comprehension and to writing. Oral language is made up of at least five key components (Moats 2010): phonological skills, pragmatics, syntax, morphological skills, and vocabulary (also referred to as semantics). All of these components of oral language are necessary to communicate and learn through conversation and spoken interaction, but there are important distinctions among them that have implications for literacy instruction.

The Components of Oral Language

A student’s phonological skills are those that give her an awareness of the sounds of language, such as the sounds of syllables and rhymes (Armbruster, Lehr, and Osborne 2001). In addition to being important for oral language development, these skills play a foundational role in supporting word-reading development. In the early stages of learning how to read words, children are often encouraged to sound out the words. But before even being able to match the sounds to the letters, students need to be able to hear and understand the discrete sounds that make up language. Phonological skills typically do not present lasting sources of difficulty for ELs; we know that under appropriate instructional circumstances, on average, ELs and their monolingual English-speaking peers develop phonological skills at similar levels, and in both groups, these skills are mastered by the early elementary grades.

Students’ skills in the domains of syntax, morphology, and pragmatics are central for putting together and taking apart the meaning of sentences and paragraphs, and for oral and written dialogue.

Syntax refers to an understanding of word order and grammatical rules (Cain 2007; Nation and Snowling 2000). For example, consider the following two sentences:

Sentence #1: Relationships are preserved only with care and attention.
Sentence #2: Only with care and attention are relationships preserved.

In these cases, although the word orders are different, the sentences communicate the same message. In other cases, a slight change in word order alters a sentence’s meaning. For example:

Sentence #1: The swimmer passed the canoe.
Sentence #2: The canoe passed the swimmer.

Morphology, discussed in more detail in Chapter 7, refers to the smallest meaningful parts from which words are created, including roots, suffixes, and prefixes (Carlisle 2000; Deacon and Kirby 2004). When a reader stumbles upon an unfamiliar word (e.g., unpredictable), an awareness of how a particular prefix or suffix (e.g., un- and -able) might change the meaning of a word or how two words with the same root may relate in meaning to each other (e.g., predict, predictable, unpredictable) supports her ability to infer the unfamiliar word’s meaning. In fact, for both ELs and monolingual English speakers, there is a reciprocal relationship between morphological awareness and reading comprehension, and the strength of that relationship increases throughout elementary school (Carlisle 2000; Deacon and Kirby 2004; Goodwin et al. 2013; Kieffer, Biancarosa, and Mancilla-Martinez 2013; Nagy, Berninger, and Abbott 2006).
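The kind of affix-stripping that morphological awareness supports can be sketched in a few lines. The prefix and suffix lists below are hypothetical stand-ins, far smaller than English's real affix inventory, and the splitting rule is deliberately naive:

```python
# A minimal sketch of affix-stripping: remove a known prefix and
# suffix to expose a shared root. PREFIXES and SUFFIXES here are
# illustrative examples only, not a real morphological lexicon.
PREFIXES = ["un", "re"]
SUFFIXES = ["able", "ness", "ly"]

def decompose(word):
    """Split a word into (prefix, root, suffix), using '' when absent."""
    prefix = next((p for p in PREFIXES
                   if word.startswith(p) and len(word) > len(p)), "")
    rest = word[len(prefix):]
    suffix = next((s for s in SUFFIXES
                   if rest.endswith(s) and len(rest) > len(s)), "")
    root = rest[:len(rest) - len(suffix)] if suffix else rest
    return prefix, root, suffix

print(decompose("unpredictable"))  # ('un', 'predict', 'able')
print(decompose("predictable"))    # ('', 'predict', 'able')
```

Real morphological analysis is much harder (for instance, "reach" does not contain the prefix re-), which is one reason readers must draw on meaning as well as word form.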

Pragmatics refers to an understanding of the social rules of communication (Snow and Uccelli 2009). So, for example, pragmatics involve how we talk when we have a particular purpose (e.g., persuading someone versus appeasing someone), how we communicate when we’re engaging with a particular audience (e.g., a family member versus an employer), and what we say when we find ourselves in a particular context (e.g., engaging in a casual conversation versus delivering a public speech). These often implicit social rules of communication differ across content areas or even text genres. Pragmatics play a role in reading comprehension because much of making meaning from text depends upon having the right ideas about the norms and conventions for interacting with others—to understand feelings, reactions, and dilemmas among characters or populations, for example, and even to make inferences and predictions. The reader has to be part of the social world of the text for effective comprehension.

Vocabulary knowledge must be fostered from early childhood through adolescence.

Finally, having the words to engage in dialogue—the vocabulary knowledge—is also a key part of oral language, not to mention comprehending and communicating using print (Beck, McKeown, and Kucan 2013; Ouellette 2006). Vocabulary knowledge, also referred to as semantic knowledge, involves understanding the meanings of words and phrases (aka receptive vocabulary) and using those words and phrases to communicate effectively (aka expressive vocabulary).

Notably, vocabulary knowledge exists in degrees, such that any learner has a particular “level” of knowledge of any given word (Beck, McKeown, and Kucan 2013). This begins with the word sounding familiar and moves toward the ability to use the word flexibly, even metaphorically, when speaking and writing. Vocabulary knowledge must be fostered from early childhood through adolescence. Deep vocabulary knowledge is often a source of difficulty for ELs, hindering their literacy development (August and Shanahan 2006).

If you would like to learn more about Cultivating Knowledge, Building Language , you can download a sample chapter here:

Cultivating Knowledge_LesauxHarris

Nonie K. Lesaux, PhD , is the Juliana W. and William Foss Thompson Professor of Education and Society at the Harvard Graduate School of Education. Lesaux leads a research program guided by the goal of increasing opportunities to learn for students from diverse linguistic, cultural, and economic backgrounds. Her research on reading development and instruction, and her work focused on using data to prevent reading difficulties, informs setting-level interventions, as well as public policy at the national and state level.

Julie Russ Harris, EdM , is the manager of the Language Diversity and Literacy Development Research Group at the Harvard Graduate School of Education.  A former elementary school teacher and reading specialist in urban public schools, Harris’s work continues to be guided by the goal of increasing the quality of culturally diverse children’s learning environments.



© 2023 Heinemann, a division of Houghton Mifflin Harcourt


Definition of oral adjective from the Oxford Advanced Learner's Dictionary

  • a test of both oral and written French
  • oral evidence
  • stories passed on by oral tradition
  • an exam in spoken English
  • There will be a test of both oral and written French.
  • vocal music
  • the vocal organs (= the tongue, lips, etc.)
  • spoken/oral French/English/Japanese, etc.
  • spoken/oral language skills


The Importance Of Oral Communication


The South Korean film Parasite made history at the 2020 Oscars when it became the first non-English language film to win the Academy Award for Best Picture. For his acceptance speech, director Bong Joon Ho said, “Once you overcome the one-inch-tall barrier of subtitles, you will be introduced to so many more amazing films.”

Bong was trying to change the way people perceive foreign language films. And he did. His words resonated not just with the South Korean audience, but with moviegoers worldwide.

Not every speaker leaves a lasting impression on their audience. But imagine if you could always speak with impact in your professional setting.

Strong oral communication is one of the best skills you can have in the workplace. Not only can you move, persuade, and encourage others to think and act differently; your speaking skills also help you stand out among your co-workers.

Let’s explore the importance of different types of oral communication you need to become a competent professional.

What Is Oral Communication?


Oral communication is communicating with spoken words. It’s a verbal form of communication where you communicate your thoughts, present ideas and share information. Examples of oral communication are conversations with friends, family or colleagues, presentations and speeches.

Oral communication helps to build trust and reliability. It is often more effective than an email or a text message. For important and sensitive conversations, such as salary negotiations and conflict resolution, you can rely on oral communication to get your point across, avoid misunderstandings, and minimize confusion.

In a professional setting, effective oral communication is important because it builds transparency, understanding, and trust. Your oral communication skills can boost morale, encourage improved performance, and promote teamwork.

Here are some benefits of oral communication:

  • It saves time by letting you convey your message directly to the other person and get their response immediately.
  • It’s the most secure form of communication for critical issues and important information.
  • It helps to resolve conflicts through face-to-face communication.
  • It’s a more transparent form of communication, as it lets you gauge how others react to your words.

There are different examples of oral communication in a business setting. You need several oral communication skills for career advancement. Let’s look at different types of oral communication:

Elevator Pitch

Imagine you meet the CEO of your organization in the elevator. Now, you have 30 seconds to introduce yourself before they get out on the next floor. This is your elevator pitch. It’s a form of oral communication where you have to succinctly explain who you are and what you want from the other person.

Formal Conversations

These are common at work because you have to constantly interact with your managers, coworkers and stakeholders such as clients and customers. Formal conversations are crisp, direct and condensed. You have to get your point across in a few words because everyone has only limited time to spare.

Informal Conversations

These are conversations that you have with your team members or friends and family. They are mostly without an agenda. You can talk about your day, what you’re going to eat for lunch or discuss weekend plans. These are friendly conversations peppered with light banter.

Business Presentations

This is where you need to make the best use of your speaking skills. Public speaking is an important skill to develop if you want to command a room full of people. For this, you need to leverage Harappa’s LEP and PAM Frameworks as well as the Four Ps of Pitch, Projection, Pace and Pauses.

Speeches

Speeches are important in businesses like event management or community outreach. In a corporate setup, speeches are reserved for top management and leaders.

Arming yourself with effective oral communication skills will boost your confidence and prepare you for challenging tasks such as meeting and impressing clients.

Harappa Education’s Speaking Effectively course is carefully designed to teach you how to improve your communication skills. You’ll learn about both oral and nonverbal communication with important frameworks like the Rule of Three and Aristotle’s Appeals of logic, credibility and emotion. Persuade your audience, deliver well-crafted ideas and connect with others with advanced speaking skills.

Explore topics & skills such as Public Speaking , Verbal Communication , Speaking Skills & Oratory Skills from Harappa Diaries and learn to express your ideas with confidence.


Getuplearn.com

Oral Communication: Definitions, Importance, Methods, Types, Advantages, and Disadvantages


What is Oral Communication?

Oral communication implies communication through the mouth. It includes individuals conversing with each other, be it direct conversation or telephonic conversation. Speeches, presentations, and discussions are all forms of oral communication .

Oral communication is generally recommended when the communication matter is of a temporary kind or where a direct interaction is required. Face-to-face communication (meetings, lectures, conferences, interviews, etc.) is significant so as to build rapport and trust.


In other words, oral communication is the process of expressing information or ideas by speaking; it is often referred to as speech communication.


Importance of Oral Communication

The following factors make oral communication effective:

Clear Pronunciation

The message should be pronounced clearly; otherwise, the receiver may not understand the sender’s words.

Brevity

A brief message is the most effective, since the receiver’s retention capacity is limited in oral communication. The sender should be as brief as possible.

Precision

The sender should ensure the exactness of the message. Only relevant issues should be included, and with accuracy.

Conviction

The sender should believe in the facts being communicated to others. The oral presentation should evince the sender’s confidence.

Logical Sequence

The sender should present the message logically, considering which points to speak first and what should follow, so that the meaning and motives are conveyed effectively to the receiver.

Appropriate Word Choice

Words are symbols; they have no fixed or universal meanings. The meaning of a word at a given moment is in the mind of the sender. The sender should therefore select words that are suitable and understandable to the other party and that convey exactly the intended meaning.

Use a Natural Voice

A natural voice conveys integrity and conviction, so it is advisable to use a natural voice in oral communication.

Communicate With the Right Person

It is essential to know with whom to communicate. Communicating the right message to the wrong person can lead to many problems, so be sure to identify the right person to communicate with.

Do Not Be Guided by Assumptions

Never assume that your listener already knows the subject matter; such assumptions are often wrong. You can communicate well only when you deliver your full message, without omissions.

Look for Feedback

While communicating, collect feedback, verbal or non-verbal, so you can quickly alter the message if necessary.

Allow Questions

Give the receiver the freedom to raise questions whenever something seems ambiguous or confusing; indeed, the communicator should encourage questions. Such questions are opportunities to clarify doubts.

Types of Oral Communication

The types of oral communication are discussed in detail below:

Face-to-Face Conversation

Oral communication is best when it is face-to-face. A face-to-face setting is possible between two individuals or among a small group of people, as in an interview or a small meeting; communication can flow both ways in these situations. There is always immediate feedback, which makes clarification possible.

Telephone

Telephone talk depends entirely on the voice; it does not have the advantage of physical presence. Clarity of speech and skillful use of the voice are important. There can be confusion between similar-sounding words, such as pale and bale, or light and like.

Names and addresses communicated on the telephone are sometimes wrongly received. It is therefore customary to clarify spellings by saying C for Cuttack, B for Balasore, and so on.

Presentation

A presentation has a face-to-face setting. It is a formal, well-prepared talk on a specific topic, delivered to a knowledgeable and interested audience. Visual aids are used to enhance a presentation, and the presenter is expected to answer questions at the end.

It is the presenter’s responsibility to ensure that the audience clearly understands all aspects of the topic.

Public Speech

A public speech or lecture, with or without microphones, has a face-to-face setting, but the distance between the speaker and the audience is great, and this distance increases as the audience gets larger, as in an open-air public meeting.

The purpose of a public speech may be to entertain, encourage, or inspire. Much depends on the speaker’s skill in using gestures and the microphone. Feedback is minimal, as the speaker can hardly see the facial expressions of people in the audience. A public speech is followed by applause rather than by questions from the audience.

Interview

An interview is a meeting in which a person or a panel of persons, the interviewers, ask questions of the interviewee. The purpose is usually to assess whether it would be worthwhile to enter into a business relationship with the other party. Each side makes an assessment of the other. An interview is structured and is characterized by question-and-answer communication.

Meeting

Usually, a meeting involves many persons; a chairman or leader guides the communication and maintains proper order. There is a fixed agenda, i.e., a list of issues to be discussed at the meeting.

Meetings are of many types, from the small committee meeting of three or four persons to the large conference or shareholders’ meeting. This type of oral communication is backed up by note-taking and written minutes.

Methods to Improve Oral Communication Skills

These are some methods to improve oral communication skills:

Speak in a Clear, Confident, Strong Voice

Speak in a confident, clear, strong voice so that it is audible to everyone in the audience. Keep your pace average, neither very slow nor very fast, and face the audience while speaking.

Be Coherent

Speak coherently, concentrating only on your subject. Try not to be distracted from it, and set other thoughts aside for the moment.

Avoid Using Filler Words

It is better to pause for a second than to use filler words such as “Yeah”, “So”, “Um”, and “Like”. Frequent use of filler words disturbs coherence and distracts the audience.

Be an Active Listener

Verbal communication is a two-way process, so be an active listener too. Try to understand a question or query quickly, because it looks odd to ask for it to be repeated.

Advantages and Disadvantages of Oral Communication

Advantages of Oral Communication

Following are the advantages of oral communication:

Quickness in Exchange of Ideas: Ideas can be conveyed quickly, even to distant places, because this medium does not require the message to be written.

Immediate Feedback: Oral communication helps the sender understand, through the receiver’s reactions during the conversation, the extent to which the message has been understood.

Flexibility: Oral communication has an inherent element of flexibility; ideas can be adapted to the situation or to the interest of the receiver.

Economic Source: It is an economical means of communication because the message is communicated orally.

Personal Touch: Oral communication has a personal touch. Being face to face, both sides can understand each other’s feelings, which increases mutual confidence.

Effective Source: Oral communication leaves a strong impression on the receiver. Sometimes a thing can be communicated more effectively with the help of a sign, and signs or gestures can be used only in oral communication.

Saves Time and Increases Efficiency: This form of communication consumes less time, and superiors can use the time saved for other, more important work. As a result, the sender’s efficiency increases.

Disadvantages of Oral Communication

Let’s discuss some disadvantages of oral communication:

Unfit for Lengthy Messages: Oral communication suits only a brief exchange of ideas; it is not possible for the receiver to remember a long message.

Unfit for Policy Matters: Where policies, rules, or other important messages are to be communicated, oral communication is of little use.

Lack of Written Proof: In oral communication, no written proof is left for future reference, which can cause difficulty later.

Expensive Method: When less important information is sent to distant places by telephone, etc., oral communication proves costly.

Lack of Clarity: Clarity can suffer when there is little time for conversation; the wrong thing may be uttered in a hurry, leading to adverse results.

Misuse of Time: Oral communication becomes a misuse of time when conversation in meetings is lengthened unnecessarily and the parties waste their time in useless talk.

Presence of Both Parties Necessary: In oral communication, the sender and the receiver must both be present at the same time, though not necessarily in the same physical place; written communication requires only one party at a time.

Where the Oral Mode is Used

The oral mode is used where:

  • Personal authentication is needed, e.g., between an officer and her personal secretary, or a journalist and her source (“I heard it from a reliable source”).
  • Social or gregarious needs must be met, e.g., speaking with a visiting delegation.
  • Warmth and personal qualities are called for, e.g., group or team interaction.
  • Exactitude and precision are not vitally important, e.g., brainstorming for ideas.
  • Situations demand maximum understanding, e.g., sorting out problems or differences between individuals, or between two groups such as administration and students.
  • An atmosphere of openness is desired, e.g., talks between management and workers.
  • Added impact is needed to get the receiver’s focus, e.g., a chairperson of an organization addressing the staff, or a presidential or royal address to a nation.
  • Decisions or information have to be communicated quickly, e.g., officers issuing orders during natural disasters such as floods or an earthquake.
  • Confidential matters are to be discussed, e.g., exchange of positive or negative information about an organization or an individual. In the process of appointment, promotion, or selection of individuals, a period of open discussion may precede the final decision that is recorded in writing.


FAQ Related to Oral Communication

What is oral communication in one word?

Oral communication expresses ideas through the spoken word.

What is oral communication according to different authors?

Oral communication takes place when spoken words are used to transfer information and understanding from one person to another. (S. K. Kapur)

What is the importance of an oral communication essay?

The following are the importance of oral communication: Clear Pronunciation, Brevity, Precision, Conviction, Logical Sequence, Appropriate Word Choice, Use of natural voice, etc.

What are the methods of oral communication?

Following are some methods to improve oral communication skills: Speak in a Clear, Confident Strong Voice, Be Coherent, Avoid Using Filler Words, Be an Active Listener, etc.

What is oral communication according to the authors?

Oral communication expresses ideas through the spoken word. (Bovee)

What is the importance of oral communication?

Following are the importance of oral communication: 1. Clear Pronunciation 2. Brevity 3. Precision 4. Conviction 5. Logical Sequence 6. Appropriate Word Choice 7. Use a natural voice 8. Communicate With Right Person 9. Do Not Get Guided by Assumptions 10. Look for Feedback 11. Allow to Ask Questions.

What are the six types of oral communication?

These are the six types of oral communication: 1. Face-to-Face Conversation 2. Telephone 3. Presentation 4. Public Speech 5. Interview 6. Meeting.

What are the advantages of communication?

Advantages of Communication given below: 1. Quickness in Exchange of Ideas 2. Immediate Feedback 3. Flexibility 4. Economic Sources 5. Personal Touch 6. Effective Source 7. Saves Time and Increases Efficiency.

What are the disadvantages of communication?

Disadvantages of Communication: 1. Unfit for Lengthy Message 2. Unfit for Policy Matters 3. Lack of Written Proof 4. Expensive Method 5. Lack of Clarity 6. Misuse of Time 7. Presence of Both the Parties Necessary.



As we know, words communicate meaning. But the way we say words also communicates meaning. This is why effective speakers devote time to improving their delivery.

So what exactly is delivery? Delivery is the speaker’s physical (vocal and bodily) actions during a speech. The main purpose of delivery is to enhance, not distract from, the message. In order to help you avoid distracting from your message, we’ve created a document about what not to do while delivering a speech.

We consider several aspects of delivery: controlling speech anxiety , vocal variety , body language , and practice .

It’s important to thoughtfully consider both the organization and oral style of the speech before discussing the principles of delivery. It doesn’t matter how great your delivery is if the speech is disorganized and hard to understand.

If you’re confident about organization and oral style , it’s time to work on delivery!

To control your speech anxiety, the first step is simple: practice. The better you know your speech, the more comfortable you’ll feel. Comfort generally helps reduce anxiety.

In addition, it is incredibly helpful to practice in the same room where you’ll be giving the speech, or to practice in front of other people. Both situations simulate the speaking experience, making the actual speech feel less foreign and less anxiety-provoking.

You’ll be most anxious in the first minute of the speech. After the first minute (or so) anxiety levels tend to stabilize and decrease. Since the first minute can be the most challenging, it may be wise to memorize your opening.

For additional methods to deal with speech anxiety, this article will prove useful.  If you’re in a rush, here are a few quick tips summarized:

  • Visualize your success.
  • Find a friendly face.
  • Take a few calming breaths.

Even the best orators still get nervous when speaking; it’s normal, so don’t worry about it.

Please contact the Center for Counseling and Wellness if your anxiety is daily and inhibiting.

Vocal variety is essential to a captivating delivery. In oral rhetoric classes, you’ll learn about effective vocal variety, but not every speech assignment at Calvin is for an oral rhetoric class. You may have an excellently written speech, but if it is delivered without vocal variety, it will be boring and dry. So what is vocal variety, and how does one use it?

Vocal variety includes elements such as pitch, tone, volume, and rate.

How do I speak with vocal variety?

To learn about pitch, tone, volume, and rate variation, watch this video by  Florida International University's Comm Art Studio from 1:10 to 6:09.

Additional tips related to vocal variety:

  • Practice in everyday conversation to make vocal changes second nature.
  • If you’re using a notecard, use slash marks or asterisks to mark cardinal places to vary pitch or slow down.
  • Pause or slow down to emphasize important words or concepts.
  • Speed up to elicit excitement or energy.
  • Use a lower pitch to create an authoritative tone.

Many extraordinary speakers rely heavily on vocal variation. Listen to Meghan Markle, now the Duchess of Sussex, in her 2015 address to the United Nations. Markle frequently changes pitch and rate: pitch to keep the speech interesting, and rate to emphasize different points. (At 5:20-6:05 she slows down to emphasize her message; this is an effective rate change.)

Sometimes our voices sound scratchy when we speak. This may be vocal fry.

What is vocal fry, and how do I avoid it?

Vocal fry is the lowest register of the human voice. It sounds muffled and unclear, and speaking professionals recommend avoiding it during public speaking. If you think vocal fry may be a problem for you, watch this short video.

If you need help or someone to listen, come to the Rhetoric Center!

Many speakers use notecards to guide them through speech delivery. Notecards aren’t the same as the outline. Normally, the outline is turned in to the teacher and consists of complete sentences. Notecards are the reverse: they are guideposts for the speaker, normally written as single words, phrases, or bullet points that are easy to read at a glance during a speech.

Notecards can also hold notes about delivery. Many speakers use slash marks to signal places where a pause would help. If you sound monotone or struggle with vocal variety, note places where vocal variation would be appropriate.

For a deeper look at how to use notecards, check out this resource from the Oral Communication Center at Hamilton College.

Some people recommend using only one or two notecards. Typically, we at the Rhetoric Center don’t recommend that; it’s better to have more cards to make sure you don’t get lost.

While you’re using notecards, it’s easy to forget to make eye contact with the audience; remember, eye contact communicates confidence.

For an example, look at this notecard for a speech about the Orlando shooting.

For further direction, check out Hamilton College or visit the Rhetoric Center.

Body language communicates meaning just as much as your words and how you say them. Consider what your body language says about you and your message while you’re speaking. For instance, body language can affect your tone, your audience’s attention, and your audience’s perception of you.

Body language and movement affect the tone of your speech or presentation. For instance, how much movement should take place during a eulogy? Probably none; in a eulogy you want to be composed. In a more energetic setting, such as an informative speech, movement is more acceptable. For example, Michelle Obama, in her campaign speech for Hillary Clinton, uses energetic body language to excite voters about voting for her preferred candidate.

Body language can either gain or lose the attention of your audience over the course of a speech. If used conservatively and properly, it can make your speech more interesting and engaging. If used excessively and carelessly, it can distract your audience from your message.

Confidence or insecurity

Body language communicates confidence or insecurity. If your back is turned to the audience, you’re pacing back and forth, or your hands are in your pockets, you’ll probably come off as insecure. On the other hand, walking with confidence and using hand gestures meaningfully will communicate confidence.

Now that you’re aware of how body language can affect your speeches, let’s consider how to use it while speaking.

What do I move and how?

Need some options for body movement? Watch from 6:52 to 9:55 for fundamental body movement concepts during a speech. Pay attention, but also put it into practice!

A few suggestions on body movement:

  • Face the audience and don’t turn your back to them.
  • Be in the center of the room and don’t walk too near the edges.
  • Don’t put your hands in your pockets or on your hips; this creates emotional barriers.

Practicing is critical to the performance and success of the speech. When practicing, it’s normal to touch up and fix slight wording issues, but at this point in the writing process the speech should be pretty much finished. If you still have more to write, we recommend the speech writing and organization pages.

How do you practice for a speech?

Keep practicing and don’t always start at the beginning; change where you start practicing.

If you always start with the beginning, you’ll know the beginning best, and the rest will get progressively harder. If you keep changing your starting location, you’ll come to know every part of the speech equally well. That said, memorizing the first few lines isn’t a bad idea, because they will be the hardest to recite when confronted by speech anxiety.

This resource from University of Hawai'i Maui Community College Speech Department will prove helpful when practicing speeches. Read the “Do’s” and “Don'ts” carefully.

It’s also beneficial to imitate the environment of the speech. You can do this by practicing in front of people and in the real space where you’ll be giving the speech. It’ll be easier to deliver if it’s not your first time seeing the space.

Watch two examples related to preparation: one bad and one good.

  • Avoid filler words. Filler words, such as “umm” and “like,” take away from your credibility as a speaker, affecting how the audience receives your message.
  • Maintain eye contact; it demonstrates confidence!
  • If you slip up, don’t apologize. Apologizing makes you appear insecure and affects your credibility as a speaker.
  • Pauses help. They can make a speech somber and serious. President Obama used pauses to powerful emotional effect in his speech following the shooting at Sandy Hook Elementary School, particularly in his introduction.
  • Slow down; you’ll deliver the speech faster than you practiced it. Even if your practice trials were perfectly timed, it’s common for speakers to speed up the actual delivery.

For additional tips, check out this resource on general guidelines for speech delivery (The Writing Resources Center, Swem Library, College of William & Mary).

If you have any questions or would like a Rhetoric Center consultant to listen to your speech, schedule an appointment.

In this presentation, business tycoon Elon Musk appears extremely nervous and unprepared. This probably stems from the fact that Musk doesn't practice his speeches. To see how fragmented and cluttered the speech is, watch the first 1:30. Musk also leans on filler words, such as “um” and “eh,” almost constantly, which makes him appear insecure. Ironically, this is an important speech updating the world on a possible mission to Mars, delivered by an important man who needs no introduction, yet the beginning sounds amateurish because of his lack of preparation. Further practice would’ve made him seem more natural and effective, and therefore would’ve reduced his anxiety.

Practice helps, even for Elon Musk.

On the other hand, practice can put even the most nervous speakers at ease during their presentations.

Winston Churchill, on June 18, 1940, delivered one of the most enduring speeches in English history: “Their Finest Hour.” This World War II speech was broadcast to the British people just as France was seeking an armistice with Germany.

It was only a month into Churchill’s premiership, and he was terrified of public speaking because of his speech impediment. However, according to Carmine Gallo of Forbes, Churchill’s practice helped him overcome his anxiety. If you listen to 26:55-30:02 of the speech, you can clearly hear his rhythm and diction; this would be impossible to achieve without considerable preparation and knowledge of his own speech.

This note card was used for a speech commemorating the victims of the Pulse shooting in Orlando, Florida.

For starters, the notecard is legible and uses slash marks to indicate possible places to pause while reading. These pauses, a form of vocal variety, place emphasis on each name being read, which triggers an emotional response. However, more information could probably fit at the bottom of the card, perhaps something to prompt the next sentence.

It’s important that the notecard doesn’t use full sentences; rather, use trigger words and phrases. Compare the notecard to the outline version of the first sentence: “Today, I wanted to individually remember those lost on June 12, 2016.” The word “remember” comes naturally, while “individually” is easy to forget. So “individually” gets put on the card and not “remember.”

Furthermore, things that must be exact need to be written on the cards. This includes dates, names, and quotes. You don’t want to say (or pronounce) any of these incorrectly as it could reduce your credibility.

Michael Bay, the famous film director, didn’t prepare sufficiently for this speech. The embarrassing mess resulted from a miscommunication between Bay and the teleprompter operator, but it could’ve been prevented with preparation. For starters, Bay should have had a backup hard copy of the speech. It may not have been ideal, but it would’ve prevented the disaster.

Make sure you have a backup plan .

Bay’s problems didn’t end with the miscommunication; he also didn’t know when to begin the speech. He started before his introducer asked him the first question. If prepared, he would’ve known how to start. In addition, the final question about his movies was a simple throwaway by the interviewer in an attempt to save Bay. Bay said he couldn’t read the teleprompter, apologized, and left. He didn’t need the teleprompter to answer a question about his own movies (which he has spent hundreds or thousands of hours working on). This demonstrates unpreparedness; if he had known the speech, rather than just relying on the teleprompter, he would’ve been able to answer the last question.

On the other hand, careful preparation can make for a great speech. In Monica Lewinsky’s TED Talk “The Price of Shame,” she had no teleprompter, so she used paper notes, visible on the small podium in front of her. Even so, her speech was well practiced. This is evident in her consistent eye contact, lack of mistakes, and the way she carried the room with clearly prepared diction and vocal variety.

Speech and nonspeech: What are we talking about?

Department of Communication Sciences and Disorders, Temple University, Philadelphia, PA, USA

Understanding of the behavioural, cognitive, and neural underpinnings of speech production is of interest theoretically, and is important for understanding disorders of speech production and how to assess and treat such disorders in the clinic. This paper addresses two claims about the neuromotor control of speech production: (1) speech is subserved by a distinct, specialised motor control system, (2) speech is holistic and cannot be decomposed into smaller primitives. Both claims have gained traction in recent literature, and are central to a task-dependent model of speech motor control ( Ziegler, 2003a ). The purpose of this paper is to stimulate thinking about speech production, its disorders, and the clinical implications of these claims. The paper poses several conceptual and empirical challenges for these claims – including the critical importance of defining speech. The emerging conclusion is that a task-dependent model is called into question as its two central claims are founded on ill-defined and inconsistently applied concepts. The paper concludes with discussion of methodological and clinical implications, including the potential utility of diadochokinetic (DDK) tasks in assessment of motor speech disorders and the contraindication of nonspeech oral motor exercises to improve speech function.

INTRODUCTION

In recent years, there has been debate regarding the special status of speech among motor behaviours ( Ballard, Robin, & Folkins, 2003 ; Bunton, 2008 ; Weismer, 2006 ; Ziegler, 2003a , b ; Ziegler & Ackermann, 2013 ). Two main views in this debate are the task-dependent model (TDM; Ziegler, 2003a , b ; Ziegler & Ackermann, 2013 ) and the integrative model (IM; Ballard et al., 2003 ) 1 . Briefly, the TDM proposes a specialised, distinct neuromotor control system dedicated to speech production, whereas other actions of the anatomical apparatus involved in speaking (e.g. laughing, novel oral movements) are controlled by fundamentally different motor control systems. In contrast, the IM proposes that speech production involves a particular, unique combination of skills and properties, some of which are shared with other motor behaviours, and as such proposes overlapping behavioural and neural control systems for speech and other motor behaviours. This debate is relevant for the understanding of human motor behaviour in general and speech behaviour in particular as well as the neural mechanisms underlying such behaviour, but also for the methods by which we study or influence such behaviour in the lab and in the clinic. The TDM seems to represent a common, if not the prevailing view in current literature.

The present paper seeks to bring some of the issues into sharper focus, raise some critical questions for two particular claims integral to the TDM, and explicate and explore implications of these claims. The hope is that this paper will make a positive contribution by identifying areas where views diverge – and thus, where theoretical and empirical attention can be most fruitfully directed to adjudicate between alternatives and advance our understanding of speech production.

This paper will focus on two particular claims, versions of which have been eloquently laid out in recent years by various authors ( Bunton, 2008 ; Weismer, 2006 ; Ziegler, 2003a , b ; Ziegler & Ackermann, 2013 ). In particular, we will examine the claims that (1) speech is controlled by a specialised, distinct, dedicated neuromotor control system , and (2) speech is a holistic behaviour which cannot be decomposed into smaller parts . Although the intent is not to rehash old arguments, occasional clarifications of such arguments will be provided to resolve possible misinterpretations and develop the discussion. The purpose of this paper is to stimulate further thought about what it means to say that speech is special, and how different views affect clinical decisions regarding assessment and treatment of speech disorders.

Although occasional references to neuroimaging studies will be made, the primary emphasis in this paper will be on behavioural rather than neuroimaging studies (the interested reader is referred to Hickok, Houde, & Rong, 2011 , for a synthesis and review). The main reason is that neural activation patterns represent dependent measures that can be interpreted and understood only in relation to the behaviour they are thought to capture (see also Coltheart, 2006 , for further discussion of neuroimaging studies to address cognitive theories). In other words, an essential first step is to define the behaviour of interest and the tasks which represent this behaviour, so that tasks can be compared ( Weismer, 2006 ). The literature contains findings of neural overlap (e.g. Chang et al., 2009 ; Segawa et al., 2015 ) as well as neural differences (e.g. Wildgruber et al., 1996 ) between tasks designated as speech or nonspeech. The present paper is concerned with this essential first step in that it discusses some of the complications in drawing distinctions between tasks and designating them as speech or nonspeech, and as such may help shed light on these discrepant findings from the neuroimaging literature and their implications for our understanding of speech motor control.

The structure of the paper is as follows. First, I will set the stage by outlining the claims and contrast them with an integrative view to highlight the crux of the disagreement. Next, I will raise some conceptual and empirical challenges to the two claims above. Finally, I will discuss implications for the scientific study of speech production and for clinical practice.

PRELIMINARIES

A point of agreement is that both views accept that speech is indeed a special motor skill. To deny this is even on its face not a tenable position. Both views agree that typical speech production involves a particular combination of properties (e.g. control of an acoustic signal, articulator movements). At issue is how this behaviour is controlled, in terms of neural and cognitive organisation, and the associated scientific and clinical implications.

The TDM espouses two more specific claims about the specialness of speech. First, the specialness is reflected in the existence of a distinct motor control system used only for producing speech. This claim was formulated clearly by Bunton (2008 : 271–272), who wrote ‘Even though [nonspeech tasks] may involve the same musculature as speech, the tasks are so different that their control must be assumed to be based on different neural networks.’ Similarly, Ziegler (2003 : 5) stated ‘These subsystems [for speaking versus other tasks] are separate to the extent that each of them has unique properties, is subserved by a specialised neural circuitry […].’ In other words, this view postulates a categorical distinction between speech and other motor behaviours. 2 Second, speech motor control is holistic and speech movements cannot be decomposed into component parts (“primitives”). This claim is reflected in Ziegler and Ackermann’s (2013) statement that ‘[…] vocal tract gestures in speaking […] can only be understood properly through their joint interaction in fabricating the sounds of syllables and words. From such a connectivist point of view, the constituents of a speech motor action can neither be isolated from their gestural context nor from their linguistic reference frame.’ (p. 62). Similarly, Weismer (2006 : 332) wrote that ‘disintegrating a system for isolated study of component parts does not allow study of the system’s typical behaviors.’ Although in a trivial sense this is true (one cannot observe the system’s typical behaviours when typical behaviours are absent), this claim suggests that speech motor control can only be understood and studied if all components of typical speech are present (i.e. the “primitive” is the task of speaking).

Although these are two separate claims, they are related in that the first claim depends on speech constituting a single category: delineation of the control system for speech versus other motor behaviours is best characterised in terms of broadly defined superordinate tasks (e.g. “speaking”, “chewing”), rather than in terms of the various subordinate properties or components involved in these actions. However, the second claim does not depend on the first: a holistic, indecomposable behaviour need not be subserved by a separate, dedicated neuromotor control system. Central to both claims is the delineation of speech as a unitary task category. Insofar as speech comprises describable components (e.g. articulator movement, control of acoustic signal), only when all components are present does the task represent speech (claim 2) and engage a distinct, specialised neuromotor control system (claim 1). When only a subset of these properties is combined into an action, a fundamentally different system is responsible for its control.

In contrast, according to the IM, control of speech involves a motor system that integrates and coordinates movement properties and components for a variety of motor tasks, and speech is one of such motor tasks. Ballard et al. (2003 : 38) proposed ‘… an integrative model in which some nonspeech motor tasks share principles with speech and some do not […]. This leads us to postulate overlapping neural and behavioural systems for the control of speech and volitional nonspeech tasks.’ They go on to say ‘Thus speech motor control is integrative, sharing properties with some but not all nonspeech motor tasks. We are not claiming complete task-independence or task-dependence, but rather suggesting that certain volitional nonspeech tasks share principles in common with speech and therefore speech motor anomalies (e.g. apraxia). We hypothesise that, at complex behavioural levels, there must be overlapping functional components and therefore overlapping and integrative neural pathways or networks.’ ( Ballard et al., 2003 : 39). In other words, this view proposes a gradient distinction between speech and other motor behaviours, with some but not all properties shared. In a sense, this view represents a position intermediate between two extremes (a completely task-specific vs. a completely domain-general motor control system). This view holds that speech can be decomposed and that the motor control system can best be understood in terms of task components rather than broadly defined tasks (constellations of components). Thus, although speech is a special motor skill, it does not require postulation of a specialised neuromotor control system, and can be understood by examining properties in isolation and in various combinations (including typical speech, the “full” combination). Comparisons between typical speech and certain nonspeech motor tasks is considered potentially informative regarding organising principles underlying speech production.

Several authors have posed the question for the IM of how to define similarity of speech and nonspeech movements ( Bunton, 2008 ; Weismer, 2006 ). Indeed, the onus is on the IM to identify, in a principled manner, the properties presumed to be shared between speech and nonspeech tasks, and this enterprise has not been straightforward ( Weismer, 2006 ). This is a valid criticism, and I will not reiterate the cogent arguments presented by these authors. Instead, I submit that neither view escapes this need to clarify and define criteria of similarity. Just as the IM must define similarity , the TDM must define dissimilarity between speech and nonspeech motor behaviours. This is essentially the same concern approached from opposite directions, but this requirement is perhaps even more pressing for a view that stipulates a categorical distinction. To understand what speech is, we must also understand what it is not. As I discuss below, this enterprise is not straightforward either, and has largely been avoided to date ( Kent, 2015 ).

CONCEPTUAL AND EMPIRICAL CHALLENGES

In this section, I will pose some critical challenges for the two claims of the TDM. I will do so by addressing two main questions: What is speech? and What is a system? This section ends with a brief discussion of challenges regarding the emergence of dedicated systems. As will become evident, possible solutions to these challenges tend to be unprincipled, inconsistently applied, and/or constitute de facto acceptance of the IM and decomposability of speech.

Definitions: What is speech?

A special, distinct control system for speech is predicated on delineation of speech from other behaviours. Yet despite the centrality of “speech” to the TDM, three nontrivial problems exist regarding definition and delineation of this construct: lack of explicit definitions, lack of consensus about necessary and sufficient criteria, and inconsistent application of definitions.

First, as Kent (2015) noted, explicit definitions of speech are often conspicuously absent from articles proposing a TDM, even those that include a section with definitions (e.g. Weismer, 2006 ). To quote Weismer (2006 : 343): ‘Gardner (1985, p. 286) […] wrote, “One cannot have an adequate theory about anything the brain does unless one also has an adequate theory about that activity itself.” ’ I would argue that this includes an adequate definition of that activity.

Second, although proponents of a TDM have suggested a number of task properties that supposedly delineate speech from nonspeech tasks, there does not appear to be a consensus about which ones are necessary and/or sufficient. Some tasks not considered speech by proponents of a TDM share these properties, and other tasks considered speech by TDM proponents lack these properties. In the next few paragraphs, I review several proposed properties to illustrate some reasons for this lack of consensus. Two important suggested properties of speech are that (1) it produces an acoustic signal (2) that is used to communicate ( Weismer, 2006 ; Ziegler, 2003a , b ). The first property (acoustic signal) appropriately excludes oral movements without acoustic consequences such as tongue wagging. However, it also excludes articulation without sound, as may occur in natural environments ( Gick et al., 2012 ) or in experimental contexts. One could argue that such soundless tasks are not speech, yet TDM proponents consider covert speech (silent mouthing of words) to reflect the speech system ( Bunton, 2008 ; Wildgruber, Ackermann, & Grodd, 2001 ; Wildgruber et al., 1996 ), despite differences between overt and covert speech in terms of neural circuitry (e.g. Riecker, Ackermann, Wildgruber, Dogil, & Grodd, 2000 ). Thus, an acoustic signal appears to be neither sufficient nor necessary.

The second property (communicative purpose) appropriately excludes oral motor behaviours that produce rhythmic acoustic signals but are not used to communicate, such as human beat box performance ( De Torcy et al., 2014 ). It also excludes diadochokinetic (DDK) tasks (e.g. saying pataka rapidly and repeatedly), which are indeed explicitly designated nonspeech tasks by TDM proponents ( Bunton, 2008 ; Ziegler, 2002 ). However, the requirement for communicative intent also excludes behaviours that might be considered speech, such as talking in one’s sleep ( Kent, 2015 ), or speech-like, such as infant babble ( Moore & Ruark, 1996 ). 3 To complicate matters further, oral movements that produce acoustic signals with a communicative purpose, such as the click sound tsk-tsk to convey disapproval or sighing loudly to communicate exasperation, are not considered speech ( Aichert & Ziegler, 2013 ). Thus, communicative purpose is not a necessary or sufficient property of speech either.

One possible solution was suggested indirectly by Weismer (2006) . His definition of nonspeech tasks refers to phonetic goals: ‘Oromotor, nonverbal tasks: Any performance task, absent phonetic goals , in which structures of the speech mechanism […] are measured’ (p. 319, italics mine). Similarly, Ziegler and Ackermann (2013) refer to ‘vocal tract motor circuitry which is specifically dedicated to the generation of acoustic patterns typical of a speaker’s native language’. (p. 61). Kent (2015) , one of few authors to provide an explicit definition, also refers to phonetic structure: ‘Speech is defined as movements or movement plans that produce as their end result acoustic patterns that accord with the phonetic structure of a language’ (p. 765).

Phonetic structure does appear to constitute a necessary condition for speech (assuming that covert speech has phonetic structure). Nevertheless, even here complications arise. For instance, if phonetic patterns must be those of the native language, this implies that non-native speech patterns involve a nonspeech oral motor system. In support of this idea, Ziegler and Ackermann (2013) note the persistence of foreign accents in late second-language learners. However, an alternative interpretation is that the residual accent is evidence for use of a speech motor system: the accent reveals the influence of the native-language speech motor system. Use of a nonspeech motor system cannot account for language-specific influences on the second language ( Flege, Schirru, & MacKay, 2003 ) or vice versa ( Major, 1992 ). Instead, one would predict more universally similar non-native accents. Further, some oral motor behaviors not typically considered speech also have phonetic structure, such as human beat box performance ( De Torcy et al., 2014 : 38: ‘to achieve their ends, the beatboxers manipulate speech sounds’) and communicative utterances such as mmm (/m:/) to convey enjoyment of a tasty treat, or shh (/ʃ:/) to request silence ( Aichert & Ziegler, 2013 ). Aichert and Ziegler attempt to resolve this conflict by stating that speech patterns must consist of at least a syllable. Thus, isolated diphthongs or vowels such as /ɑɪ/ ( eye ) or /ɑ/ ( awe ) are speech because they can be syllables, but utterances consisting of isolated consonants are instead ‘high frequent, overlearned nonverbal expressions and not speech’ (p. 1194), because they cannot be syllables. However, some consonants can also form syllables used to communicate, both as “nonverbal expressions” (e.g. m-m [ʔm̩ʔm̩] to express disagreement) or as (parts of) words (e.g. rhythm [ɹɪðm̩], pack them up [pʰæk m̩ ʌp]). Finally, DDK tasks (e.g. 
saying pataka repeatedly) 4 generate acoustic-phonetic patterns of the native language. Yet, as noted above, such tasks are explicitly designated as nonspeech by TDM proponents ( Bunton, 2008 ; Weismer, 2006 ; Ziegler, 2002 , 2003a ).

What, then, are the critical task aspects that delineate speech from nonspeech tasks such as DDK? Two features have been proposed as distinguishing criteria ( Ziegler, 2002 ): repetitive production and maximal rate demands. However, repeated production of a sound sequence also occurs in conversational speech, for example in emphatic (dis)agreements ( yes yes yes or no no no no ), the Seinfeldian ellipsis phrase yada yada yada , invocations ( Beetlejuice Beetlejuice Beetlejuice ), or utterances such as It went on and on and on . Thus, repetitive production is neither necessary nor sufficient to change speech into a nonspeech task.

Several authors suggested maximal repetition rate as a distinguishing criterion ( Weismer, 2006 ; Ziegler, 2002 ): producing pataka at a normal rate ( This is a nice pataka ) is speech, but repeating pataka as fast as possible is nonspeech (a ‘DDK mode’ of oral motor control; Ziegler, 2002 : 571). However, by this criterion, the acoustic patterns of oral communication produced by auctioneers would not be speech, because of their very rapid (likely maximal) rates. Further, if one slows speech down enough, the speaker may enter ‘an alternative, more conscious control mode’ ( Ziegler, 2003a : 24). Does this mean that speakers with apraxia of speech (AOS) or dysarthria, who may have slow speech rate (although they may speak at the fast end of their range), do not produce speech? The difficulty here is how to independently, in a principled way, establish the “speech range rate” for a given speaker. At what rate does a pattern of oral movements with acoustic output change from speech to nonspeech (on either end of the range)?

The third problem with defining and delineating speech and nonspeech tasks is that criteria have been inconsistently applied between, and even within, authors. For instance, in addition to examples already given (e.g. acoustic signal is necessary vs. covert speech), reiterant syllable production in DDK is considered a nonspeech task (e.g. Bunton, 2008 ; Ziegler, 2003a ), yet elsewhere reiterant syllable production has been considered speech (e.g. Bunton & Weismer, 1994 ; Deger & Ziegler, 2002 ). This problem has important consequences for how we study, and draw inferences about, speech motor control (see Methodological Implications below).

The foregoing discussion highlights difficulties in delineating, in a principled, consistent manner, speech as a single behaviour that is categorically distinct from nonspeech behaviours. The crux of the difference between the IM and TDM is that, unlike the IM, the TDM essentially stipulates such a distinction between (more or less speech-like) nonspeech tasks and “true” speech (which itself also likely comprises a range of tasks; Kent, 2015 ). The lack of consistent and principled criteria to support such a delineation, upon which the TDM is predicated, undermines the validity or utility of the distinction – and thereby the notion of a specialised control system. Perhaps the wide range and complexity of oral motor behaviours make it fundamentally impossible to delineate all speech from all nonspeech tasks. However, clear and consistent delineation of speech and nonspeech tasks is necessary in order to advance and empirically test a theoretical view that critically hinges on the existence of a category of speech as distinct from nonspeech tasks. Note also that the postulation of ‘quasi-speech’ tasks ( Weismer, 2006 : 319) is at variance with a categorical distinction and indecomposability: accepting that tasks can be more or less speech-like (in both speech and nonspeech categories) suggests a gradient distinction or a task space ranging from very speech-like (e.g. naturalistic conversation) to very nonspeech-like (e.g. lateral tongue wags). This is in fact what the IM proposes. To stipulate some (ill-defined) categorical point along this task space, more-or-less arbitrarily refer to one set of tasks as “speech” and another as “nonspeech”, and then propose specialised machinery for these categories is neither necessary nor illuminating.

Dissociations and differences: What is a system?

Inextricably linked to the issue of distinguishing speech from nonspeech behaviours is the question of how to distinguish systems. A common approach is to identify task dissociations or differences (e.g. in kinematic or neural measures). Associations are less informative about the organisation and architecture of a cognitive system than dissociations, especially double dissociations, given possible third-variable correlations with factors such as severity or shared neural tissue ( Weismer, 2006 ; Ziegler, 2003a ). 5 There is no disagreement here. However, assuming clear, consistent, and agreed-upon task definitions can be formulated, two considerations limit the value of dissociations to distinguish speech from nonspeech motor control: (1) dissociations need not reflect motor system distinctions, and (2) they also exist between different speech tasks.

First, dissociations do not require an interpretation involving different motor systems, one for speech and one for nonspeech tasks. For example, in addition to the many differences in motoric aspects ( Ballard et al., 2000 ; Ziegler, 2003a ), dissociations between speech (AOS) and nonspeech volitional oral movements such as tongue protrusion (nonverbal oral apraxia) may be explained in terms of visuo-spatial processing 6 (e.g. Bizzozero et al., 2000 ; Kramer et al., 1985 ), language deficits ( Botha et al., 2014 ; Square-Storer et al., 1990 ), or other cognitive factors.

Second, even if non-motoric factors are ruled out, a dissociation between two motor tasks does not in itself indicate that one is a speech motor task and the other is a nonspeech motor task, because dissociations and differences also exist between tasks that are both considered speech (e.g. Caviness, Liss, Adler, & Evidente, 2006 7 ; Deger & Ziegler, 2002 ; Tasko & McClean, 2004 ; Tsao & Weismer, 1997 ; Ziegler, Kilian, & Deger, 1997 ). For instance, Tsao and Weismer asked participants to read a passage 10 times each at a habitual and at maximum rate, and classified speakers into a slow and a fast group based on their habitual speaking rate. They reported a double dissociation: at least one speaker from the slow group produced among the fastest maximum rates, and several speakers from the fast group had maximum rates in the range of the slow group. Does this mean that speaking at habitual rate and speaking at maximum rate are controlled by two distinct motor control systems – and that only one of these is speech? This would be consistent with the notion that DDK tasks are not speech because of the maximal rate demands. Yet Tsao and Weismer do not draw this conclusion, and instead suggest that variation in habitual speaking rate may be explained by differences in motor limits, which may depend on a cerebellar time keeping mechanism also involved in limb motor control.

In terms of neurological dissociations, two studies by Ziegler and colleagues provide support for a dissociation between initiating syllables within a sequence and assembling multisyllabic sequences into a single program ( Deger & Ziegler, 2002 ; Ziegler, Kilian, & Deger, 1997 ). In a simple (delayed) reaction time (RT) paradigm, speakers were asked to produce syllable strings such as ‘dada’, ‘dadada’, and ‘daba’. A length effect (RT for ‘dadada’ > RT for ‘dada’) was interpreted as reflecting the additional time needed to initiate and ‘unpack’ an additional syllable motor subprogram from an articulatory buffer. A complexity effect (RT for ‘daba’ > RT for ‘dada’) was taken to reflect difficulty in assembling two syllables into a single motor program. Ziegler et al. (1997) reported a patient with supplementary motor cortex damage who presented with dysfluent speech but who produced no segmental substitutions or distortions. This patient demonstrated a length effect but not a complexity effect (unlike unimpaired speakers, who showed neither), suggesting that her impairment was in initiating or unpacking a sequence of syllables (regardless of the specific content of those syllables, i.e. intact assembly). In contrast, Deger and Ziegler (2002) reported that speakers with AOS demonstrated the opposite pattern: a complexity effect but not a length effect, suggesting that their impairment was in combining multiple syllables into a single program but not in initiating syllables within a sequence. In other words, together these two studies suggest a double dissociation between aspects of speech motor control, derived from the same task – a within-speech dissociation.

As another example, both behavioural and neural evidence suggests that the speech motor programming routines for producing low- versus high-frequency syllables are qualitatively different ( Aichert & Ziegler, 2004 ; Bürki, Cheneval, & Laganaro, 2015 ; Cholin, Dell, & Levelt, 2011 ). Yet both types of routines are considered part of the speech system.

In this light, a dissociation reported by Ziegler (2002) , one of the strongest pieces of evidence raised in support of a TDM, becomes less clear-cut. Ziegler reported a dissociation in speech rate between repeating a sentence containing a nonword (without rate instructions) and an alternating motion rate DDK task (with maximal rate instructions). Ziegler compared unimpaired speakers, speakers with AOS, and speakers with ataxic dysarthria matched on duration of a target syllable in the sentence repetition task. Unimpaired speakers and speakers with AOS had shorter syllable durations in DDK compared to sentence repetition, and speakers with ataxic dysarthria showed the reverse. Ziegler explained this pattern by suggesting that cerebellar pathology (as in ataxic dysarthria) affects the ability to use sensory feedback to form predictive, feedforward commands to perform a relatively novel syllable repetition task, whereas sentence repetition relies more on established feedforward commands and is thus less affected by cerebellar damage. Although Ziegler cast this dissociation in terms of different motor control systems (one speech and one nonspeech), another interpretation is in terms of feedback- versus feedforward-based control mechanisms, both of which play a role in speech motor control (e.g. Guenther et al., 2006 ; Hickok et al., 2011 ) – in other words, a within-speech dissociation.

The question here is essentially What is a system? and becomes one of granularity: At what grain size does a dissociation or difference indicate distinct systems, as opposed to components within a system (see also Folkins et al., 1995 )? Ziegler (2003b) argues that although macroscopically there may appear to be overlap in behaviour and neural substrates underlying speech and nonspeech oral motor behaviours, this is merely a matter of low resolution: a more microscopic view reveals differences. 8 However, it is not clear why a broad (and ill-defined) concept such as “speech” is the right grain size of microscopic resolution. Why do differences within speech tasks (an even higher resolution) not lead to stipulation of multiple speech motor control systems? Surely, this cannot be based on current methodological limitations (e.g. spatial resolution of neuroimaging techniques) but requires a principled justification.

According to the TDM, control processes are organised around task goals ( Bunton, 2008 ; Weismer, 2006 ; Ziegler, 2003a ). Thus, one might argue that high- and low-frequency syllables, or fast and slow speaking rates, share a similar goal (e.g. to produce an acoustic signal to communicate). However, this solution hinges on a rather vague definition of “goal” (see previous section) because at a finer grain size there are numerous differences in goals between tasks that ostensibly constitute speech. For example, the motor goals for syllables with fricatives are different from those for syllables with stops. Shaiman and Gracco (2002) reported that the compensatory response to unexpected perturbations differed depending on the target consonant, supporting the notion of task-specific functional synergies at a finer grain size. As another example, the task of consciously controlling speaking rate has a different goal than speaking for the purpose of communication, and may result in qualitative differences (e.g. Adams, Weismer, & Kent, 1993 ; Van Lancker Sidtis et al., 2012 ). Finally, recent neuroimaging research indicates that planning of syllable structure and planning of syllable sequences rely in part on distinct neural regions ( Bohland & Guenther, 2006 ; see also Ziegler et al., 1997 ), that vowels and consonants, and different types of consonants, have different neural representations ( Bouchard et al., 2013 ), and that high-frequency and novel syllables recruit different neural circuitry ( Bürki et al., 2015 ). Thus, each sound, in each context, has a different goal, or represents a different task. A radical consequence of TDM logic would be that each sound in each context – each utterance – is controlled by a different motor system, resulting in a potentially infinite number of systems ( Gick & Stavness, 2013 ; Riecker et al., 2005 ). Yet all such motor actions are nevertheless considered part of a single speech motor network (e.g. Ziegler & Ackermann, 2013 ). 
If motor control systems are defined by common goals, then one must define these common goals. Why are vowels and consonants, or fricatives and stops, or high- and low-frequency syllables, or speaking at a habitual versus maximal rate, controlled by one system (despite many differences in goals, kinematic patterns, acoustic consequences, neural underpinnings), and shushing someone or DDK tasks by a fundamentally distinct system? The delineation of a system appears to depend on the granularity of the definition of “goal”.

Another way to define a system might be to consider mechanisms that encompass a range of (micro-level) goals. For example, in some recent models (e.g. Guenther et al., 2006 ), vowels and consonants are produced by the same mechanisms (feedforward and feedback control), but the exact combination and micro-level goals may vary by target sound (e.g. greater contribution of feedforward control for rapid consonant gestures). Thus, a TDM might define goals at a grain size larger than individual sounds or syllables. Even here, however, the different components (feedforward and feedback systems), each associated with different neural circuitry, could dissociate (see Maas, Mailend, & Guenther, 2015 , for a single dissociation in AOS; Smith & Shadmehr, 2005 , for double dissociation in limb motor function).

The point here is that it is not clear how and where, at what grain size, to draw a line between different systems, and where to reject such lines despite differences and dissociations in kinematic, neural, or other aspects. If some dissociations merely reflect different components (or strategies, Adams et al., 1993 ; or modes, Tasko & McClean, 2004 ) within a single control system (e.g. fricatives vs. stops, frequent vs. infrequent syllables, habitual vs. maximal rate), then two implications follow. First, dissociations are compatible with the IM, according to which speech is decomposable and dissociations are best understood in terms of task properties. Second, the notion of a separate speech system does not rest on the logic and presence of dissociations, but on stipulation of tasks as reflecting speech or nonspeech. That is, the TDM disregards dissociations within “speech” tasks as evidence for distinct systems, yet assumes that dissociations with tasks designated as “nonspeech” indicate distinct control systems, even when those tasks share many properties with speech, such as DDK tasks (see also Ballard et al., 2003 ).

In short, dissociations and differences between tasks exist, but they do not require postulation of distinct motor control systems, nor that such a distinction is best cast in terms of a (poorly defined) speech/nonspeech difference. To make this case using the dissociation method, one first needs an explicit, consistent, and principled definition of speech , and a principled approach for deciding which dissociations matter. To my knowledge, no such definition or criteria have (yet) been articulated that can address the complications discussed above. Once such a definition and criteria are available, it will become possible to identify the neural regions involved in speech versus nonspeech tasks, and perhaps even to induce double dissociations with virtual neural lesions (e.g. with transcranial magnetic stimulation). Even in that case, however, proper controls are needed to establish that the dissociation is indeed best characterised as a categorical speech/nonspeech task distinction rather than as a task property distinction.

Emergence of dedicated systems

An important theoretical issue relevant to this debate relates to the emergence of a dedicated speech system, which has been argued to reside in the massive overlearning of speech skill ( Bunton, 2008 ; Ziegler, 2003a , b ; Ziegler & Ackermann, 2013 ). However, this raises a number of heretofore relatively ignored questions about how people acquire speech motor skill: Which system do speakers use before they reach the level of skill (in either a native or foreign language)? Initial attempts at speaking must be supported by a different motor control system. 9 How much experience or practice triggers emergence of the speech motor system, given that speech motor control develops over a protracted period ( Hoit, Hixon, Watson, & Morgan, 1990 ; Smith & Zelaznik, 2004 )? Which system controls speech at intermediate skill levels? Are novel or infrequent syllables (which by definition have not been overlearned) controlled by a novel volitional motor control system? Can there be experience-dependent improvements in skill within a system? If so, this would obviate the need to postulate a shift toward a fundamentally distinct system. It is well-established that increases in skill are associated with changes in underlying neural substrates (e.g. Kleim et al., 1998 ; Sakai et al., 2004 ). However, such findings do not necessitate postulation of a distinct system, or at least require criteria to distinguish between- versus within-system changes. These issues must be addressed if the notion of experience-dependent plasticity is to have explanatory value for the TDM.

Reference to principles of practice-dependent neural plasticity derived from research on motor skill learning ( Bunton, 2008 ; Ziegler & Ackermann, 2013 ) implies a belief that speech motor control shares fundamental organising principles with other motor skills, rather than speech motor control being subject to its own unique organising principles. Of note, many of the ideas in current models of speech motor control are similar to, or derived from, nonspeech motor domains, and thus provide continuity with a wider scientific literature ( Grimme et al., 2011 ; Hickok, 2014 ). For instance, the notions of a hybrid feedforward/feedback control architecture, internal models, motor planning in sensory space, competitive queuing for sequencing actions, and self-organisation via a babbling stage ( Bohland, Bullock, & Guenther, 2010 ; Guenther et al., 2006 ; Hickok et al., 2011 ) are not specific to speech but derive from the motor control literature ( Bullock, 2004 ; Bullock, Grossberg, & Guenther, 1993 ; Wolpert, Ghahramani, & Flanagan, 2001 ). Similarly, contrary to claims in the literature ( Ziegler, 2003b ), 10 motor equivalence and trading relations are not speech-specific phenomena ( Todorov & Jordan, 2002 ), nor are coarticulation ( Jordan, 1990 ), rhythmic organisation of sequential movements ( Sakai et al., 2004 ), or the notion of content-specific motor “chunks” that develop with practice ( Sakai et al., 2004 ; Sternberg et al., 1978 ; Verwey, 1996 ). 11

Of course, the fact that speech motor control may share principles with nonspeech motor control does not mean that speech and nonspeech motor control rely on the same system or overlapping systems. Stronger tests of whether speech and nonspeech tasks rely in part on shared control systems would require demonstration of an influence of one task on another, for example dual-task interference (e.g. Bailey & Dromey, 2015 ), priming/facilitation of one task by another, or transfer of learning across tasks ( Bunton, 2008 ; Weismer, 2006 ). 12 The logic behind the transfer-of-learning approach is that transfer would indicate improvement in some common, shared task component. For example, treatment of speech sounds can transfer to other instances of those sounds in untrained utterances, and to other similar speech sounds (e.g. Ballard et al., 2007 ). However, lack of transfer does not necessarily mean that the tasks rely on fundamentally distinct control systems, unless one accepts the notion of multiple speech control systems, because treatment of speech sounds does not transfer to all other speech sounds (e.g. Ballard et al., 2007 , showed transfer was constrained by manner class), or in some cases even to the same sound in different contexts (e.g. Rochet-Capellan et al., 2012 ).

On the whole, evidence for or against nonspeech-speech task influences is limited, and as argued above, this enterprise requires clear and consistent task definitions. One interesting hypothetical example was offered by Aichert and Ziegler (2013) , who argued that overlearned nonverbal expressions (e.g. mmm , shhh ) ‘can perhaps be used as overlearned oral movements to facilitate consonantal gestures’ (p. 1194). This suggests that transfer from nonspeech tasks to speech may be possible (in essence an endorsement of the IM), although no mechanisms for such transfer are articulated, nor easily conceived, for a TDM.

IMPLICATIONS

This debate has clear theoretical interest. However, there are also practical implications that follow from each view and one’s definition of speech. In a way, this debate is about the kinds of generalisations we can make ( Tasko & McClean, 2004 ), and how to study speech motor control. Below I discuss some methodological and clinical implications.

Methodological Implications

Even if there is a speech system that developed for, and is primarily used for, producing “typical” communicative speech, a legitimate question is whether speakers engage such a system in tasks that deviate in some respects from typical speech – and thus, whether we can study this system with tasks that are not typical communicative speech. Can or do people use (parts of) this system to perform other oral motor tasks, such as producing syllable-sized sounds with the vocal tract, with or without communicative function (e.g. m-hm ; DDK)?

Proponents of a TDM express skepticism in this regard (e.g. Bunton, 2008 ; Weismer, 2006 ; Ziegler & Ackermann, 2013 ). 13 The question Why not just study speech? has been posed multiple times in response to the potential infinite regress of making nonspeech tasks speech-like ( Bunton, 2008 ; Weismer, 2006 ). Although intended rhetorically, the question presumes that we know what speech is, and what it is not. As argued above, it is not clear that we do. Thus, a reasonable answer is Because we do not know what to study, or how . A boundary must be defined to establish tasks and methods from which generalisations about speech can be made.

What is the legitimate object of study? Naturalistic conversational speech is an obvious option (see Staiger & Ziegler, 2008 , for an excellent example). But limiting study to naturalistic conversation restricts options for controlled experimentation ( Xu, 2010 ). Is any experimentation a sufficient departure from typical speech to engage a fundamentally different system, and thus uninformative about speech motor control? What is the guiding principle that delineates speech from nonspeech motor control? If the goal is to fully understand speech motor control, then some experimentation will be required, which may involve tasks that some might consider nonspeech.

Although this answer is perhaps somewhat tongue-in-cheek, the issue is not trivial, because much of what we think we know about speech motor control and its neural underpinnings comes from tasks that are very different from naturalistic conversational speech. For instance, articulating words to a computer in response to pictures or written words lacks communicative intent (even a conversational partner). If this is not speech, then a large body of research on speech motor control and its disorders must be rejected as fundamentally uninformative about speech production. The literature on behavioural and neural aspects of speech motor control has relied extensively on tasks involving production of small sets of phrases or words – or nonwords – elicited through picture naming ( Maas, Gutiérrez, & Ballard, 2014 ; Mailend & Maas, 2013 ; Wunderlich & Ziegler, 2011 ), imitation of auditory models ( Aichert & Ziegler, 2004 ; Kim, Weismer, Kent, & Duffy, 2009 ; Smith & Zelaznik, 2004 ; Ziegler, 2002 ), reading ( Bunton & Weismer, 1994 ; Tsao & Weismer, 1997 ), memory recall ( Bohland & Guenther, 2006 ; Cholin et al., 2011 ; Deger & Ziegler, 2002 ; Maas, Robin, Wright, & Ballard, 2008 ; Sternberg et al., 1978 ), or rapid shadowing ( Peschke, Ziegler, Kappes, & Baumgaertner, 2009 ). Some experimental paradigms to study speech motor control involve learning novel, non-native sound sequences ( Moser et al., 2009 ; Segawa et al., 2015 ). In all these cases, the task is explicitly not to communicate but rather to produce the sound sequences requested (sometimes modeled) by the examiner. Do such tasks engage the speech motor control system or a novel oral motor control system? That is, can we draw conclusions about speech motor control from such tasks (cf. Staiger & Ziegler, 2008 )?

Moreover, experimental tasks often involve instructions or demands that deviate from typical speaking situations, such as speaking with a bite block or transducer ( Bunton & Weismer, 1994 ; Jacks, 2008 ), with instructions to be clear/loud/slow/fast ( Darling & Huber, 2011 ; Ghosh et al., 2010 ; Tsao & Weismer, 1997 ), imitating accents or individuals ( McGettigan et al., 2013 ), speaking with a focus on fast reaction time ( Deger & Ziegler, 2002 ; Mailend & Maas, 2013 ), speaking with experimentally altered feedback ( Houde & Jordan, 1998 ; Maas et al., 2015 ; Tremblay, Shiller, & Ostry, 2003 ; Villacorta, Perkell, & Guenther, 2007 ), repeating syllables without prosodic modulation in synchrony with a metronome ( Riecker et al., 2005 ), or speaking without sound ( Wildgruber et al., 1996 , 2001 ). There may or may not be differences between tasks with and without these demands ( Tasko & McClean, 2004 ), but absence of differences does not imply a shared control system (or that this is the speech motor control system) – nor do differences imply that people engage fundamentally different systems.

On the whole, most tasks used in speech production research are quite removed from their naturalistic communicative context and often involve specific instructions that induce a task goal different from typical speech. If such tasks engage different oral motor control systems, then they cannot in principle elucidate speech motor control. The rather sobering message in this case would be that we know very little about speech production at all. All current models of speech motor control are built on data from tasks that may not qualify as speech, and such models may therefore be considered models of nonspeech oral motor behaviour.

To be fair, proponents of a TDM utilise, and draw inferences about speech motor control from, decontextualised tasks ( Bunton, 2008 ; Bunton & Weismer, 1994 ; Deger & Ziegler, 2002 ; Tsao & Weismer, 1997 ; Wildgruber et al., 2001 ; Ziegler, 2002 ), suggesting that such tasks are in fact considered speech (although the relation to conversational speech is rarely addressed; see Tasko & McClean, 2004 , and Staiger & Ziegler, 2008 , for exceptions). However, notice that this implies acceptance of the decomposability of speech: communicative intent, semantic meaning, or acoustic signal are not necessary; maximal rate tasks and repetitive syllable production tasks can still be speech, etc. If such deviations from conversational speech are insufficient to posit separate control systems, then why are other tasks that involve some but not all components of typical speech, such as DDK, designated ‘nonspeech’ ( Bunton, 2008 : 275; Ziegler, 2003 : 20) or ‘quasi-speech’ ( Weismer, 2006 : 319)? Again, the distinction appears arbitrary and inconsistent.

Considering speech to involve a specialised system a priori may limit exploration of potentially relevant generalisations. As an example, Peter and Stoel-Gammon (2008) hypothesised that childhood apraxia of speech (CAS) might involve a central underlying timing deficit. They reported similar timing difficulties in matched speech and nonspeech (manual) tasks in children with CAS. Furthermore, timing accuracy was negatively correlated with the number of CAS diagnostic features. Although such correlational designs are suggestive rather than definitive, the point is that such possible generalised impairments may not come to light unless one looks for them beyond a predetermined narrow (ill-defined) task range.

In short, stipulation of ill-defined and inconsistent task categories complicates empirical study, as it is not clear which tasks are appropriate to study speech without veering into nonspeech territory, and may limit exploration of common underlying mechanisms. In contrast, the IM suggests that by examining systematic differences and similarities between a range of tasks with similar properties (regardless of whether they are designated “speech” tasks), we may begin to fully understand the many facets of speech motor control ( Ballard et al., 2003 , 2009 ; Tasko & McClean, 2004 ). That is, we ought to study both the parts and their interaction within the whole, in various combinations (including “typical” speech).

Clinical Implications

The two claims embodied in the TDM also have important clinical implications, both for assessment and for treatment. Regarding assessment, the TDM implies that no useful information about a motor speech impairment can be derived from using nonspeech tasks such as visuomotor tracking or DDK ( Ziegler, 2002 , 2003a ), as such tasks engage a different oral motor system. Proponents of a TDM do not deny the potential diagnostic value of tasks such as DDK for neurological purposes (e.g. cranial nerve examinations; Ziegler, 2003a ), but rather claim that such tasks do not have value for diagnosis or understanding of speech impairments ( Weismer, 2006 ; Ziegler, 2002 ). In other words, whatever function is affected by damage to such neural tissue (e.g. timing), this function is not relevant in the context of a speech task. According to the TDM, there is no overlap between the system that controls conversational speech and the system that controls articulation of speech sound sequences in a DDK task. In contrast, the IM suggests that carefully designed tasks with shared properties (e.g. DDK) can shed light on the nature of motor speech impairments, by examining the abilities and limitations of the oral motor system independently of linguistic input to this system ( Ballard et al., 2009 ).

Interestingly, DDK tasks are common in assessment protocols for motor speech disorders ( Duffy, 2005 ; Thoonen et al., 1999 ). In addition, much research continues to be conducted on DDK tasks ( Hurkmans et al., 2012 ; Icht & Ben-David, 2014 ). This may reflect in part ‘political considerations’ (e.g. the ease with which such tasks can be studied; Weismer, 2006 : 343), but often also a belief that such tasks are informative about speech ( Riecker et al., 2005 ). They allow for systematic, controlled manipulation of complexity ( Hurkmans et al., 2012 ) and relatively language-independent assessment of articulation abilities ( Icht & Ben-David, 2014 ), which may be important when assessing bilingual speakers or making cross-linguistic comparisons.

As an example, DDK tasks may be informative about the source of slowed speech rate (e.g. Wang, Kent, Duffy, Thomas, & Weismer, 2004 ). In comparing alternating motion rate (AMR) and conversational speech rate in speakers with dysarthria, Wang et al. (2004 : 79) noted that ‘For more severe subjects, the AMR syllable rate was quite similar to conversational syllable rate, perhaps indicating that speech motor capability was the limiting factor’ (italics mine). This quote suggests that the DDK task does capture some shared aspect, and that conversational speech rate is slowed because of speech motor control limitations rather than (for example) cognitive or linguistic limitations. If one were to only examine conversational speech rate, such alternative possible sources of slowing would be more difficult to disentangle.

Empirically, there is support for the utility of DDK tasks in differential diagnosis of speech disorders, in particular with respect to CAS. For example, to date the only prospectively validated diagnostic marker with adequate diagnostic sensitivity and specificity is a score derived from maximal performance tasks ( Thoonen et al., 1999 ). Murray et al. (2015) recently showed that CAS can be differentiated with high accuracy from other pediatric speech disorders using four measures obtained from two tasks, one of which was a DDK task. Thus, DDK tasks emerge across studies as among the most discriminative. From the TDM perspective, the interpretation would be that CAS also involves impairment of nonspeech oral motor control, which has nothing to do with the speech impairment – and therefore cannot be used as part of the justification for (particular) clinical services. In contrast, from the IM perspective this finding might suggest that the speech difficulties in CAS also surface in DDK tasks, and performance on these tasks may help make the case for specific interventions for CAS. 14

More generally, the strong claims embodied by the TDM require criteria that delineate “speech” to devise an assessment protocol with tasks that allow conclusions about speech impairments. The issues above are relevant in the clinical context as well: Is communicative intent necessary? Are imitative tasks sufficiently speech-like? Is production of nonwords informative about speech? Do instructions to alter rate change the task into a nonspeech task? These questions illustrate that each theoretical perspective has important implications for assessment, and that indeed ‘the details make all the difference’ ( Weismer, 2006 : 315).

Similar considerations arise for treatment. For example, if nonwords are not speech, then treatment for speech disorders should only use real word targets, since no transfer would be expected from nonwords, based on the specificity of learning ( Rochet-Capellan et al., 2012 ; Segawa et al., 2015 ). Yet some evidence suggests that generalisation from nonwords to real words occurs ( Maas et al., 2002 ; Schneider & Frens, 2005 ), and for some speakers may even exceed the generalisation obtained by targeting real words ( Gierut, Morrisette, & Ziemer, 2010 ). Such findings suggest that semantic meaning and communicative intent are not necessary conditions for speech (and thus can be removed for a somewhat decomposed behaviour that is still speech).

In addition, many therapeutic techniques alter the task from typical conversational speech into a more consciously controlled task, such as rate control ( Mauszycki & Wambaugh, 2008 ; Yorkston et al., 2007 ), focus on loud speech ( Ramig et al., 1995 ), visual models and mirrors ( Brendel & Ziegler, 2008 ; DeThorne et al., 2009 ), gestural or tactile cues ( Brendel & Ziegler, 2008 ; Dale & Hayden, 2013 ), imitation of tone sequences ( Brendel & Ziegler, 2008 ), visual biofeedback ( Preston et al., 2014 ), or implicit practice (without overt articulation; Davis, Farias, & Baynes, 2009 ). Does this mean that individuals operate in a “nonspeech mode” and therefore do not actually engage their speech motor control system? If so, then the justification for such techniques is unclear, because no transfer is expected to actual speech production (despite evidence of such transfer; Brendel & Ziegler, 2008 ; Davis et al., 2009 ; Preston et al., 2014 ). Perhaps the justification is that it does not matter whether we call the behaviour speech, as long as communication improves (by nonspeech means) and we do not expect improvement in speech production. If the goal is to improve speech production with treatment, and one stipulates that speech is a categorically distinct behaviour controlled by a separate system, then the question is what range of tasks and techniques can be considered legitimate and appropriate for this purpose.

Importantly, the foregoing discussion should not be construed as an endorsement of so-called nonspeech oral motor exercises (e.g. tongue push-ups) to improve speech production. There are many good arguments against this practice ( Clark, 2003 ; McCauley et al., 2009 ), and rejection of such practice does not require the assumption that speech is controlled by a separate motor control system, or that speech is holistic. Nonspeech oral motor exercises to improve speech function are contraindicated (in most cases) by both views, contrary to occasional suggestions otherwise ( Ziegler & Ackermann, 2013 ). Although the IM predicts that transfer between some nonspeech oral motor tasks and some aspects of speech production may occur, this view still predicts greater transfer from actual speech to speech, given the specificity of learning ( Rochet-Capellan et al., 2012 ). While some have argued that nonspeech motor behaviours may be a necessary precursor to speech treatment in some cases ( Robin, 1992 ), this does not necessarily follow from an IM. The claim that speech may share properties with other motor behaviours does not imply that practice on any such motor behaviour will therefore necessarily benefit speech production, much less that any such benefits would be greater than or equal to benefits from practising speech movements. The IM does not claim that a given nonspeech task uses all or only those components involved in speech production or vice versa. In fact, the central claim is that there is more or less overlap, depending on the degree of similarity between tasks. As such, greater transfer is expected from speech to speech than from nonspeech to speech – because of overlapping or shared properties, not because speech and nonspeech are controlled by categorically distinct systems.

CONCLUSIONS

Most researchers agree that speech is a special skill and that nonspeech oral motor exercises to improve speech production are contraindicated in clinical treatment. However, disagreement exists about whether or not a distinct, dedicated motor control system underlies speech production and whether speech is holistic or decomposable into primitives. A common view in the literature is the TDM, which holds that speech is holistic, categorically different from all other oral motor behaviours, and subserved by a special, separate motor control system.

This article highlighted several major challenges for this view, including the lack of an explicit definition of speech, difficulty delineating speech from nonspeech tasks, and inconsistent application of definitions and criteria. In addition, it was argued that dissociations, among the primary sources of evidence for a TDM, do not require interpretation in terms of distinct motor systems and also exist between speech tasks at a finer resolution, highlighting the lack of principled criteria for interpreting dissociations as within- or between-system differences. Further, several questions were raised surrounding the emergence of a dedicated speech motor control system. These are not trivial challenges, and they must be met for the notion of a distinct, speech-specific control system to be meaningful.

Acknowledging a gradient distinction, with overlapping properties between tasks, is not tantamount to the claim that tasks are the same, or controlled by a completely overlapping system, and does not mean that everything about typical conversational speech can be understood by studying simplified or artificial tasks such as DDK. However, it does amount to rejecting a categorical, discrete boundary and a holistic, indecomposable view of speech. Acknowledging the existence of speech-like behaviours (either explicitly or implicitly by using/endorsing certain experimental tasks to draw inferences about speech) suggests decomposability: Speech can be seen as a combination of properties, which may occur in different combinations in different motor tasks. This is the essence of the IM. Dissociations and differences may best be understood in terms of these properties rather than a stipulated, ill-defined distinction between speech and nonspeech. Our understanding of speech motor control, and motor control in general, may be enhanced if we can identify those properties, for example by comparing tasks with and without these properties (e.g. rate requirements, communicative intent; Ballard et al., 2003 ; Bunton & Weismer, 1994 ). There may be more agreement than is apparent in the literature, at least when examining the range of tasks used or cited to support a TDM, which include tasks that depart significantly from naturalistic communicative speech (e.g. without communicative intent, semantic content, syntactic structure, or even an acoustic signal).

This philosophical debate has methodological and clinical implications. If one defines speech as including only conversational speech for the task of communicating, then our methods and knowledge of speech motor control and its disorders are very limited. To the extent that clinicians and researchers rely on methods that deviate from conversational speech (e.g. word repetition, reading out loud, covert articulation, rate reduction techniques, visual biofeedback, shaping consonantal gestures from “nonspeech” gestures, DDK tasks), this either implies some degree of decomposability of speech or acceptance of multiple “speech” motor control systems, thus undermining the foundation of the TDM. Of course, regardless of whether departures from typical speaking situations in experimental or clinical investigations reflect the operation and processes of “the” speech motor control system or an integrative system, a clear justification for the use and interpretation of such tasks is needed. Finally, Weismer (2006 : 331) wrote ‘In the absence of a theoretically motivated, clear criterion of when a task is sufficiently speech-like to qualify as representative of control processes in speech production, the concept of “control overlap” has limited scientific, and hence clinical, utility’. I agree, and would add that the same holds for a TDM: In the absence of a theoretically motivated, clear criterion of when a task is sufficiently speech-like to qualify as representative of control processes in speech production, the concept of “task-specific motor control” has limited scientific, and hence clinical, utility.

Acknowledgments

This work was supported by NIH K01-DC010216. The content is solely the responsibility of the author and does not necessarily represent the official views of the National Institutes of Health. A version of this paper was presented at the 2016 Conference on Motor Speech (Newport Beach, CA, March 2016). I would like to thank Diane Bahr, Kirrie Ballard, Kate Bunton, Gayle DeDe, Gregg Lof, Marja-Liisa Mailend, Antje Mefferd, Don Robin, Anja Staiger, Wolfram Ziegler, attendees at the 2016 Conference on Motor Speech, two anonymous reviewers, and many others for insightful discussions over the years that have shaped the views expressed here.

1 These are not the only possible views ( Weismer, 2006 ), and each may represent a class of models. I focus on these two views, and these two specific claims, because they have been discussed relatively explicitly. Occasionally I will take liberties with stated positions to develop the broader discussion.

2 Similar debates about the existence of speech-specific systems versus a more general system occur in the speech perception literature (e.g. Liberman & Whalen, 2000 ; Holt & Lotto, 2008 ). The focus of the present paper is restricted to speech production however.

3 Moore and colleagues referred to variegated babbling as ‘prespeech’ behaviour ( Moore & Ruark, 1996 : 1036) and considered such babble to have no communicative intent (vocalisations were generated during self-directed play and judged to be ‘neither meaningful nor referential’; Moore & Ruark, 1996 : 1037). Although Moore and colleagues ( Moore & Ruark, 1996 ; Moore, Caulfield, & Green, 2001 ) have convincingly demonstrated significant kinetic and kinematic differences between first words and oral motor behaviours such as chewing, their work also shows considerable similarities between variegated babbling and first words (in fact, Moore et al., 2001 , grouped vocalizations, babbling, and ‘real’ speech into a single category for analysis given lack of differences).

4 Typically, the task instructions are to say pa or pataka (etc.), not make this movement pattern . That is, typically DDK tasks are presented as a speaking task.

5 No quantified and independent measures of such third variables have been proposed, to my knowledge (severity as operationalised in terms of speaking rate or intelligibility is not independent from speech). As a result, such third-variable explanations tend to be untestable.

6 For example, Ziegler (2003a : 29) refers to ‘integration of visual information with a subject’s body image’ as a non-motoric task aspect that differs between speech and imitating oral movements.

7 Caviness et al. (2006) explicitly define speech broadly as tasks involving simultaneous phonation and articulation, which includes sustained vowel production and reiterant speech, as well as two connected speech (reading) tasks. They reported differences between the two connected speech tasks.

8 Ziegler (2003b) wrote: ‘Thus, macroscopically overlapping functions are, on closer examination, broken up into specialised and segregated functions which are optimally tuned to their behavioural goals.’ (p. 101), and ‘At a low level of resolution the usual suspects, motor cortex, basal ganglia, cerebellum, and brainstem nuclei are implicated in most if not all of the behaviours at stake […]. Yet, at a higher level of resolution, the neural networks controlling motor functions turn out to be organised in a task-specific manner […]’ (p. 102).

9 I assume that initial attempts at speech are in fact considered speech. If the system that supports initial attempts is the speech system, then the origin of this system cannot be based on experience-dependent plasticity.

10 In discussing adaptive trading relations in producing rounded back vowels, Ziegler (2003b : 101) states: ‘Co-ordination here is clearly in the service of producing intelligible speech. […] the described organisational principle is speech-specific and is not useful for any other behaviour.’ I argue that the organisational principle is not speech-specific, only its application to a specific-speech pattern (rounded back vowel).

11 Hickok (2014 : 53): ‘To a first approximation, what may primarily distinguish between domains then – what distinguishes a linguistic system from a manual control system – is the representational bits that are plugged into those computational architectures.’

12 Although practice on a nonspeech task is unlikely to produce changes in speech intelligibility (e.g. Bunton, 2008 ), for proof-of-concept of an integrative system, it would be sufficient to show a change in (for example) a particular kinematic or acoustic parameter observed in a speech task, following practice on that parameter embedded in a nonspeech task. For example, does practice of a particular rhythmic pattern in the context of a human beatbox task result in greater accuracy/stability of that same rhythmic pattern in a speech task (e.g. sentence repetition)?

13 ‘There is no other natural motor activity except speech and song which utilizes the specific layout of this neural circuitry, and it is also hard to imagine any artificially designed nonspeech assessment or training task in the clinic which would specifically engage this particular network .’ ( Ziegler & Ackermann, 2013 : 59; italics mine).

14 To be clear, I am not advocating for relying exclusively on DDK-type tasks (or on any other single task) in assessment and diagnosis of speech disorders. See also Ballard et al. (2000 : 979–980): ‘ Although it is necessary to consider the impairment of AOS in the context of speech production tasks , also studying nonspeech behaviours has the potential to disambiguate which characteristics are a result of the underlying motor impairment and which are related to the interaction between the motor and linguistic systems.’ (italics mine)

Declaration of Interest

The author reports no conflicts of interest. The author alone is responsible for the content and writing of the paper.

  • Adams SG, Weismer G, Kent RD. Speaking rate and speech movement velocity profiles. Journal of Speech and Hearing Research. 1993; 36 :41–54. [ PubMed ] [ Google Scholar ]
  • Aichert I, Ziegler W. Syllable frequency and syllable structure in apraxia of speech. Brain and Language. 2004; 88 :148–159. [ PubMed ] [ Google Scholar ]
  • Aichert I, Ziegler W. Segments and syllables in the treatment of apraxia of speech: An investigation of learning and transfer effects. Aphasiology. 2013; 27 :1180–1199. [ Google Scholar ]
  • Bailey DJ, Dromey C. Bidirectional interference between speech and nonspeech tasks in younger, middle-aged, and older adults. Journal of Speech, Language, and Hearing Research. 2015; 58 :1637–1653. [ PubMed ] [ Google Scholar ]
  • Ballard KJ, Granier JP, Robin DA. Understanding the nature of apraxia of speech: Theory, analysis, and treatment. Aphasiology. 2000; 14 :969–995. [ Google Scholar ]
  • Ballard KJ, Maas E, Robin DA. Treating control of voicing in apraxia of speech with variable practice. Aphasiology. 2007; 21 :1195–1217. [ Google Scholar ]
  • Ballard KJ, Robin DA, Folkins JW. An integrative model of speech motor control: A response to Ziegler. Aphasiology. 2003; 17 :37–48. [ Google Scholar ]
  • Ballard KJ, Solomon NP, Robin DA, Moon JB, Folkins JW. Nonspeech assessment of the speech production mechanism. In: McNeil MR, editor. Clinical Management of Sensorimotor Speech Disorders. 2. New York – Stuttgart: Thieme; 2009. pp. 30–45. [ Google Scholar ]
  • Bizzozero I, Costato D, Della Sala S, Papagno C, Spinnler H, Venneri A. Upper and lower face apraxia: role of the right hemisphere. Brain. 2000; 123 :2213–2230. [ PubMed ] [ Google Scholar ]
  • Bohland JW, Bullock D, Guenther FH. Neural representations and mechanisms for the performance of simple speech sequences. Journal of Cognitive Neuroscience. 2010; 22 :1504–1529. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Bohland JW, Guenther FH. An fMRI investigation of syllable sequence production. NeuroImage. 2006; 32 :821–841. [ PubMed ] [ Google Scholar ]
  • Botha H, Duffy JR, Strand EA, Machulda MM, Whitwell JL, Josephs KA. Nonverbal oral apraxia in primary progressive aphasia and apraxia of speech. Neurology. 2014; 82 :1729–1735. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Bouchard KE, Mesgarani N, Johnson K, Chang EF. Functional organization of human sensorimotor cortex for speech articulation. Nature. 2013; 495 :327–332. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Brendel B, Ziegler W. Effectiveness of metrical pacing in the treatment of apraxia of speech. Aphasiology. 2008; 22 :77–102. [ Google Scholar ]
  • Bullock D. Adaptive neural models of queuing and timing in fluent action. Trends in Cognitive Sciences. 2004; 8 :426–433. [ PubMed ] [ Google Scholar ]
  • Bullock D, Grossberg S, Guenther FH. A self-organizing neural model of motor equivalent reaching and tool use by a multijoint arm. Journal of Cognitive Neuroscience. 1993; 5 :408–435. [ PubMed ] [ Google Scholar ]
  • Bunton K. Speech versus nonspeech: Different tasks, different neural organization. Seminars in Speech and Language. 2008; 29 :267–275. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Bunton K, Weismer G. Evaluation of a reiterant force-impulse task in the tongue. Journal of Speech and Hearing Research. 1994; 37 :1020–1031. [ PubMed ] [ Google Scholar ]
  • Bürki A, Cheneval PP, Laganaro M. Do speakers have access to a mental syllabary? ERP comparison of high frequency and novel syllable production. Brain and Language. 2015; 150 :90–102. [ PubMed ] [ Google Scholar ]
  • Caviness JN, Liss JM, Adler C, Evidente V. Analysis of high-frequency electroencephalographic-electromyographic coherence elicited by speech and oral nonspeech tasks in Parkinson’s disease. Journal of Speech, Language, and Hearing Research. 2006; 49 :424–438. [ PubMed ] [ Google Scholar ]
  • Chang SE, Kenney MK, Loucks TMJ, Poletto CJ, Ludlow CL. Common neural substrates support speech and non-speech vocal tract gestures. NeuroImage. 2009; 47 :314–325. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Cholin J, Dell GS, Levelt WJM. Planning and articulation in incremental word production: Syllable-frequency effects in English. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2011; 37 :109–122. [ PubMed ] [ Google Scholar ]
  • Clark HM. Neuromuscular treatments for speech and swallowing: A tutorial. American Journal of Speech-Language Pathology. 2003; 12 :400–415. [ PubMed ] [ Google Scholar ]
  • Coltheart M. What has functional neuroimaging told us about the mind (so far)? Cortex. 2006; 42 :323–331. [ PubMed ] [ Google Scholar ]
  • Dale PS, Hayden DA. Treating speech subsystems in childhood apraxia of speech with tactual input: The PROMPT approach. American Journal of Speech-Language Pathology. 2013; 22 :644–661. [ PubMed ] [ Google Scholar ]
  • Darling M, Huber JE. Changes to articulatory kinematics in response to loudness cues in individuals with Parkinson’s disease. Journal of Speech, Language, and Hearing Research. 2011; 54 :1247–1259. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Davis C, Farias D, Baynes K. Implicit phoneme manipulation for the treatment of apraxia of speech and co-occurring aphasia. Aphasiology. 2009; 23 :503–528. [ Google Scholar ]
  • Deger K, Ziegler W. Speech motor programming in apraxia of speech. Journal of Phonetics. 2002; 30 :321–335. [ Google Scholar ]
  • De Torcy T, Clouet A, Pillot-Loiseau C, Vaissière J, Brasnu D, Crevier-Buchman L. A video-fiberscopic study of laryngopharyngeal behaviour in the human beatbox . Logopedics Phoniatrics Vocology. 2014; 39 :38–48. [ PubMed ] [ Google Scholar ]
  • DeThorne LS, Johnson CJ, Walder L, Mahurin-Smith J. When “Simon Says” doesn’t work: Alternatives to imitation for facilitating early speech development. American Journal of Speech-Language Pathology. 2009; 18 :133–145. [ PubMed ] [ Google Scholar ]
  • Duffy JR. Motor Speech Disorders: Substrates, Differential Diagnosis, and Management. 2. St. Louis, MO: Mosby-Year Book, Inc; 2005. [ Google Scholar ]
  • Flege JE, Schirru C, MacKay IRA. Interaction between the native and second language phonetic subsystems. Speech Communication. 2003; 40 :467–491. [ Google Scholar ]
  • Folkins JW, Moon JB, Luschei ES, Robin DA, Tye-Murray N, Moll KL. What can nonspeech tasks tell us about speech motor disabilities? Journal of Phonetics. 1995; 23 :139–147. [ Google Scholar ]
  • Ghosh SS, Matthies ML, Maas E, Hanson A, Tiede M, Ménard L, … Perkell JS. An investigation of the relation between sibilant production and somatosensory and auditory acuity. Journal of the Acoustical Society of America. 2010; 128 (5):3079–3087. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Gick B, Bliss H, Michelson K, Radanov B. Articulation without acoustics: “Soundless” vowels in Oneida and Blackfoot. Journal of Phonetics. 2012; 40 :46–53. [ Google Scholar ]
  • Gick B, Stavness I. Modularizing speech. Frontiers in Psychology. 2013; 4 :977. doi: 10.3389/fpsyg.2013.00977. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Gierut JA, Morrisette ML, Ziemer SM. Nonwords and generalization in children with phonological disorders. American Journal of Speech-Language Pathology. 2010; 19 :167–177. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Grimme B, Fuchs S, Perrier P, Schöner G. Limb versus speech motor control: A conceptual review. Motor Control. 2011; 15 :5–33. [ PubMed ] [ Google Scholar ]
  • Guenther FH, Ghosh SS, Tourville JA. Neural modeling and imaging of the cortical interactions underlying syllable production. Brain and Language. 2006; 96 :280–301. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Hickok G. Towards an integrated psycholinguistic, neurolinguistic, sensorimotor framework for speech production. Language, Cognition, and Neuroscience. 2014; 29 :52–59. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Hickok G, Houde J, Rong F. Sensorimotor integration in speech processing: Computational basis and neural organization. Neuron. 2011; 69 :407–422. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Hoit JD, Hixon TJ, Watson PJ, Morgan WJ. Speech breathing in children and adolescents. Journal of Speech and Hearing Research. 1990; 33 :51–69. [ PubMed ] [ Google Scholar ]
  • Holt LL, Lotto AJ. Speech perception within an auditory cognitive science framework. Current Directions in Psychological Science. 2008; 17 :42–46. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Houde JF, Jordan MI. Sensorimotor adaptation in speech production. Science. 1998; 279 (5354):1213–1216. [ PubMed ] [ Google Scholar ]
  • Hurkmans J, Jonkers R, Boonstra AM, Stewart RE, Reinders-Messelink HA. Assessing the treatment effects in apraxia of speech: Introduction and evaluation of the Modified Diadochokinesis Test. International Journal of Language and Communication Disorders. 2012; 47 :427–436. [ PubMed ] [ Google Scholar ]
  • Icht M, Ben-David BM. Oral-diadochokinesis rates across languages: English and Hebrew norms. Journal of Communication Disorders. 2014; 48 :27–37. [ PubMed ] [ Google Scholar ]
  • Jacks A. Bite block vowel production in apraxia of speech. Journal of Speech, Language, and Hearing Research. 2008; 51 :898–913. [ PubMed ] [ Google Scholar ]
  • Jordan MI. Motor learning and the degrees of freedom problem. In: Jeannerod M, editor. Attention and Performance. XIII. Hillsdale, NJ: Erlbaum; 1990. pp. 796–836. [ Google Scholar ]
  • Kent RD. Nonspeech oral movements and oral motor disorders: A narrative review. American Journal of Speech-Language Pathology. 2015; 24 :763–789. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Kim Y, Weismer G, Kent RD, Duffy JR. Statistical models of F2 slope in relation to severity of dysarthria. Folia Phoniatrica et Logopaedica. 2009; 61 :329–335. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Kleim JA, Barbay S, Nudo RJ. Functional reorganization of the rat motor cortex Following motor skill learning. Journal of Neurophysiology. 1998; 80 :3321–3325. [ PubMed ] [ Google Scholar ]
  • Kramer JH, Delis DC, Nakada T. Buccofacial apraxia without aphasia due to a right parietal lesion. Annals of Neurology. 1985; 18 :512–514. [ PubMed ] [ Google Scholar ]
  • Liberman AM, Whalen DH. On the relation of speech to language. Trends in Cognitive Sciences. 2000; 4 (5):187–196. [ PubMed ] [ Google Scholar ]
  • Maas E, Barlow J, Robin D, Shapiro L. Treatment of sound errors in aphasia and apraxia of speech: Effects of phonological complexity. Aphasiology. 2002; 16 :609–622. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Maas E, Gutiérrez K, Ballard KJ. Phonological encoding in apraxia of speech and aphasia. Aphasiology. 2014; 28 :25–48. [ PMC free article ] [ PubMed ] [ Google Scholar ]
Childhood apraxia of speech

Childhood apraxia of speech (CAS) is a rare speech disorder. Children with this disorder have trouble controlling their lips, jaws and tongues when speaking.

In CAS, the brain has trouble planning for speech movement. The brain isn't able to properly direct the movements needed for speech. The speech muscles aren't weak, but the muscles don't form words the right way.

To speak correctly, the brain has to make plans that tell the speech muscles how to move the lips, jaw and tongue. The movements usually result in accurate sounds and words spoken at the proper speed and rhythm. CAS affects this process.

CAS is often treated with speech therapy. During speech therapy, a speech-language pathologist teaches the child to practice the correct way to say words, syllables and phrases.

Children with childhood apraxia of speech (CAS) may have a variety of speech symptoms. Symptoms vary depending on a child's age and the severity of the speech problems.

CAS can result in:

  • Babbling less or making fewer vocal sounds than is typical between the ages of 7 and 12 months.
  • Speaking first words late, typically after 12 to 18 months of age.
  • Using a limited number of consonants and vowels.
  • Often leaving out sounds when speaking.
  • Using speech that is hard to understand.

These symptoms are usually noticed between ages 18 months and 2 years and may indicate suspected CAS, meaning the child may have this speech disorder. The child's speech development should be watched to determine whether therapy should begin.

Children usually produce more speech between ages 2 and 4. Signs that may indicate CAS include:

  • Vowel and consonant distortions.
  • Pauses between syllables or words.
  • Voicing errors, such as "pie" sounding like "bye."

Many children with CAS have trouble getting their jaws, lips and tongues to the correct positions to make a sound. They also may have a hard time moving smoothly to the next sound.

Many children with CAS also have language problems, such as reduced vocabulary or trouble with word order.

Some symptoms may be unique to children with CAS, which helps to make a diagnosis. However, some symptoms of CAS are also symptoms of other types of speech or language disorders. It's hard to diagnose CAS if a child has only symptoms that are found both in CAS and in other disorders.

Some characteristics, sometimes called markers, help distinguish CAS from other types of speech disorders. Those associated with CAS include:

  • Trouble moving smoothly from one sound, syllable or word to another.
  • Groping movements with the jaw, lips or tongue to try to make the correct movement for speech sounds.
  • Vowel distortions, such as trying to use the correct vowel but saying it incorrectly.
  • Using the wrong stress in a word, such as pronouncing "banana" as "BUH-nan-uh" instead of "buh-NAN-uh."
  • Using equal emphasis on all syllables, such as saying "BUH-NAN-UH."
  • Separation of syllables, such as putting a pause or gap between syllables.
  • Inconsistency, such as making different errors when trying to say the same word a second time.
  • Having a hard time imitating simple words.
  • Voicing errors, such as saying "down" instead of "town."

Other speech disorders sometimes confused with CAS

Some speech sound disorders often get confused with CAS because some of the symptoms may overlap. These speech sound disorders include articulation disorders, phonological disorders and dysarthria.

A child with an articulation or phonological disorder has trouble learning how to make and use specific sounds. Unlike in CAS, the child doesn't have trouble planning or coordinating the movements to speak. Articulation and phonological disorders are more common than CAS.

Articulation or phonological speech errors may include:

  • Substituting sounds. The child might say "fum" instead of "thumb," "wabbit" instead of "rabbit" or "tup" instead of "cup."
  • Leaving out final consonants. A child with CAS might say "duh" instead of "duck" or "uh" instead of "up."
  • Stopping the airstream. The child might say "tun" instead of "sun" or "doo" instead of "zoo."
  • Simplifying sound combinations. The child might say "ting" instead of "string" or "fog" instead of "frog."

Dysarthria is a speech disorder that occurs because the speech muscles are weak. Making speech sounds is hard because the speech muscles can't move as far, as quickly or as strongly as they do during typical speech. People with dysarthria may also have a hoarse, soft or even strained voice. Or they may have slurred or slow speech.

Dysarthria is often easier to identify than CAS. However, when dysarthria is caused by damage to areas of the brain that affect coordination, it can be hard to determine the differences between CAS and dysarthria.

Childhood apraxia of speech (CAS) has a number of possible causes. But often a cause can't be determined. There usually isn't an observable problem in the brain of a child with CAS.

However, CAS can be the result of brain conditions or injury. These may include a stroke, infections or traumatic brain injury.

CAS also may occur as a symptom of a genetic disorder, syndrome or metabolic condition.

CAS is sometimes referred to as developmental apraxia. But children with CAS don't make typical developmental sound errors, and they don't grow out of CAS. This is unlike children with delayed speech or developmental disorders, who typically follow patterns in speech and sound development but at a slower pace than usual.

Risk factors

Changes in the FOXP2 gene appear to increase the risk of childhood apraxia of speech (CAS) and other speech and language disorders. The FOXP2 gene may be involved in how certain nerves and pathways in the brain develop. Researchers continue to study how changes in the FOXP2 gene may affect motor coordination and speech and language processing in the brain. Other genes also may impact motor speech development.

Complications

Many children with childhood apraxia of speech (CAS) have other problems that affect their ability to communicate. These problems aren't due to CAS, but they may be seen along with CAS.

Symptoms or problems that are often present along with CAS include:

  • Delayed language. This may include trouble understanding speech, reduced vocabulary, or not using correct grammar when putting words together in a phrase or sentence.
  • Delays in intellectual and motor development and problems with reading, spelling and writing.
  • Trouble with gross and fine motor movement skills or coordination.
  • Trouble using communication in social interactions.

Diagnosing and treating childhood apraxia of speech at an early stage may reduce the risk of the problem persisting long term. If you notice any speech problems, have a speech-language pathologist evaluate your child as soon as possible.


Vocabulary-Building Activities for Young Students

Early elementary teachers can use these fun activities to help make vocabulary lessons accessible for all of their students.

Vocabulary building supports the development of background knowledge, which is crucial for reading comprehension. When I talk about vocabulary, I don’t mean simply giving the definition of a tricky word we might come across in a story. Rather, there is an intentionality and process behind my teaching of vocabulary. Here is how I make vocabulary lessons accessible to all of the kids in my class.

Introducing New Vocabulary Words

I like to introduce new words by having students hear them orally first. Sometimes I use Sesame Street: Word on the Street videos. These videos are so fun! Catchy songs, children, celebrities, and your favorite Sesame Street characters show, don’t just tell, what different words mean. For example, in the video for the word courteous , characters act out “giving up your chair” and “opening a door for someone.” In addition, some of the videos show what the word is not, which is also important. In the same video, Grover mixes up curtsy for courteous .

To further explore the meaning of the word, I write the word on the board and ask these questions:

1. What do you notice/wonder?

2. Have you heard this word before? If yes, where?

3. What does the word mean? How do you know? (This is to get them to look at the makeup of the word, any prefixes or suffixes, etc.)

I then use the word in a sentence orally to assist with the meaning—for example, “Ms. Jones was very courteous when she held the door for the entire second-grade class.”

After the words are set up in this way, these activities help students develop a deeper understanding of the words.

Engaging vocabulary activities accessible to all kids

Interactive/Virtual Word Wall: This activity combines language (via audio) with visuals and can be easily displayed for the whole class on the smart board or individually in Google Classroom. Each picture has a corresponding explanation or definition that I prerecord. This gives students independent access to hearing how the word is pronounced and what the word means. I have also had students record the explanations in their first language for their peers.

Vocabulary Journals: Each student has a journal where words are written and illustrated. Journals can be a stack of stapled paper (much more affordable) or a purchased journal. I am a big believer in the power of handwriting, so the paper needs to have spaces/lines that support correct letter formation.

We clap out the word, spell it, and talk about the part of speech. Students write the word in their journal and illustrate it. This is also a good time to squeeze in explicit instruction in morphology (the study of word structure) by looking at the letters, highlighting suffixes or prefixes, and discussing the origin of the word. This activity can easily be differentiated as needed. For example, have the full definition prewritten on paper that can be pasted into the journal. Although simple, this activity is powerful and effective nonetheless.

Illustration/Sentence: This activity is another way to incorporate both handwriting and vocabulary because it is completed on appropriately lined paper. Once I explicitly introduce the word by reviewing the spelling and definition, and by modeling how to use the word in a sentence, students have the opportunity to create their own sentence and illustration. This activity could also be scaffolded by providing a prewritten sentence frame, where students could fill in the blank. For an extension, students could be asked to expand on their sentences by writing complex sentences and/or writing multiple sentences.   

Which One Doesn’t Belong?: Give students a group of words, and have them decide which one doesn’t belong and why. Sesame Street also has videos of this activity. They are simple and use objects, not words, but I found they were a good way to model for my kids what “Which one doesn’t belong?” means. This is such a great way to promote oral language. I have seen this activity push kids to reason and explain their choices to one another. Differentiate this activity by adding pictures to the words if needed, as well as by limiting the number of words in each group.

Guess My Word: This activity works best for words that have already been introduced so that they are familiar. This is especially true if a picture is not included with the word, because for our beginner readers these words are not decodable (remember, they are our Tier 2 and 3 words). Give students a stack of index cards with a vocabulary word written on each one. Cards are face down. Students take turns selecting a card and giving the group clues about what the word means. For example, a clue for the word courteous might be “holding the door for someone.” This is a great way to build oral language among peers while building vocabulary. This activity could be differentiated by adding a picture to the word card.

Fill-in Passages/Matching: This activity includes word cards and a passage from a text we read in class. The students are given a list of vocabulary words. They write one word on each card (all cards can be laminated to make it easier to wipe off). Next they place the card where it belongs in the passage or next to the definition. This activity could be differentiated by having the words prewritten and/or by limiting the number of sentences for them to complete.
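For teachers who prepare these gapped passages on a computer, the blanking step is easy to automate. The sketch below is a hypothetical helper (the `make_cloze` function and the sample passage are my own illustration, not part of any particular classroom tool): it replaces each vocabulary word in a passage with a numbered blank and keeps an answer key.

```python
import re

def make_cloze(passage, vocab_words):
    """Replace each vocabulary word with a numbered blank.

    Returns the gapped passage and an answer key mapping
    blank numbers to the removed words.
    """
    answer_key = {}
    gapped = passage
    for i, word in enumerate(vocab_words, start=1):
        # \b word boundaries avoid blanking substrings of longer words
        pattern = re.compile(r"\b" + re.escape(word) + r"\b", re.IGNORECASE)
        gapped, count = pattern.subn(f"____({i})____", gapped, count=1)
        if count:
            answer_key[i] = word
    return gapped, answer_key

passage = "Grover was courteous when he held the door, a very polite gesture."
gapped, key = make_cloze(passage, ["courteous", "polite"])
print(gapped)
print(key)
```

The numbered blanks pair naturally with the laminated word cards, and the answer key doubles as the teacher's copy.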

Word Origin Dictionary: I cannot say enough about the book Once Upon a Word! This is such an engaging way for students to learn about word origins, etymology, and definitions. I make it a game by having a student pull a random letter tile from a bag of mixed letters. Then they flip to that section of the book to choose a word to learn all about. For fun, I supply whiteboards and markers for them. If there is time, they can transfer their writing to paper.

Vocabulary-building Outcomes

To my surprise, implementing these simple vocabulary-building activities has yielded impressive results. Students' background knowledge has broadened: they discuss the topics they are learning about and use vocabulary words outside the classroom. I have also seen an improvement in writing ability; students write more because they have knowledge about the topics they are asked to write about. Learning sticks because it is meaningful!

IMAGES

  1. 😊 Speech and oral communication ppt. 12 Principles of Effective Oral

    definition of oral speech

  2. Presentation speech

    definition of oral speech

  3. What is Oral Communication? Definitions, Importance, Methods, Types

    definition of oral speech

  4. Oral language: what is it and why does it matter so much for school

    definition of oral speech

  5. Types of Speech Context

    definition of oral speech

  6. Types of speeches according to delivery| Manuscript reading & Memorized Speeches| Oral Communication

    definition of oral speech

VIDEO

  1. Unit 3 Oral Speech

  2. Chapter 9 Oral speech

  3. Oral Speech in Speech Communication (Advocacy Speech)

  4. ELC590 ORAL SPEECH

  5. Preparing An Oral Speech

  6. SA#2 Oral Speech

COMMENTS

  1. What Is Speech? What Is Language?

    Speech is how we say sounds and words. Speech includes: How we make speech sounds using the mouth, lips, and tongue. For example, we need to be able to say the "r" sound to say "rabbit" instead of "wabbit.". How we use our vocal folds and breath to make sounds. Our voice can be loud or soft or high- or low-pitched.

  2. Speech

    Speech is a human vocal communication using language.Each language uses phonetic combinations of vowel and consonant sounds that form the sound of its words (that is, all English words sound different from all French words, even if they are the same word, e.g., "role" or "hotel"), and using those words in their semantic character as words in the lexicon of a language according to the syntactic ...

  3. SPEECH Definition & Meaning

    Speech definition: the faculty or power of speaking; oral communication; ability to express one's thoughts and emotions by speech sounds and gesture. See examples of SPEECH used in a sentence.

  4. Oral Communication Definition, Skills & Examples

    Oral communication is the verbal transmission of information and ideas used regularly in many different fields. For example, a student may deliver an oral presentation to their peers, while making ...

  5. Speech

    Human speech is served by a bellows-like respiratory activator, which furnishes the driving energy in the form of an airstream; a phonating sound generator in the larynx (low in the throat) to transform the energy; a sound-molding resonator in the pharynx (higher in the throat), where the individual voice pattern is shaped; and a speech-forming articulator in the oral cavity ().

  6. Oral Definition & Meaning

    oral: [adjective] uttered by the mouth or in words : spoken. using speech or the lips especially in teaching the deaf.

  7. Oral communication

    oral communication. Human interaction through the use of speech, or spoken messages. In common usage loosely referred to as verbal communication, particularly face-to-face interaction, but more strictly including mediated use of the spoken word (e.g. a telephone conversation), where, in addition to spoken words, there are still also vocal cues.

  8. Oral Language

    Oral communication is more than just speech. It involves expressing ideas, feelings, information, and other things that employ the voice, like poetry or music, verbally.Because so much of human life is dominated by speech and verbal communication, it would be difficult to fully express oneself without an oral language. Language involves words, their pronunciations, and the various ways of ...

  9. How to prepare and deliver an effective oral presentation

    Delivery. It is important to dress appropriately, stand up straight, and project your voice towards the back of the room. Practise using a microphone, or any other presentation aids, in advance. If you don't have your own presenting style, think of the style of inspirational scientific speakers you have seen and imitate it.

  10. SPEECH

    SPEECH meaning: 1. the ability to talk, the activity of talking, or a piece of spoken language: 2. the way a…. Learn more.

  11. Orality (Communication)

    Orality is the use of speech rather than writing as a means of communication, especially in communities where the tools of literacy are unfamiliar to the majority of the population. Modern interdisciplinary studies in the history and nature of orality were initiated by theorists in the "Toronto school," among them Harold Innis, Marshall McLuhan ...

  12. Speech Definition & Meaning

    speech: [noun] the communication or expression of thoughts in spoken words. exchange of spoken words : conversation.

  13. ORAL Definition & Meaning

    Oral definition: uttered by the mouth; spoken. See examples of ORAL used in a sentence.

  14. What Is Oral Language?

    Oral language is the system through which we use spoken words to express knowledge, ideas, and feelings. Developing ELs' oral language, then, means developing the skills and knowledge that go into listening and speaking—all of which have a strong relationship to reading comprehension and to writing. Oral language is made up of at least five ...

  15. oral adjective

    Synonyms spoken spoken oral vocal These words all describe producing language using the voice, rather than writing. spoken (of language) produced using the voice; said rather than written:. an exam in spoken English; oral [usually before noun] spoken rather than written:. There will be a test of both oral and written French. spoken or oral? Both of these words can be used to refer to language ...

  16. Types & Examples of Oral Communication

    Examples of oral communication are conversations with friends, family or colleagues, presentations and speeches. Oral communication helps to build trust and reliability. The process of oral communication is more effective than an email or a text message. For important and sensitive conversations—such as salary negotiations and even conflict ...

  17. What is Oral Communication? Definitions, Importance, Methods, Types

    Oral communication implies communication through the mouth. It includes individuals conversing with each other, be it direct conversation or telephonic conversation. Speeches, presentations, and discussions are all forms of oral communication.. Oral communication is generally recommended when the communication matter is of a temporary kind or where a direct interaction is required.

  18. Oral communication

    oral communication: 1 n (language) communication by word of mouth Synonyms: language , speech , speech communication , spoken communication , spoken language , voice communication Examples: Strategic Arms Limitation Talks negotiations between the United States and the Union of Soviet Socialist Republics opened in 1969 in Helsinki designed to ...

  19. 13 Main Types of Speeches (With Examples and Tips)

    Informative speech. Informative speeches aim to educate an audience on a particular topic or message. Unlike demonstrative speeches, they don't use visual aids. They do, however, use facts, data and statistics to help audiences grasp a concept. These facts and statistics help back any claims or assertions you make.

  20. Delivery

    The main purpose of delivery is to enhance, not distract from, the message. In order to help you avoid distracting from your message, we've created a document about what not to do while delivering a speech. We consider several aspects of delivery: controlling speech anxiety, vocal variety, body language, and practice.

  21. SPEECH

    SPEECH definition: 1. the ability to talk, the activity of talking, or a piece of spoken language; 2. the way a…

  22. Speech and nonspeech: What are we talking about?

    However, assuming clear, consistent, and agreed-upon task definitions can be formulated, two considerations limit the value of dissociations to distinguish speech from nonspeech motor control: (1) dissociations need not reflect motor system distinctions, and (2) they also exist between different speech tasks.

  23. Articulation

    articulation, in phonetics, a configuration of the vocal tract (the larynx and the pharyngeal, oral, and nasal cavities) resulting from the positioning of the mobile organs of the vocal tract (e.g., tongue) relative to other parts of the vocal tract that may be rigid (e.g., hard palate). This configuration modifies an airstream to produce the sounds of speech.

  24. The role of oral language in the dialogic primary classroom

    The definitions of this type of pedagogy and close analysis of the interactions between teachers and children within it have ... The role of oral language is perhaps more implicit than explicit in these older theories, but at the heart of a pedagogy that values the sharing of ideas and collaborative sense-making lies the means by which we ...

  25. Childhood apraxia of speech

    Childhood apraxia of speech (CAS) is a rare speech disorder. Children with this disorder have trouble controlling their lips, jaws and tongues when speaking. In CAS, the brain has trouble planning for speech movement. The brain isn't able to properly direct the movements needed for speech. The speech muscles aren't weak, but the muscles don't ...

  26. Vocabulary Activities for Young Students

    This is a great way to build oral language among peers while also developing vocabulary. This activity could be differentiated by adding a picture to the word card. Fill-in Passages/Matching: This activity includes word cards and a passage from a text read in class. The students are given a list of vocabulary words.