You Don't Say
We have looked at alphabets, manuscripts, printing, and coding, but we have not looked at the earliest and possibly the most important way we move information: speech. Our ability to push air from our lungs through our vocal cords, palate, tongue, and lips and turn it into communication is uniquely human. To be sure, other animals can make sounds that convey some information, but it is almost all at an emotional level (fear, anger, satisfaction) and deals only with the present. We can turn complex thoughts into words that deal not only with the present but also with what has happened and what might happen, and speech allows us to move those ideas from one brain to another. For almost all of human history, that was the only way we could do it.
Determining exactly how speech originated is impossible. Starting in the 1950s, Noam Chomsky argued that the human language faculty is innate, hypothesizing that language emerged 100,000 years or so ago when one of our lucky ancestors spontaneously developed a language gene. Chomsky built a career on that theory, but most of those who study linguistics now lean toward language having evolved gradually over something more like a million years, dating as far back as Homo erectus. That said, the search for a genetic basis for speech remains active.
Proof for any theory of how speech started is hard to come by, since there is no direct record going back that far. Unlike with writing, we cannot study language development in societies that do not talk, because there aren't any: as far as we know, there has never been a culture without speech. Indeed, something like 7,000 languages are currently in existence, and many more than that have gone extinct.
We can learn something from the way children acquire language, since no one is born talking. Starting at age two, children learn an average of two to four new words a day and have a vocabulary of roughly 10,000 words by age five. They progress from simple names for things, to words for what things do, to words for the characteristics of objects and activities, and finally to the complex grammar that allows a handful of sounds (about forty in English) to express an infinite number of thoughts.
Where in the brain that learning takes place is not completely clear, although we do know some things. In 1861 the French physician Paul Broca encountered Louis Victor Leborgne at the Hôpital Bicêtre in Paris. Leborgne's nickname, "Tan," came from the fact that it was the only syllable he could utter. When Tan died, Broca obtained his brain and found a specific lesion at the base of the left frontal lobe, in an area that now carries Broca's name. [1]
In 1874, the German neurologist Carl Wernicke described a different speech deficit with a different anatomy. His patient could speak fluently but could not understand a word of what was said to him. Broca's patient could not speak but could understand; Wernicke's could speak but could not understand. Many other aphasias have since been described, but all are associated with lesions around the Sylvian fissure, which divides the temporal lobe from the frontal and parietal lobes, and virtually all are on the left side.
The right-left thing is interesting. Roughly 90 percent of humans, whatever their era or culture, are right-handed, and in almost all right-handed people speech comes from the left peri-Sylvian region. It does so in about 70 percent of left-handers as well. [2] Language does not even have to be verbal to be lateralized: deaf people with left-sided brain damage exhibit deficits in signing identical to those of spoken-language aphasia.
One would think that anything as important as handedness or speech would have been explained, but that is not the case. At least 60 species of animals exhibit handedness. Chimpanzees exhibit clear right-handedness in their tool use. Human fetuses as young as 15 weeks of gestation show a preference for right thumb sucking, and infants as young as two months old preferentially respond to speech from their right side. Left-brain communication is not unique to humans. Zebra finches have to be taught song patterns, usually by their fathers. New chicks babble like human infants until they learn their songs, and their ability to sing is lateralized to a specific area in their left hemispheres. Handedness and left dominance for communication seem to be genetically programmed across species, and no one knows why. It might make more evolutionary sense to spread something that important around so that a single injury could not take it away, but that is not how it works.
One more thing: when we speak or listen, we do not do it one word at a time. We become fluent because we use our previous experience with language to fill in gaps where sounds are unclear and to predict what will come next before we say it or hear it. Does that sound familiar? It should, because that is exactly how AI's large language models create answers. They take a word or part of a word and find the most likely next sequence based on the text on which the model was trained. In that sense, generative AI acts just like a brain. Well, like a left brain, anyway.
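To make that prediction idea concrete, here is a minimal sketch in Python of next-word prediction using a toy bigram model. Everything in it is invented for illustration: the tiny corpus, the counting scheme, and the predict_next function. A real large language model works on subword tokens and replaces these simple counts with billions of learned weights, but the underlying move (pick the likeliest continuation given what came before) is the same.

    from collections import Counter, defaultdict

    # A toy "training text" (invented for this illustration; real models
    # train on vast corpora).
    corpus = "the brain predicts the next word and the ear predicts the next sound".split()

    # Count how often each word follows each other word: a bigram model,
    # a drastic simplification of a trained language model.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        """Return the most frequent continuation seen in the corpus."""
        counts = following.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))       # -> 'next' (seen twice, vs. 'brain' and 'ear' once each)
    print(predict_next("predicts"))  # -> 'the'

Note how the model "fills in" from experience, just as the essay describes: it has no idea what a word means, only which continuations it has encountered most often.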
References:
Chomsky, Noam, Language and Mind. New York: Harcourt Brace Jovanovich, 1972.
Everett, Daniel L., How Language Began: The Story of Humanity’s Greatest Invention. New York: Liveright Publishing Corporation, 2017.
Geschwind, Norman, Selected Papers on Language and the Brain. Boston: D. Reidel Publishing Company, 1974.
Pinker, Steven, The Language Instinct: How the Mind Creates Language. New York: Harper Perennial Classics, 1994.
[1] In 1836 French physician Marc Dax may have related speech abnormalities to left brain damage, but his observation, if it was made, was largely ignored.
[2] If the left hemisphere is damaged or surgically removed before the patient is about six years old, however, speech can develop from the right side.



