Perri Klass in the NYT Science Times on October 11th, on “Hearing Bilingual: How Babies Sort Out Language”, reports on research on bilingual babies by Patricia Kuhl (University of Washington), Janet Werker (University of British Columbia), and Ellen Bialystok (York University in Toronto), focusing on Kuhl’s work. The background for the current research is this well-known finding about the development of “phonemic hearing” in monolinguals:
… the researchers found that at 6 months, the monolingual infants could discriminate between phonetic sounds, whether they were uttered in the language they were used to hearing or in another language not spoken in their homes. By 10 to 12 months, however, monolingual babies were no longer detecting sounds in the second language, only in the language they usually heard.
Here, Klass is struggling to talk about an important technical concept — phonemic distinctness — but without using the technical terminology or explaining the concept in a way the reader can understand correctly.
(I’m writing here only about these background results, not the research on how things work for bilingual infants.)
Klass is hobbled here by the fact that her readers can’t be expected to know anything about the relevant concepts and terms of linguistics, this despite the fact that they were developed in the last quarter of the 19th century, were widely disseminated in the first half of the 20th, and have been part of virtually every introduction to linguistics since then. The efforts of linguists seem to have been for nought.
The problem starts with “phonetic sounds”. I’m not at all sure what readers will make of this (though well-disposed and informed readers might understand it to refer to language sounds rather than other noises), but in the context of a discussion of language, “phonetic sounds” are just sounds (or more technically phones).
Pressing on: the passage above reports that at 6 months (monolingual) babies exhibit phonetic discrimination, regardless of the language they are presented with. (What’s somewhat new here is the use of EEGs to explore perceptual discrimination in infants. Other techniques have been around for a long time.) OK so far, but what’s crucial here is that the sounds in question occur in both of the languages presented to the babies — to choose a much-studied example, unaspirated vs. aspirated voiceless stops in English and in Hindi. (In the adult languages the difference has a different status: it’s phonemic in Hindi, but not in English.)
Simplifying matters somewhat, what happens in monolingual babies is that the basis for perceptual discrimination shifts from a purely phonetic one to a phonemic one: babies with English as the ambient language come to treat aspirated vs. unaspirated stops as perceptually the same, regardless of which language they come from, while babies with Hindi as the ambient language continue to be sensitive to the difference in Hindi but eventually cease to attend to it in English.
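The shift described above can be illustrated with a toy sketch (my own, not from the research): treat phonemic categorization as a mapping from phones to phoneme categories, with the mapping differing by language. The phone labels and the `PHONEME_MAP` table here are drastically simplified, purely for illustration.

```python
# Toy illustration of phonemic categorization: the same two phones,
# [p] (unaspirated) and [ph] (aspirated), map to one phoneme in
# English but to two distinct phonemes in Hindi. Inventories are
# simplified to the single contrast under discussion.
PHONEME_MAP = {
    "English": {"p": "/p/", "ph": "/p/"},    # aspiration not phonemic
    "Hindi":   {"p": "/p/", "ph": "/ph/"},   # aspiration phonemic
}

def same_phoneme(language: str, phone_a: str, phone_b: str) -> bool:
    """True if two phones count as 'the same sound' in the language."""
    mapping = PHONEME_MAP[language]
    return mapping[phone_a] == mapping[phone_b]

# A listener attuned to English treats [p] and [ph] as one category:
print(same_phoneme("English", "p", "ph"))  # True
# A listener attuned to Hindi keeps them distinct:
print(same_phoneme("Hindi", "p", "ph"))    # False
```

The point of the sketch is that nothing changes about the phones themselves; only the mapping (the listener's categorization) differs, which is the sense in which "phonemic hearing" replaces purely phonetic discrimination.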
(For some pedagogically-oriented discussion of sounds vs. phonemes, see the phonemes section of my paper “Phonemes and features”, here.)
What changes is that the babies develop “phonemic hearing”, which is called into play when they are listening to material as part of a language and not mere noise. (That is, their hearing doesn’t literally decline; instead, their categorization of what they hear adjusts to the characteristics of the language they’re learning.)
For bilingual babies? Here’s the summary from Klass’s article, in “phonetic sound”-talk rather than phoneme-talk:
the bilingual infants followed a different developmental trajectory. At 6 to 9 months, they did not detect differences in phonetic sounds in either language, but when they were older — 10 to 12 months — they were able to discriminate sounds in both.