ScienceDaily features The surprising grammar of touch: Language emergence in DeafBlind communities:

A new study demonstrates that grammar is evident and widespread in a system of communication based on reciprocal, tactile interaction, reinforcing the notion that if one linguistic channel, such as hearing or vision, is unavailable, language will find another way to create formal categories. There are thousands of people across the US and around the world who are DeafBlind. Very little is known about the diverse ways they use and acquire language, and what effects those processes have on the structure of language itself. This research suggests a way forward in analyzing those articulatory and perceptual patterns — a project that will broaden scientific understanding of what is possible in human language.

This research focuses on language usage that has become conventional across a group of DeafBlind signers in the United States and shows that those who communicate via reciprocal, tactile channels — a practice known as “Protactile” — make regular use of tactile grammatical structures. The study, “Feeling Phonology: The Conventionalization of Phonology in Protactile Communities in the United States,” by Terra Edwards (Saint Louis University) and Diane Brentari (University of Chicago), will be published in the December 2020 issue of the scholarly journal Language.

The article focuses on the basic units used to produce and perceive Protactile expressions, as well as patterns in how those units are, and are not, combined. Over the past 60 years, there has been a slow, steady paradigm shift in the field of linguistics toward understanding this level of linguistic structure, or “phonology,” as the abstract component of a grammar, which organizes basic units without specific reference to communication modality. This article contributes to that shift, calling into question the very definition of phonology. The authors ask: Can the tactile modality sustain phonological structure? The results of the study suggest that it can.

A very interesting development; there are more links, including one to a video discussion of the study done in Protactile, at the MetaFilter post.


  1. David Eddyshaw says

    Seems odd to talk about “phonemes” in this context. “Haphemes”? On the other hand, I gather that “phoneme” has already been generalised to include the analogous elements of signed languages, so yeah, why not?

  2. Yeah, that bothered me a bit (as of course did the Chomsky-adjacent stuff), but hey, it’s interesting despite the irritants.

  3. From what I’ve seen, ASL linguists either use “phoneme” for minimal contrastive features (without regard for the mode), or they ditch the whole concept in favor of things I can’t speak to, but they don’t seem to see much value in parallel terms to tag non-voice features.

    The layer that might make more sense to have a parallel term for is phonetics, the physical utterance. But I’m not sure that split of the P layers is used for ASL.
