MICRONESIAN ORTHOGRAPHY.

Joel at Far Outliers has two posts on a very interesting subject: what is the best way to write a language? Specifically, should you stick to the “scientific” method of one symbol per phoneme, or should you use a “messier” method that may suit the speakers better? I have long thought that “one symbol per phoneme” was a needless goal that has resulted in excessively elaborate alphabets, frequently requiring special symbols that make it difficult to write the language using normal keyboards and printers, and Joel agrees:

People could write fewer vowels and consonants than would be optimal in isolation, while relying instead on sentential, semantic, or social context to reduce ambiguity. But this approach would make linguists feel rather less useful.

See his posts on Marshallese Spelling Reforms and Yapese Spelling Reform: “That Damn Q!”:

A simpler, underspecified writing system would allow more Yapese to write their own language without having to run everything by someone with sufficient linguistic training to understand the New Orthography. It would take literacy out of the hands of experts and give it back to the people who need it most.

Comments

  1. The case with the archaic forms of Chinese is unspeakably horrible. To begin with, many transcribers do not really understand phonemics, and waste a lot of effort trying to represent the actual sound of each phoneme. Second, different scholars use different transcriptions, so you can’t tell whether they’re actually disagreeing with each other or just using different graphic forms. As a result, I now have available to me six or seven different transcriptions of Archaic Chinese, and no way to choose between them.
    I am in favor of using the Roman alphabet with the minimum of diacriticals and the use of double letters as needed.

  2. I am in favor of using the Roman alphabet with the minimum of diacriticals and the use of double letters as needed.
    Me too. Where do people get this idea that using double letters is somehow cheating?

  3. What’s wrong with the IPA? (Okay, some agreement needed on which phone to choose to represent a phoneme.)
    (And people would have to actually use the IPA.)

  4. Do you have an IPA typewriter? Does anybody? Does your computer handle IPA? IPA is useful only for scientific purposes. Besides, you’re missing the whole point of phonemic transcription — IPA is for sounds, regardless of what role they play in a language; what’s needed for a writing system is a representation of the distinctive units of the language, which is what phonemes are. An English vowel might have half a dozen allophones that would be represented differently in IPA, but since they don’t make any difference to the English language, it would be silly to write them.

  5. Digraphs can make the words really long if you have a lot of them.
    e.g.:
    ingoorroongoorribina
    ng, oo, rr are digraphs in Bardi; rn, rl and rd are used for retroflexes; ny, ly for palatals. (There is a rough segmentation sketch after these comments.)
    In general I’m in favour of “don’t mark all the distinctions if the speakers don’t care,” but I’ve been blamed for kids not learning to speak Bardi right because oo covers both short and long /u/ (done to avoid using ⟨u⟩ and having people pronounce it as schwa instead of /u/). Unfair to blame me, incidentally; I was only just in high school when the orthography was devised.
    There is, however, a lot to be said for an orthography that’s easy to type and that’s highly portable. There’s still more to be said for suggesting a writing system but not minding when people don’t follow it.

  6. My computer handles IPA increasingly well, although this is partly because I wrote software to make it do so.
    It is not so much a concern for phonemicness that causes a gulf between linguist-friendly and other-user-friendly systems as the desire of linguists to use the same set of symbols across lots of languages, which is efficient if you are a linguist but irrelevant otherwise. (And the one-phoneme-one-symbol dogma is an American structuralist thing – the IPA no longer includes the /tʃ/ ligature for Americanist /č/ (c-hacek), i.e. English orthographic <ch>, as I never tire of pointing out on sci.lang, and this is not a rejection of its phonemicity.)
    In any case, post-SPE phonology (in principle post-Halle’s Sound Pattern of Russian phonology, but this doesn’t seem to have had the same clout at the time) typically rejects the phonemic layer, and SPE (Chomsky and Halle’s The Sound Pattern of English) can be only slightly mischievously read as a study (and defence) of English orthography.

  7. With archaic Chinese there’s a special problem because archaic literary Chinese (the only language recorded) was “always already” archaic, i.e. Confucius used archaic forms in 500 BC based on forms from about 800 BC which were probably archaic then, and so on down the line. Except that there were of course wrong guesses as to what the old forms were. (To what degree dialect difference played a major role I don’t know; the standard Ch’u dialect was probably different from the standard northern dialect, though neither was the same as the colloquial language.)
    My guess is that the scholars who use difficult transcriptions in the name of Science probably aren’t being scientific either.
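
For what it’s worth, comment 5’s point about digraphs inflating written length can be illustrated by segmenting the example word with a simple greedy longest-match rule. The sketch below is purely illustrative: the digraph inventory is the one listed in that comment, and the segment function is a hypothetical toy, not part of any actual Bardi orthography tool.

    # Illustrative sketch only: greedy longest-match segmentation of a word
    # in the Bardi-style orthography described in comment 5. The digraph list
    # comes from that comment; the function itself is hypothetical.
    DIGRAPHS = {"ng", "oo", "rr", "rn", "rl", "rd", "ny", "ly"}

    def segment(word):
        """Split an orthographic word into single letters and digraphs."""
        units = []
        i = 0
        while i < len(word):
            if word[i:i + 2] in DIGRAPHS:  # prefer a digraph when one matches
                units.append(word[i:i + 2])
                i += 2
            else:
                units.append(word[i])
                i += 1
        return units

    print(segment("ingoorroongoorribina"))
    # -> ['i', 'ng', 'oo', 'rr', 'oo', 'ng', 'oo', 'rr', 'i', 'b', 'i', 'n', 'a']
    # Twenty letters on the page, but only thirteen orthographic units.

Greedy matching is of course naive: a real orthography needs some convention (or a separator) for letter sequences like n + g that are not meant to be read as the digraph ng.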
