Saqer Almarri has a post [scroll down to “The Arabic and Eurabic scripts/Posted by Saqer Almarri on January 13, 2012”] presenting a talk by the typographer Thomas Milo about “the difference between the Arabic script, and one created in Europe to imitate it (but failed to do so) which he calls Eurabic. He mentions that Eurabic does not include the script grammar that is required by the normal Arabic script in order to differentiate similar but distinct combinations of characters.” He says “I highly recommend you spend the next half-hour watching this very interesting and highly informative talk,” and I join him in the recommendation; it starts off slow but once Milo gets going, he pulls no punches, saying things like “Great scholar; couldn’t write Arabic” and “A century later, Dutch scholars are still writing fantasy Arabic.” He shows a Yale inscription that has “butter” instead of “your Lord” because of a basic misunderstanding of how written Arabic works. It’s fun and educational too!

As lagniappe, here’s something that’s pure fun, with no educational value, The Virtual Academic: a random sentence generator: “To see a random sentence, just click the “generate” button below, and Pootwattle, our Virtual Academic(TM), will write one for you.” I just got “The expropriation of system allegorizes the linguistic construction of romantic inwardness.” (To which Smedley, the Virtual Critic(TM), responds: “Pootwattle’s carefully researched summary of the relationship between the expropriation of system and the linguistic construction of romantic inwardness may seem impressive to the uninitiated.”)


  1. If you like the Virtual Academic, you’ll love the Postmodernism generator, which generates a whole essay at a time, complete with section titles and bibliography.
    I was also disappointed to see that the sidebar of Virtual Academic (inside the “Why are the sentences so hard?” section) had some fiddle-faddle about nouns being “harder to understand” than verbs.

  2. And of course I forgot to include the Postmodernism Generator link: http://www.elsewhere.org/pomo/

  3. Aaron beat me to it while I was beavering away at a bit of historico-linguistico-philosophical research on this subject. So I will throw my customary caution to the winds and jump right in.
    On the internet I have found several statements to the effect that the Dada Engine, a precursor of The Virtual Academic and the Postmodernism Generator, was created in April 1996 by Andrew Bulhak. The Postmodernism Generator does use the Dada Engine, and what The Virtual Academic says under “how it works” (in the sidebar on the right of the linked page) suggests that it also uses it.
    It may not be coincidental that Alan Sokal published his “Transgressing the Boundaries” hoax in the same year, in the spring/summer 1996 number of Social Text. In connection with the Dada Engine, a blog at the Semantic Programming blogsite points us to Ronald Langacker and cognitive linguistics. There the name of Lakoff comes up, whose book Women, Fire, and Dangerous Things has been mentioned at languagehat several times.
    Conclusion: there seem to be ley lines running between The Virtual Academic, other techniques for generating and parsing noise (aka cognition), and cognitive linguistic work from the ’70s. I bet John Cowan could tell us more about this.
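
The Dada Engine mentioned above works by recursively expanding a hand-written grammar until only literal words remain. Here is a minimal sketch of that technique in Python; the toy rules are invented for illustration and are not Bulhak’s actual grammar:

```python
import random

# A toy recursive grammar in the spirit of the Dada Engine: each
# nonterminal maps to a list of alternatives, and an alternative is a
# sequence of tokens that are either literals or further nonterminals.
GRAMMAR = {
    "SENTENCE": [["The", "NOUN", "of", "NOUN", "VERB", "the", "ADJ", "NOUN", "."]],
    "NOUN": [["expropriation"], ["system"], ["inwardness"], ["construction"]],
    "VERB": [["allegorizes"], ["destabilizes"], ["recapitulates"]],
    "ADJ": [["linguistic"], ["romantic"], ["discursive"]],
}

def expand(symbol, grammar, rng):
    """Recursively expand a symbol into a flat list of words."""
    if symbol not in grammar:  # literal token
        return [symbol]
    alternative = rng.choice(grammar[symbol])
    words = []
    for token in alternative:
        words.extend(expand(token, grammar, rng))
    return words

def sentence(grammar, seed=None):
    """Generate one sentence; a fixed seed makes the output repeatable."""
    words = expand("SENTENCE", grammar, random.Random(seed))
    return " ".join(words[:-1]) + words[-1]  # attach the final period
```

Pootwattle-grade output falls out of nothing more than this: the solemnity is entirely in the vocabulary, the syntax is guaranteed by the grammar.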

  4. What I mean by “not coincidental” is that in the ’90s the topic of solemn, syntactically correct bullshit seems to have been on many people’s plates. Sokal was only one of those who decided it was time to start flinging the mashed potatoes back at the cooks.

  5. Aaron, I don’t think what the Virtual Academic says is fiddle-faddle, not when you read it in context. She’s talking specifically about verbal nouns such as “the analysis” and merely points out that in pursuit of concision they eliminate information about agent, time period, and object, as opposed to a sentence with a verb, e.g. “Julia will analyze the lunar dust…”
    These fun programs interestingly connect up with some postmodern poetry processes that were deliberately designed to admit impersonal chance in place of personal whim, e.g. Jackson Mac Low’s diastic method, which you can trial here:
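
Mac Low’s diastic method mentioned above is itself a simple algorithm. A minimal sketch, assuming one common formulation (for each successive letter of a seed, take the next source word that carries that letter at the same position it occupies in the seed):

```python
def diastic(seed, text):
    """A sketch of Jackson Mac Low's diastic reading-through procedure:
    for the nth letter of the seed, scan forward through the source text
    for the next word whose nth letter matches, and collect those words."""
    words = text.lower().split()
    seed = "".join(ch for ch in seed.lower() if ch.isalpha())
    out = []
    i = 0  # current position in the word list
    for pos, letter in enumerate(seed):
        while i < len(words):
            w = words[i]
            i += 1
            if len(w) > pos and w[pos] == letter:
                out.append(w)
                break
        else:
            break  # ran out of source text before the seed was spent
    return out
```

For instance, reading through “a cool aardvark sat on that mat” with the seed “cat” selects “cool” (c in first position), “aardvark” (a in second), and “sat” (t in third): chance procedure, personal whim excluded.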

  6. I agree with Michael – what she (?) writes makes sense, although I initially took it to be ethereal irony at that website.
    A widespread kind of German prose, as you have for instance in the “better” newspapers, or when someone is trying to write, may seem to an English speaker to be pervasively passive and over-nouned. It’s hard to translate into natural-sounding English, which is not pervasively passive except in the kind of pseudo-academic prose being fried at the Virtual Academic. However, in speaking German I don’t generally have a feeling that details are missing.
    Different languages often are tied together with different Denkstile and lifestyles. That’s one reason why automated translation usually produces garbage.

  7. “… or when someone is trying to write proper-like …”

  8. j. del col says

    Years ago I read an article by Owen Gingerich (no relation to the Great Eft) about astrolabes that included pictures of faked antique astrolabes with nonsensical “Arabic” inscriptions on them.

  9. There are medieval coins from Christian kingdoms that have nonsensical Arabic on them because they were attempting to imitate the prestige coins of the region, which were issued in places like Damascus and Baghdad.

  10. That video is a fascinating demonstration of how people think that when it comes to language, there can be either no rules or lots of rules, as suits their needs. The fact of the matter is there are rules.

  11. Grumbly Stu, that is a very interesting history. Thanks.
    Michael et al., I didn’t say that the claim was fiddle-faddle because I know it (or believe it) to be false. Rather, it is a just-so story told without any evidence. The author (I didn’t see any evidence indicating their gender) is too quick to dismiss the idea that the unfamiliarity of the words in these sentences leads to difficulty in understanding. One might want to compare the mean of the (log-transformed) lexical frequency of the words in these sentences with sentences from a corpus of non-academic language.
    The notion that “the words that express action are verbs” is somewhat simplified — what about statives? What about auxiliary verbs, which serve to express no meaning at all, but rather to mediate between the “action” expressed by a non-verb (or non-tensed verb) and the syntactic requirement that the sentence have a tensed verb?
    It could be true that “more nouny” sentences are more difficult to process; it could also be true (as suggested here) that this difference is absent for speakers of other languages. But people are notoriously bad at introspecting about these sorts of things, as evidenced by the reams of posts at Language Log about language authorities who advise avoiding the passive to increase intelligibility (or dynamicness or whatever), while in fact using it heavily themselves. So actual evidence would be needed to answer the question. (I tried looking for some studies on the question, but the phrase “noun verb parsing difficulty” and close variants are a good match to every psycholinguistics paper in existence, according to Google.)
    As Smedley might say, “The Virtual Academic’s cavalier assertion of anecdotal observations undermines the fundamental epistemological paradigm of the academic community.”
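
Aaron’s proposed comparison of mean log-transformed lexical frequencies can be sketched in a few lines. The tiny frequency table below is invented for illustration, standing in for counts from a real corpus:

```python
import math

def mean_log_frequency(sentence, freq, default=1):
    """Mean log-transformed lexical frequency of the words in a sentence,
    given a corpus frequency table; unseen words get a small default count."""
    words = [w.strip(".,").lower() for w in sentence.split()]
    logs = [math.log(freq.get(w, default)) for w in words]
    return sum(logs) / len(logs)

# Invented counts, not from any actual corpus:
FREQ = {"the": 50000, "of": 30000, "cat": 800, "sat": 600,
        "expropriation": 3, "inwardness": 2, "allegorizes": 1}

academic = "The expropriation of inwardness allegorizes the cat."
plain = "The cat sat."
# A lower mean log frequency suggests rarer, less familiar vocabulary,
# which is the confound Aaron wants checked before blaming nouniness.
```

On real data one would of course want matched sentence lengths and a proper frequency norm rather than this stand-in table.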

  12. @Aaron: To be fair, there’s a difference between:
    1) The words that express action are verbs.
    2) Verbs are words that express action.
    Your examples of non-action verbs (statives, auxiliaries) address claim 2. Counter-examples to 1 could include nominal forms like “the court decision was hurried”, where “decision” is a noun expressing an action; it could be reworded as “the court decided hurriedly”.
    But I’m just nitpicking for sport; of course I agree with you.

  13. nonsensical Arabic
    I believe art historians call this pseudo-Kufic.

  14. Ladies and gentlemen, let’s not forget the Chomskybot.

  15. Aaron: I added “natural-sounding” to the end of a slightly modified version of your Google search items: “noun verb parse difficulty” *[1], and found a number of interesting papers. One describes something called OntoNotes (a corpus annotated in special ways), the other a “Parser for real-time speech synthesis of conversational texts” *[2]. The following excerpt from the “Parser” abstract appeals to my I-don’t-know-linguistics-but-I-know-what-I-like sense of propriety:

    The parser prepares TDD [Telecommunications Device for the Deaf] texts for synthesis by (a) performing lexical regularization of abbreviations and some non-standard forms, and (b) identifying prosodic phrase boundaries. Rules for identifying phrase boundaries are derived from the prosodic phrase grammar described in Bachenko and Fitzpatrick (1990). Following the parent analysis, these rules use a mix of syntactic and phonological factors to identify phrase boundaries but, unlike the parent system, they forgo building any hierarchical structure in order to bypass the need for a stacking mechanism; this permits the system to operate in near real time. As a component of the text-to-speech system, the parser has undergone rigorous testing during a successful three-month field trial at an AT&T telecommunications center in California. In addition, laboratory evaluations indicate that the parser’s performance compares favorably with human judgments about phrasing.

    *[1]: in my search, I put “natural-sounding” between quotes in the Google ‘all these words’ field, to prevent Google from eliminating the “-” and doing other things that it thinks best but I don’t. I changed your “parsing” item to “parse” so that texts with “parse” would be found (listed) before those with “parsing”. Google seems to search for result words containing the search items as initial substrings, and orders the result words by (increasing) length.
    *[2]: The parser paper may be from the ’90s. In many such-like research reports in PDF format that I have encountered on the internet, no date of publication is given. Is this an attempt to make the reports seem to be of timeless importance?

  16. John, the Chomskybot FAQ says this:

    foggy’s [“a program … that had been circulating underground at IBM which artfully made fun of pompous administrators and their jargon”] most interesting effects are in the mind of the beholder, especially since its output not infrequently induces a strong feeling of inferiority in the unsuspecting, a sense of “I just don’t get it, so I must be dumber than I’d thought.” This is the Turing Test in reverse, and humans should resist allowing themselves to fail.

    I would have said that “inducing a strong sense of inferiority” is an important aspect of the Turing test. A machine producing such output would seem all-too-human, and so should pass the test as having successfully imitated an administrator.

  17. In any case, things have moved on since Turing and his test. I would say that most people today are interested in computers, devices and humans that can be interacted with in useful, reliable ways. Whether these interlocutors are what they seem to be, or seem to be what they aren’t, is a dusty theological question.

  18. MMcM — actually not. It’s specifically about the typography. His use of the word “grammar” and focus on type faces (not something I think much about) confused me for a moment, but once I understood his point, it was absolutely fascinating.

  19. The medium is the message.
    While the content of Milo’s presentation is fascinating (I’ve watched it once and will return to it again), I was very much taken with the mode of presentation.
    But a few years ago such research would have been presented at a conference with a few hastily-contrived “overheads.” The paper itself would have been published in an obscure journal with some clumsy diagrams and perhaps a few photographs.
    Today, what was presented at a conference in Iceland is posted on the internet for the entire world to see, including a visually eloquent PowerPoint (or similar) presentation that does much to elucidate an inherently visual phenomenon.
    We live in fortunate times.

  20. Indeed.

  21. marc, I actually only meant objects like those referred to in the preceding comments by j. del col and languagehat, which I directly quoted, not the topic of the presentation.

  22. a visually eloquent PowerPoint (or similar) presentation that does much to elucidate an inherently visual phenomenon.
    The pictorial kinds of visual medium now enable complex, carefully controlled effects that have no analogy in the textual kinds of visual medium. “Visually eloquent”, however, is derivative and reductive, since it characterizes those effects in terms of text and speech. Borrowing from a Johnny Mercer song, I’d say we need an expression that accentuates the negatives, and eliminates the affirmatives – meaning photographic negatives and propositional affirmatives.

  23. You could just as well say that to call some text “eloquent” is derivative and reductive, since it characterizes its effects in terms of speech.

  24. “Visually eloquent” struck me as a rather nice turn.

  25. You could just as well say that to call some text “eloquent” is derivative and reductive, since it characterizes its effects in terms of speech.
    Not really, since texts can be spoken and speech can be written down. They are to a large degree interconvertible, so descriptions of them also tend to be interconvertible.
    “Visually eloquent” is a familiar expression that attempts to characterize the effects of non-textual visual media in terms of text/speech. But paintings, movies, PowerPoint presentations (that are not 99% text, as they too often are in IT), Star Wars films and so on are very different from texts. Music, tastes, odors and the feel of nylon stockings are nothing like text, and yet we usually go at them all with the old logocentric thought-habits.
    My point was: there must be better ways of talking about visual media artefacts than comparing them to texts – without having to become an art critic. Even a stocking frotteur who whistles while he works would sniff at that.

  26. I myself am as logocentric as a bear in the woods, so I don’t have much of an idea of how to break away from text comparisons – except by mixing “metaphors”.

  27. visually eloquent
    The American Heritage Dictionary provides two definitions for eloquent:
    1. Characterized by persuasive, powerful discourse: an eloquent speaker; an eloquent sermon.
    2. Vividly or movingly expressive: a look eloquent with compassion.
    And for eloquence:
    1. a. Persuasive, powerful discourse.
       b. The skill or power of using such discourse.
    2. The quality of persuasive, powerful expression.

  28. Paul: there has been a certain empressement to interpret what I wrote as a criticism of your use of “visually eloquent”. That was not my intent. It’s a perfectly appropriate expression in terms of how visual media are generally characterized. My point was that these general characterizations are logocentric. Nobody need be miffed at such a point, whether directly or as proxy.

  29. I greatly enjoyed Milo’s talk but was frustrated that I couldn’t find his views on the incorrect ligatures spelled out anywhere. In effect, his red notations cover the errors and he doesn’t explain the differences between the correct and incorrect forms. Of course that wasn’t his subject, but I wonder whether he isn’t a little too prescriptivist. I could recognize some of the things he noted; others I’m not sure about. It has been well known for over a century that the Leiden Arabic fonts were crude and inelegant; I am not sure that is fully tantamount to being “fake Arabic.”

  30. Yeah, I had some of the same reactions, but I have to cut the guy some slack in terms of overstatement—he’s trying to hack through deep layers of ignorance and indifference. Those Euro typefaces are ugly enough that I don’t think “fake” is too harsh a qualifier, especially if they can render the text illegible. But I do wish he had laid out his views in writing somewhere accessible to us lazy denizens of the net.

  31. It might be hard to understand if you can’t read Arabic. He does describe the correct and incorrect forms in a way, using the misleading term “grammar.”
    For example, the upside-down v refers to the upstrokes in a letter like siin س, which is not written with strokes of equal height throughout. Even in my browser the first stroke is taller than the second one, and the space between the first and second strokes is smaller than between the second and third.
    What he’s saying is that Europeans missed that kind of thing and reduced the number of shapes that go into writing the characters (my guess is the motivation was to simplify typesetting), because they didn’t understand the “grammar” of the script, which just means the rules for how the characters are written.
    I think an example in English would be how we connect, say, an O and an L in lower-case cursive, vs. an A and an L. The ligature enters the L at a height corresponding to the top of the O in the first case, but at the bottom of the A in the second case, creating a situation in which there are, in fact, two different lower-case cursive L’s, one for combining with O’s and U’s etc. and one for combining with A’s and D’s etc. Imagine if someone wrote “ol” in cursive using the A-combining L — you wouldn’t be sure if they were writing “ol” or “al”, right?
    Now multiply that over every letter, and you can see where the difficulty in deciphering comes in. That’s why he says the Turks couldn’t read the Eurabic script.
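
marc’s two-L analogy can be made concrete as a toy contextual shaper, the kind of context-dependent glyph selection that real Arabic layout engines generalize across the whole alphabet. The form names and exit classes below are invented placeholders, not real font data:

```python
# Letters grouped by where their connecting stroke leaves, in a toy
# model of English cursive (classes invented for illustration):
HIGH_EXIT = set("ouvw")     # stroke exits near x-height, as after o
LOW_EXIT = set("adhilmnt")  # stroke exits at the baseline, as after a

def shape(word):
    """Return (letter, form) pairs, choosing between two context-dependent
    forms of cursive 'l' based on the preceding letter's exit height."""
    shaped = []
    prev = None
    for ch in word:
        if ch == "l" and prev in HIGH_EXIT:
            shaped.append(("l", "l.high_entry"))
        elif ch == "l" and prev in LOW_EXIT:
            shaped.append(("l", "l.low_entry"))
        else:
            shaped.append((ch, ch))
        prev = ch
    return shaped
```

Shaping “ol” picks the high-entry l and “al” the low-entry one; swap them and, as marc says, the reader can no longer tell “ol” from “al”. That confusion, multiplied over every letter of reduced “Eurabic” type, is Milo’s point.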

  32. @marc: I think he referred to GRAMMAR-grammar too. For example, in a situation like bbbbk, from what I can gather in this talk, no consecutive “b”s can be horizontal pans of one another; so either you should use cascading (to vary the y-axis a bit) or alternating heights for the minims, etc. Things like this abound in phonology.

  33. minus273: I must have missed that part.

  34. Regarding Eurabic – thanks for all the helpful and considerate comments and criticisms.
    Some comments need correction IMHO. E.g., Eurabic is not the same as pseudo-Kufic, because the latter is purely ornamental and not supposed to read as Arabic. Secondly the comment “the upside-down v refers to the upstrokes in a letter like siin” is incorrect. I did not discuss the shape of Seen, but the shaping rules for sequences of the lookalikes of the non-final letter Beh (disregarding dot patterns). Thirdly, I do not speak of “grammar” but of “script grammar”. The part of grammar alluded to in this metaphor is of course phonology. The best term for the phenomenon under scrutiny therefore would be graphetics/graphemics, analogous to phonetics/phonemics. I may even decide to use those instead of “script grammar”. I found support for such a choice in the Cambridge Encyclopedia of Language, which has the entry graphetics/graphology for similar contextual issues in writing. However, I bowed out because of the potential for confusion in the word “graphology”. But if “graphemics” can replace it without invoking a legacy of misleading associations, that might remove the unclarities.
    I would appreciate your competent comments and suggestions.
    As for the Reykjavik talk itself, it was extremely compressed: I got a slot for 20 minutes, took 30 – but needed 75.
    I will try to post a 75-minute version that I later recorded in LA. Please check out http://www.decotype.com in a few weeks.
    I am working on a written version of this talk. Some aspects of “script grammar” have already been published in various articles. A particularly focused article about the underlying analysis that leads to the concept of script grammar is soon to appear here:
    In my underlying analysis I see two fundamentally different mindsets behind Arabic typography:
    1. Fully-fledged Arabic typography for propaganda by the “Catholic Caliphate” in its polemics with Islam and the dissimilar Christian minorities inside the world of Islam. This required slick typefaces that can pass as Islam-related script and that are legible for the Middle Eastern target audience. Hence the ambition (naskh-thuluth hybrid with limited script grammar) displayed in Granjon’s typefaces. Granjon did not try to simplify or reduce the Arabic script; on the contrary, he tried to make as accurate a design as possible in order not to come between the text and the reader. But apparently nobody around him had the know-how to alert him that the classic styles of Islam are basically not hybridized, not even naskh and thuluth, and certainly not when they are practiced in geographically widely separated regions.
    2. Reduced Arabic typography as a tertiary auxiliary discipline (after Hebrew and Syriac) for Protestant theologians in their polemic with Catholicism. Here the visual appearance of the script is utterly irrelevant to the extent that the use of Hebrew script instead of Arabic is propagated. The eventual simplification appears to have been triggered by Syriac, for which, unlike Arabic, the 4-fold pattern with little or no additional script grammar does apply. This explains the alien script esthetics better than the supposed influence of “Kufic”. For the European target audience the resulting typography was not offending, since they proved to be happy with any vehicle for Arabic.
    In Europe the winner was the latter: through the Dutch connection, “Syriac-based” reduced Arabic became the standard type. The shapes were developed and perfected, not in relation to Islam-related script esthetics and graphetic details, but based on Latin concepts. As for the occasional distant semblance of “naskhi”: it’s documented that Raphelengius clobbered Granjon’s hybrid typefaces.

  35. I forgot to cover one comment: “his red notations cover the errors and he doesn’t explain the differences between the correct and incorrect forms”.
    Please note that in the majority of the examples I showed in light green the correct forms in the naskh style (and nasta`liq where appropriate). This is representative of the “script grammar” of most Middle Eastern calligraphic styles, as well as of the style the European typographers claim to reproduce.
    I tried to let the superimposed caricatures L, V and Z fade away again to expose the underlying original. Hopefully the 75-minute version solves this for you.
    Thanks for your observations.

  36. A couple more Milo links:
    A talk he recently gave as a keynote at the Leyden University Day of Scripts:
    Images of the forthcoming publication by Archetype (Cambridge, UK):
