A.I.: “Hers” Isn’t a Pronoun.

Cade Metz wrote for the NY Times last month about a problem that’s been in the news lately:

Last fall, Google unveiled a breakthrough artificial intelligence technology called BERT that changed the way scientists build systems that learn how people write and talk. But BERT, which is now being deployed in services like Google’s internet search engine, has a problem: It could be picking up on biases in the way a child mimics the bad behavior of his parents. […]

On a recent afternoon in San Francisco, while researching a book on artificial intelligence, the computer scientist Robert Munro fed 100 English words into BERT: “jewelry,” “baby,” “horses,” “house,” “money,” “action.” In 99 cases out of 100, BERT was more likely to associate the words with men rather than women. The word “mom” was the outlier. […]

In a blog post this week, Dr. Munro also describes how he examined cloud-computing services from Google and Amazon Web Services that help other businesses add language skills into new applications. Both services failed to recognize the word “hers” as a pronoun, though they correctly identified “his.” […]

Researchers are only beginning to understand the effects of bias in systems like BERT. But as Dr. Munro showed, companies are already slow to notice even obvious bias in their systems. After Dr. Munro pointed out the problem, Amazon corrected it. Google said it was working to fix the issue.

Dmitry Pruss, who sent me the link, wrote:

AI models aren’t a typical matter of expertise at LH, but I need some grasp of the issues in the famous paper that led to the Google researcher firing, beyond the very basic explanations [in the Times story]. Maybe we can put the recent findings on language model flaws up for discussion, and perhaps even learn something new / positive from it?

So: any thoughts?

Comments

  1. Speaking as someone who has done academic work on AI decisionmaking: the Gebru debate at Google was not about this problem. Everyone understands that if you train with a dataset with a certain bias (in the statistical sense), that will be reflected in the output. So if, for instance, you train an AI to write sports articles that “look like” U.S. newspaper sports reports, they will both present more male names and cover less cricket relative to baseball. Likewise, if you train an AI to replicate judicial bail decisions, and judges are biased against black folks by assuming they are more likely to commit crime if released on bail than they actually are, the AI will of course replicate that bias.

    Gebru’s argument (and that of other “AI ethics” folks) is that the problem goes beyond just biased training data, to choices made by the developers, and further that if certain content the AI learns from shows bias (in the discrimination sense), we should manually correct it. On the first point, the data does not appear to suggest this problem to me – almost all well-known examples of AI going wrong have to do with bad training data rather than researcher degrees of freedom. On the second point, it is tough. In a classic example, if you searched “unprofessional haircut” on Google, you would see, among others, pictures of black women with big “natural hair”. The reason the search algorithm picks up on this is because people, not AIs, write online that they find this hairstyle unprofessional. Should Google’s developers code the algorithm to suppress this information or not? What of “unprofessional beard”, which if you search it right now, you will find almost exclusively pictures of white men with long beards? This strikes me as a pure judgment call rather than an obvious source of discrimination.

  2. This strikes me as a pure judgment call

    And it’s a call that anyone with any judgment should be able to make without too much trouble in this year of our lord 2020. If your search algorithm is showing pictures of black women with “natural hair” for “unprofessional haircut,” you need to remedy it. Similarly if it’s analyzing words in a pattern that in a human would be called sexist. It’s not good enough to shrug and say “Hey, it’s the algorithm, man” (not that I think you’re doing that! and I’m grateful for your explanation), you need to fix it. Like getting rid of hate-spewing commenters, it takes human intervention and costs money, but them’s the breaks, and when you’re making billions off your search engine you can certainly afford it.

  3. Martin Langeveld says

    Yes [you should fix it and you can afford it] but all this illustrates the incredibly difficult challenges of AI development beyond simple tasks and into realms such as content generation, translation, text analysis and interpretation, etc. Algorithms written by humans won’t work for these things, ever. A great deal of AI software is actually written by computers rather than people, based on massive data-crunching by systems that resemble neural networks, and the algorithms (if that’s what they are) developed this way are often not even comprehensible by humans. All this is fine (or not, depending on what keeps you up at night) when you’re asking the AI system to learn to play chess. Other than the incidental genders of a few of the game pieces, gender bias doesn’t enter into the creation of killer chess strategies by computers whose ways we can no longer grok. But if, as a next step, you ask an AI to absorb all known literature and then write another Shakespeare play, you’re liable to get gender (and other) bias. What you need to do then is not write an algorithm for the AI preventing bias or to “suppress” it, but to thoroughly school the AI on the problems of bias so it will avoid it. And, if it does, you won’t be able to figure out how it did that.

  4. With computer-designed algorithms, there may or may not be a good way to figure out how the algorithm even learned to do something wrong. For example, when a new version of the Mathematica kernel is released, there are often a few complaints that expressions that it could previously evaluate can no longer be evaluated by the updated version. Sometimes, the reasons are easy to track down, sometimes even related to human errors in the programming. On the other hand, sometimes the automatically optimized algorithm has stopped doing calculations in a certain kind of way, presumably because that was never the best way to solve any of the problems that were used to retrain the algorithm. Sometimes, the problem in the algorithm’s decision tree—where it abandons a calculational strategy that is sometimes needed for real-world problems—can be found, and the problem can be fixed directly to ensure that branch is not automatically pruned off. Other times, since the training data is itself generated by a trained algorithm, it can be hard to find out what was wrong with the data that caused the failure.

  5. J.W. Brewer says

    I can’t read the Times article because it’s paywalled, but the Munro blog post that it links to notes that the same AIs that failed to correctly parse “hers” as a pronoun also failed to correctly parse “mine” (as a pronoun). Which suggests that an apolitical strategy aimed at fixing that glitch might well fix the “hers” glitch along the way. FWIW, if you ask the Google n-gram viewer about the relative frequency over time of “mine” and “hers” and then add “theirs” and “yours”, you may notice some interesting shifts over time.

  6. This is partly to Brett’s point about how you know what’s changed.

    I’m curious, as some of the work i do is language-model adjacent, but i’m not an expert. two issues seem salient. one is data selection: how do you pre-validate data for biases? is there a test suite? a set of criteria? does it depend on human judgment?

    and then a similar issue at the other end: the example about natural hair rightly gets noticed, but how do we notice these biases other than by developers, people or consumers observing them, surfacing them to the correct place etc?

    of course the question of what constitutes morally correct or incorrect biases is fraught with a different set of questions, but i’m more interested in the technical nature of the QA process at the data and output ends.

  7. Lars Mathiesen says

    Yes, I started out thinking there must be some bias in training material or similar, but thinking about it again I realized that his and hers are not directly comparable, because his corresponds to both her and hers. I’m not even sure mine and so on are always called pronouns, at least not their cognates in languages where they decline. Possessive adjectives maybe?

    Then I read the blog entry, where that is all covered except for using the category “pronoun” uncritically — but from the short exposure I had to AI language parsing 30 years ago, the point is really that the AI needs to discover anaphora in all its forms, and it will miss it for hers and mine, so the label they use doesn’t matter.

  8. What should a search for “unprofessional hairstyle” result in, or do you perhaps hold that this query itself needs to be censored as unacceptable? If you need more of a general intuition pump, you can also consider the same question with various other queries for judgemental descriptions of people, say “ugly noses”, “bad skin”, “fatsos”, “terf bangs”, “people who look like pedophiles”…

    I get the feeling there’s some strange confusion here where web search results are now held to be somehow normative; that if you search for a given phrase, the results would supposedly indicate not how it’s actually used but what’s the “right” way to use it. I would hold that queries on Google do not mean “please define this concept for me”, they mean “please list webpages using or relevant to this search term”; and that providing descriptions or examples of discrimination, when asked, is not itself discrimination.

    (An additional part of this problem, however, I think comes from Google’s habit to append some image search results even when you only searched for web pages.)

  9. providing descriptions or examples of discrimination, when asked, is not itself discrimination.

    Of course not, but it can aid and abet discrimination. Is this really so hard to understand? I mean, a while back essentially all the important/respected/powerful people you saw and heard in mass media were white males; women were relegated to segments on style, cooking, or whatever (which men didn’t listen to/watch), and people of color were athletes, musicians, etc. You could perfectly well have said “Naturally all representation of important people in ads and book covers and so on are white men, because that’s who those people are! No discrimination involved!” If anything is to change, people have to step in and make it change. (Of course, to white males it tended to seem as if nothing needed to change, and they got very shirty about it.)

  10. Hat, perhaps I’m misunderstanding you, but it seems you are asking AI to solve problems that our poor human brains haven’t yet figured out how to solve.

  11. No, not at all, I’m saying humans have to step in and adjust what AI comes up with rather than sitting back and saying “Sorry, it’s the algorithm.”

  12. I get the feeling there’s some strange confusion here where web search results are now held to be somehow normative; that if you search for a given phrase, the results would supposedly indicate not how it’s actually used but what’s the “right” way to use it.

    Sometimes they certainly do. Of your list, I had never heard of “terf bangs” and was skeptical that anyone actually used it: not that I thought you were deceiving me, of course, but just that it was a bogus entry you had added for satirical reasons. A quick and superficial google left me realizing that people do say that. Indeed it is a spectacular instance of how a stereotype can catch on with nearly zero basis: on average, Jewish nose size is no different from non-Jewish nose size, and yet.

  13. I wasn’t familiar with TERF bangs either, although I figured it must be a real stereotype, and I could infer what it referred to. However, my strongest reaction was that the very existence of the term seemed to play into sexist stereotyping: that in the midst of disagreement about the role of transwomen in the feminist movement, women were responding by making fun of each other’s hair.

  14. I’m saying humans have to step in and adjust what AI comes up

    Gotcha, although I suppose that from the pure research perspective, the AI people like to devise algorithms (or rather, devise systems that create algorithms) in order to see what they come up with and thus understand how the process works.

    But yes, I agree that releasing such systems as usable public tools is highly premature. But I don’t have any idea how one could ‘adjust’ them in a way that would make people happy. No matter what you do, some group or other will object. Look at the arguments swirling around Facebook’s admittedly modest attempts to label posts as false or debatable or misleading or whatever.

  15. making fun of each other’s hair

    And why not? Serious face is not a mark of a deep mind as Yankovsky should have said, but didn’t. But we love it as it is.

  16. David Eddyshaw says

    While this is not so much off-topic as total free-associating, “making fun of hair” reminded me irresistibly of Danish’s demolition of a pickup artist in

    https://xkcd.com/1027/

  17. J.W. Brewer says

    The Munro piece noted that one issue was using a corpus of newspaper stories for the training, because they are skewed (both thematically and in terms of use/non-use of particular lexemes or syntactic constructions) in various ways and do not represent a fair cross-section of naturally-produced-texts-in-the-language.

    Many (not all) actual humans become reasonably aware as they grow up of the ways in which news coverage characteristically overemphasizes various X’s and underemphasizes various Y’S (sometimes for reasons that could be labeled the results of structural social inequality, sometimes just because the novel is more newsworthy than the non-novel, which can lead both to underemphasis of the commonplace and to trying to recharacterize/mischaracterize something commonplace as if it were something novel) and learn how to discount it appropriately. An AI algorithm that could do that would be quite useful, but how is it that humans learn to do that? Not by training on a corpus of news stories, but by also learning experientially about the world in other ways that enable the mismatches between the news-story corpus and the actual external world to be sensed.

  18. In the Bay Area the mullet (before that term became widespread) was known as “the KFOG haircut”. KFOG was (maybe still is) a radio station specializing in easy-listenin’ commercial soft rock. The association made instant sense to me, just as “TERF bangs” do now. Instant judgment of people is a sad thing, but being aware of the chains of associations hidden in my mind is marvellous.

  19. I agree with J.W. Brewer and would go even further. The users of AI products might have different preferences as to whether they want information about something mundane or something new and exciting, and a reasonably advanced system should be able to give the appropriately skewed version of reality. It should also offer the possibility of viewing the “Internet as she is”, racism and all, and a more weighted version where the loudest speakers are muted and the less prominent elevated. Maybe this can even be done through appropriate algorithm training. It might be an interesting problem to distinguish between relatively rare crank views present in a dominant group and the views of a less represented but genuinely minority group. How could that be conceptualized?

  20. While this is not so much off-topic as total free-associating, “making fun of hair” reminded me irresistibly of Danish’s demolition of a pickup artist

    Needless to say, free-associating is fine, and I for one enjoyed that demolition.

  21. David Marjanović says

    I’m not even sure mine and so on are always called pronouns, at least not their cognates in languages where they decline. Possessive adjectives maybe?

    Possessive pronouns, as opposed to personal pronouns.

  22. Trond Engen says

    Lars is right. I’ve seen “possessive adjectives”.

    Before we decide what the algorithm should do, or how the results are presented, we have to be clear about what it’s for. If we want the algorithm to present an ideal picture of human culture, we train it to do so. If we want it to show human bias without reinforcing it, we equip it with the means to tell its users that the question is likely to produce results that say more about human cultural bias than about objective truth. If we actually want to do research on human bias, we can’t have an algorithm that corrects the results, and even the warning will get tiring pretty soon. If we want to learn about what obstacles black women face on the job market, we don’t want the search engine to hide it.

    But this is AI. We could probably train algorithms to recognize human biases and systemic discrimination that no human observer would see and suggest countermeasures that no human would think of.

  23. David Eddyshaw says

    In English, at any rate, “mine”, “hers” etc are exactly not adjectives; you can’t say “mine brother”, “hers bicycle.” CGEL (p426) calls them “independent genitive” personal pronouns.

  24. David Eddyshaw says

    I notice that Gareth King’s (rather good) Modern Welsh grammar does call Welsh fy “my” etc “possessive adjectives”, but actually they aren’t at all like Welsh adjectives. It’s a clear misnomer as far as Welsh is concerned.
    For “mine”, you need to say something like (f)yn un i “my one”; there’s nothing corresponding to the English “mine” series.

    Just to be different, Glanville Price’s French grammar calls mon etc “possessive determiners” (commenting “more frequently but less satisfactorily known as ‘possessive adjectives'”) and calls (le) mien etc “possessive pronouns.”

    Kusaal uses identical personal pronoun forms as verb subjects and as possessors: m zin’i “I sit”, m biig “my child”, with the latter being completely parallel in structure* to dau la biig “the man’s child”, so there is no reason to set up a distinct possessive pronoun category at all. Like Welsh, Kusaal needs a dummy head to express “mine”: m din “mine”, again exactly parallel to e.g. dau la din “the man’s [one.]”

    *Actually, there is a difference in tone sandhi. I propose to ignore this awkward fact. All grammars leak.

  25. Dmitry Pruss says

    The bias in the datasets isn’t something it’s possible to avoid, or even to correct well. We trust the outcomes of medical statistics even though the input data suffer from severe selection biases. We are surprised when political surveys end up being a few percentage points off, even after the pollsters attempt a correction for the known biases based on fairly recent results. Medical statistical data is generally decades old before the analyses are turned into public policy, by the way. They are generally designed to answer questions of a relative nature, like what helps, what hurts, how much. The inevitably biased sets still give OK relative estimates. But of course we then use the predictions to apply hard thresholds and to draw absolute conclusions, like who gets statins or cancer chemotherapy. Usually, the stakes aren’t super high, and around the threshold value, not much bad would happen if one or the other course of action is taken. In comparison, we often expect an absolute answer with high stakes from the political and legal algorithms, like will our candidate win, or will the defendant reoffend.

    Are the stakes high in language AI? If we don’t even expect our fellow humans to understand our verbal queries with any precision, then perhaps it’s ok if a search result isn’t always spot on?

    The more important issue for me is the flaws in the models themselves, including the ones caused by a misguided hope of avoiding biases. For example, courtroom AI excludes race from the input data, but more than “makes up” for it by adding numerous social and behavioral correlates of being an African American. These metrics correlate with each other as well, but end up being treated as independent variables due to the flaws of the models. So instead of the hoped-for bias correction, we may actually augment biases by double counting all the hallmarks of blackness against the defendant. This is what I worry about with language AI models: that they may be prone to amplifying biases if they double count various correlated metrics as independent variables. Or something of this sort. It’s usually a bigger problem for the less common situations, because in the most common cases the AI may base its decisions on one or two hallmarks at a time and train itself to be on target. But rare cases where many hallmarks are stacked up in the same direction may simply be too rare to matter in training…
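
Dmitry’s double-counting worry can be made concrete with a toy calculation. The sketch below is purely illustrative (the naive-Bayes-style scoring, the likelihood ratio of 3, and the names `log_odds` and `signal` are all invented, not anything a real risk-assessment system is known to use): a model that sums log likelihood ratios of features it assumes are independent will count one underlying signal twice if it arrives as two correlated variables.

```python
import math

# Toy sketch of double counting: a naive-Bayes-style score adds up
# log likelihood ratios of features it assumes are independent.
# If two input variables are really the same underlying signal,
# that signal's contribution to the score is counted twice.

def log_odds(prior_odds, likelihood_ratios):
    """Posterior log-odds under a (naive) independence assumption."""
    return math.log(prior_odds) + sum(math.log(r) for r in likelihood_ratios)

signal = 3.0  # one real signal, with likelihood ratio 3

once = log_odds(1.0, [signal])           # counted once: log 3 ≈ 1.10
twice = log_odds(1.0, [signal, signal])  # the same signal fed in as two
                                         # "independent" correlates: ≈ 2.20
print(once, twice)
```

The point is only that the amplification is mechanical: nothing in the model knows the two variables are the same thing.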

  26. Dmitry Pruss says

    TL;DR it may be possible for an AI to improve its winning score in the more common scenarios, at the expense of doing even worse in rare special circumstances.

  27. PlasticPaddy says

    @de
    Is liomsa/leithíse é = It’s mine/hers
    Mo cheann/a ceann-sa = mine/hers
    sa or the lenited form se is emphatic. The first form with le indicates possession and would be only used for a pen and not a child.
    In the second form ceann (“one”) would be replaced by the more specific word so you don’t really say, e.g. “mine was tired” or “mine was in the wash” but “my son was tired” or “my shirt was in the wash”.

  28. For example, courtroom AI excludes race from the input data, but more than “makes up” for it
    The AI does NOTHING. People who write the code for it do.

    The more important issue for me is the flaws in the models themselves
    AND
    The bias in the datasets isn’t something it’s possible to avoid, or even to correct well.
    And this here is the problem people generally fail to grasp: a model is the reflection of the data. You can’t make a distinction between the two the way Dmitry does here.

    These metrics correlate with each other as well, but end up being treated as independent variables due to the flaws of the models.
    This should not happen. Any halfway competent practitioner of ML (I refuse to call it AI), even a two-bit n00b like me, should know this. And this is a large part of the problems under discussion here: the BERT model (which is the main topic here) is good because it uses an incredible amount of data, and like every ML system, it is largely a black box. The data is so big and messy that it can’t even be analyzed in a proper statistical manner, and so people don’t even attempt it. Ironically, only black-box ML systems can make sense of it, but by doing so, they expose the biases inherent in the data. This was one of the points Dr. Gebru made in her controversial paper. Except there was nothing controversial about it, and she is absolutely right.

  29. David Eddyshaw says

    ML (I refuse to call it AI)

    Linked from there is the interesting

    https://www.technologyreview.com/2020/07/20/1005454/openai-machine-learning-language-generator-gpt-3-nlp/

    which concludes with the should-have-been-obvious-without-needing-to-be-spelt-out remark

    The Turing Test is not for AI to pass, but for humans to fail.

  30. Athel Cornish-Bowden says

    Modern Welsh grammar does call Welsh fy “my” etc “possessive adjectives”,

    As a Welsh speaker you can perhaps shed light on something I was thinking a few days ago while addressing a Christmas card. Someone I know lives in a street in Helston, Cornwall, called Gwarth An Drae. Google Translate doesn’t do Cornish (and despite my name I don’t, either), but it does do Welsh, and says it means The Disgrace, a rather improbable name for a street. Do you agree with Google Translate? As for Cornish, someone suggested it means The Summit: what about that (in Welsh)?

  31. The Turing Test is not for AI to pass, but for humans to fail.
    As anyone who’s ever been on Twitter knows very well 🙂

  32. David Eddyshaw says

    Gwarth is certainly “shame” in Welsh, but there’s also a gwarth “shore, wharf”, which might be more plausible. Welsh gwarthaf is “summit”; Cornish “gwartha” seems to mean “higher.”

    https://www.cornishdictionary.org.uk/

    If the name uses English spelling conventions, the second bit perhaps could be the Cornish equivalent of y dref “the town” (“an” is definitely the article “the.”)

    My best (wild) guess would be “Town Wharf.” Helston seems to be inland nowadays, but apparently was not always so; I don’t know if the location of the street is compatible with the “wharf” interpretation.
    My second guess would be “Upper (part of) Town”; actual geography might well differentiate the two possibilities quite neatly.

    [EDIT: Google Maps seems to rule out any wharvish interpretation. Oh, well.]

  33. Lars Mathiesen says

    As I read the Munro piece, the problem with the news stories wasn’t a gender bias, but that newspaper style very rarely uses the independent possessive pronouns at all — so the neural net learned about her just fine (and him and his) but not hers or mine.

    (In Danish and Swedish, and I guess Norwegian, possessive determiners and possessive pronouns are identical and decline as strong adjectives, but unlike regular adjectives they don’t take articles and they are inherently determinate. “Adjective” isn’t as far off as for English, though. I think meus and so on in Latin were even more “regular” as adjectives, but I don’t remember for sure.)

  34. Lars,

    but that is precisely the problem 🙂 The data is skewed in some way and this needs correcting. And this is a simple and relatively straightforward case with an easy fix.
    Remember, the issue is that the language model assigns an incorrect part-of-speech tag, which happens all the time, for example with proper nouns, which often get tagged as common nouns or even verbs. I would simply add what we call a fix list, i.e. a list of wordform–tag pairs that overrides the model’s selection. And presto.
    The problem is that some biases are more insidious, more complex and not that easy to identify.
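
bulbul’s fix-list idea can be sketched in a few lines of Python. Everything here is hypothetical illustration: `MODEL_TAGS` stands in for a trained tagger’s (buggy) output, and `FIX_LIST` and `tag_tokens` are made-up names, not any real library’s API.

```python
# A minimal sketch of a "fix list": a hand-curated table of
# wordform -> tag pairs that overrides the statistical model.
# MODEL_TAGS stands in for a trained tagger that mislabels
# rare forms like "hers" and "mine".

MODEL_TAGS = {
    "his": "PRON",
    "hers": "NOUN",   # the bug under discussion
    "mine": "NOUN",
}

FIX_LIST = {          # overrides, applied after the model
    "hers": "PRON",
    "mine": "PRON",
}

def tag_tokens(tokens):
    """Tag each token with the model, then let the fix list win."""
    return [(t, FIX_LIST.get(t.lower(), MODEL_TAGS.get(t.lower(), "X")))
            for t in tokens]

print(tag_tokens(["his", "hers", "mine"]))
# [('his', 'PRON'), ('hers', 'PRON'), ('mine', 'PRON')]
```

(Layering rule-based overrides on a statistical tagger is standard practice; spaCy, for instance, ships a component for exactly this kind of post-hoc tag correction.)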

  35. Pet peeve alert:
    possessive determiners and possessive pronouns are identical
    So why the two labels?

  36. Lars Mathiesen says

    @bulbul, to expand that, Danish has a word min/mit/mine that functions like my (possessive determiner) and mine (possessive pronoun) in English. I never memorized the term for that word class in Danish school grammar, but I can’t imagine that there are two different ones.

    (Didn’t English used to be more like Danish with this, mine host and all that? I can see how my (and thy) come from loss of the nasal in unstressed position, pre-consonantal at first, which is then grammaticalized — but where the -s comes from in hers, yours and theirs is not so obvious.)

    As to correcting the bias, Munro suggests finding some juicy sentences with possessive pronouns and simply adding them to the input.
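
Munro’s suggested remedy (adding sentences that actually contain the rare forms to the training input) can be sketched like this; the corpus, the threshold, and the helper `needs_boost` are invented for illustration:

```python
from collections import Counter

# Sketch of the augmentation fix: if the training corpus barely
# contains independent possessives like "hers" or "mine", append
# sentences that do, so the forms are no longer vanishingly rare.
# The corpus is a made-up stand-in for newspaper text.

corpus = [
    "she gave him his coat",
    "he thanked her for the coat",
    "the coat was warm",
]
augmentation = ["the coat is hers", "the scarf is mine"]

def needs_boost(corpus, form, min_count=1):
    """True if `form` occurs fewer than `min_count` times in the corpus."""
    counts = Counter(w for sent in corpus for w in sent.split())
    return counts[form] < min_count

for form in ("hers", "mine"):
    if needs_boost(corpus, form):
        corpus.extend(s for s in augmentation if form in s.split())

print(corpus[-2:])
# ['the coat is hers', 'the scarf is mine']
```

Real training sets are of course orders of magnitude larger, but the principle is the same: the model can only learn distinctions its input actually exhibits.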

  37. David Eddyshaw says

    @ACB:

    Looking up Welsh gwarthaf in the Geiriadur Prifysgol Cymru to see if it was cognate with Latin vertex (apparently not), I notice that it actually cites Middle Cornish gwartha as having the same meaning, so Gwartha ’n Dre would indeed be equivalent to Welsh Gwarthaf y Dref “Summit of the Town.” Your informant evidently knows what they are talking about.

    Didn’t English used to be more like Danish with this

    Yes. Old English mīn etc worked like the corresponding forms in German and Danish. I think Robert Munro has probably got it backwards with “his”, “hers” etc: “his” has not lost an additional “s” by haplololology; rather, “hers”, “theirs” etc have gained one by analogy with “his” and with noun genitives. (Incidentally, possessive “its” is a relatively recent analogical formation; the old Authorised Version of the Bible still uses “his” as the possessive of both “he” and “it.”)

  38. Athel Cornish-Bowden says

    My second guess would be “Upper (part of) Town”; actual geography might well differentiate the two possibilities quite neatly.

    I asked my sister about it when I talked to her on the phone yesterday: she said that Gwarth an Drae is indeed in the higher part of Helston. She doesn’t live there, but she has been there.

    Anyway, thanks for all of your analysis.

  39. just to jump in on “TERF bangs”:

    it’s not about making fun of people’s haircuts. it’s just that it makes no sense outside of its original subcultural context – specific US/Canadian white queer counterculture spaces of the mid/late 20-teens. in those spaces, fashions for particular haircuts moved quickly. each wave tended to leave clusters of people who retained that particular haircut for quite a while after the larger numbers who’d adopted it had moved on. predictably, those clusters weren’t purely random (especially in a particular city), so it could be possible to talk (in nyc) about the “jewish social justice haircut”* or (more widely) “TERF bangs” based on who had most visibly held onto those cuts. only a few of those labels had any lasting public presence, but i could say “gone-a-do” or “enby oogle mullet” to hundreds or thousands of people and they’d know exactly what to picture…

    * short sides, slightly longer on top, with a pompadour puff in front

  40. David Marjanović says

    That’s amazing about the haircuts. And horrifying – having had the same haircut all my life (cut 3 × a year, with no change to the overall shape), living in an environment where every haircut is a detailed political/cultural statement would be a deeply scary prospect! But it’s definitely more interesting than my guess (which was that it was an extrapolation from 2 or 3 cases, like “Karen”).

    Didn’t English used to be more like Danish with this, mine host and all that?

    At that stage, though, it was just h-dropping: the Authorised Version has mine host, mine hand, mine eye, an hundred.

    Old English mīn etc worked like the corresponding forms in German and Danish.

    I’m too lazy to look up if that included the two adjective declensions. German:

    das ist mein Mann / meine Frau / mein Kind “this is my husband/wife/child”
    das ist meiner/meine/mein(e)s “that’s mine”

    (Both of these forms are Possessivpronomen or besitzanzeigende Fürwörter in German school grammar. Except for meiner × person & number, they’re distinct from the rather obsolete genitive forms of the personal pronouns.)

  41. I had to look up both enby and oogle. I didn’t think they’d wear mullets. Still in the dark about “gone-a-do”.

  42. enby = n-b = nonbinary
    oogle = panhandler

    (or so Google tells me)

  43. an environment where every haircut is a detailed political/cultural statement

    o! that’s not it. it’s that once a haircut’s days of widespread popularity are past, it acquires associations with the folks who’ve kept wearing it. and that’s just a general dynamic of fashion – it’s why a mullet or a DA now has particular (class, region, subculture) connotations that it didn’t have in its heyday…

    and yes: “enby” <- nb <- nonbinary; "oogle" less about panhandling specifically than a broader traveling lumpen counterculture.

    "gone-a-do" has to do with the name of a particular barber; "(hair)do" is the only non-idiosyncratic thing involved.

  44. David Eddyshaw says

    I’m too lazy to look up if that included the two adjective declensions

    Nah, Old English mīn and the rest are simply declined with the strong adjective endings throughout.
    I’ve always admired the Germans for making adjective flexion even more complicated since MHG. It’s up there with the French achievement of making their irregular verbs even more irregular than Latin. I applaud this resistance to linguistic entropy.

  45. 1. “Hers isn’t a pronoun” is clickbait. It’s nothing to do with discrimination, racial, sexual, or otherwise. It’s just a failure of the system.

    2. Yes, it depends on the data that it’s fed. And you’re not going to fix that so easily. You are focused on racial, sexual discrimination, etc. It’s another bias.

    Recently I called someone in Australia and when I pointed out that China had covid under control but it was rising again in Japan and Korea, I got the response, “I hadn’t heard about those. The media here don’t cover them very much.”

    Bias writ large. Much of the Western press shows its bias by what it chooses to cover (in Australia, Europe and North America) and not to cover (the rest of the world — unless there is something tragic or shocking, which then reinforces the bias against ‘shithole countries’. Russia doesn’t do too well, either). People from shithole countries really should join together and write a letter to Google and ask them to correct their bias.

    (Incidentally, I would be interested to see the results if an ML black box were fed Language Hat as its sole source of data.)

  46. David Eddyshaw says

    I would be interested to see the results if an ML black box were fed Language Hat as its sole source of data

    It would inevitably lead to the Singularity.

  47. I like “traveling lumpen counterculture”. Are oogles roughly the same as crusties? Maybe less specifically punk?

  48. John Emerson says

    The Classifying Authority has officially added the US to the list of shithole countries, so that meme is up in the air now.

  49. David Eddyshaw says

    It is important to avoid confusion between a [travelling lumpen] counterculture (on the one hand) and a travelling [lumpen counterculture] (on the other.) A fortiori, lumpen travelling counterculture and countercultural lumpen travelling (which themselves have often been confused in the literature.)

  50. Lars Mathiesen says

    Isn’t hern (and hisn) a thing in some forms of English? Analogy with mine instead of his, perhaps.

  51. David Marjanović says

    simply declined with the strong adjective endings throughout.

    …oh, that explains why doing that (m/n form only) is an archaism in German:

    Die Rache ist mein, spricht der Herr “Vengeance is mine, saith the Lord”
    …und schanghaien die Mannschaft! Dann ist die Kohle unser! “…and shanghai the crew! Then the moolah shall be ours!”

    and not to cover (the rest of the world —

    Austria’s TV news once took the cake by opening an item with “For weeks there has been an uprising raging in northern Algeria”.

  52. @Lars Mathiesen: Yes, his’n and her’n do occur in some dialects. I have occasionally, since coming to South Carolina, used the former myself. However, I think the latter is blocked for me by the existence of hers. (I don’t know whether hers’n also exists, but if it does it must be much more restricted.)

  53. China had covid under control but it was rising again in Japan and Korea, I got the response, “I hadn’t heard about those. The media here don’t cover them very much.”
    ….
    Bias writ large. ….‘shithole countries’. ….

    It is our (I mean us, the discriminators) problem, not theirs (the discriminated).

  54. I’ll explain the above.
    First, Korea is a more developed country than many in the West. Second, it is the first time the West (Russia included here) is behind. In March it looked like an exam with a passing score of 80, where Korea had 90, China 80 (on a second attempt), the EU 20, and Russia was planning to get 30 and be ahead of the EU. It is sheer madness.

    A reasonable topic for society to discuss would be a detailed analysis of the success stories. That is not what was being done here: our response was modelled after civilized countries, and it was civilized failures (Italy) rather than civilized successes (Germany) that were featured in the news.

    What a pity that it is Koreans who are civilized here.

  55. That’s amazing about the haircuts. And horrifying – having had the same haircut all my life (cut 3 × a year, with no change to the overall shape), living in an environment where every haircut is a detailed political/cultural statement would be a deeply scary prospect!

    Your haircut almost certainly is a political/cultural statement, but you’ll have to ask someone else (a woman is probably the best bet) what it states.

    Yes, his’n and her’n do occur in some dialects.

    On shorting stock:

    He who sells what isn’t his’n
    Must buy it back or go to prison.

  56. The lists of nouns predicted to co-occur with “hers” and “his” but not both are picturesque:

    ability account bar bathroom blonde book castle decision easily food girls heat home kitchen leg lips – curves daughter dress perfume – methods mind music necklace phone sister sound speed spirit talent thighs toy voice wife women

    arm ass barrel bid bottles butt cash coat crown deals fault fresh fun future inside key knife length line lord nose now pain piece price reputation sale shaft smell top wound

  57. our response was modelled after civilized countries, and it was civilized failures (Italy) rather than civilized successes (Germany) that were featured in the news.

    I have no idea where you’re getting your news, but I’ve been reading and hearing a lot about South Korea, Taiwan, and other successful countries. As for Italy vs. Germany, of course news features failures more than successes; “man successfully deposits large sum” is not news, “man loses large sum on way to bank” is.

  58. Lars Mathiesen says

    @drasvi, curves daughter dress perfume mom — that’s the part of the list that the BERT system thinks is more likely to be hers than his. The blog post doesn’t say how the Venn diagram was derived in the first place, but it can’t simply be “most likely to co-occur” because then there would be no overlaps. “Predicted in more than x% of cases” is my guess, but it will cost 48 dollars to find out what x is.

    EDIT: Actually it’s a mystery, the Venn diagram must be derived from some other test than just letting the system predict the pronoun — Munro says that “The book is his” is predicted more often than “The book is hers,” but book is still listed in the “hers but not his or theirs” part.
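    One plausible reconstruction of the mystery (an assumption on my part; Munro’s post doesn’t specify his rule) is a ratio threshold: a noun lands in the “hers”-only region when the model’s probability of “hers” beats “his” by more than some factor x, in the “his”-only region in the mirror case, and in the overlap otherwise. A toy sketch, with invented probabilities standing in for BERT’s fill-mask scores:

```python
# Toy sketch of one way the Venn regions *might* be derived (hypothetical:
# the blog post doesn't specify the rule, and the threshold x is invented).
# The scores stand in for a model's probabilities of filling
# "The NOUN is [MASK]" with each pronoun; all numbers are made up.

def venn_regions(scores, x=2.0):
    """Split nouns into hers-only / his-only / overlap by a ratio threshold x."""
    hers_only, his_only, overlap = set(), set(), set()
    for noun, (p_hers, p_his) in scores.items():
        if p_hers > x * p_his:
            hers_only.add(noun)
        elif p_his > x * p_hers:
            his_only.add(noun)
        else:
            overlap.add(noun)
    return hers_only, his_only, overlap

toy_scores = {
    "necklace": (0.30, 0.05),  # strongly "hers"
    "money":    (0.01, 0.23),  # strongly "his"
    "book":     (0.10, 0.12),  # close call: lands in the overlap
}

hers_only, his_only, overlap = venn_regions(toy_scores)
print(hers_only, his_only, overlap)
```

    With a rule like this, which region a noun lands in depends on the threshold x as much as on which pronoun narrowly wins, which is one way seemingly contradictory placements could arise.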

  59. Lars, more details here, but not enough:( I think it is similar to his

    Step 3: Mask attributes to predict new ones. Using the new dataset, mask attributes like “car(s)” so that BERT predicts the most likely tokens where “car(s)” is masked in a sentence like “The [MASK] is hers”

    He first fed “the X is hers” to BERT, and obtained a list of nouns. Then he repeated it with “…is his”. Made a single list (not sure how exactly: was it like in the Venn diagram, only items predicted for both pronouns?). And then he was trying “the jewelry is X”.

    The Venn diagram must have been obtained in a similar, but slightly different way: “shit” is in the list, but in the Venn diagram it is neither male nor female. “ass” is not in the list at all.

    His dataset is about cars. E.g.

    “What did Alex’s [MASK] do? Hers accelerated.”.
    Then:
    “What did Alex’s mom do? […] accelerated.”
    “What did Alex’s ass do? […] accelerated.”

    Meanwhile: “Figure 2 compares the model predictions to the training data which is Wikipedia and BookCorpus.”
    :/

  60. For example “mom” is 7.4 times more likely to make BERT predict “hers” than “his”, when averaged across all contexts, and “money” is 23.0 times more likely to make BERT predict “his” than “hers”.

    What is “0.28 his”? Isn’t it the same as “3.57 hers”?
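    Arithmetically, yes: the same association expressed relative to the other pronoun is just the reciprocal of the ratio, so 0.28 toward “his” is 1/0.28 ≈ 3.57 toward “hers”. A one-line check (7.4 and 23.0 are the figures quoted above; 0.28 is the figure the question asks about):

```python
# "0.28 x his" and "3.57 x hers" describe one and the same association;
# flipping the reference pronoun just takes the reciprocal of the ratio.

def flip_ratio(r):
    """Express a likelihood ratio toward one pronoun relative to the other."""
    return 1.0 / r

print(round(flip_ratio(0.28), 2))  # 3.57
print(round(flip_ratio(23.0), 3))  # "money": 23x "his" is about 0.043x "hers"
print(round(flip_ratio(7.4), 3))   # "mom": 7.4x "hers" is about 0.135x "his"
```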

  61. @Y: yes, exactly! “oogle” used to mean specifically young/new/inexperienced/inept crusties (there was a band called Oogle Orphanage), but has expanded to cover a wider range. and i don’t know if it’s ever been used outside punk/punk-adjacent subcultural spaces.

    @DE: and in this case, both a [traveling lumpen] counterculture and a traveling [lumpen counterculture], which only muddies the waters more…

  62. An in-depth look at New Orleans oogles (bonus: illustrations by Ben Passmore, a great up-and-coming cartoonist).

  63. David Marjanović says

    Your haircut almost certainly is a political/cultural statement, but you’ll have to ask someone else (a woman is probably the best bet) what it states.

    I mean, I decline the obvious alternative to mine in part because it’d look fascist on me, so there’s the avoidance of such a statement. Take that as a very vague negative statement if you like…

  64. An in-depth look at New Orleans oogles

    Great piece, thanks.

  65. David Marjanović says

    (Can’t edit anymore: my hair looks rather medieval, especially when it’s longer, but currently there’s no ideology or subculture associated with the Middle Ages, fortunately.)

  66. @John Cowan: It was discovered some years back that, due to the electronic systems that now manage securities trading, it was possible to short stock without actually having a share to sell. This was termed “naked short selling.” It worked because the systems used to track stock sales by institutional investors did not keep track of who possessed what in real time; the transactions only had to balance out in the end, with the reconciliation process taking days to complete. That made it possible to sell a stock, buy it back when it went down, then borrow and return it, rather than having to borrow the stock from a market maker first.

  67. I’ve been reading and hearing a lot about South Korea, Taiwan, and other successful countries.

    I’m not sure if you’re typical, Hat. Or maybe things are better in the US.

  68. languagehat, I mean Russian news. The American public was discussing who’s to blame, Trump or China, when I checked (in March, then in September). I think they are still discussing it. The most meaningful thing to do is public discussion and analysis of success stories. That is not what I see in the news.

    My main point was though: when you dismiss a country as a “shithole”, it harms you, not them.

  69. Quite right.

  70. John Emerson says

    Well, as I have explained, since Katrina and COVID, the US has officially entered the shithole category. And yes, when I say that I am one of the ones hurt.

  71. Back to Munro, I wonder if an ‘ideal’ BERT trained on an ‘ideal’ dataset is supposed not to introduce such biases.

    Consider an infinite dataset including all possible contexts, in each of which the pronoun is “hers” 50.01% of the time and “his” otherwise. Is an ideal system supposed to always predict “hers” (because girls are the majority), or should it also be 50–50?
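    The answer arguably depends on the decoding rule, not the data: a model returning calibrated probabilities preserves the near-tie, but taking the single most likely token (argmax, which is roughly what “BERT predicts X” means in these probes) turns any majority, however slim, into a unanimous one. A toy illustration, with no actual language model involved:

```python
import random

P_HERS = 0.5001  # hypothetical 'ideal' corpus: "hers" wins by a hair everywhere

def predict_argmax(p_hers):
    """Greedy decoding: always return the single most probable pronoun."""
    return "hers" if p_hers > 0.5 else "his"

def predict_sample(p_hers, rng):
    """Sampling decoding: reproduce the underlying frequencies."""
    return "hers" if rng.random() < p_hers else "his"

rng = random.Random(0)
greedy = [predict_argmax(P_HERS) for _ in range(10_000)]
sampled = [predict_sample(P_HERS, rng) for _ in range(10_000)]

print(greedy.count("hers") / len(greedy))    # 1.0: the slim majority wins every time
print(sampled.count("hers") / len(sampled))  # ~0.5: the near-tie is preserved
```

    This is one reason a small statistical skew in training data can surface as a categorical bias in a deployed system that always serves the top prediction.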


    In English, at any rate, “mine”, “hers” etc are exactly not adjectives; you can’t say “mine brother”, “hers bicycle.” … For “mine”, you need to say something like (f)yn un i “my one”; there’s nothing corresponding to the English “mine” series ….. Like Welsh, Kusaal needs a dummy head to express “mine”: m din “mine”, again exactly parallel to e.g. dau la din “the man’s [one.]”

    DE, but you find words with meanings like “red” in exactly these positions: A red dress. The dress is red. Give me the red one.
    You can analyze my and mine, red and red one as two forms. The second form is an NP.

    It is still connected to “red” semantically (the operation “refer to an item by her attribute that is distinctive in the context” is defined for every adjective and must somehow reflect adjectiveness – in this sense, forming adverbs is a much less trivial operation), and many languages have morphology for this. “Mine” is a witness.

  72. I don’t know why “item” is ‘her’ :/

  73. currently there’s no ideology or subculture associated with the Middle Ages, fortunately

    I have wondered about that. Medieval fairs are very popular in Germany, Austria and the Czech Republic, and they generally seem pretty wholesome. On the other hand, there is an entire oddly popular music genre of German-language rock/metal bands singing faux-medieval songs (e.g. Faun, Feuerschwanz, Schandmaul), who seem politically over on the right to me. Or at least popular with a demographic (no university degree, „native“ stock) that tends to vote right-wing.

  74. Salafism.

  75. Of course there are numerous subcultures associated with the Middle Ages. I mean, there are many fans of this period, and they group together: from university departments to youth groups. These two interact. University departments are a valid example. It is not by chance that Rosamond McKitterick is “Rosamond”, I think. One professor impressed me with her grey hair: I think the decision not to dye it has to do with her interests.

  76. John Emerson says

    “Renaissance faires” are a 50-year-old hippie tradition. Of course the Renaissance is not exactly medieval, but these faires are not fussy about details. The Society for Creative Anachronism is that old, and they have jousts.

  77. I think “ideology or subculture associated with the Middle Ages” was intended to imply “subculture associated with something else that uses the Middle Ages as a front/symbol”; obviously there are people interested in every historical period, but there’s a difference between dressing in pseudo-medieval garments because you think they’re cool and dressing in CSA gray because you think The South Shall Rise Again.

  78. John Emerson says

    Well, “subculture” is a much weaker term than “ideology”, and the Rennfaire / SCA people are definitely a subculture, sort of a messier pre-Raphaelite kind of thing.

  79. Sure, I’m just trying to clarify what I suspect David M meant.

  80. David Marjanović says

    Salafism.

    Ooh, good point. But that goes with much longer beards. 🙂

    “Renaissance faires”

    That kind of thing is much less widely known over here than in the US, in my impression anyway.

  81. David Eddyshaw says

    my hair looks rather medieval, especially when it’s longer, but currently there’s no ideology or subculture associated with the Middle Ages

    Tonsures. Church vestments* (especially if you do the Cambridge Mediaeval History intellectualler-than-thou thing of starting the Middle Ages with the accession of Augustus.)

    Standard Hasidic kit is basically posh 18th Century Polish mainstream stuff (OK, rather too late for the Middle Ages. But it’s the same principle.)

    *Essentially, standard Late Roman civil service uniforms. It’s as well they didn’t have pinstripe suits and bowler hats in those days. The Nation of Islam has a parallel sartorial outlook, perhaps.

  82. @David Eddyshaw: Elijah Muhammad started wearing bowties consistently in the 1950s—after the peak of the bowtie’s popularity in America, but before wearing one became seen as an obvious affectation. It quickly caught on with men in the Nation of Islam, with whom it continues to be popular today. It was never required of NOI members—Malcolm X hardly ever wore a bowtie, for example; however, it did become part of the standard 1950s–1960s uniform of the Fruit of Islam (the NOI’s all-male paramilitary wing)—black suit, white shirt, red bowtie.

    I wonder now whether the Fruit of Islam members who murdered Malcolm X were dressed in that characteristic FOI style when the assassination was carried out. Even if they were, they might not have been particularly recognizable among Malcolm X’s sincere followers. After Malcolm X left the NOI and formed the Organization for African-American Unity, many of the people who attended his meetings were current or former NOI members, and many of them still wore bowties.

  83. Strelkov was into “historical reconstruction”: 1, 2.

    And there were numerous neo-pagans in the Donbass war, on both sides, especially* on the Donetsk side (up to villages with attempts to recreate the tripartite caste division).

    And God knows what subcultures contributed to ISIS.

    P.S. Maybe not “especially”, if we speak about neo-pagans in general. I spoke about a particular movement, Rodnovery. These are more common in Donetsk.

  84. John Emerson says

    Per FS Fitzgerald, TS Eliot’s weird hairdo (greased and parted down the middle) was the height of Ivy League hipness ca. 1915-1920. Those with this hair were called “slickers”.

    It seems to have survived among the American strategic elite, which still tends to be Ivy — Robert McNamara was a slicker. TSE seems to have had some kind of military connection — around 1950 he gave a speech on a platform with 4 or 5 American generals / admirals.

  85. John Emerson says

    What I think I have noticed about both Amish and Hasidic dress is that, besides the anachronism, 1). they seem to make clothing oversized and let their skinny kids grow into them, and 2). they very effectively grow into their clothes and become quite portly.

  86. I remember my father pointing out that one of the reasons that Monty Python found both bishops and barristers to be easy pickings for mockery was that they both continued wearing costumes that had gone out of fashion long before. Barristers’ wigs are hundreds of years out of date as a fashion, while a bishop’s regalia is closer to two thousand years out of date.

  87. I don’t like how Munro presents it. He is trying to impress*, and to impress with the bias as well. Two things must have contributed:

    – “Hers” (as opposed to “her”) is a rare pronoun, and as he said, it is especially rare in the texts used for BERT’s training data.
    – His dataset was all about cars – and he didn’t mention it. “hers accelerated”, “The dealer gave hers fresh paint”, “The dealers liked that hers cleaned easily”, “The dealers liked that hers sold quickly.” I see how it is fun with “butt” instead of “car”, but the bias? Dealers are all over the place:/

    And then he presents it as an example of bias in general. Naturally, it is his post that draws the attention of the journalist; naturally, the journalist and I, the reader, do not understand what exactly is going on.

    On the other hand, analyzing worst-case scenarios makes sense.


    * “Only 16.7% of short papers submitted to EMNLP 2020 were accepted, so we are grateful to the reviewers of this conference for accepting our paper.”
    “If you use BERT-via-a-Google-search for “Top NLP Conferences”, the first result should be this image from my PhD:”
    “Robert Monarch (formerly Robert Munro)”

    He could just have said that his penis is big. This is, after all, what men’s and women’s jobs are: whenever some job can be equated to having a big penis (in terms of money or prestige), men compete for it.

  88. J.W. Brewer says

    @Brett: Thanks to the glorious combination of capitalism and the internet, you can now purchase from the privacy of your own home bow ties with a star-and-crescent logo, aimed at the NOI market. The pricing suggests that they are probably not made of pure silk, but perhaps the target audience is assumed to be budget-conscious.

    https://noishirts.com/shop/ols/categories/crescent-bow-ties

  89. Once I looked up Thailand (not in Google) and the search engine offered me hundreds of pictures with beaches, without a single person on several pages of results. It was so unusual, results without people at all, that I showed it to my friend, and he told a joke. A Roman Catholic priest, to a Jew: “I can picture your Jewish paradise! Noise, yelling, children all over the place, diapers hanging…”. The Jew: “And I can quite imagine your Catholic paradise: flowers, trees, birds… and nobody [out there]”.

    In a classic example, if you searched “unprofessional haircut” on Google, you would see, among others, pictures of black women with big “natural hair”.

    I tried it in Yandex, a Russian competitor of Google. There are only men; the first lady is at position 166, and the image is captioned: “when black women wear their hair curly/ in its natural state, people criticize them for not properly grooming themselves & say it’s “unprofessional”.” The next one is #288.

  90. “Renaissance faires”

    That kind of thing is much less widely known over here than in the US, in my impression anyway.
    I dunno. During the warm season, in non-COVID times, there always seems to be a Mittelaltermarkt ongoing at some place in a 50 km radius around Bonn on any given weekend. Our quarter had one three years ago as part of its 800-year celebrations, and many of the exhibitors were professionals who made a living from the activity.

  91. PlasticPaddy says

    @hans
    The exhibitors also may have particular crafts (dressmaking, blacksmithing, food preparation and soapmaking) which they demonstrate in suitable venues other than the fairs (e.g. castles or folk parks), using replicas of the original tools and processes.

  92. Sure. Although my impression is that at such fixed locations, the exhibitors are there constantly; and if they’re not because the showcasing is only a part of special events, then I’d count those events as part of the market circuit. For some exhibitors, the circuit is certainly wider than Medieval or Renaissance themed events, e.g. there are places, events and markets for traditional handicraft without any medieval pageantry.

  93. A friend of mine earns his living this way. He is not a professional, but he is impressive and a good actor. City governments began to sponsor such things here, and a few years ago he was invited to an event set in 1914, just as part of the crowd (for a small fee). The next time they invited him for a minor role in a movie, and since then he has earned his living running slave markets, robbing people, and taking Christmas away from children.

  94. Lars Mathiesen says

    I only know of one around here, by name, but it’s more generally historical reenactment by groups that really take it seriously, not many professionals in sight. One group has a Roman army camp and practice the shield fortress thing. (The Latin commands were a bit off. Maybe the decurion was a Vandal). And there were hunter-gatherers in deer-hide tents, and US Civil War-themed horseback tournaments. A quarter mile takes you through 5000 years. I’m hoping it isn’t cancelled this year too, it’s in the Pentecost long weekend, late May.

    There’s a smaller Viking market circuit, it seems, and every historical castle in Sweden seemed to have one too. All season or select weekends.

  95. There was a huge influx of young people after the fall of the USSR, forming several interacting and overlapping communities: Tolkien fandom (boys and girls who would gather in forests, fight with swords, compose songs and speak Sindarin), the role-playing crowd (boys and girls who would gather in forests, fight with swords, play AD&D otherwise, but wouldn’t speak Sindarin) and the historical reconstruction/reenactment crowd. All with loose connections to the university crowd, to the musical scene, and, in the case of historical reenactment, to military veterans too.

    In the USSR any youth movements were seen with hostility and suspicion by both the authorities and mainstream society. This continued into the 1990s. Historical reenactment folks were not treated with the hostility shown to, say, hippies (hippie girls were often raped by police), but they were not integrated into the mainstream either.

    Recent commercialization is interesting. Commercialization is legitimization: money is a means of dialogue. When you are profitable, you are accepted; otherwise you are a “subculture”.

  96. David Marjanović says

    Dungeons and Dragons, or attention deficit disorder? Or both at the same time?

  97. it can aid and abet discrimination. Is this really so hard to understand?

    An armchair-philosophy argument that something can abet discrimination is not enough to conclude that it should be intervened in. (“Blogging can abet discrimination, therefore blogging should be banned”?) You have to show that it actually does so to a sufficiently major degree. And in any case I continue to wonder what a remedy would even look like. What do you expect should happen when someone googles the phrase “unprofessional haircut”?

    Representation, BTW, is a complete non sequitur, since a single search phrase is not “mass media”. Nobody regularly goes around googling this particular phrase, let alone expecting a balanced and unbiased picture from it.

    If it’s not obvious, note that I am not protesting the suggestion that word-association engines should probably be monitored for bias and not deployed prematurely, only the specific claim that ‘if your search algorithm is showing pictures of black women with “natural hair” for “unprofessional haircut,” you need to remedy it.’ (I assume it’s an oversight, but as worded this even suggests that any appearance of black women among such search results would be unacceptable, no matter how minor.) Kevin is right that some problems exist in the outside world and you cannot fix them by tinkering with Google’s search or any other piece of software.

  98. David Marjanović says

    Who does google for “unprofessional haircut”?

    If I were afraid of being discriminated against for my haircut, I’d google “professional haircut” instead, because the results for “unprofessional haircut” still wouldn’t tell me what to do. If I were a (presumably pointy-haired) boss and looking for an excuse to discriminate against employees or applicants, I’d also google “professional haircut” and throw out everyone who doesn’t match…

  99. Athel Cornish-Bowden says

    You can become Prime Minister of the UK without bothering with a professional haircut. You can become President of the USA without bothering with natural-looking hair.

  100. “Prime Minister of the UK without bothering with a professional haircut.”

    It is natural hair. Not “natural hair”. Natural hair. Not even remotely as natural as mine though.

    Who does google for “unprofessional haircut”?

    An article in the Guardian, with some links.

    And an archived version of the original Twitter post.

  101. John Cowan says

    Highly unprofessional, indeed downright unspeakable, haircuts are now a thing, because cut by the victim themselves or some cohabitant.

  102. David Marjanović says

    BoJo tousles his hair every time he steps in front of a camera. He wants to be underestimated.

    Trump reaches a similar result from the opposite starting point: as a narcissist, he’s completely convinced his complex combover is perfect.

    An article in the Guardian, with some links.

    Doesn’t answer the question beyond “one MBA student”…

  103. David Eddyshaw says

    Not sure that Johnson wants to be underestimated precisely; I think it’s more that he wants people to mistake him for “loveable” Boris rather than dead-eyed scheming ratbag Alexander. In the common-humanity stakes, he wants to be overestimated.

    It works, too, alas. The fall in his popularity seems to be because the sort of people who voted for him now think (reasonably enough) that he is weak and indecisive*, rather than because they have suddenly noticed that he is utterly unprincipled and doesn’t give a rat’s arse about them anyway.

    *But he is not incompetent. The assessment of the current UK government as “incompetent” is based on a radical misunderstanding of what that government’s actual objectives are. Those, they are achieving very nicely.

  104. Let us assume that people want to google “professional hair”, and do not want to google “unprofessional hair”. Likely it is correct. What if it can be generalized:

    1) requests that people want to make systematically produce “white only” results (unless the search string is “basketball”, that is, the referent is associated with a race) and cannot produce “black only” results.
    2) requests that people do not want to make – random transformations of the above – can sometimes produce “black only” results.

    I think one can say that #1 is symptomatic of a bias: a white person is a model. I am less sure about #2.

    P.S. I think it can’t be generalized this way. But in this specific case #2 follows from the fact that black women feel insecure about this: most of the pictures come from articles by black women protesting discrimination.

  105. “Doesn’t answer the question beyond “one MBA student”…”

    Well, the author of the proposal she followed was earlier in the chain. But she and her readers contributed a lot to popularizing it. In her case we can give a number of answers:

    1) people who were asked to do so in a tweet.
    2) black people, who want to (unexpectedly) find themselves a majority.

    But, seriously, the picture in her tweet is funny. Each time the algorithm introduces an unexpected relation (“thailand=>beaches” is expected, “thailand=>empty” is not), it is interesting and (visually) impressive.

  106. Stereotypes (“I see many A and they are predominantly B” => “B are A”, “A is B’ish”) are absolutely a part of our cognition. They also easily become shared knowledge. The algorithm here may illustrate or model their formation.

    I could predict that “haircut” is more often said about men and “hairstyles” is usually said about women, but when Yandex offers me 300 pictures of men and 2 women, I process that differently.

    A gendered distribution of a word across contexts is extracted from language, exaggerated, and then fed into my image-processing brain. The result doesn’t match any of my pre-existing stereotypes. I am surprised.

    A white person from a segregated society would hardly think about black women when asked about unprofessional hair. He would have applied the idea to himself and his friends.
    But Google is not white, it is also (some 10%) black. It changes its colour and sex, and I, again, am surprised.

    This is what I find interesting here. Why am I surprised, and whether “funny” effects produced by computer algorithms can be analogous to similar effects produced by similar algorithms in our brains.
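    The “extracted from language, exaggerated” step has a simple mechanical analogue: ranking. If one group’s items score only slightly higher on average, the top of a sorted list can be far more skewed than the underlying data. A toy sketch with invented numbers (a 60/40 base rate and a small average score gap):

```python
import random

rng = random.Random(1)

# 600 documents of one group score slightly higher on average than
# 400 of the other (all numbers invented for illustration).
docs = [("m", rng.gauss(0.6, 0.1)) for _ in range(600)] + \
       [("f", rng.gauss(0.5, 0.1)) for _ in range(400)]

# The "first page": the 20 highest-scoring documents.
top = sorted(docs, key=lambda d: d[1], reverse=True)[:20]
male_share_top = sum(1 for g, _ in top if g == "m") / len(top)

print(f"base rate of 'm': 0.60; share of 'm' on the first page: {male_share_top:.2f}")
```

    Selection at the tail of a distribution amplifies small average differences, so the first page typically comes out far above the 60% base rate — one candidate mechanism for “300 pictures of men and 2 women”.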

  107. John Emerson says

  107. A co-worker once applied for a bank teller job (then always a woman’s job). It was ~30 items long and was incredibly detailed: no panty lines or nipple bumps, of course, but (for example) glasses frames could not be darker than hair. Totally innocuous blandness was spelled out in great detail.

    I worked for minimum wage at a McDonalds in 1967, and we were expected to behave professionally at all times. That basically meant the complete suppression of individual personality.

  108. David Marjanović says

    I meant “underestimated” as in “oh, he’s Mostly Harmless and won’t be All That Bad…”, but yes, that’s not the whole story.

  109. John Emerson says

    THE DRESS CODE FOR THE JOB was ~30 items long and was incredibly detailed:

  110. John Cowan says

    The first necessity of any government is to keep its people(s) alive. Badenov seems to be failing at that, so I double down on “incompetent”.

  111. @John Cowan: When he was running for president in 2011, Mitt Romney explicitly made the argument to a crowd in the Granite State that the most basic job of the government was to keep people alive, so it was not a big deal to accept restrictions on our freedoms in the name of safety. On The Daily Show, Jon Stewart mocked Romney for saying this in New Hampshire, where the state motto is “Live Free or Die.”
