James Somers has a good analysis of “it turns out,” beginning by saying that Paul Graham knows how to use the phrase: “He works it, gets mileage out of it, in a way that other writers don’t. That probably sounds like a compliment. But it turns out that ‘it turns out’ does the sort of work, for a writer, that a writer should be doing himself.” He goes on to explain convincingly what he’s talking about, concluding:
In other words, because “it turns out” is the sort of phrase you would use to convey, for example, something unexpected about a phenomenon you’ve studied extensively—as in the scientist saying “…but the E. coli turned out to be totally resistant”—or some buried fact that you have recently discovered on behalf of your readers—as when the Malcolm Gladwells of the world say “…and it turns out all these experts have something in common: 10,000 hours of deliberate practice”—readers are trained, slowly but surely, to be disarmed by it. They learn to trust the writers who use the phrase, in large part because they come to associate it with that feeling of the author’s own dispassionate surprise: “I, too, once believed X,” the author says, “but whaddya know, X turns out to be false.”
Readers are simply more willing to tolerate a lightspeed jump from belief X to belief Y if the writer himself (a) seems taken aback by it and (b) acts as if they had no say in the matter—as though the situation simply unfolded that way.
It turns out, though, that (as pointed out by a couple of commenters) Douglas Adams expressed the same thought in The Salmon of Doubt:
Incidentally, am I alone in finding the expression ‘it turns out’ to be incredibly useful? It allows you to make swift, succinct, and authoritative connections between otherwise randomly unconnected statements without the trouble of explaining what your source or authority actually is. It’s great. It’s hugely better than its predecessors ‘I read somewhere that…’ or the craven ‘they say that…’ because it suggests not only that whatever flimsy bit of urban mythology you are passing on is actually based on brand new, ground breaking research, but that it’s research in which you yourself were intimately involved. But again, with no actual authority anywhere in sight.
(Via Geoff Pullum at the Log.)
Some of the semantic aspects of ‘it turns out’ are present in the Chinese word 原来 yuánlái. On the surface this means ‘originally’. But in many cases, it means ‘actually’, or ‘originally, before my incomplete understanding got it wrong’. 原来是这个意思！Yuánlái shì zhège yìsi – ‘Originally it’s this meaning!’, or more idiomatically, ‘So that’s what it means/you meant’.
The ‘turns out’ meaning can emerge in some contexts. For example, 他原来没去 Tā yuánlái méi qù ‘Actually, he didn’t go’ or ‘It turns out he didn’t go’.
Somewhat off the topic of the original post, but Chinese has a number of nice expressions meaning roughly “I wrongly thought, but”. The verb “yiwei” (I can’t type Chinese from my phone, I’m afraid) is another good example.
This is a very sensible analysis from Somers and an interesting read. Bathrobe and afinetheorem bring up an interesting point that deserves personal reflection. I’ve been studying Mandarin for a few years now, and both of these terms, 原来 (yuánlái) and 以为 (yǐwéi), do in fact convey a mistaken belief or thought. While pondering this, though, I feel like I am more inclined to put trust in the phrase that is said after the English “it turns out” versus the Chinese phrases. To me, the Chinese phrases translate into more of a change of opinion or situation rather than a mistake in thought. In other words, the English phrase could very well manipulate me, but hearing the Chinese phrases used, I more often than not think that it’s the speaker’s opinion, not the inevitable truth.
This could also very well be due to my level of fluency.
I think that in French “it turns out that X” would be “il se trouve que X”, implying that (contrary to the speaker’s previous opinion or lack of opinion) the truth of the matter (which could be discovered by anybody) is that X (X being any statement).
I am surprised that the quoted writer seems to think that these “turns of phrase” are limited to the written style: they sound like everyday speech to me, as in “Everybody thought that Tiger Woods had the perfect marriage, but as it turns/turned out, ….”
I don’t understand the fuss. It seems to boil down to a preference in peacocks. Somers, Pullum and others who agree with them would like to see more “I”s in the fantails, more explicit “this is me talking” in the pronouncements. Note that “it seems to boil down to” is itself very like “it turns out”, with an extra dash of the demonstrative demureness here in demand. Does “seems” make a more honest man of me ?
The corresponding German expression is es stellt sich heraus. I don’t use it at all (don’t know “why”), and I can’t remember encountering it (or il se trouve or “it turns out” either) in my own extensive reading in philosophy, sociology, history of science, biology – in German, French, English. Perhaps the gentlemen above are associating with, and reading, the wrong sort of person ? Perhaps the problem, if it is one, is peculiar to academics who must plod through certain notoriously muddy fields of waving hands ? This would explain why the authors gripe about writing and lectures – a point acutely brought out by marie-lucie. Somers has two examples involving a roast beef sandwich and a movie, but says about them merely “so far so good”. The remainder of his short article is about universities and experts. Pullum’s examples are “unscrupulous theorist” and “famous linguist at a university in California”.
My own feeling is that there is enough hand-waving out there to satisfy anyone’s appetite. Adding more “I”s is not going to make it easier to see what’s going on. I like to save space where I can. A footnote in Watt puts this most wonderfully (the footnoted passage is quoted in a previous post here at hat):
I thought I knew the answer but… it turns out that/in fact/apparently/it seems that/I was wrong, arguably.
Evidence, lad. Where is your evidence?
Evidence, lad. Where is your evidence?
Yes, that’s the usual reply when someone says something whose justification you don’t see. A Q&A exchange begins. That’s civilized discourse. But Somers, Pullum & Co. seem to want an Endlösung. They want the talking to stop. They want everything handed them on a silver platter, so all they have to do is absorb the fully metabolized and certified knowledge displayed there.
Let me put my point differently. Pullum describes his experience with the famous linguist in these words:
The first thing that strikes me about this is its resemblance to the classic grad student gripe: “I can’t understand this guy. I just know he’s wrong, but I just can’t pin down what it is”. I say grad student gripe, but in fact that is only the initial phase of an academic career, when the realization of what you are getting yourself into starts to bite. It gets much, much worse.
The world within and without academia teems with eely-thinking people who you know are wrong in some sense, but often can’t figure out why. The difference for a grad student is that he sees that his life support, his income, is going to be dependent on learning to deal with this situation. I can well imagine that Pullum is chapped by the famous linguist. I feel that way too occasionally about certain contributors to the theme-oriented essay collections that Suhrkamp publishes (Suhrkamp Taschenbuch Wissenschaft) and that I read regularly. Thank God I didn’t go for philosopher or sociologist (although I’m sure I could whup ’em all …).
It’s not “it turns out” that gripes me in any particular way. My own Endlösung would be to institute something similar to the three-months-in-the-fields regimen imposed on intellectuals during the Chinese “Cultural Revolution”. For three months each year, intellectuals would be forbidden to use binary categorizations such as “objective/subjective”, “descriptivist/prescriptivist” and so on. They would be forced to think things through without all the conceptual make-up they usually plaster onto their persona.
Or rather the conceptual padding, girdles, raised heels etc. with which they prop up their failing, ailing minds.
I am surprised that the quoted writer seems to think that these “turns of phrase” are limited to the written style: they sound like everyday speech to me
Of course they’re not limited to the written style, but his point is not that they’re bad, evil phrases that no one should ever use, his point is that they are occasionally used (perfectly innocently—he obviously has a great deal of respect for Paul Graham) to give an unearned dash of authority to an argument. He is not laying down the law, just pointing out an interesting fact that often goes unnoticed.
Stu(art): See above. (My, you’re grumbly today!)
Yeah. I should be content with bunny rabs and blue skies. They always turn out right (absent myxomatosis and volcanic ash).
Stu, it never ends. An old fart can find himself saying to a smart young whippersnapper, “Slow down, lad(y), what do you mean ‘the answer turns out to be …’? I don’t even understand the question yet, or maybe the point of view behind it. You may be right, but I need context.”
It allows you to make swift, succinct, and authoritative connections …without the trouble of explaining what your source or authority actually is.
I expect “it turns out that” to be accompanied by the source of the information. It’s explicitly saying that it’s not only the speaker’s opinion and that it’s an implicitly verifiable statement. If, as it turns out,* the speaker doesn’t verify it, then it’s the speaker who’s at fault, not the phrase.
Yet the meek will not inherit the earth. For fear of interfering, they will let it deteriorate until it isn’t worth a bean when time comes round to open the testament. Famous linguists will be promoted to the right hand of God, and nobody will like it one little bit.
*Never mind.
empty: it never ends
Crown: then it’s the speaker who’s at fault, not the phrase
Right. The barely concealed issue here is the continuation of discourse by discursive courage. It is not authority-sneaking, nor “softening up” of interlocutors, nor peccant locutions.
I use a lot of these phrases — there are also “as it happens”, “ultimately….”, and if you really want to go all the way, “of course”. Caveat emptor.
The future is uncertain and our knowledge of the past is imperfect, so we’re always adjusting our opinions. It doesn’t necessarily have to have anything to do with a prior error. For example “We took a lot of raingear, but that turned out to be unnecessary”. Or “Pauling was investigating proteins, but DNA turned out to be a nucleic acid.” (Pauling hadn’t made a mistake; of the reasonable possibilities known at the time, he had chosen the wrong one. Only after Watson and Crick had finished their work did Pauling’s hypothesis become wrong).
I don’t see the objection at all. Readers should always be reading critically and taking what is said with the appropriate quantity of salt. PG’s judgment that there was no Cambridge in NYC is pretty evidently subjective, and the normally competent reader will either think “That fits/doesn’t fit my experience” or else “That’s something to think about”. Would anyone take PG’s statement at face value and go around saying “It has been shown that there’s no Cambridge of New York”?
Rather than merely making statements autobiographical and part of one person’s personal experience, “it turns out”, etc., are a sort of reminder that our knowledge is the outcome of a discovery process involving the replacement of initial imperfect understandings with better later misunderstandings. I suppose that some might take that kind of statement to be asserting that the later understanding, unlike the former, is infallible, but you don’t have to read it that way, and a good reader won’t, and I don’t think that it can be assumed that authors want it to be read that way.
I also use “they say” or “it seems” or “apparently” when I want to bring statements into the argument about which I am still generally uncertain. “I had expected to see a lot of migrating swans, but apparently they hadn’t arrived yet”.
and if you really want to go all the way, “of course”.
Yes, “of course” is the ultimate snooty appeal to one’s own authority.
better later misunderstandings.
Um…… you know what I mean.
Better and later misunderstandings ? Better misunderstood than late ? Belated misunderstandings of your betters ?
the continuation of discourse by discursive courage
Depretentioused, that means: dialog keeps going only when both parties return the ball. No news there. But it also makes clear who’s to blame when, in a dialog between A and B, A wusses out and starts whining that B is running subtly annoying rings around him.
That is Ressentiment. It turns out that Nietzsche had good stuff to say about that topic.
I realise that the thing I find so appealing when I hear this phrase “it turns out” is that it’s sort of comforting. It’s as if there is one absolute truth about everything in the world that’s there to be discovered, and one has only to go out and find out what it is. It’s the same sort of neatness that you often find in a children’s story. We know life’s not really like that, but it might be nice if it were.
In older stories, such as the Grimm fairytales, children were forever being turned out by their parents – because there wasn’t enough food, or in order to make them seek their own fortune, or because they were in the way. Neat solutions to age-old problems – is that the sort of thing you had in mind ?
Of course things usually turned out OK – which might not have happened if the kids had been allowed to hang around the house and watch illustrated almanachs 24/7.
What’s an illustrated almanach? Is it just an almanac(h) with illustrations? Some children are very fond of sudoku.
I forgot: the German Almanach is padded with an extra h for safe handling by children. Now you’ve got me confused. Are you suggesting that almanacs usually have illustrations anyway ?
Apparently the Grimm brothers were both dead before sudoku was invented “in the late 19th century”.
Interesting, because I use the German and the English version all the time. (Except I don’t write much in German anymore.)
The dialect shortens it to what would be kommt heraus, which conveys a whiff of whatever comes next being the result of an objective analysis or of it having crawled out of some hitherto hidden hole. It’s a very common expression.
I didn’t know about the French equivalent, however.
~:-|
Am I hanging out with the wrong people? Surely it can’t all be due to “the closer you get to humans, the worse the science gets”?
Nitpick alert!!! Flee if you can!
DNA being desoxyribonucleic acid, it was already known to be a nucleic acid; Pauling thought that the material of heredity was not a nucleic acid but a protein, and that seemed to make the most sense at the time. For instance, there are 20 amino acids in proteins*, but only 4 bases in DNA, so proteins could in principle code for a lot more, and proteins can have extremely complex shapes while DNA can’t do much more than curl up, suggesting the same conclusion.
Then DNA turned out to be the template that is inherited, and turned out to code for proteins rather than the other way around.
* That’s more complicated, but never mind.
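The arithmetic behind that combinatorial point is worth making explicit: with 4 bases read in words of length three you already get 4³ = 64 combinations, comfortably more than the 20 amino acids (the real genetic code spends the surplus on redundancy and stop signals). A few lines of Python, offered only as a back-of-the-envelope sketch:

    BASES = "ACGT"
    AMINO_ACIDS = 20

    # With an alphabet of 4 bases, words of length k can name 4**k
    # distinct things; k = 3 already covers the 20 amino acids with
    # room to spare.
    for k in range(1, 4):
        capacity = len(BASES) ** k
        verdict = "enough" if capacity >= AMINO_ACIDS else "not enough"
        print(f"codon length {k}: {capacity} combinations ({verdict} for {AMINO_ACIDS} amino acids)")

So the small alphabet was never a barrier in principle; word length does the work that symbol count seems to deny.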
Oops. That’s deoxyribonucleic acid in English, without the s.
Always fascinating how technical terms of Latin/Greek origin differ between languages.
“Pauling was investigating proteins, but *genes* turned out to be ^made of DNA^, a nucleic acid.”
In principle neither proteins nor nucleic acids are superior: a genetic material with just two alternants would be able to hold all the same information, though it would require more bases for any given gene. Consider the CD, which captures Beethoven’s 9th Symphony (substitute your own idea of a great musical composition if need be) in a language that uses only alternating runs of Pit and No Pit.
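To see concretely that nothing is lost in the two-symbol recoding, here is a minimal sketch (the particular base-to-bit mapping is arbitrary, invented just for the example):

    # One possible (arbitrary) mapping of the four bases onto two bits.
    TO_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}
    FROM_BITS = {bits: base for base, bits in TO_BITS.items()}

    def encode(dna: str) -> str:
        """Re-express a base sequence in a two-symbol alphabet."""
        return "".join(TO_BITS[base] for base in dna)

    def decode(bits: str) -> str:
        """Recover the original sequence: nothing lost, only doubled in length."""
        return "".join(FROM_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

    seq = "GATTACA"
    assert decode(encode(seq)) == seq
    print(seq, "->", encode(seq))  # 7 bases become 14 bits

The round trip is exact; the only price of the smaller alphabet is a longer string, exactly as with the CD’s pits.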
If eely-thinking people didn’t exist, Stu would have to invent them.
Well-observed, empty. When you’re looking for trouble, as I am, you really do have to arrange things so that you actually find some now and again. It’s the counterpart of “look on the bright side of life”. David for instance likes to pick nits, whereas I like to prick twits.
in a language that uses only alternating runs of Pit and No Pit.
If Platonov had lived just a few years longer (d. 1951), he could have explained the fundamental principles of computer science and genetics in a book entitled The Foundation Pit and No Pit.
David: DNA being desoxyribonucleic acid, it was already known to be a nucleic acid; Pauling thought that the material of heredity was not a nucleic acid but a protein, and that seemed to make the most sense at the time. For instance, there are 20 amino acids in proteins*, but only 4 bases in DNA, so proteins could in principle code for a lot more, and proteins can have extremely complex shapes while DNA can’t do much more than curl up, suggesting the same conclusion. Then DNA turned out to be the template that is inherited, and turned out to code for proteins rather than the other way around.
What I as a layman have gathered is that understanding of these things has turned out several times since the early ’50s, each time very different. As John remarks: “In principle neither proteins nor nucleic acids are superior”, but that has to do with much more than “codes”. “Gene” (now a very nebulous notion itself, it appears) proximity contributes only to a limited extent. The curling-up of DNA is essential, since segments are sliced out and inserted elsewhere, far-distant segments appear to be “consulted” occasionally etc. And the 90%+ “superfluous, non-functioning” segments, as they were initially regarded, turned out to play a crucial role in the process.
The very notions of “code”, “code for” have become more of a hindrance than a help, since there is no “program”. Why is there no “program” ? Because there is no CPU (processor), and the entire molecular apparatus modifies itself, so it’s not anything like a program in computer science*. DNA affects the proteins that are made, the proteins affect the enzymes that come into play (I’m probably getting the details wrong here) and enzymes affect which parts of the DNA are spliced and turned off and on.
David will probably have to nit about all this, but so much the better.
* For decades hardware and programming languages have been designed so as to prevent self-modifying programs from being executed or even written, since they are extremely difficult to get to work right and even understand. Also, self-modifying programs can destroy themselves, and are easy prey to viruses … which takes us back to biology.
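For anyone who has never watched a program eat its own tail, here is a toy illustration of the difficulty (the miniature “machine” is made up for the example and corresponds to no real instruction set): the program is just a list, one instruction rewrites another, and so the listing you read is no longer the program that runs.

    # A made-up miniature machine: the program is a plain list, and a
    # "rewrite" instruction may overwrite any slot, including ones not
    # yet executed. Reading the listing no longer tells you what runs.
    program = [
        ("print", "first"),
        ("rewrite", 3, ("print", "patched")),  # clobber instruction 3
        ("print", "second"),
        ("print", "original"),                 # never actually executes
    ]

    pc = 0
    while pc < len(program):
        op = program[pc]
        if op[0] == "print":
            print(op[1])
        elif op[0] == "rewrite":
            program[op[1]] = op[2]  # the program modifies itself
        pc += 1

    # Prints: first, second, patched – a static reading promised "original".

Even in ten lines the static text and the dynamic behaviour have come apart, which is roughly why hardware and language designers legislated against the practice.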
DNA being de[s]oxyribonucleic acid, it was already known to be a nucleic acid.
John Emerson has since corrected his own text, but for what it’s worth David Marjanović’s nitpick receives the noetic seal of approval. Nucleic acids were known (and so-named) before Crick and Watson’s discovery of their role in genetics, and a quick scan of the (epistemic and assorted other) possibilities reveals no way to finesse good sense into John’s original formulation.
I am intrigued by David’s further points:
For instance, there are 20 amino acids in proteins [caveat in a footnote], but only 4 bases in DNA, so proteins could in principle code for a lot more, and proteins can have extremely complex shapes while DNA can’t do much more than curl up, suggesting the same conclusion.
I doubt that the numeric difference should ever have counted in favour of protein, and against DNA, as the vehicle for the genetic code. Easy to say this in retrospect, of course; and information theory had not long emerged from infancy at the time they made their discovery. We are thoroughly familiar with codes and encoders being dead simple, compared to the complexity of the information they encode. Francis Bacon was a voice in the wilderness when he instaurated talk of binary encoding (as a kind of steganographic encryption), but we, mere epigonoi of the great, take the idea for granted.
As for the numeric difference, so for the difference in spatial conformation. We expect the map to be simpler and more constrained in its variations than the mapped terrain, ugye? And the blueprint is simpler (so less prone to accrual of defects) than the edifice it encodes.
I do not, therefore, accept John Emerson’s later assertion:
In principle neither proteins nor nucleic acids are superior: …
It turns out that DNA’s elegant turning in on itself makes for far greater structural integrity of the sort needed for preserving genetic information than those mushy, deformable proteins. At least when DNA degrades it typically gives up its capacity to transmit the encoded information. Still, I like John’s continuation well enough.
But I never look for trouble. I just blithely risk trouble for satisfying my SIWOTI syndrome, which is something similar to obsessive-compulsive disorder.
(Don’t forget to mouse over the picture and read the alt-text.)
That’s not part of the normal processes and happens very rarely.
Sequences to which regulatory proteins bind (enhancers and silencers) can be very far away along the sequence from the gene they control, because the DNA can curl up so that the proteins all touch.
Oh no. Very, very little of it has a regulatory function, and little more could have one even in principle. That still leaves over 80 % trash, and this cannot be stressed often enough. More than half of the human genome consists of retrovirus corpses in all stages of decay. We all carry 34,000 still recognizable retrovirus genes (mostly transposase) around, yet at most 25,000 of our own genes! Most of the rest consists of very short sequences that are repeated thousands of times – slips of the replication or repair machinery that have got out of hand – and then 10 % of the human genome consists of recognizable pseudogenes = genes that are still recognizable but have been destroyed by mutations, like the one for an enzyme that would make vitamin C.
Again: the vast majority of junk DNA really is junk.
Plus, enzymes are proteins themselves.
=8-)
In other words, they acquire a life of their own…!
Dr. Noe, the later assertion was by John Cowan.
In those times, heredity was thought to be insanely, insanely complex, so the most complex biochemistry might perhaps be just good enough for it. It took some time till it really sank in that heredity is digital and that the bits are really small and trivial.
But the conformation of DNA isn’t inherited. DNA is completely unwound* when it’s replicated. What I just said about enhancers and silencers relies on the vagaries of Brownian motion.
* Not all at once. There’s no space for that.
Also, the “elegant turning in on itself” of DNA is not the helicity (a mere blameless incurvatio in se ipsum), but a contortion of the helix itself. As I have understood it, anyhoo.
David, what I was calling “crucial” about (parts of) junk DNA is referred to in this WiPe article. But that is not where I got the idea:
Correct.
not the helicity […], but a contortion
You elided what seemed irrelevant to the discussion, but it was not junk DNA – its function was to give Noetica a giggle.
Stu:
Dr. Noe, the later assertion was by John Cowan.
A Noe tic, not to notice such a shift. But be so good, Stu, to distinguish cognominally yourself, as in your introduction to the very remark I examined:
As John remarks: …
David M:
In those times, heredity was thought to be insanely, insanely complex, so the most complex biochemistry might perhaps be just good enough for it.
Quite. As I suggested, our own view is clearer, being from the shoulders of giants. Still, Mendel’s observations (mendaciously rigged or no) might have steered things toward an otherwise unsuspected simplicity and discreteness.
But the conformation of DNA isn’t inherited.
E dopo? Care to explain the relevance of this point, if point it be?
DNA is completely unwound* when it’s replicated.
Nu?
David M, and Stu:
Also, the “elegant turning in on itself” of DNA is not the helicity (a mere blameless incurvatio in se ipsum), but a contortion of the helix itself.
Well, the turning in to which I advert is the helicity itself, which is a sine qua non of the thing being DNA at all, and which constrains the deformational possibilities mightily (and therefore keeps the blueprint uncreased and unblotted). Why do you mention the other sort?
The “greater structural integrity of the sort needed for preserving genetic information” isn’t inherited and doesn’t carry genetic information on its own. As long as it doesn’t actually break, DNA can be deformed at will and stay “uncreased and unblotted”; the information is in the sequence and only in the sequence.
David M:
The “greater structural integrity of the sort needed for preserving genetic information” isn’t inherited …
I don’t know how to interpret “isn’t inherited”. Sure, that structural integrity and that helicity are not inherited as DNA-encoded heritable features are; but they are a precondition for heritability via encoding in DNA. Brevi manu, if the parent has and uses DNA – with its structural integrity, with its helicity – the offspring will have all this, too. Analyse heritability out of that, an thou wilt.
… and doesn’t carry genetic information on its own.
It “carries genetic information” in a context of interpretation (or expression, perhaps) of that information, sure. And your point?
David M, perhaps your continuation is the point:
As long as it doesn’t actually break, DNA can be deformed at will and stay “uncreased and unblotted”; the information is in the sequence and only in the sequence.
Yes, allow that. But again, so what? The large-scale deformations are surely nothing compared to the involutions, mergings, and partings (creasings, blottings) of the parts of a complex protein. “The” information is in the sequence alone, let us grant straight away; but the structure (broad and narrow) of the DNA is both the embodiment of that information and the guarantor of its integrity and stability – so long as that structure holds, and the information is gracefully and usefully lost if the DNA breaks. A good “design”, what? Hinc illa abundantia animalculorum plantarumque.
ac planetarum, in a larger surmise.
As it turns out, this entire blog entry is wrong. Of course, some people will still believe it to be true but ultimately they will realize their error.
It has been shown that there’s no Cambridge of New York
Well. If New York is Oxford, then Boston is the Cambridge of New York. But if New York is Boston, then New York is the Cambridge of New York.
Cambridge being the Cambridge of Boston.
Cambridge is both Cambridge and Oxford of Boston.
Harvard and MIT are of course the Cambridge and Oxford of Cambridge (not necessarily in this order).