Or so Sarah Butcher reports:
If you’re wondering which coding language to learn for a software engineering job in banking, Goldman Sachs’ CIO Marco Argenti seems to be aligning himself with the people who suggest that an advanced knowledge of the English language, and an ability to articulate your thoughts clearly and coherently in it, is now up there alongside Python and C++.
Writing in Harvard Business Review, Argenti says he’s advised his daughter to study philosophy as well as engineering because coding in the age of large language models is partly about the “quality of the prompt.”
“Ambiguous or not well-formed questions will make the AI try to guess the question you are really asking, which in turn increases the probability of getting an imprecise or even totally made-up answer,” says Argenti. In the future, he says, the most pertinent question won’t be “Can you code?” but “Can you get the best code out of your AI by asking the right question?”
Asking the right question will partly depend upon being able to articulate yourself in English, and that will depend upon “reasoning, logic, and first-principles thinking,” says Argenti. Philosophical thinking skills are suddenly all-important. “I’ve seen people on social media create entire games with just a few skillfully written prompts that in the very recent past would have taken months to develop,” he adds.
I know nothing about coding, but Stu Clayton, who sent me the link, does, and since he thinks this is of interest lächerlich, I’m passing it along. Anything that places value on “advanced knowledge of the English language and an ability to articulate your thoughts” is probably a good thing.
And I thought really rich, ignorant people were only opinionated about politics.
Perhaps investors, such as his employers, should study English instead of economics. I hear that there are videos on social media showing people constructing entire investment portfolios with just a few skillfully written prompts.
I would love to see any one of the games he found on social media.
Politics is the one thing really rich people are not ignorant about.
They use their money to capture regulators, buy up legislators to ensure that anti-trust legislation is not enforced and to ensure a legal framework favourable to their interests.
Correctly perceiving that genuine democracy is a threat to their interests, they undermine it by bankrolling far-right parties and supporting right-wing demagogues.
I’d say that they display quite a sophisticated understanding of politics.
They have also rightly determined that other skills are of minor importance; this is just as well, as they do not in reality possess the supposed unique abilities which they like to claim as the true basis of their billions.
I would love to see any one of the games he found on social media.
I hope you mean what AI think you do. The quote says “create entire games with just a few skillfully written prompts that in the very recent past would have taken months to develop”. I expect these games were (dis)simulated versions of tic-tac-toe and space shooters. Apart from that, the article suggests that it takes months to develop such prompt-writing skills, quite apart from the games – months spent in acquiring a Bachelor’s degree in Queer Studies, say. What you spend your time doing – well, it’s choose one, lose one.
It is so cool to see people coming round to realizations without realizing it. Look at this:
Ambiguous or not well-formed questions will make the AI try to guess the question you are really asking, which in turn increases the probability of getting an imprecise or even totally made-up answer
Now replace “the AI” by “your interlocutor”. Good description of human conversation, no ? Maybe that’s because we’re all machines (and other things too). “Intelligence” does not inhere, it is ascribed. Starting in the 15-16C many people were upset at the idea that “our” bodies are machines. Now we just have to get past being upset at the idea that “our” minds are machines.
AI am accused of finding the article “interesting”. Serves me right for not already larding my email with snark.
Steering Automated Plagiarism Engines toward lucrative thefts may indeed require special training.
suggest an advanced knowledge of the English language and an ability to articulate your thoughts clearly and coherently in it, is now up there alongside Python and C++
Presumably the author (perhaps an “AI”?) of these perceptive words imagines that an ability to articulate your thoughts clearly and coherently is not needed when programming in Python or C++.
(However, as we have recently learnt from another thread, it is in fact necessary to learn Latin before you can articulate thoughts clearly and coherently.)
“Sì, abbiamo un’anima. Ma è fatta di tutti piccoli roboti”
Anything that places value on “advanced knowledge of the English language and an ability to articulate your thoughts” is probably a good thing.
I puzzled for a while over why you wrote “probably”. I think it is pretty clear that those are not always good things. They constrain your ability to talk with reg’lar folks, which requires a different set of skills. You don’t learn these in the echo chambers of sweet reasonableness.
Sì, abbiamo un’anima. Ma è fatta di tutti piccoli roboti.
… “was the headline of an article about him in the Italian newspaper Corriere della Sera, and Dennett endorsed it with amusement.” [Guardian obituary]
This is essentially the view expounded by Lucretius. I think he should be given a credit, at least.
Do you mean this ? Die Seele ist ein Teil von uns, genauso wie unsere Körperteile. Sie besteht ebenfalls aus Atomen, allerdings aus sehr feinen und beweglichen. Looks to me as if he was a tad too radically atomizing. He missed the idea that there can be intermediate levels of organisation – even if only “emergent”. Big fleas, little fleas upon their backs … Molecules within molecules.
You say sehr feine und bewegliche Atome, I say piccoli roboti. Or possibly vice versa.
The key question is, of course, whether these are atomic robots (the only really cool kind.)
https://en.wikipedia.org/wiki/Giant_Robo%3A_The_Day_the_Earth_Stood_Still
Getting would-be coders to write like philosophers is a great strategy. It would keep the AI machines occupied for centuries, while doing little of any practical impact.
AI am accused of finding the article “interesting”. Serves me right for not already larding my email with snark.
Fine, I’ve changed it. Anything for you!
Thanks, you’re such a
sweetie-pie Schnuckelputz !

The best coder I ever met was originally a clarinetist who’d studied at Juilliard. So can we say “where words fail, music speaks”?
It sounds like the writer of the article is thinking more about the kinds of skills required by technical writers. In my experience, the nerd squad (and I’m using that phrase with love) is already sufficiently anal-retentive about these things.
Reading recent dissertations in philosophy or English lit would quickly disabuse most people of the idea that these areas of study produce people who are able to articulate their thoughts clearly and coherently.
Schnuckelputz is my new favorite word!
From the article: “Ambiguous or not well-formed questions will make the AI try to guess the question you are really asking, which in turn increases the probability of getting an imprecise or even totally made-up answer (a phenomenon that’s often referred to as “hallucination”).”
This is not what causes “hallucinations” (I prefer the technical term “brain farts”). You could have the most logical sentence ever written and the AI could still produce a brain fart. It’s in the nature of Markov chains, which you don’t study in English literature courses or philosophy courses. But I guess if you’re writing for HBR it always has to be the fault of the worker, not the wonderful technology.
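To make the point concrete, here is a toy sketch (plain Python, invented for illustration; a real LLM is vastly more elaborate) of why a chain that only models what tends to follow what can emit fluent-sounding nonsense no matter how carefully the seed is chosen:

```python
import random

# Tiny word-level Markov chain. The "model" knows only which word tends
# to follow which; it has no notion of truth, so its output can be
# grammatical and confidently wrong at the same time.
corpus = ("the model answers the question the model invents "
          "the answer the question sounds plausible").split()

# Transition table: word -> list of observed successors.
chain = {}
for a, b in zip(corpus, corpus[1:]):
    chain.setdefault(a, []).append(b)

def generate(seed, length, rng):
    """Walk the chain from `seed`, picking successors at random."""
    words = [seed]
    for _ in range(length - 1):
        options = chain.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 8, random.Random(0)))
```

Every step is a locally valid transition, which is exactly why the result reads smoothly even when it says nothing true.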
Isn’t it a tad ironic? First, computers have to be coded with zeros and ones (ok, in hexadecimal, and it might have been preceded by soldering wires in a desired configuration, just roll along with me), but smart people created high-level programming languages to make it easier for people. At which moment people had to learn those languages (instead of Latin). Now some smart people created a system looking for answers to the questions in natural language(s) (-s is very questionable, for now, maybe if coupled to automated translators…) and we are asked to … learn a natural language to better talk to this wundermaschine.
That is, programming becomes far less accessible to speakers of most languages?
What is “an advanced knowledge of the English language”? Knowing a lot of seven-syllable words? Being able to nest subordinate clauses five levels deep?
I wish he had recommended how this essential skill can be learned. I’m still struggling through life with only an ordinary knowledge of the English language.
Ask AI for help !
Speaking as a coder (or at least someone who spends a significant part of my employer’s time working on code), I must grudgingly agree with the premise. A very important part of coding is to write down a specification, but I’m not sure if that qualifies as English. I once worked on some code that ended up in an airplane, and long before the actual code was written, there was a specification running to hundreds of pages, written in a language only vaguely related to English.
Converting from that specification into actual C code was comparatively trivial, and could probably be done by a trained monkey – or LLM.
That was in the days before ‘agile’ programming, and making changes to the original spec was forbiddingly complicated.
long before the actual code was written, there was a specification running to hundreds of pages, written in a language only vaguely related to English. Converting from that specification into actual C code was comparatively trivial
I suspect that specification was written by a C programmer. The choice of a “language only vaguely related to English” was a ruse allowing both a specification and an implementation to be billed. That’s what my motives would have been at any rate.
Either you spend months writing a spec that monkeys can implement in one hour, or you write in one hour a spec that monkeys need months to understand and implement. It’s all non-trivial. Ultimately what counts is who makes how much money.
as Ook says – the best programmers I knew in my forty years (and counting) were respectively a classically-trained musician, and an English major who taught herself to code after college.
One of the most useful classes I took was the semester in Philosophy 101 that was dedicated to logic and Boolean algebra. This required semester was distressing to all the BAs who’d taken Philosophy in order to have deep conversations about what love is, the meaning of life etc.
Grady Booch recently observed on twitter,
If you ever think that philosophy classes are useless in this day and age, remember that Claude Shannon took an elective class on logic at the University of Michigan, and because of that he made the connection between Boolean algebra and electrical circuitry, a discovery that made possible modern digital computing.
Philosophy and cogent thinking (in English or any other language) have always been useful in coding, but I guess insights like this take some time to penetrate to the recesses of C-suite thinking.
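Shannon’s Boolean-algebra-as-circuitry connection fits in a few lines; here is a minimal illustrative sketch (a half adder expressed as two gates, invented for this comment, not anything from Shannon’s thesis itself):

```python
# Shannon's observation in miniature: Boolean algebra describes
# switching circuits. A half adder is just an XOR gate (sum bit)
# wired next to an AND gate (carry bit).
def half_adder(a: bool, b: bool) -> tuple:
    """Return (sum, carry) for two input bits, as the two gates would."""
    return (a != b, a and b)  # XOR for the sum, AND for the carry

# Truth table: the algebra and the circuit agree line by line.
for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        print(int(a), int(b), "->", int(s), int(c))
```

The point of the anecdote survives the toy scale: once you see logic and wiring as the same algebra, adders, multiplexers, and the rest follow.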
The future certainly lies with Derridan programming.
? What do you make of this, M Lacan :
https://homotopytypetheory.org/book/
Seminal!
Yes, GET WITH THE PROGRAM sheeple !
Convince me the whole book wasn’t just written for the opportunity to call it The HoTT Book.
Formal logic is its own weird little niche and definitely not the sort of philosophy class that prepares undergrads to work in the tech industry; instead you need the sort of philosophy (or other humanities) class where there are hundreds of pages of fairly dense reading assigned per week, which you sometimes don’t actually do, so you are incentivized to get good at the art of bluffing and BS-ing your way through a seminar or discussion section without overtly admitting that you haven’t done the reading. The skill set thus developed turns out to be transferable and very valuable for a number of lucrative career tracks in the tech industry, as well as in many other industries.
@ D M,
The IAS doesn’t itself produce many things like this. Anyway, persons will be persons and hott is as hott does.
so you are incentivized to get good at the art of bluffing and BS-ing your way through a seminar or discussion section without overtly admitting that you haven’t done the reading. The skill set thus developed turns out to be transferable and very valuable for a number of lucrative career tracks in the tech industry, as well as in many other industries
It’s always been that way, and a good thing too. If everybody had to work equally hard at doing the reading, nobody would be available to build and run central heating systems and hotdog stands. The BS-ers keep the infrastructure running for my convenience, as I do the reading.
HoTT stuff.
I was hoping that was what you were linking to! Still HoTT after all these years.
There’s life in these old guys yet ! I can just see the young peeple cringing. It makes my day.
Who is Taylor Swift ??
Shannon? Many of us math students in the 60s were interested in taking the logic class; in my case it was because the class was given by a prof in the Arts faculty and we were required to take a certain number of “Arts” courses. I don’t regret taking it, though, it turned out to be useful in my later life in the computer biz.
The me from twenty or thirty years ago would be shocked by how little time I spend actually coding. People tend to think I am too busy with other, more critical and incisive, things (and I can’t say I discourage that viewpoint; like Machiavelli, I have the failing that I am sometimes happy to let people believe untrue things about me if they make me appear badass). However, yesterday afternoon, I led several colleagues through figuring out why a particular piece of code was behaving as expected on some data sets but not others. We figured it out in a few hours, and the relevance to “AI”-generated code was that three slightly idiosyncratic choices in the implementation of the main algorithm had conspired to make the program malfunction. None of the implementations was problematic on its own; they were just not quite the most obvious ways of running those parts of the program. However, if you fed the whole program a data set with certain properties, the slight oddities of implementation could conspire to produce total nonsense as output.
To solve this, I had to sit down with the people who had written the various components of the code. It turned out one person had assumed X and Y, and another had assumed Y and Z. X, Y, and Z were almost always all true, but if they were not, a couple weeks of uninterpretable nonsense could result. If the code had been written by “AI,” god knows how we would have tracked down the assumptions it had made.
My undergraduate logic class had a lot of students who needed a philosophy credit and thought logic would be a breeze, because they could think already. It was fun to watch them squirm and grind their teeth.
“like Machiavelli, I have the failing that I am sometimes happy to let people believe untrue things about me if they make me appear badass”
Here Brett is letting us think that he is “like Machiavelli” (the badass Italian).
I assume the The HoTT book was written to expose Homotopy Type Theory to the people that needed it exposed to them, and when the desire arose to make a pronounceable acronym, retaining the lowercase o was decided on because it made the topic seem hot. Probably before the idea of writing the book was formed. (My nephew is doing his PhD in an allied subject, and /hot/ seems to be the spoken name of the theory in general).
A major virtue (IMO) of the HoTT framework is that a proof of a theorem is literally a path in the Universe of Discourse between the hypothesis and the conclusion; two proofs can be considered to be the same if there is a path between them in the space of paths in the U of D between the two proofs . . . und, as we say, so weiter. All nice and rigorous from first principles, at the beginning of the subject. Spaces, toposes, stacks, categories allooksame
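For the curious, the “proofs are paths” slogan can be poked at in a proof assistant; here is a small Lean 4 sketch (ordinary equality types, not the full HoTT library) of the constant path, transport along a path, and path concatenation:

```lean
-- rfl is the constant path from a to itself.
example {α : Type} (a : α) : a = a := rfl

-- Transport: a path p : a = b carries any property of a over to b.
-- This is path induction in miniature.
example {α : Type} {a b : α} (P : α → Prop) (p : a = b) (h : P a) : P b :=
  p ▸ h

-- Paths compose, mirroring concatenation of paths in a space.
example {α : Type} {a b c : α} (p : a = b) (q : b = c) : a = c :=
  p.trans q
```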
All is One.
The section about induction over equality types broke my will. I’ll come back to it some year.
All is One.
Hen to pan! *sizzle*
two proofs can be considered to be the same if there is a path between them in the space of paths in the U of D between the two proofs … allooksame
The section about induction over equality types broke my will.
I read the first 20 pages or so with ease and amusement. These homo types are bending over forwards and backwards to make it all sound so reasonable, almost natural (or is it the other way round?), even for the amateur mathematician. As it happens I was reading Hoffmann’s Die Elixiere des Teufels at the same time, which also begins harmlessly (monks in peaceful cloisters, birds singing, flowers abounding), gradually turning into heavy Gothic.
I’m all agog as to how they will talk their way around “path” as a continuous mapping. Probably by just redefining “path” so that alllooksame.
It’s the type theory and proof business that I may get something out of, and maybe not. Ideas for me, not the official mumbo-jumbo, to the acquisition of which I have no desire. I ordered the book (20$).
I encountered again this week an example of software that one would naturally expect to operate in a certain fashion, except it doesn’t. I discovered that stretching a figure in Paint, then inserting it into a LaTeX document produced better PDF output. There is no logical reason why this should happen. I certainly did not expect it to. (I was just trying to sneak something by an obnoxious editor, who complained my figures were too low resolution.) But it nevertheless did happen. I asked a question about it on the LaTeX Stack Exchange, and I only got responses saying it could not have happened or responding a different question from what I actually asked.
responding a different question
Is this a new instance of the now-fashionable practice of prep-drop ? I only recently steeled myself to remain calm at “couple bottles” .
@Stu Clayton: I thought you might be interested in another comment in this thread (if you are not too busy with homotopy type theory). However, I had hoped for something more substantive in your response what I wrote.
I have long since given up on trying to see any relationship between the size of a document in OpenOffice and the PDF produced by exporting it. Cut text from the ODT file, get a bigger PDF file. Or not, depending on … who knows what?
@Brett: I had hoped for something more substantive in your response what I wrote.
I had hoped for something clearer than your original “stretching”. Do you mean enlarging while maintaining the same aspect ratio ? Or stretching along an arbitrary axis ?
You’re certainly going to lose resolution when you shrink an image. Pixels have to be discarded because they don’t all fit in the smaller bounds. Enlarging an image will lead to interpolation of additional pixels, unless a brainless algorithm is used.
(if you are not too busy with homotopy type theory)
I read 3 pages in the pauses between my Jane Fonda workouts. So far it’s been the same three pages each time. I don’t want to overdo it.
@ Stu Clayton,
`induction over equality types’ : take that Bertrand Russell…
continuous shmontinuous : a way that can be spoken is not the proper way.
Nor did anyone say anything about continuity, the Universe of Discourse is no bed of roses, cf Barthelme’s Mr Quistgaard.
@jack: Nor did anyone say anything about continuity
Well, the HOTT authors use the word “continuous” several times at the front of the book. It’s easy to see that they’re trying to hold the reader’s attention while preparing to throw continuity into the garbage.
a way that can be spoken is not the proper way
Then that’s not a proper way to speak about such ways.
@Stu Clayton: A journal editor left a comment on my article proof that the paper’s figures did not have enough dpi for publication. I figured I would need to recreate them in Mathematica, but the existing ones were certainly legible, albeit with slight fuzziness to the smallest axis labels. I thought maybe I could get away with just increasing the number of pixels without actually doing any work, so I just opened a PNG file in Paint and rescaled the image from 100 percent to 200 percent.
I increased the size by exactly a factor of two, because I was sort of thinking in the back of my mind that it could just turn each pixel of the raster into a two-by-two block. However, I subsequently realized that was absurd. Paint would not (or should not, logically, for whatever that’s worth) have a special check to see whether an image was being scaled up by an integer factor. It would just resize the image using the same algorithm (bicubic resampling?) as if I had specified 187 percent.
I stretched the image this way, then saved it. I reran LaTeX, and the output (which was the same size on the displayed page) was slightly sharper! No new information had been introduced, but the output resolution was better. That I found very puzzling, which prompted my Stack Exchange question, which no one responded useful information.
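The two-by-two-block intuition is easy to sketch; the following is a toy nearest-neighbour upscaler (pure Python, invented for illustration; Paint itself presumably interpolates rather than duplicates):

```python
# Nearest-neighbour upscaling by an integer factor is exactly the
# "turn each pixel into a block" operation: every pixel is repeated
# `factor` times in both directions, adding no new information.
# Interpolating resamplers (bilinear, bicubic) instead invent
# in-between values, which is what a 187-percent resize would need.
def upscale_nearest(img, factor):
    """Repeat each pixel `factor` times horizontally and vertically."""
    return [[px for px in row for _ in range(factor)]
            for row in img for _ in range(factor)]

img = [[0, 255],
       [255, 0]]
print(upscale_nearest(img, 2))
# Each original pixel becomes a 2x2 block of the same value.
```

Either way, no information is added by the resize, which is what makes the sharper LaTeX output genuinely puzzling.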
@ Stu
Egg-Zackly
But sometimes (for ex graphs & trees) there are things which have sensible paths without talking much about continuity. But there’s an important separate issue in these things called witnessing which IIUC requires evidentiary trails, maybe something like legal citations or double-entry bookkeeping, to make sure your sets aren’t empty or something.
I apologize, I’m really over my head here; these are people I want to keep up with but…
Of the way of which there is no way in which it is proper to speak, thereof one must be silent.
one can keep trying though. no blame as the wilhelm translation says
Incoherent babble* probably comes closest to expressing reality. Syntax merely constrains our understanding (as Kant has told us. OK, maybe not told us. But I think we all know what he meant.)
* Practicing what I preach since 1453!
I’m afraid I can’t buy that Dave …
sincerely Jack
sucking entropy since 1944
@ Jerry Friedman,
The critical path may not be smoothly parametrizable?