This is a post from Robin Sloan’s lab blog & notebook.

All that is solid melts into code

November 24, 2025

It’s been wild to see the arc of AI bend so powerfully towards software development this year. I’m not sure that, in the summer of 2024, anybody was sitting around saying, wow, one of the chief problems facing the world today is the difficulty of producing lines of code. (It was not.) Yet language models (1) are naturally very good at operating inside this magic circle, and, more importantly, (2) can very effectively be trained to become even better.

The second point in particular makes it feel, in retrospect, inevitable: code matches formal(ish) verification, “answers you can check”, to broad application, “answers you care about”, in a way that might be unique. Let ’er rip!

Oxbo 6430 olive harvester

That’s an over-the-row olive harvester. Most olive oil production at medium-or-greater scale depends on machines of this kind; they trundle over trees planted in long rows, almost like continuous hedges, and collect the fruit with vibrating fingers. Machine-harvested olives (1) are cheaper, and (2) arrive at the mill in better shape than olives harvested by hand.

One catch: most olives can’t be cultivated in this configuration, called super high-density; the trees don’t thrive so close together. Only a handful of varieties will tolerate it … so that handful has been planted in huge numbers … and the flavor of global olive oil has changed as a result.

Automation never meets a task in the world and simply does it. There’s always negotiation — the invention of some new zippered relationship. Trains don’t run without long, continuous tracks; cars don’t drive without smooth, hard roads. Not to mention huge parking lots!!

The fact that language models are great at code means there is suddenly a strong incentive for more things to become code. Like: very strong. The majority of valuable work can’t be reformulated in this way; see: the olive harvest. But plenty can be, and plenty is on the cusp, and the cuspy work will feel intense pressure to go 100% code.

If AI is the superfast harvester, then code is the high-density olive variety: there will be more of it now.

It’s not that all work needs to be broken down into flow charts; language models can totally handle a mess. A large part of the excitement here emerges from the understanding that this round of digitization won’t be like the last one, wedging ambiguous human processes into rigid database schemas, being surprised, every time, when they don’t fit.

But language models do prefer their mess to be symbolic — a stream of tokens — and they do handle it better when they are granted the leverage of code. Both of those things seem natural to software developers — “Yeah, that’s … my whole job?” — but, again, there’s a big universe of cuspy work out there, connected to education, healthcare, government, and more, for which people will discover, or decide, there are huge benefits to going with the grain of the models, rather than the grain of, well, reality. Companies will rise, doing this translation and reformulation.

So it’s paradoxical: language models are some of the most organic technologies ever produced, totally capable of coaxing computation out into the realm of the human … yet instead they’ll pull a vast field of human activity deeper into the domain of code. This is just my prediction, of course — but I believe the whole history of automation backs me up here.

In the late 2020s, I think a lot of people are going to discover that their job has become: “Translate your work into code. Translate yourself, while you’re at it.”

As with most of the AI stuff, I’m ambivalent, in the sense of having many thoughts and feelings at once. I do think the “path not taken”, of using this technology, in all its flexibility, as a lever to prise ourselves OUT of digital systems, AWAY from the internet, is a tragic one to miss.

There are potential remedies, secret roads — about which, more later.

P.S. AI continues to be a spectacle of strange cause and effect. In another universe without a strong culture of open source, there’s not an enormous pile of freely available code — in fact there’s hardly any at all — and the models get good at something else. Or maybe they don’t get good at anything. It’s been suggested, and certainly feels plausible to me, that training on the superstructured if-then of code makes models better at the if-then of language, better at logic and entailment generally.
