The problem is not culture
I recently came across a very frustrating blog post. Here are some assorted thoughts on why I think it's wrong.
General theory
While I don't know much about Dan's background, Simon has a Wikipedia article about him, so with a little snooping we can find that he qualified in computer science, has received Y Combinator funding and worked for at least one Silicon Valley luminary, and currently sits on the board of the Python Software Foundation.
I think this is a very disingenuous way of framing Simon's life and work. Why not emphasize that he is a member of the Python Software Foundation, and as such a pillar of the open-source community's maintenance, or mention that he was the co-creator of Django, or his work on a data journalism framework, Datasette? Emphasizing his brief stint, once, at a startup that was funded by Y Combinator is tantamount to character assassination.
This distinction is most in evidence in the heroes that our two distinct cultures choose: the people that we believe worthy of emulation and praise and the behaviours and achievements that they exhibit. The technological culture tends to praise risk-takers, iconoclasts and people who exhibit cunning and cleverness to build new things, disrupt old things and usually become rich in the process. Example figures might be people like Steve Jobs, Marc Andreessen and, unfortunately, Elon Musk: Simon himself might well count as a minor hero in the culture considering his contributions. The key virtues being expressed tend to be novelty, independence, ambition, a bias towards action and building something rather than nothing. The key is to throw time, energy and resources into creating something new and brilliant that changes the world, no matter how many lives or anything else are thrown away in the process. This is, in short, an honour culture, where engineers compete for glory on the field of open-source software, aiming to be elevated in the eyes of their peers and the industry. It's a culture that would be recognisable to Achilles or Beowulf almost immediately once you got them caught up on the context: the goal is to make a name for yourself that will be remembered for ages to come.
I don't think this is a false characterization; however, a lot of the things she goes on to attach to it are problematic, including her evaluation of the worth of that culture. We'll see that later.
In my part of the world, none of this really flies: iconoclasm in bridge design, for example, tends not to be associated so much with wealth, fame and changing the world in new and exciting ways. Rather, it's associated with messy, expensive disasters. Similarly, cutting corners (which the use of a coding agent fundamentally is and will be for the foreseeable future) when launching things into space tends to lead to the kind of TV footage that's liable to kill your career, along with six astronauts and a primary school teacher. We just don't really have the kind of culture that encourages the energetic, world-reshaping kind of hero, and inasmuch as we ever had them, they mostly live in the past: the Newcomens, Watts and Faradays of the world all live quite a lot further in the past than the heroes of Silicon Valley. Hell, for civil engineers the hero-cult goes back so far that we likely couldn't even identify that kind of hero: even Sneferu's architects were primarily concerned with maintaining existing buildings and correcting previous mistakes, as we can see in the Bent Pyramid
Again, this is probably not a false characterization, but the relative valuation of these cultures that she's going to layer on later in the article is where my problem lies. Notice here also the seeds of her undercutting herself: these domains are different. Iconoclasm or corner-cutting in civil or mechanical engineering carries an objectively higher direct risk than it does in software; likewise, there's a lot more room for experimentation and creativity in the kind of software written by the "honor culture" she describes than in the kind of engineering she's describing.
Additionally, note that even in her own account, some original, creative heroes, who came up with brilliant new ideas and tried them — and learned from their mistakes — were necessary; it's just that her kind of engineering is a much more settled science.
In contrast to the honour culture of tech, our culture is heavily influenced by the mediaeval church and at times can be almost monastic in nature: our task is to contribute to the long work of salvation, which no one person will ever complete. Individual heroism is thus less important than piety and the willingness to suffer for our principles, and while tech culture encourages you to make a name for yourself, engineering culture encourages you to work, quietly and diligently, for your salvation and for the salvation of the world.
I think this is an extremely telling half-paragraph. It shows that, for all her claims to have escaped Christian thinking (when it is convenient for her), she is far more on the side of slave-morality, Catholicism, and no-saying to life, an ethics of self-effacement for the Greater Good, than the opposite — which would be the exact sort of Nietzschean/Arendtian/Stirnerian honor culture she so despises. I think this is one of the fundamental cracks you can use to cause her whole edifice to crumble if you wanted to:
You simply don't have to accept her value system, and her attempts to make it look like the only good, valuable, material, or practical value system are fundamentally rooted in a Christian spook.
In the odd kind of honour culture that is tech, where you gain fame and social status by doing great deeds of technical heroism and innovation, coding agents make a lot of sense. … Secondly, the culture stresses production over the work of maintenance and reproduction: the person who first creates something is honoured and gains much status, while the dozens of people who quietly work for years or decades on keeping it working, updating it to keep up with times changing and developing new uses for the thing are largely forgotten, despite the fact that they're the ones that actually make the thing valuable to people.
I think this is a disingenuous way to frame things. The thing of value was created by the person who created it! It wouldn't exist without them, at least not in the way and at the time it did — if at all — so of course they're the one honored for it. Those who come later and maintain the thing aren't the ones "actually mak[ing] the thing valuable to people"; they are at best keeping it valuable. Additionally, it should be noted that a fair few of the people using LLMs are using them to actively maintain their ongoing projects, making them more performant, more dependable, simpler, and more reliable — see the Redis creator — or just to give them features that are useful to people — see the Ghostty creator — and some of the people using these technologies are in fact the creators of the very maintenance and infrastructure technologies she later praises so highly: for instance, the creator of Terraform!
Additionally, even though creating a new thing for the first time is more highly valued than maintaining it, that doesn't mean that creating something reliable, well-architected, maintainable, and clean is not valued very highly. As a general rule, software developers care a lot about their craft. See for instance another user of AI for coding, Chris Lattner, the creator of LLVM, Clang, and Swift, who in this interview spends most of his time talking about software craftsmanship and creating software that is maintainable and lasts long term… and then talks about how he uses AI to help him do that.
Given that new breakthroughs are how you gain status in the tech world, though, being embedded in tech culture means that coding agents start seeming remarkably useful: after all, you clearly can create new things with them, which you can use to gain glory and social standing in the eyes of your peers. And ephemerally, they will work, which by the standards of the culture of tech, means that coding agents work "well": they allow for the accumulation of glory and social standing exceptionally effectively.
This is a clever but underhanded rhetorical move: by treating "creating new working technology, even if it is a proof of concept" as literally just boiling down to, being equivalent to, and only meaning "allow[ing] for the accumulation of glory and social standing," she's artificially making this "honor culture" she's strawmanning seem more vain and uninterested in "real things" than it is. The reality is that most people who create new things do it because they love doing it for its own sake, because it's interesting to them, or because they saw a need for something — either in themselves or in the general ecosystem — and decided to meet it. Boiling all that down to some kind of vainglorious pursuit of honor is extremely condescending and disingenuous.
It also treats doing this as having no value, when the much-vaunted maintainers she's talking about would have nothing to maintain, and no tools to maintain anything with, were it not for this culture she's trying to tar and feather. Hell, she wouldn't even have Terraform itself if it weren't for someone who now sings the praises of AI agents, and so would probably be categorized under this umbrella. This is rhetorical sleight of hand.
For us who seek salvation more than a glorified name, and who are willing to take up the long work…
More Catholicism.
…though, coding agents seem less valuable. One thing that I probably under-discussed in the last article was the fact that coding agents, however we spin it, cannot maintain code. They can produce a first product, but time passes, languages change, and APIs and libraries become deprecated, and things start breaking.
So you use the agent to do the legwork: analyzing the symptoms and producing a (cited) report for you, helping you write tests and debugging scripts, digging through logs to figure out how to fix what went wrong, refactoring the code, managing updates to new dependencies (which often require a lot of small, patterned edits that agents are extremely reliable at), and helping you write the new code. Coding agents are a system for helping you do coding-related work; there's no reason they can't help you with both ends.
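To make that loop concrete, here's a minimal, hypothetical sketch of what I mean: pin the bug down as a test first (the sort of thing agents are genuinely good at helping with), and only accept a fix, from a human or an agent, once that test and the rest of the suite pass. The parse_duration helper and the bug in it are invented purely for illustration.

```python
# Hypothetical example: reproduce a reported bug as a regression test before
# anyone (human or agent) attempts a fix. parse_duration and its bug are
# invented for illustration; the workflow is the point, not the function.
import re

def parse_duration(text: str) -> int:
    """Parse strings like '5m' or '2h' into seconds."""
    # Bug: re.match only anchors at the start, so trailing junk is accepted.
    match = re.match(r"(\d+)([smh])", text.strip())
    if match is None:
        raise ValueError(f"unrecognised duration: {text!r}")
    value, unit = int(match.group(1)), match.group(2)
    return value * {"s": 1, "m": 60, "h": 3600}[unit]

def test_rejects_trailing_garbage():
    # Written from the bug report before any fix is attempted. It fails
    # against the implementation above, which is exactly the point: the
    # agent's patch only lands once this test (and the rest of the suite)
    # passes.
    try:
        parse_duration("5m later")
    except ValueError:
        pass
    else:
        raise AssertionError("trailing garbage should be rejected")
```

Run that with pytest and it fails against the buggy implementation, meaning the failure has been reproduced and understood before anyone asks an agent to patch anything.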
The best that a coding agent can do in these situations is to, one way or another, "make the thing work again". To understand a bug or a breakage when it occurs requires you to read the generated code, line by line, unpick the whole chain of failures, understand what caused the issue and redesign the code so that the issue doesn't recur.
You should've been reading the code line by line the first time. And as I said, a coding agent can help with all of these things. It seems like she's conflating what an AI coding agent can't do alone with what it can't do at all, even as an aid, in order to make those who use the tools seem like they just don't care about doing any of the things AI coding tools can't do alone. Again, this is rhetorical sleight of hand.
I am certain that this will be disputed, but I know in my bones that this is the only way to fix a bug for good: while LLM code can often be adequate, LLM bug fixes basically never are. This comes from my training in non-software engineering: if you don't know why something failed, you haven't fixed it or prevented it from happening, but merely set yourself up for a bigger disaster to come.
Literally every software developer also knows this. Treating this as privileged knowledge only those with your engineering background can know is condescending hubris.
To build something that can be truly called reliable, then, takes multiple prototypes, lots of work on eliminating bugs, learning from previous projects, a lot of institutional logic and constant monitoring and maintenance.
- LLMs are extremely helpful in building multiple prototypes — even by her own logic they should be!
- LLMs are, as I discussed above, very useful for helping find a way to eliminate bugs, and help you execute on those ways.
- LLMs are also, since they're good at boilerplate, very good at helping you write tests to avoid problems in the future.
- "A lot of institutional logic and constant monitering and maintenence" is exactly the situation in which LLMs are at their best: they get infinitely more useful the more automated testing, linting, code review, and CI/CD there is, since that operates as objectively verifiable guardrails; and the more and clearer the documentation is, the better they do. So if you're going to talk about how good humans are with access to those things, also consider how good LLMs are with those — and also consider that LLM usage highly encourages those things. For instance, people have widely started writing specifications, huge test suites, lots of documentation, and so on, now that their productivity is directly and visibly boosted by those things thanks to agents; and some have even begun considering formal methods because of the possibilities of LLMs!
And if you're taking it seriously, every time you push a fix you ablate away a little of the LLM code, to the point where in a mature product, even if you started with LLM-produced code, there's likely to be very little of it left.
Unless you're writing the improved or fixed code with LLMs too — which you can, since LLMs build the code you tell them to build, and there's no reason they couldn't produce the fixed code just as they produced the original code.
In the framework of the long work, then, there is very limited point or value in what a code agent produces.
Well, without that original LLM-written code, this hypothetical project might not have existed in the first place to build upon; moreover, didn't you just say "building multiple prototypes" is very important for building a strong final product? This is not only contradictory, it's especially frustrating because it places all the emphasis and value on the things made possible by an original cause, but none on the cause itself; as if you spent your whole life honoring the work of building on things while never honoring the thing being built on in the first place.
It's often said that this is because software is less likely to kill people than the kind of engineering I was trained in, but this simply isn't true: the British Post Office using shoddy software led to at least thirteen suicides, the Therac-25 consistently gave patients massive overdoses of radiation, and it's nontrivial to find a whole host of other situations where code, in more or less dramatic ways, can kill a lot of people.
This is an extremely important crux in her argument, and yet it's extremely poorly sourced and argued. Two anecdotal instances, notorious as they are, are supposed to make this case for us — when perhaps they're notorious specifically because they're exceptions to the norm? She says it's trivial to find other cases where code could kill people, and I don't doubt it, but being able to pick out lots and lots of individual cases doesn't tell us what the overall risk factors are for the field, does it? I could pick out "a whole host" of cases where people were ostensibly killed by COVID-19 shots due to myocarditis or whatever, and leave out the fact that it was, what, like 100 people out of 6 billion?
Not to mention that this lumps all software development together, when there are wildly different subfields of software development with vastly different cultures and ways of writing code precisely because their risk profiles are so different. Lumping aerospace programmers, or programmers in charge of accounting software with the power to get people charged with theft and fraud, together with front-end developers in order to prove that neither of them should be using AI to code, because the risks are "so high" in the aggregate across the two of them, is just faulty logic.
Also, in the fields where lives are seriously at risk, they tend to use languages like Ada, write specifications, undergo extreme testing, and prove their code with formal methods — situations where, arguably, LLMs would actually be so constrained by the guardrails that they would become just as safe as a human, and possibly more productive, so it's not even clear to me this is a very good argument even in those domains.
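To be concrete about what "constrained by the guardrails" could look like at the low end, here's a toy sketch of a specification written as a property-based test. The sort_events function and its contract are invented for the example (and this is not a claim about how safety-critical shops actually work), but the pattern of stating the contract and letting the machine generate adversarial inputs doesn't care whether a human or an agent wrote the code being checked.

```python
# Toy sketch: a property-based test acting as a small machine-checkable
# specification. sort_events and its contract are invented for illustration;
# hypothesis is a real library (pip install hypothesis).
from hypothesis import given, strategies as st

def sort_events(timestamps: list[int]) -> list[int]:
    """Return the events in chronological order (the implementation under test)."""
    return sorted(timestamps)

@given(st.lists(st.integers()))
def test_output_is_ordered_and_complete(timestamps):
    result = sort_events(timestamps)
    # The specification, stated as properties rather than hand-picked examples:
    assert all(a <= b for a, b in zip(result, result[1:]))  # output is ordered
    assert sorted(result) == sorted(timestamps)             # nothing lost or invented
```

The more of a project's expectations are written down in machine-checkable form like this, the less it matters who, or what, typed the code that has to satisfy them.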
I suspect, then, that there's a difference in the attribution of responsibility between tech culture and engineering culture: while a tech-culture person who releases code with an LLM-created bug in it might be temporarily shamed by it, it usually won't be seen as morally reproachable. An engineer who releases similar code, however, will be judged as though they recklessly put the entire community in danger, which is a much more severe sanction.
Maybe that's because these are different contexts, and you're trying to apply your "engineering" mentality to contexts where it doesn't apply — and to judge the "tech honor culture" by its lights?
Firstly, we tend to take quite a different view of productivity: while Dan praises Simon and Jesse for their productivity and mentions that coding agents enhance that even further, I'm not actually sure that I care much. Django was enough; it was a diligent contribution to the Python ecosystem, and maintaining and stewarding that by sitting on the Python foundation's board is more than enough for anyone to be proud of. If Simon chose to maintain and steward what he's already done rather than producing more, then, that would be enough, and in the engineering view he doesn't necessarily gain more merit by doing more.
Have you considered it's not about merit, it's about the fact that he enjoys making things, enjoys his craft, and wants to keep doing it?
Secondly, if work is worth doing at all, it has to be worth something to someone: the value of code is in how, one way or another, it makes someone's life better in some way.
Like all the cases of LLMs helping people make software that is useful to others, concretely and specifically?
- Like making an instruction app for a BJJ instructor
- or any of these five stories
- Or any of the examples she herself listed in another blog post: "Moving into 2026, however, I started to see actually useful projects built with LLM assistance pop up.
rv is perhaps the prototypical example: it's a package manager for the R language that is much better than basically any other tooling available, at least from a software engineering perspective. It was also written with the assistance of Claude code … Moreover, I'm not clear that it would have gotten written without Claude being available at all … Similarly, I started seeing intelligent, level-headed people whom I respect use LLM tooling to write various small projects that make their life easier. These aren't huge projects or anything: they're things like gym tracking apps, data processing scripts and apps to help in the kitchen. Nonetheless, they're still useful and it appears that LLMs make them very quick to build. Tellingly, while everyone whom I've seen making the LLM tools work for them is a) very smart and b) works in a field where clarity and rigour are of foremost importance, they aren't all engineers: it turns out that say, an analytic philosopher or an IR analyst can get pretty good results from the tools in question."
How is that usage of AI "honor culture" or not "worth something to someone"? Does that usage of AI for coding sound like vainglorious competition or performing masculinity? Where did knowledge of this go in the, like, week since you wrote that blog post, Iris?
Take a screwdriver, for example: you can cheap out and use soft metal or metal that has a tendency to rust. You can economise with the handle. There are a whole lot of ways in which you can make a screwdriver shoddy without compromising on the core function. If you create a screwdriver with a handle that a person can't grasp effectively or with a head that can't gain purchase on a screw, though, the tool is useless and you've failed at designing it
I think it's pretty clear that while vibe-coding with LLMs may produce a "shoddy" screwdriver, it rarely produces something that literally isn't fit for purpose and doesn't do what the person who made it wanted it to do. That's kind of the whole point, right? Even anti-AI people tend to admit that AI can create something that mostly works at the core functions; the objection is around craft.
This means that everything, to a greater or lesser extent, calls for the level of regard given to a situation where life is at stake …
I really, really don't see how that follows from the previous sentences.
… you never know when a tool that you make or something that you build might be put into a situation where lives depend on it, after all.
Ah, now we get the argument, appended as a clause after the conclusion, where the conclusion was framed as if it followed from something that came before. That's hilarious.
In any case, well, this seems like a shitty argument to me. It reads like an argument for nothing to be built unless it's built perfectly — no hobby projects, no small-time DIY things that work well enough for the people they were made for, no projects that make it clear they're not meant to be relied upon but might be useful to you if you want them. All of those are perfectly valid niches, and this outlaws them entirely. This also comes back to the risk-factor section earlier: even if your software doesn't fall into the categories I just listed, it's totally valid to look at the likelihood of anyone's life actually depending on it and to adjust the amount of effort you put in accordingly. That's not "honor culture," that's literally an engineering decision: not over-engineering something in a way that wastes time and resources when it isn't necessary. There's a reason houses aren't always built like bunkers just in case they get bombed.
The situation we're faced with, then, is one where the code agent works "well" from the perspective of the tech culture that prioritises what is essentially competition between elites to do great deeds, but doesn't do "well" at all in a culture that for all that it's close in domain to what software developers do, has very different attitudes and discourages this kind of elite competition across the board in favour of a much more collaborative attitude.
This is just a summary of what we've seen before. I find it unconvincing.
While, for a number of reasons, I have little love for the tech culture and what it prioritises, I don't think we can say that either perspective is wrong
I think it's quite clear by this point — and will become clearer — that despite her protestations, she does believe one culture is right and the other is wrong. And surprise surprise, it's her own culture she believes is superior and is the source of all value.
certainly, I think the engineering culture perspective brings a lot of value to the table that it would be a shame to drop. However, a lot of people in tech culture seem… disinclined to offer that consideration to other professional cultures.
This is projection.
Oh brother: the gender shit
It's really rather hard to read this as anything other than "Simon and Jesse (who are male) are very clever and have the right experience, patterns of thought and temperament to make this very powerful technology work for them, whereas I (a woman) don't possess that". The possibility that I have the capability but don't share the value system that makes code agents useful to them is pretty neatly excluded here, and I can't help but read a bit of implicit sexism into it: if I don't get the results that I find valuable from a code agent, it's because there's a flaw in me rather than the tool being not fit for purpose. I should value something else, or I lack some fundamental gifts needed to make the coding agent work (personally, I wasn't aware that Claude Code was operated with the penis and required high levels of testosterone in the bloodstream in order to function, but I'm always open to learning new things). That, I think, certainly says something about the culture surrounding LLM use.
Just because they happen to be men and you happen to be a woman when they say this doesn't make this at all about "implicit sexism." That's literally all in your head, and then you're elaborating it out into some idea that people are saying that you have to have a penis and testosterone to use them, and then you're saying "this tells us something deep about the culture around LLMs."
No, it doesn't. You're arguing with your shower bottles.
You're making up a guy to be mad at.
AI proponents (whatever you think of them, and the claim) say this about everyone.
Even before the advent of code agents, the honour culture of tech was very much a patriarchal one: while women weren't directly excluded, the culture is very male and women are systematically disadvantaged in gaining the highest degrees of honour and glory in it.
This is not wrong.
After all, producing these new innovations requires you to dedicate a hell of a lot of time to writing software: time which often requires you to have someone else cooking, cleaning and picking up for you around the house. Men are far more likely than women to have access to this kind of support.
And this is a shoddy attempt at materialist analysis. The vast, vast majority of the people doing all of these impressive things are single and living on their own, because they're pale computer nerds (said as a pale computer nerd myself). They don't "have access to this kind of support."
Even in tech, the culture tends to relegate women and nonbinary people to support roles: minoritised people, far more than white men, tend to end up in fields like data, site reliability, front-end engineering and all of the maintenance work and reproductive labour that the tech industry requires rather than employing them as core software engineers solving what tech culture thinks of as the "real problems".
But data and front-end engineering are new-feature-producing roles built on Python and JavaScript — in other words, exactly the roles where LLMs are most useful, and roles that very much belong to the "tech honor culture," not the "saintly" engineering culture Iris is claiming for herself. Also, SREs, DevOps engineers, and sysadmins are actually even more disproportionately male than software developers in general, which directly contradicts what she's saying. See for example this report, which shows:
| Role | Percentage women |
|---|---|
| Software engineering and architecture | 19% |
| Compute and operations | 15% |
| DevOps and cloud | 8% |
| Data engineering, science, and analytics | 30% |
According to this survey (which, due to differing methodology, may not be fully comparable, tbf), only 16% of system administrators are women, again if anything lower than for general software engineering.
It should also be noted that if she's trying to cast data engineering, science, and analytics — which is, I should note, also very much not a part of "engineering culture" — as female-coded, well, that's literally what Simon Willison spends most of his time doing. That seriously undercuts the framing she's trying to set up, in which he (subconsciously) positions himself as superior to her because his field is male-coded and hers is female-coded. It also undercuts the argument that she finds LLMs useless and he finds them useful (and that women in general like them less than men do) because of different fields and cultures: working on Datasette is one of the things Simon enjoys using LLMs for, a lot of data scientists enjoy using LLMs to write the Python code for their data analysis, and LLMs are very good at Python. From basically every angle you look at this, the dichotomies she's trying to set up fail with massive exceptions.
This isn't necessarily because core software problems are harder to solve, but because the work of doing them is not valued by the culture that it's embedded in: site reliability and data engineers are regularly solving problems far thornier than what your average application developer deals with, but they're marginalised as "maintenance" done by people who "aren't real programmers". I think it striking, for example, that a regular complaint that people like me make is that coding agents seem to really struggle with things like Terraform, Dockerfiles and CI/CD (you know, the things you'll probably be using to let someone actually use your app, which makes them more than a little important), yet this is almost never considered to be a major issue with what the tools can do: so long as they can produce adequate Python or Javascript in volume, people are happy. In short, the code agents are great with languages that are gendered more "male" in tech culture, but really rather bad at the ones that are gendered more "female".
Python would be gendered female by her rubric, and Python is what is used in data engineering; and Terraform, Dockerfiles, CI/CD, etc, are part of jobs where women are actually even more underrepresented (and coded male at least in the circles I run in: the grumpy old-man sysadmin vs the flashy female front-end developer). This attempt to establish this gendered binary is fundamentally very fragile, and easy to deconstruct.
In short, code agents are increasingly becoming a way in which masculinity is performed in the tech world.
Yeah, sure.
In this nascent dialogue, men are far more capable than women of the subtle patterns of thought and setup needed to make good use of code agents.
No one is thinking in terms of gender when it comes to these "subtle patterns of thought"! As I said before, AI proponents will and do commonly accuse anyone (not just women, and not even disproportionately women) of lacking them, whatever you think about the correctness of that accusation. And as my deconstruction above should show, this kind of binary is wholly artificial and falls apart at the smallest investigation, which makes this analysis of the "nascent dialogue" fundamentally absurd.

Moreover, the analysis doesn't follow from her premises even if you grant them: if LLMs really are better at male-coded tasks and worse at female-coded ones, and those having success with them accuse those who aren't of lacking these subtle patterns of thought, that accusation doesn't have to be read (even subconsciously!) as gendered rather than cultural, or as anything more than a failure of theory of mind and ignorance of the other fields. You could argue that "it has that effect" and so the intentions don't matter, but I'm always leery of such arguments, and that's not really how it's being framed here.
In the current tech culture that's forming this nascent dialogue, then, code agents are the spears that the high and mighty of the tech culture fight with for glory in battle. They're a way for men to assert their masculinity and their skill in producing much new and innovative code, and they demonstrate to the men that use them that they are fighters and effective on the field of combat that is innovation. As much as they're used to write code, they're used to fight for status and glory even more: they're a way for people to make a name for themselves.
And this is where things get patently absurd. She is confident in her gendering of all of these cultures, and in her attempt to split software development into two camps: on one hand, a vainglorious culture that doesn't actually care about the quality of its work, or even about making anything useful, but only about the approval of others; on the other, a meek, hard-working, selfless, self-effacing culture dedicated to dealing with "material reality" and making things useful for everyone. (The world just does not break down this way, and it's an absurd strawman of her opponents' positions.) On the strength of that confidence, she gives up any attempt to claim that both cultures are "of equal worth", or even to acknowledge that a culture being male-coded or female-coded under her rubric doesn't make its members biologically male or female, or exclusively one or the other, and she feels free to descend into gender-essentialist absurdity, boiling literally everything down to (subtextual) dick-swinging.
we could build a culture in which people writing hard technical code are seen as useful but a bit weird and are generally kept out of the way in favour of the people who use these tools to engineer real systems that don't fall over. In that kind of case, the person who writes the Node app would be relatively unimportant while status accrues to the person who keeps the Kubernetes cluster turning over. But this isn't the world we live in.
Again, the mask of these two cultures being of equal worth slips. This is the dream of someone who truly believes they should be on top. Moreover, this just doesn't work. Without someone writing the hard technical code, or the Node app, there's nothing to "make sure it doesn't fall over." There's no value to provide by running that Kubernetes cluster if there's nothing to run on it in the first place… or no Kubernetes cluster in the first place, because no one cared to make the "brilliant and new" innovation to create it, or to solve the hard technical problems involved with it, and because iconoclasm was considered a harbinger of disaster, not success, and so no one moved beyond mainframes.
Also, we have had this culture before, at places like IBM in the 50s and 60s. It wasn't a culture you wanted to be part of: fundamentally conservative, unimaginative.
The fact that the culture outright ignores an awful lot of the actual work that goes into turning a proof-of-concept application into something that users can reliably access and use is in itself a massive problem
They don't do that, though — not even the best AI users, the ones we should be listening to.