The phenomenology of agentic coding


The discursive trap of productivity metrics

AI coding agents are important because they fundamentally alter what it is like to program. That is what this essay is about: not whether this transformation is good or bad for programmers as a labor bloc, or economically, or socially; not whether it makes us more or less productive in the odd Taylorist sense that seems prevalent whenever the subject comes up. What interests me, and what I believe should interest you, about this whole enterprise is the phenomenology of how this new human-machine assemblage changes being-at-keyboard.

The productivity discourse is, to a large degree, misguided noise, a discursive trap motivated by reactionary Luddism on the one hand and market hype on the other. While AI coding agents certainly make me, as a programmer, significantly more productive in certain domains, that productivity increase is really a second-order effect of the core transformation agentic coding brings about in the phenomenological experience of programming.

It's important to focus on that fundamental phenomenological transformation not just because it's, to me at least, the most interesting part, but also because judging productivity in software development is extremely difficult, and has been since long before AI entered the scene. The 'negative 2000 lines' story famously illustrates why tracking lines of code per day is a fool's errand, but even tracking tickets closed per day is fraught. First, it's often very hard to figure out how labor-intensive a ticket is supposed to be. Second, there's the question of whether those tickets should be closed at all: just as lines of code may not always be constructive toward the overall goal, completing as many features as management or users request is not always the most productive thing to do. Sometimes it's smarter to step back and look at all of the tickets together, to create a holistic view, and then perhaps write a different feature than what was asked for — people don't always know what they actually want — or build a deeper system that may not immediately resolve the tickets, but will eventually allow them all to be resolved at once. Finally, there's the question of whether pure productivity is really what we should be valuing as an industry at all, or whether this valuing of productivity is just the internalized logic of the capitalist interests we work for.

Consequently, this question of productivity is too complex and difficult — again, for reasons completely unrelated to AI — to really be a productive lens through which to discuss how AI coding agents are transforming our practice as software developers.

The phenomenological shift

Most accidental complexity in software development used to be immanent to dealing with the essential complexity of the problem at hand. The accidental complexity was fiddly, variable, specific, and case-by-case enough that it couldn't be automated — or that any sufficiently powerful and complex automation system that could handle it would expose nearly the same level of complexity to the user anyway — and it was mutually constitutive, intertwined with the essential complexity, often enough that it was difficult to distinguish the two at all.

For a refresher on Fred Brooks' distinction between accidental and essential complexity, here's a quick guide.

As "Altoids by the Fistful," (which is a beautifully written essay well worth reading, by the way) says, many of the problems involved in software development are "cat turds":

There’s this very real sense that I don’t… I don’t want to solve this problem. There is no intellectual reward at the end of this journey. It’s not interesting to me. This isn’t something that needs to be fixed, because it’s not a situation that ever should’ve been permitted to happen in the first place. This is just a bunch of contrived nonsense that I must work through because the broader situation dictates it. It doesn’t matter if the solution is good or elegant. It doesn’t matter if it barely works. It doesn’t matter if it causes another problem that I stub my toe on in three weeks. It’s just… what I have to do. […] These kinds of problems are my cat turds.

Dealing with these "cat turds" has perhaps been, unfortunately, the dominant experience in our profession — what we spend most of our time doing. Yet they're fundamentally nonsense.

Software development has always been a highly creative, abstract, intellectually rewarding profession compared to most — I don't claim that it's anything other than a privilege to get to work in this field — but at the same time, so much of it has still been tied into these cat turds, and the mental and physical symptoms that come with them, which in extreme cases can contribute to the high burnout rates and mental health crises found in the industry.

I would compare these "cat turds" to Hannah Arendt's concept of labor: the dull, repetitive drudgery necessary to survive — in this case, for the project to survive — which has no room for creativity and self-expression:

Lying at the root of Arendt's concern is the futile character of these activities. They are repetitive and ceaseless and can be understood on a circular conception of time (ibid.: 97-8). Once we have eaten we are required to seek out more food so that we can eat again, and so on, ceaselessly until the cycle is stopped by death. Looked at from this perspective, we share the circularity of time with the rest of biological nature. There is nothing at the level of labour, understood in this way, that distinguishes me from you except, perhaps, the details of how we confront and, for a while at least, defeat necessity. There is nothing that distinguishes me from an ant or a tulip either in so far as we all seek subsistence in our own peculiar ways. Arendt argues that our individuality as human beings is lost in the grand scheme of life and our distinction as particular human beings with purposes and projects is equally lost from view. The endless cycle of life and death confronts us with our inevitable decay and our mortality.

and which, since it is made necessary by nature — or, in this case, by the arbitrary decisions and failings of the computer's man-made Nature, namely libraries, frameworks, languages, operating systems, and platforms — is not free:

This constraint on human life takes the form of necessity that binds us to a capricious natural realm and from which we only escape at death. It is important to notice from the outset that Arendt directly opposes necessity to freedom. In so far as we are only embodied creatures tethered to our biological needs we cannot be free. The possibility of freedom requires that we transcend our natural, biological selves. Indeed, Arendt adopts the ancient view that "the labor of our body which is necessitated by its needs is slavish" (Arendt 1958: 83).

Writing a useEffect hook for the ten-thousandth time is exactly like washing dishes; it is a "natural" necessity of the environment that produces nothing of permanent intellectual value — it's labor, and we shouldn't want to preserve that.

Yet many programmers, because accidental and essential complexity have been immanent to each other for so long, have come to view them as the same thing: to be unable to distinguish labor from action, or to treat labor as the primary value — something Arendt herself critiqued, although in a different context: "… Arendt opposes modernist tradition that, while it rejects the vita contemplativa, values labour and work over political action." Instead, to quote from the "Altoids" essay again, many software developers have come to treat as status symbols arbitrary knowledge that is only necessary to know — if it is at all — due to the happenstance necessities of frameworks and languages, not the craft or the goal itself:

At one point, I had the 7-bit ASCII table memorized. […] I don’t know why I took the time to learn that. I never really used that knowledge in any real day-to-day work, and it began to fade from my mind as soon as I found some other pointless esoterica to wallpaper over it.

Look at me now, having to Google how to read a text file line-by-line in Python despite having done it a hundred times at this point. The knowledge is up there somewhere, I’m sure of it. I just can’t always think of the idiom in the heat of the moment. Just a little hint to jog the old brain, that’s all I need.

I often wonder what my Younger Self would think of me now, failing to remember a two-line snippet of code that you’d find in the first ten pages of any beginner’s guide to the language. He’d probably sneer and say I need to devote more time to studying. But I’m an adult with things to do; I can’t spend all my time just memorizing things just in case I might need the information someday. Oh, and by the way: Younger Self […] were you trying to become a contestant on Computer Jeopardy! or something?

[…] I didn’t try to be an asshole. It’s just that I tended to gauge my own self-worth relative to others based on the only social currency we could accurately compare: the amount of “stuff” we knew. […] I had a litany of command-line switches that I never used for anything, HTML character entity names for writing systems I couldn’t comprehend, and tales of tweaking settings deep inside the Windows 98 Device Manager just so I could brag about having been in there in the first place. […]

Everybody else who didn’t know those little pieces of nothing? They were the lessers. […] I now realize that everything I lorded over other people—all the things I gatekept without consciously understanding that this was what I was doing—I didn’t need to do that. […] It was just me, alone in my tiny sandbox, safe and secure behind my towering fortress of cat turds.

Now, however, the agentic workflow can enforce a relation of exteriority between accidental and essential complexity: detachable, deterritorialized, no longer nailed to the same territory, immanent to each other. You no longer have to be stuck thinking about a library's boilerplate and searching documentation just to implement a higher-level algorithm, because those are no longer identical actions. Through agentic mediation, accidental complexity is not rendered invisible; its mode of appearance changes: it becomes exteriorized — available, queryable, detachable — rather than coercively immanent with essential complexity. Accidental and essential complexity can still mutually affect each other; you are still aware of both (I have in fact learned many low-level technical tricks from my coding agents!); but through agentic mediation, coding becomes less the effortful traversal of a striated space of obligatory minutiae and more a navigation across smooth spaces punctuated by intentional reterritorializations.

This transforms much of the labor that software developers formerly had to do into Arendtian work — the act of constructing meaningful, lasting artifacts that compose the human-made world around us, influencing our identities, making it possible for us to exist as humans and individuate, protecting us from nature — and even, I would argue, action: the political (in the sense of the Athenian polis) process of individual self-expression and self-revelation, through the social act of discourse and deliberation with other human stakeholders, such as other programmers, users, and managers, engaging in the complex process of negotiating between all their ideas and needs and coming up with a unique-to-you formalization of what they're asking for.

The best way to understand how this transformation of the experience of software development works is to go through the phases of the software development life cycle and see what each looks like with agentic coding.

  1. Orientation: The first stage of any agentic coding process on an existing project is to orient yourself with respect to a given problem: to figure out which files, classes, methods, or functions implement code relevant to whatever you're doing, to trace the pathways through that code, to break down and understand relevant concepts, to find existing code you could reuse, to understand the technologies currently in use, and to search through the project's documentation. Without an agent, unless you're working in an exceptionally well-documented project, you have to manually search through the codebase and its documentation to build a mental model, one which — even if you're taking notes — must be slowly and painstakingly assembled out of dead ends, and repeatedly swapped out of working memory so you can think about low-level details like trial-and-error ripgrep searches. With an agent, you can ask it for the important details, or even just tell it what you intend to do and what it should generally look for, and it can build a report for you, providing files and line numbers so you can double-check what it's describing first-hand. It can use find and grep, look through node_modules and GitHub source code to understand dependencies and double-check documentation, and, with the latest agents, use language server calls to find definitions, implementations, incoming calls, call graphs, and everything else you yourself could find in your editor. This lets you build a mental model with significantly less effort and typically less confusion and context switching. That in turn lowers the cognitive cost of understanding an existing codebase enough that, at least for me, it motivates me to contribute more to existing codebases, and to read them more thoroughly and carefully before implementing things.
  2. Specification: Agentic coding allows me to work with an AI rubber duck to generate a detailed specification of the technologies used, architecture, algorithms, data flow, and usually data model — in terms of types and function signatures, as a form of type-first development (a minimal sketch of such a fragment appears after this list) — of a feature I want to implement. In this phase, I can specify all these things myself, abstractly, in text that is perhaps still wrestling with ideas, or leaves some things implicit, or isn't perfectly well organized and coherent, using writing to think through the problem; then the AI can expand my prose into something clear and well structured, add more specific details, research documentation, and read existing code for me to help me build the specification. All through this process, crucially, I still understand the algorithms — what's being done and why and with what technologies — because I'm defining it, at least the crucial, essential-complexity parts of it; and since I'm defining the architecture and working with the agent to define the function signatures, types, and file structure, I'll still know where everything is if I need to read more details. This process of working out the details of what you want and why, and how you want it to be implemented, and watching the agent sort of extend/extrude your logic into a complete specification, can be very enlightening as well: it shifts software design from a monological existence (you alone with the syntax) to a dialogical one, changing it from a solitary act of construction into a collaborative act of negotiation. Even though the agent is a tool, the experience feels like engaging with an external intellect that augments your own subconscious, and because of that, you can find latent logical contradictions in your idea, or important details you missed, or realize that it isn't what you wanted after all, by seeing something else work out your ideas and mirror them back to you — things I, at least, otherwise tend to stumble across only when I'm already knee-deep in implementing something. In this sense, it allows an accelerated form of hammock-driven development: the ability to step back and think about what you're telling the computer to do, improved by the fact that you're actually writing down everything you're thinking (because writing is the best form of thinking) and actually getting to see your hammock-thoughts linked to real technologies and extended in front of your eyes, with the AI model inside the agent acting as a sort of Burroughsian "Third Mind" augmenting the subconscious usually active in the hammock phase, catalyzing new ideas or revealing latent ones. You can even have the agent ask you clarifying questions or critique your ideas, leaning on the fact that agents are, through weights trained on the entire internet, vast libraries of software-architectural knowledge. I imagine lightweight formal methods, especially simple non-temporal model checking as done with Alloy, might eventually become more viable thanks to agentic coding, as agents make both writing a specification and, especially, translating it directly into working code easier.
  3. Coding: Here is where things shift beyond just being able to talk to a chatbot that can automatically look things up. Instead of ending up beholden to your own specification, laboriously implementing by hand everything you've already thought through — which is therefore no longer novel, but repetitive and unnecessary, and intellectual novelty is an important component of the experience of software development — you can feed the specification to the same coding agent and have it execute. Generally, the specification is split into phases, each laying the groundwork for the next, and then each phase is (sometimes implicitly) split into various bite-sized pieces of work — such as writing a particular function — which the coding agent will pick up on and track using tools like todo lists. There's more here than just automatic specification-implementation, however: oftentimes, in the process of implementing a specification, or prior to and outside of any specification at all, you come across smaller, more self-contained features that don't warrant a rigorous specification. Even here, and perhaps most visibly, agentic coding enables a more abstract approach to coding. As outlined in my essay on natural language programming, even the direct translation of a natural language "inchoate specification" into code has significant benefits in terms of not having to deal with accidental complexity and decreasing cognitive fatigue. I recommend you go read that essay after this one, as it is perhaps equally important for smaller uses of agentic coding.
  4. Verification: All through this process of implementing these bite-sized pieces, I'm carefully reading the diffs produced, stopping the agent if there's something I don't understand, or that isn't correct, or isn't up to snuff. At every stage, the coding agent harness (such as OpenCode) automatically feeds the AI model feedback from linters, type checkers, and language servers — which I can configure in detail to enforce the invariants I want — and it fixes any of their complaints before moving on; a schematic sketch of this loop appears after this list. This is how you avoid hallucinations of non-existent library functions, duplicate code definitions, and unused variables: the compiler will find them and detail the resolutions (at least, if you have a good compiler or type checker, like rustc or Astral's ty type checker for Python), and if the model is confused it can use ripgrep and curl to read relevant documentation (usually linked from an llms.txt, either provided or generated) or even read the source code of an under-documented library — which lets me use small libraries and languages I never would've touched before — to correct its mistakes.
  5. Specification Revision: An important aspect of this is that, unlike Waterfall-style development — where the specification is largely immutable, handed down by upper management, and slaved over so extensively that it's difficult to justify changing it in the face of unknown unknowns — specifications are, with agentic help, relatively quick to create, and to modify as you go along while keeping them consistent and organized.
  6. Testing: Eventually, when you're done implementing whatever feature you had in mind, you can have the agent summarize everything it did, then run your tests — property-based testing, which is specifically good at overcoming malicious or superficial compliance with tests (see the sketch after this list), and deterministic simulation testing are probably the frontier of this aspect of the process, but regular unit and integration tests also serve fine, and the more the better — and then you test the software yourself, manually, to see if it does what you want.
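
To make the type-first flavor of the specification phase concrete, here is a minimal sketch of the kind of fragment I mean. Everything in it — the feature (a tag-based note search), the names, the signatures — is hypothetical, invented purely for illustration; the point is that the types and signatures pin down the data model and the contract before any function body gets written:

```python
# Hypothetical spec fragment: the feature and every name here are
# invented for illustration, not taken from any real project.
from dataclasses import dataclass
from datetime import datetime


@dataclass(frozen=True)
class Note:
    """A single note in the store."""
    id: int
    body: str
    tags: frozenset[str]
    created: datetime


def parse_query(raw: str) -> frozenset[str]:
    """Split a raw query string into a set of normalized tags."""
    raise NotImplementedError


def search(notes: list[Note], tags: frozenset[str]) -> list[Note]:
    """Return notes containing *all* of the given tags, newest first."""
    raise NotImplementedError
```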
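
The verification loop in phase 4 is worth sketching schematically, too. A real harness like OpenCode implements this far more elaborately; the toy loop below — with a hypothetical `agent` object whose `send` method is invented for this illustration — just shows the shape of the mechanism, assuming ruff and ty are on the PATH with their usual `check` subcommands: run the checkers, feed any complaints back to the model, and repeat until everything passes or a human needs to step in.

```python
# Schematic sketch of an agent-harness verification loop. The `agent`
# object and its `send` method are hypothetical; real harnesses
# (OpenCode, etc.) are far more elaborate.
import subprocess

# Checkers to run after each bite-sized edit: ruff as the linter and
# ty (Astral's type checker) for types, both assumed to be installed.
CHECKS = [
    ["ruff", "check", "."],
    ["ty", "check", "."],
]


def verify_loop(agent, max_rounds: int = 5) -> bool:
    """Feed linter and type-checker complaints back to the agent until clean."""
    for _ in range(max_rounds):
        complaints = []
        for cmd in CHECKS:
            result = subprocess.run(cmd, capture_output=True, text=True)
            if result.returncode != 0:
                complaints.append(result.stdout + result.stderr)
        if not complaints:
            return True  # everything passes; move on to the next piece of work
        agent.send("Fix these problems before continuing:\n\n" + "\n".join(complaints))
    return False  # too many rounds; escalate to the human
```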
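
And since phase 6 leans on property-based testing, here is a minimal sketch of why it resists superficial compliance, using the Hypothesis library. The function under test is a trivial stand-in for agent-written code; what matters is that the test asserts invariants over generated inputs rather than hand-picked examples an agent could special-case. Run under pytest, Hypothesis generates a hundred input lists by default and shrinks any failure to a minimal counterexample.

```python
# A minimal property-based test with Hypothesis. The function under
# test is a stand-in for agent-written code; the properties, not any
# fixed examples, are what the agent's output must satisfy.
from hypothesis import given, strategies as st


def dedupe_preserving_order(items: list[int]) -> list[int]:
    """Stand-in implementation: drop duplicates, keep first occurrences."""
    seen: set[int] = set()
    return [x for x in items if not (x in seen or seen.add(x))]


@given(st.lists(st.integers()))
def test_dedupe_properties(items: list[int]) -> None:
    out = dedupe_preserving_order(items)
    assert len(set(out)) == len(out)   # no duplicates remain
    assert set(out) == set(items)      # no elements lost or invented
    # first occurrences keep their relative order
    assert out == [x for i, x in enumerate(items) if x not in items[:i]]
```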

See here for a graph.

The key phenomenological transformation occurs primarily in the coding and verification stages, through two mechanisms: the automatic mechanical performance of text-editing labor, and the transformation of the low-level technical labor of production into the much higher-level work of verification — which is hardly labor at all, more like the overseeing of labor. Let's look at each in turn.

Regarding the automatic performance of text-editing labor — the agent's mechanical ability to produce, transform, translate, relocate, delete, edit, search through, digest, and summarize text — this is a bigger aspect than one might expect. It's often argued that the bottleneck of software development is not typing or editing speed. This is true in terms of productivity per unit time on a fixed task, most of which is spent thinking or debugging, but the phenomenal experience of editing, independent of its objective speed, radically alters developer behavior: it makes us more likely to perform tasks we would otherwise have avoided because of how text-editing-intensive or laborious they are.

For instance, I personally find that my code quality is significantly better when I'm using AI coding agents, because if I want to refactor, I no longer have to worry about physically rewriting all of the code: spending all that time selecting text, cutting and pasting it around, typing out its replacement, and working one by one through all of the type errors and undefined variable references that an in-depth refactor creates. None of that is novel or intellectually interesting work — and those experiential factors are what motivate us software developers — so in the past I would often decide the refactor would take too long and simply not do it. With an AI coding agent, I can just tell it the final architecture I want and some details about how to get from here to there, and it can move all of the code around and rewrite what needs to be rewritten; then the automatic LSP runner in my agent harness will feed it the list of type, linter, and other errors it needs to fix, and it can go through and fix them one by one. In the end, I still understand the architecture, because I defined it — I just didn't have to get there manually.

Regarding the shift from technical production to expertise-based verification, the reality is simple: there is a huge advantage to being able to rely on an AI's built-in knowledge of a vast range of libraries and programming languages, with all their functions and syntax, and — when that fails — its ability to read through vast swathes of documentation or source code and synthesize a correct answer for the particular problem at hand, according to your particular specification and code style guidelines (without questioning you, closing your question unanswered, or marking it as a duplicate), as opposed to having to read 35 partial, outdated, unrelated, poorly answered, or closed Stack Overflow pages, or trawl through the documentation for hours, hoping to find a function that happens to do part of what you need and synthesizing the answer yourself out of that. In this sense, verification of code is far easier than production.

Doing all that scavenging for documentation often doesn't even contribute to any kind of learning process. Yes, it requires me to "think more" in some abstract sense, but only in the sense that salting 50% of a programming guide with nonsense, or translating it all into pig Latin, would — it's not productive thinking: most of it consists of loading unrelated, useless, or wrong information into my brain and then immediately throwing it away. And yes, actually assembling and typing in the answer yourself is much better for learning — but the entire point of using a coding agent is, hopefully, not to have to learn many of the details it handles, to the point where more general knowledge is fine. Once you have enough experience and basic knowledge to be assured that, if you needed to, you could become fluent in whatever specific technical details you need — and as long as you make sure you're still involved, if less intimately, with the process, so you know what's going on and keep picking up new things — you should be good to go.

There is a sense in which agentic coding is arguably better for learning as well: if I get a direct answer, I can test it to make sure it works, study it, ask the agent to explain what it did and how it worked, look up the docs first-hand based on that, then practice it by hand if I need to. The point is that, suddenly, instead of being forced to deal with the accidental complexity of the technologies you're using at the expense of the essential complexity represented in your specification — which will include relevant technical details, perhaps having to do with performance or platform constraints — it becomes a choice on your part to zoom in if you want to. This is not good for juniors, who still need to learn the ropes and will build up the algorithmic and architectural knowledge, insights, instincts, and experience necessary to be effective software developers only by engaging with the low-level details; and exclusively using agentic coding might not be good for seniors either — deliberate, intentional practice has always been valuable, and is now probably more important than ever — but this is not necessarily a reason to dismiss these tools wholesale, in the same sense that just because it isn't good for a calculus student to have Mathematica solve their problems for them, it doesn't follow that senior physicists shouldn't have Mathematica at their disposal.

The natural response is to counter that verifying code is often much more difficult than writing it. It's easy to fully understand code that you've written, because you were the one who created it; you constructed the underlying logic and then translated it into code: what it is doing, and why, is immediate to you. When you look at someone else's code, by contrast, it's at a remove: more like going over a crime scene as a forensics officer, trying to reconstruct what happened from various clues — the source code — and then guess at the motive, than like that intimate first-hand knowledge you have of your own code. On this basis, it's often claimed that the work of overseeing a coding agent is likely to be far more stressful and difficult, slower, and more likely to let bugs through.

My response here is simple enough: if you're following the software development life cycle I described above — building specifications that lay out the what and the why of everything, that lay out the types and function signatures, and then breaking those specifications down into bite-sized features or phases that you implement step by step with an agent, using task lists or similar, where each suggested edit is no more than a few lines for complex work, and no more than two dozen or so even for very simple work like logging — then you have been following along the whole time.

Reading other people's code is harder than writing your own for a lot of reasons, but most of them don't appear to apply here. It's not like trying to familiarize yourself with a brand-new, totally unfamiliar codebase, or reading a piece of complex example code in a bad textbook that doesn't explain why anything is the way it is or what's going on. In those cases, you're presented with too much code to easily digest all at once, wholesale, and you have to make the best of it. It's not even like doing code review on a tightly scoped, small PR from a coworker: even in that case, as I described in the previous paragraph, you still have to reconstruct the what and the why behind what's going on.

No, in the case of agentic coding, you already know the why, on both the micro and the macro levels, since you designed all of it. You already know all of the important semantic details of the how: the agent's code is simply a transformer model's attempt (a transformer being, at its core, a model designed for translation) to translate what you wanted into working code. There's no other mind at work — either it's correct or it's random — so there's no forensics to worry about. For the individual technical decisions the coding agent should be making, like how a library function's parameters are set, it's easy enough to look things up after the fact using standard documentation, once the agent has revealed to you that the function exists, what it's called, how to use it, and in what context — the annoying documentation-search part it has short-circuited for you. And each bite of code is small, and organized into a coherent building-up of the final product, almost like a very didactic textbook walking you through a project, so an understanding of what's happening grows organically. All of this means it should be trivial to spot logic errors or mis-implementations of your will. Combine that with the strict type system, linting, and testing routine I recommend, and I don't think this is as much of a problem as people make it out to be.

Another common response to this is that a better solution to the problems of accidental complexity, annoying configuration, confusing build systems, exhausting refactoring shovel-work, and boilerplate is to "simply use better systems." A well-stated example of this perspective can be found on the typically anti-AI lobste.rs forum, where user jclulow says:

The thing is, the friction of boilerplate and mundane problems can provoke two different kinds of responses:

  • grit your teeth, get through it, how can the world be any better than it is already, and thank god the computer can sort of do some of it for me now because good god do I hate what I do
  • wow, this is annoying, let me imagine better tools that don't require boilerplate or mundane labour, and then build them – without requiring the flushing of any hope of future improvement of craft down the drain of rentier capitalism, proprietary software, unimaginable complexity, plagiarism, and a whole host of other ethical, environmental, and societal issues

I prefer the second option!

Leaving aside the disagreements one might have (and I do) with the framing of the "ethical, environmental, and societal" issues, what can be said about this point of view? I think, primarily, that it's the point of view of the hopelessly naive. The vast majority of working programmers, or even hobbyists working alone, no matter how much they care about their craft, are not free to simply switch the tools they're using to avoid boilerplate or mundane labor, for several reasons:

  1. Technological investment: Depending on the technology, it could literally take decades of concerted software development effort to create a "better tool" comparable in robustness, performance, capability, or flexibility to the tool being left behind, because excellent underlying technical characteristics are not, and never have been, identical with excellent user-facing characteristics. For instance, the Go runtime is an incredibly unique feat of engineering, as good as it is because of the unique talents of Rob Pike and the other people working on it, as well as the very rare concentration of industrial-scale resources, refinement, and testing that Google's backing can give it — even as the Go language itself is an awful resurrection of 1970s programming language design, ignorant of every practical and theoretical development since, and designed by someone with a transparent disdain for the "average programmer" who would be using his language. If you want to avoid dealing with the accidental complexity Go creates, you may "imagine" better tools, but you almost certainly won't be able to make them.
  2. Network/ecosystem effects: as I have said elsewhere, the range and technical capabilities of the libraries you have access to — those crystallizations of developer-years of effort, wisdom, expertise, and refinement — often matter far more than the qualitative aspects of whatever language (or prospective library) you might want to replace them with. In terms of being able to directly achieve your overall goals without devolving into a million side quests to implement things you need (if you even can implement them to the level you need), a more technically capable library with more boilerplate, or a boilerplate-heavy language with access to more technically capable libraries, will almost always win out over a more elegant and powerful language with access to less capable libraries, or an elegant, simple, nearly boilerplate-free library that simply can't do what you need it to do.

At this juncture, I want to interject that it might be interesting to think of agentic coding as somewhat deterritorializing the advantages of libraries and runtimes from the experience of using the languages we often had to reluctantly swallow in order to access them. I can now write Go code, for instance, as quickly as I'd write any other language's code — and perhaps read and review it more easily! — because the AI deals with the inherent verbosity of the language's simplicity for me. Would it be better if Go were better designed? Absolutely. I long for a language with access to the power of Go's runtime but the design of something like Gleam; alas, that does not exist.

  3. Social factors: if you're working on a project with other people — whether as part of a business, an open source project, or simply a hobby project you want to build with others — one of the most important factors is choosing technologies that your prospective collaborators actually know how to use and want to use. Languages that "avoid boilerplate and mundane labor" tend to be more difficult, advanced languages, like Haskell or Common Lisp, that people aren't likely to know or want to use, and which, in the grand scheme of things, might not actually be good for collaboration at all (although that's debatable).
  4. Sunk cost: likewise, sometimes you've already begun a project in one language, and don't have the time, energy, or resources to do a full rewrite just to choose something more "elegant"; while worse-is-better should never become an excuse, let alone a positive justification, for using or producing intentionally subpar technology, sometimes it really is the most practical option to maintain backwards compatibility and build on top of what you have, instead of giving in to second-system syndrome and/or not-invented-here syndrome.
  5. Business factors: sometimes, bosses just assign which technologies to use. That can be because of ease of hiring, because that's the technology the business is heavily invested in already, or because they actually purchased that technology and want to get their money's worth; or it can be for one of the reasons above, intensified and made non-negotiable by the hierarchy of the corporation and the strictures of higher-level business concerns.
  4. "Essential accidental complexity": I'm really not sure how to express this, but sometimes there's just a certain about of boilerplate, build system complexity, configuration complexity, etc, that is ineradicable with any modern tooling, even though it's not actually inherent to the problem in the abstract. Sometimes, it's not even that extreme — it's just that building the abstractions necessary to remove that complexity and boilerplate would produce something overly rigid and single purpose, or overly abstract and difficult to understand, or take too long, and in general not be worth the challenge.

Thus, for all that agentic coding in the manner described abstracts away the accidental technical details, you are still very much doing software development in a way that will prevent skill atrophy, and you are not "giving up on your craft." You still have to design data structures and algorithms; you still have to think through robust software architecture, redundancy, and error handling; you still have to know how to choose technologies, what each technology is capable of, and what its pitfalls are. The more algorithms and mathematical formalisms you've studied, the better you understand space and time complexity, and the more clearly you can think and express yourself, the better you'll do. Far from "deskilling" you, I think the skills just… shift.

In essence, what agentic coding does is transform the low-level technical details of software development — from the random bits of technical knowledge needed, to the annoying configuration issues and build-system management, to navigating and building coherent narratives out of foreign codebases, to, perhaps most especially, the grunt shovel-work of code rearrangement and refactoring — from an often frustratingly present-at-hand experience, breaking your flow, distracting you from the overall picture, and discouraging you from producing better algorithms and architectures, into something instantly ready-to-hand: transparent, a smooth space to move through, where the lower-level accidental technical details come to you as quickly as you need them, instead of forming an interruption in the use of your tools.

It's something like Fred Brooks' model of the surgical team approach to software development, where you have one core mind — the titular "surgeon" — who "is primarily a system designer and who thinks in representations," who "personally defines the functional and performance specifications, designs the program…", and then many other members of the team who provide the surgeon with various kinds of aid, relieving them of having to think about various secondary tasks; in the original Mills model, the surgeon would have:

  • a copilot, "able to do any part of the job, but is less experienced … [whose] main function is to share in the design as a thinker, discussant, and evaluator,"
  • an editor, who "takes the draft or dictated manuscript [of documentation] provided by the surgeon and criticizes it, reworks it, provides it with references and bibliography, nurses it through several versions, and oversees the mechanics of production,"
  • a program clerk, "responsible for maintaining all the technical records of the team,"
  • a toolsmith, to build ad-hoc tools that the surgeon may need,
  • a tester (self-explanatory),
  • and, perhaps most relevantly to our current subject, a language lawyer, about whom Brooks says:

By the time Algol came along, people began to recognize that most computer installations have one or two people who delight in mastery of the intricacies of a programming language. And these experts turn out to be very useful and very widely consulted. The talent here is rather different from that of the surgeon, who is primarily a system designer and who thinks in representations. The language lawyer can find a neat and efficient way to use the language to do difficult, obscure, or tricky things. Often he will need to do small studies (two or three days) on good technique. One language lawyer can service two or three surgeons.

Obviously, the agentic coding model is different in its details, but here too we have a core programmer with lots of experience designing the program, defining the core specifications and performance requirements, and then a team of — in this case — subagents filling various roles to relieve the core mind of the overhead of dealing with things outside what they're good at.

All of this combines to create an experience of programming where you are more focused on the map than the territory: you can stay in that high-level space of algorithms, data structures, architecture, and requirements, moving freely on that plane. You are no longer forced to regularly discard that built-up, high-level, meaningful view in order to swap arbitrary, low-level technical details into your working memory just to deal with accidental complexity — only to have to rebuild your high-level context afterward, accumulating mental fatigue and forgetting details all the way.

Worse, with every swap to low level details, you run the risk of simply forgetting to pay attention to the high level goal toward which you're running, or getting stuck in sunk cost fallacies. The technical minutiae of a specific, local problem threatens to overwhelm your recent memory, your attention, and your concern completely, sending you down a blind alley in a characteristic form of engineer tunnel vision: more focused on "let me get this thing working," or "let me just hammer this code out" than "do I need this?" or "am I implementing this in a way that'll function when fully worked out?"

Vibe coding lines of flight

There's more to it, though. Even if you are not doing this upfront specification and then execution style — even if you are somewhat vibe coding — vibe coding is often a really helpful way to very quickly get a proof of concept, so you can see your problem in action. This is important because you do not know everything about a problem up front, especially how it will work in the real world, how it will operationalize, how it all fits together, whether technologies can really do what you want and how they fit together — all of this! — until you have built something. There are always those unknown unknowns until you have a prototype, and vibe coding lets you create prototypes quickly and easily, with a minimum of time and cognitive effort, allowing you to explore lines of flight through the space of possible ideas, transforming fluidly the virtual — possibilities — into a low-resolution actualization only to allow it to dissolve again just as easily if necessary.

Prior to vibe coding, teams sank so much effort into a prototype that it often ended up becoming the final product — "there is nothing so permanent as a temporary solution," as the old adage goes. But with AI, you can set up the basic requirements and technologies and let the agent slop something out, without even necessarily reading it, just to get your mind around what your idea for dealing with the problem might look like in practice; then you can go back and totally rewrite it without worrying, because it takes so little time and effort to build a prototype that there's not much sunk cost to pull you into its vortex. Not only that, but you can try more things, thanks to this greater velocity and abstraction. Yes, the code quality may not be perfect; yes, things can be buggy; but the point is to explore the plane of programmatic immanence.

This can combine elegantly with specification-driven agentic coding as well: building prototypes from several different core ideas or drafts of a specification lets you fill in, through quick experimentation and feedback, the unknown unknowns that would otherwise make up-front software specifications — even loose, natural-language ones — too inflexible to work. This represents a fundamental shift even in the process of creating specifications.

Conclusion

The core takeaway you should have from this is that even if, in terms of lines of code per hour or features per day, agentic coding didn't make me any more productive at all, I think I would still prefer it — purely for the way it relieves the cognitive fatigue and burnout that come from dealing with accidental complexity, and for the way that removing that accidental complexity lets me focus more on what's important and interesting: writing more tests, refactoring toward cleaner architectures, and keeping my eyes on the bigger picture.

This is not to say that there are not fierce criticisms to be made of the political economy of agentic coding and of generative artificial intelligence systems in general, but that is not the primary concern of this essay. The point of this essay was to focus on how these things transform the experience of programming, and in my analysis, agentic coding makes software development more human. It transforms programming from a solitary activity requiring unergonomic uses of the mind — struggling with arbitrary, frustrating, cyclical problems in deeply flawed systems never truly designed for human use, fighting cognitive fatigue, loading things in and out of memory — into a process of dialogue, negotiation, and exploration: one that can use natural language where suitable just as often as it uses formal languages, and that engages the social, creative, world-building, and contemplative aspects of the human mind more than rote memorization.