Since natural language sacrifices specificity in favor of painting with broad brushstrokes, many people — especially programmers and a certain kind of analytic philosopher — consider it inferior, thinking that the only "true" way to think about or properly understand a problem is to express it in formal language. I agree that computational languages are clearly better for gaining a deep understanding of a problem than non-executable formal languages, which in turn are better in many respects than natural languages, which in turn are infinitely better than just thinking in our heads. But that hierarchy doesn't mean there's no place for natural language in the discussion of complex and formal subjects — just look at the use of natural language in mathematics!
Natural language is powerful precisely because of the ambiguity, the infinite flexibility, and the ability to lean on a copious "standard library" of common sense that at the same time make formal languages necessary. These features allow someone using natural language, provided they phrase things well and know some relevant vocabulary, to inspect or describe the general contours of something at a high level, concisely and efficiently, without getting lost in the infinite fractal complexity that formal languages often foist on their users. This doesn't just mean you can communicate what you intend faster (at a level of depth of your choice): it can also prevent failing to see the forest for the trees, getting lost in the formalism and losing sight of where you're going or how to get there.
Natural languages also allow you, when proceeding through such a high-level contour, to preserve the semantic meaning and context of the various terms involved in a way that computational and symbolic languages don't. Instead of removing all semantic meaning from symbols, plugging them into a sort of symbolic machine, turning the crank, and re-assigning semantic meaning at the end, as most formal languages do, natural language makes it apparent what everything means at every step. This makes it easier to detect equivocations (hiding equivocations in forests of modal or predicate logic is something I've noticed in a lot of philosophy) and to make sure your argument still makes sense and stays on course throughout.
Moreover, natural language allows you to frame the context and import of any given formal exposition, to better understand, express, or discuss how it fits into a larger overall picture and what it means for those looking at it.
Thus natural language can be a significantly more concise, effective, and information-dense means of communicating an idea in situations where the details just aren't that important: when they can be filled in on the basis of general assumptions and then verified, iterated on, and refined later as needed (usually in formal languages).
Those philosophers and programmers who rail against the use of natural language are like the cartographers of Borges's unnamed empire, who made maps exactly the size of the territories they were meant to describe: yes, natural language elides details and introduces inexactitude and ambiguity, but in compensation it is useful in a way that no formal language, no matter how powerful or abstract, can match.
This is not to say that natural language is remotely sufficient by itself. Not at all! You need a formal language, in addition to natural language, to clarify and sharpen — to actually fill out all the details and ensure the operability (whether in terms of computer executability, or adherence to logic or mathematics) of what natural language describes. What I'm arguing is not that natural language is a good language for mathematics or programming, but that, while not sufficient on its own, it is not inferior to formal languages; and, more importantly, that it should not be relegated to a mere side note, a comment in the margins to "help along those poor ignorant saps who can't read formal languages." It should instead be considered a critical tool not only for comprehending, but for constructing, anything built with formal languages.
This doesn't mean you have to plan out in advance, in natural language, everything you're going to discover through formalism or implement in a programming language, and then stick slavishly to that plan. This is not a call for program specifications and waterfall development to crush the discoveries that interacting with a cognitive system like a formal language can provide us. Instead, it's a call for something like literate programming, where natural language, as a description of what's going on, is a first-class citizen.
As it stands, however, there are several problems with literate programming.
The first is that too many literate programs have book/thesis/paper envy: they want to be something you proceed through linearly, something that builds a "story" for the reader to follow. But fundamentally that's just not very useful: even the best-architected programs are more like rhizomatic webs of interlinked dependency than linear narratives. That's why it's often such a pain when languages can't handle functions declared "out of order" or modules with circular dependencies: those things happen naturally, given how code works. It's also why almost every modern editor has semantically aware jump-to-definition powered by a database: to turn code into hypertext. Trying to linearize a program into something like a book isn't very ergonomic, when it's possible at all, and it's deeply unhelpful to the people trying to read or modify your code. They aren't going to sit down and read it like a book, no matter how good your technical exposition is; they're going to bounce around, reading just what they need to understand, fix, modify, or improve whatever it is they're after. Most people, most of the time, don't read code for fun — or, at least, they're not reading your code for fun.
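To make that concrete, here's a minimal sketch (the `is_even`/`is_odd` functions are hypothetical, chosen only for brevity) of why code resists being put in a line: with mutual recursion, neither definition can conceptually come "first," even though the file forces a textual order. Python happens to tolerate this because names are resolved at call time; languages that don't tolerate it are exactly the ones that demand forward declarations.

```python
# Two mutually recursive functions: the call graph is a cycle, not a line.
# The textual order below is arbitrary. is_even refers to is_odd, which
# doesn't exist yet at definition time, and Python doesn't care, because
# the name is only looked up when is_even is actually called.

def is_even(n: int) -> bool:
    """True if n is even, defined in terms of is_odd."""
    return n == 0 or is_odd(n - 1)

def is_odd(n: int) -> bool:
    """True if n is odd, defined in terms of is_even."""
    return n != 0 and is_even(n - 1)

print(is_even(10))  # True
```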
This is fairly simple to resolve: just make sure that whatever literate programming system you're using is structured like a zettelkasten: individual functions or modules of code, with their associated text and graphs hyperlinked together and compiled such that no one ever has to worry about what order anything is in, and the question doesn't even make sense to ask anymore.
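As a rough sketch of what I mean (every name here is hypothetical, not an existing tool): each note pairs prose with a code fragment and hyperlinks to related notes, and "tangling" just emits the fragments in whatever order the store happens to yield them, since late-bound names make the order irrelevant.

```python
# A toy zettelkasten-style literate store: prose is a first-class field,
# not a comment, and notes link to each other like hypertext.

from dataclasses import dataclass, field

@dataclass
class Note:
    id: str
    prose: str                                       # natural-language half
    code: str                                        # formal half
    links: list[str] = field(default_factory=list)   # hyperlinks to other notes

notes = {
    "run":   Note("run", "The entry point; see [parse].",
                  "def run(s):\n    return parse(s)", links=["parse"]),
    "parse": Note("parse", "Turns raw text into tokens; see [run].",
                  "def parse(s):\n    return s.split()", links=["run"]),
}

def tangle(store: dict[str, Note]) -> str:
    """Emit every fragment in whatever order the store yields them."""
    return "\n\n".join(n.code for n in store.values())

exec(tangle(notes))        # run is emitted before the parse it calls: fine
print(run("hello world"))  # ['hello', 'world']
```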
The second problem with literate programming is deeper, and it resembles the problem with types, verification, and unit testing: if you write prose describing what your program does, and then write the code, it really feels like you're doing the work twice, and nobody wants to do that. Moreover, it can be hard to keep the two in sync, since you now have two sources of truth.
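There are familiar partial mitigations, for comparison: Python's doctest, for instance, makes the prose's claims executable, so drift between the two sources of truth becomes a test failure instead of a silent lie. But it only keeps small, example-sized claims honest; the broader description can still rot.

```python
# The docstring (prose) makes a claim the code must uphold; running doctest
# checks that claim, so prose and code can't silently diverge on this point.

def mean(xs):
    """Return the arithmetic mean of xs.

    >>> mean([1, 2, 3])
    2.0
    """
    return sum(xs) / len(xs)

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # quiet on success, reports any prose/code drift
```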
This duplication is why agentic LLM coding is so interesting to me. It lets you take advantage of both a computational language and a natural one. You outline the logic, architecture, features, organization, refactorings, rules, and so on of your program, or of modifications to it, in high-level natural language that lets you focus on what is unusual, novel, interesting, or important (such as error handling and security), painting with that powerful broad brushstroke to build a map quickly, without the 1:1 details bogging you down. Then you carefully watch a computerized servant interpolate the specifics between your broad brushstrokes, drawing on a massive training set of previous programs abstracted into semantic space, formalizing your logic and architecture based on a common-sense ontology and a set of reasonable assumptions. As this progresses, as you see the formal statement of your ideas take shape, you can iterate: refining your original ideas through the rigor of seeing how they'd actually have to play out in code and whether they actually work, or modifying the computer's formalization of your ideas to better adhere to what you meant.
Maybe in the future, every commit will be the output of a single prompt, and every git commit message will be the exact prompt that was used (with optional programmer meta-notes), so that we could view a history of the precise natural language that created the code we're looking at, and how it changed over time: not a pre- or post-hoc description, but the actual, direct source of truth. Maybe we'll be able to view any piece of code with a summarization and synthesis of the commit messages that changed it embedded above it, like classic literate programming. Maybe we'll even one day be able to highlight a piece of code and see which parts of each prompt the AI's attention was on when it wrote those tokens! Who knows what the future holds, but I'm at least interested to see.