Written language is a tool for thought.
When thoughts are in your head, they exist as a shifting cloud of ideas and connections, never fully locked down to a specific web of meaning, with different parts of the larger thought and its context free to drift continuously in and out of focus as you analyze different aspects of the problem. This is because human working memory is too small, and too unreliable in its particulars, to nail down a precise set of ideas and connections, hold them clearly in attention, and still have cognitive space left over for manipulating them.
This is what allows cognitive dissonance: when one part of the conceptual web of your ideas and context is in view, you can think one thing; when it shifts out of view and another part comes into view, you can think another. It is also how, by keeping things shifting, vague, and nonspecific, we fool ourselves into believing there are solid connections or justifications where there are none: "I know it's there, I see the outlines, I don't need to pin it down specifically" (when pinning things down with specificity is often exactly what reveals the errors you didn't see before).
When you're writing your thoughts down, however, the page is acting as an external brain: you express your thoughts on the page and then instead of having to hold them in your working memory in order to analyze them, it holds them for you, ensuring clearer recall and more mental space for actual consideration. Moreover, when you have written something down on the page — especially after some time — it presents itself to you as alien: as the writings of someone else (your past self) speaking to you, instead of you speaking. This allows you to externalize your thoughts more, get some distance, analyze them more objectively.
Of course, we can't just directly dump our thoughts onto a page. We have to use a language of some kind. This entails a few things.
First of all, we have to perform the act of "collapsing the wave function": deciding which specific ideas and connections, which specific pieces of context and larger thoughts, are relevant, and at what level of detail.
Second, we have to express those things in the language we've chosen to note them down with: choosing which words (and thus which concepts and categories they express), what grammar, what structure, what order, and so on. The act of expressing thoughts in a language can bring even sharper focus and clarity to what precisely we mean, how it's arranged, and how our logic flows.
I think the same is true for programming languages. They, too, are a language for expressing ideas about behavior, operations, categories and ontologies, relations, and abstractions. And they, too, are written down using an "external brain." The only difference between a programming language and something like mathematics, or even natural language, is that a programming language is designed to be even more precise and unambiguous – enough that a computer can execute it.
To those used to code in traditional languages like Java, this might seem like a handicap for the expressivity of a programming language compared to other formal languages. But if you've seen good high level code in Lisp, Scheme, Haskell, or APL, you'll know that it can be just as beautiful and comprehensible for expressing ideas as any other formal notation (and concise, depending on your taste in languages – I prefer lots of full words, as in Scheme and Lisp, over terse point-free programming, as in Haskell or APL).
Moreover, a computational language that is powerful (so that you can express any abstraction and mental model you need), high level (so that you're freed from accidental complexity in your expressions), and symbolic (so that you have a way of representing unique concepts – essentially "proper nouns" – in a terse, language level way, and can do symbolic manipulation for things like mathematics and logic) or logical (so that you can speak declaratively about the problem space and its constraints) can actually be a far more efficient vehicle for expressing these things than either mathematics or natural language.
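To make the "symbolic" point concrete, here is a minimal sketch – in Python rather than a Lisp, with invented names, so a toy rather than anything from a real system: expressions are plain data, symbols are first-class values, and symbolic manipulation (here, differentiation) is just a procedure that rewrites them.

```python
# Expressions are nested tuples; a symbol like 'x' is a first-class value
# we can inspect and rewrite, much like a Lisp symbol.

def deriv(expr, var):
    """Symbolically differentiate expr with respect to var."""
    if isinstance(expr, (int, float)):   # constant: d/dx c = 0
        return 0
    if isinstance(expr, str):            # bare symbol
        return 1 if expr == var else 0
    op, a, b = expr
    if op == '+':                        # sum rule
        return ('+', deriv(a, var), deriv(b, var))
    if op == '*':                        # product rule
        return ('+', ('*', deriv(a, var), b), ('*', a, deriv(b, var)))
    raise ValueError(f"unknown operator: {op}")

# d/dx (x * x) = (1 * x) + (x * 1)
print(deriv(('*', 'x', 'x'), 'x'))  # ('+', ('*', 1, 'x'), ('*', 'x', 1))
```

The point is not the calculus but the representation: because the "proper nouns" of the problem are concrete values, the manipulation rules read almost exactly like the mathematics they encode.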
This is because programming languages are executable. If the measure of truly understanding something is being able to do it in general situations and teach it to others, then being able to write working code for an idea is the strongest form of understanding of all: a description of the idea so precise, yet so general, that it can actually teach a computer to do it – and it actually works. For example, Sussman, Wisdom, and Mayer's Structure and Interpretation of Classical Mechanics is a graduate physics textbook expressed entirely in terms of generic programming in Scheme, instead of traditional mathematical notation. In the Preface, they state why:
Classical mechanics is deceptively simple. It is surprisingly easy to get the right answer with fallacious reasoning or without real understanding. Traditional mathematical notation contributes to this problem. Symbols have ambiguous meanings that depend on context, and often even change within a given context. … [Therefore] [w]e require that our mathematical notations be explicit and precise enough that they can be interpreted automatically, as by a computer. As a consequence of this requirement the formulas and equations that appear in the text stand on their own. They have clear meaning, independent of the informal context. …
Computational algorithms are used to communicate precisely some of the methods used in the analysis of dynamical phenomena. … Computation requires us to be precise about the representation of mechanical and geometric notions as computational objects and permits us to represent explicitly the algorithms for manipulating these objects. Also, once formalized as a procedure, a mathematical idea becomes a tool that can be used directly to compute results.
Active exploration on the part of the student is an essential part of the learning experience. … That the mathematics is precise enough to be interpreted automatically allows active exploration to be extended to it. The requirement that the computer be able to interpret any expression provides strict and immediate feedback as to whether the expression is correctly formulated. Experience demonstrates that interaction with the computer in this way uncovers and corrects many deficiencies in understanding.
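That feedback loop can be sketched outside of Scheme, too. Here is a toy illustration in Python – not SICM's actual code, and the names and numbers are my own – of the idea that once a mathematical notion is written as a procedure, the computer immediately checks whether we formulated it correctly.

```python
# The derivative written as a procedure: a higher-order function that
# takes a function and returns (an approximation of) its derivative.

def derivative(f, h=1e-6):
    """Return a function approximating df/dx via central differences."""
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

# A classical mechanics toy: position of a falling body, x(t) = x0 - g t^2 / 2.
g = 9.8
x = lambda t: 100.0 - 0.5 * g * t**2
v = derivative(x)  # velocity is the derivative of position

# If our formulation is right, v(t) should equal -g t; the computer checks it.
print(abs(v(2.0) - (-g * 2.0)) < 1e-4)  # True
```

Had we mis-stated either the kinematics or the derivative, the check would fail at once – the "strict and immediate feedback" the preface describes.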
Use of programming languages this way is not limited to a few academics, either. Programmers in industry really do it too: the knowledge that goes into any reasonably sized program is far too much for anyone to actually hold in their head, especially over long periods of time. And comments, while they can help, violate the principle of a single source of truth: they can encode misconceptions about what the code actually does or the knowledge it encapsulates, or simply fall out of sync. So when we write code, we have to employ the same dynamic we have with writing: relying on it as a sort of cybernetic extension of our minds, letting it hold our thoughts about whatever we're programming while we do other things, and thinking with the mental models and knowledge encoded in it.
This is totally unconscious, most of the time: once you've integrated well into a codebase, the surrounding code with its accompanying ontology and knowledge just automatically filters into and structures your thinking about whatever you're writing or reading. And whenever you're writing code, you will tend to structure it and organize it to match your unspoken ontology of the problem, and encode your knowledge about the behavior required and the other behavior and properties of the surrounding system.
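A small, hypothetical illustration of the difference between comment-encoded and code-encoded knowledge (all names here are invented for the example):

```python
# Comment-encoded knowledge, which can silently go stale:
#
#   timeout = 30  # server drops idle connections after 30 seconds
#
# Code-encoded knowledge: the same fact lives in a name and an
# executable check, so the code itself is the source of truth.

SERVER_IDLE_LIMIT_SECONDS = 30

def request_timeout(margin_seconds=5):
    """Pick a client timeout that stays under the server's idle limit."""
    timeout = SERVER_IDLE_LIMIT_SECONDS - margin_seconds
    assert 0 < timeout < SERVER_IDLE_LIMIT_SECONDS
    return timeout

print(request_timeout())  # 25
```

The constraint is now part of the ontology the codebase hands to the next reader, rather than a remark that may or may not still be true.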
This means a few things.
- Even if you don't use them often, or even most of the time, your language needs to be able to express powerful abstractions when necessary. Limiting the range of abstractions your programming language can express, or the level of abstraction of those concepts, is like giving your programmers brain damage: they will no longer be able to think as well, because they'll have gaps in their external minds where concepts useful for modeling the world should go.
- Conversely, using overcomplicated ways of expressing ideas defeats the purpose of expressing them. You want to express ideas as clearly as you can, just like in writing.
- You want, as much as possible, a language that can express and manipulate whatever the primary elements of the ontology you're modeling are as first-class entities. But importantly, you don't want the mechanisms that let you talk about those entities to be highly abstract – you want them to be concrete, specific, like symbols in Lisp, not monads in Haskell, because speaking all in abstractions is not a good way to think. Humans tend to get tangled up in abstractions when they're not combined with concrete details.
- Your language should be highly multiparadigm – as long as a decent level of orthogonality is maintained – because, just as in people's heads, powerful concepts and high level abstractions need to be available when necessary, and so do different ways of expressing them. Different problems are most amenable to different ontologies (a system with many interacting stateful components, for instance, is probably best represented by objects), and different people think about things in different ways.
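As a tiny sketch of that last point, here is the same problem modeled in two paradigms within one language (Python here; the names are purely illustrative): a running total seen as a stateful object, and the same total seen as a fold over a history of events.

```python
from functools import reduce

# Object-oriented: state lives inside the component, matching an
# ontology of interacting stateful parts.
class Counter:
    def __init__(self):
        self.count = 0

    def increment(self, by=1):
        self.count += by
        return self.count

c = Counter()
c.increment()
c.increment(3)
print(c.count)  # 4

# Functional: the "state" is just a reduction over the event history.
events = [1, 3]
print(reduce(lambda total, by: total + by, events, 0))  # 4
```

Neither version is more correct; which one fits depends on whether your mental model of the problem is "a thing that changes" or "a history that accumulates" – and a multiparadigm language lets you write down whichever one you're actually thinking.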