Table of Contents
- 1. Software Development
- 1.1. A Case for Feminism in Programming Language Design programming philosophy
- 1.2. A Road to Common Lisp programming philosophy
- 1.3. TODO Common Lisp: the Language, 2nd Edition (plus a guide on how to modify it to be up to date with ANSI) programming
- 1.4. Complexity Has to Live Somewhere programming
- 1.5. TODO Design By Contract: A Missing Link In The Quest For Quality Software programming
- 1.6. Effective Programs programming philosophy
- 1.7. EQUAL programming history
- 1.8. Ethics for Programmers: Primum non Nocere programming software philosophy anarchism
- 1.9. Execution in the Kingdom of Nouns programming
- 1.10. Functional Programming Doesn’t Work (and what to do about it) programming
- 1.11. TODO Intuition in Software Development philosophy programming
- 1.12. Leaving Haskell behind? programming
- 1.13. Literature review on static vs dynamic typing programming
- 1.14. Maybe Not programming philosophy
- 1.15. Notes on Postmodern Programming philosophy programming
- 1.16. On Ada’s Dependent Types, and its Types as a Whole programming
- 1.17. TODO Ontology is Overrated: Categories, Links, and Tags philosophy software
- 1.18. TODO Programming as Theory Building philosophy programming
- 1.19. Proofs and Programs and Rhetoric programming philosophy
- 1.20. Semantic Compression, Complexity, and Granularity programming
- 1.21. Summary of 'A Philosophy of Software Design' programming philosophy
- 1.22. Technical Issues of Separation in Function Cells and Value Cells programming history
- 1.23. The Anti-Human Consequences of Static Typing programming
- 1.24. The epistemology of software quality programming philosophy
- 1.25. The Horrors of Static Typing programming
- 1.26. The Lisp "Curse" Redemption Arc, or How I Learned To Stop Worrying And Love The CONS programming hacker_culture anarchism
- 1.27. The Perils of Partially Powered Languages and Defense of Lisp macros: an automotive tragedy programming
- 1.28. The problematic culture of “Worse is Better” programming philosophy software
- 1.29. The Property-Based Testing F# Series, Parts 1-3 programming
- 1.30. The Safyness of Static Typing programming
- 1.31. The Unreasonable Effectiveness of Dynamic Typing for Practical Programs programming
- 1.32. Typed Lisp, A Primer programming
- 1.33. What Clojure spec is and what you can do with it (an illustrated guide) programming
- 1.34. What science can tell us about C and C++’s security programming
- 1.35. What We've Built Is a Computational Language (and That's Very Important!) programming software philosophy
1. Software Development
1.1. A Case for Feminism in Programming Language Design programming philosophy
I had to painstakingly convert this from a two-column PDF so that I can more easily read it in more places, instead of just on my ereader or at my desk. I hope anyone who finds this appreciates the effort, it took fucking forever.
In any case, despite what might for some be an inflammatory title – even as a leftist and a woman I recoiled a bit, wondering how feminism could possibly be applied to programming language design – this is a really excellent paper. It goes into detail about all the shortcomings of the PLT community with respect to studying the practical and human aspects of programming language design, and the cultural factors that exclude those who want to do it, as well as just exclude people in general. Well worth a read.
1.2. A Road to Common Lisp programming philosophy
This is not just an incredibly complete and excellent set of resources to get started with Common Lisp, literally enough to get anyone completely off the ground, but it's also a really good articulation of some of the reasons that Common Lisp is attractive even today (although I have my own list).
1.3. TODO Common Lisp: the Language, 2nd Edition (plus a guide on how to modify it to be up to date with ANSI) programming
Despite not quite being accurate to ANSI Common Lisp, I still find CLTL2 an invaluable reference for Common Lisp. It's much more comprehensible than the HyperSpec, while still being much more complete than any other book thanks to its position as an interim spec of sorts. Maybe someday I'll get around to editing my mirror to be up to date for ANSI.
1.4. Complexity Has to Live Somewhere programming
A more thorough elucidation of something I've said a lot before about how the mindless dogmatic pursuit of simplicity in the places most visible to programmers just pushes the necessary complexity needed to actually map to the world or requirements somewhere else, usually into less visible places like people's heads or glue scripts. This is very important to understand: trying to artificially "cut the Gordian knot" of complexity is a bad move, it's just putting the burden somewhere else, possibly somewhere worse. Correctness is actually more important than simplicity. A good comeback to the "worse is better" and "unix philosophy" mindsets.
1.5. TODO Design By Contract: A Missing Link In The Quest For Quality Software programming
Design by Contract seems like the perfect balance between high assurance/formal methods and comprehensibility, expressiveness-to-complexity ratio, and practicality, for reasons I've discussed elsewhere. I haven't read this paper yet but I'm very interested in doing so.
1.6. Effective Programs programming philosophy
I'm wary of Rich Hickey, since he really does seem to have built a cult of personality around himself in the Clojure community, but he is extremely smart, incredibly articulate, and very pragmatic and wise, and this is one of my favorite talks by him. I love his combination of being intelligent and careful and a Right Thing thinker, but also deeply pragmatic – aware of the need of programs to change over time, to be embedded into, and composed out of, heterogeneous and ever-changing systems, to be dynamic, and of how programming ideas affect those things.
1.7. EQUAL programming history
This essay by Kent Pitman defending design choices in ANSI Common Lisp with respect to equality operators and copying functions might seem of only historical interest – and it certainly is that, too – but it actually puts up a pretty good defense, in my opinion, of why those design choices were The Right Thing, and even maybe hints that other languages should do something similar. It also gets at a very important point: that types are not actually very indicative of intent, just some sort of general operational compatibility, without the introduction of copious newtypes at least, which has its own costs.
1.8. Ethics for Programmers: Primum non Nocere programming software philosophy anarchism
This essay proposes one core principle that could provide a solid foundation for a code of programmer ethics to grow around:
Programs must do always, and only, what the user asks them to do. Even if the programmers who made it consider that request to be unethical.
To justify the first clause, it outlines all the deep ways that people in the modern day trust and depend on our computers:
- We are dependent on them, because we need them for almost everything these days, including things that are required of us by law – which is why having a smartphone is borderline a human right.
- Even for things we don't need them to do, we trust them to act on our behalf, as user agents constantly, for things like communication, purchasing, remembering things for us, and more.
- Throughout all of that, we entrust them with vital, sensitive, personal information about us, our lives, and our loved ones.
This means that in a world – such as the one we have now – where software does not serve users, life with computers would be one of unending paranoia, suspicion, confusion, frustration, betrayal, theft, and extortion for users. A world we do not want.
To justify the second sentence, the essay shows how if software developers attempted to enforce their own ideas about morality on the users of their software, this would only break that trust, because then sometimes the software will betray them, or not do what they want, but what someone else, with a possibly different moral code, would want. He talks about other professions that have codes of conduct where they pledge to serve those who come to them for succor regardless of their personal feelings regarding morality, such as doctors, lawyers, and priests, and why those codes are important.
I'd like to add two more reasons – somewhat hinted at in the essay – that software developers should not attempt to pass judgement as part of their software (as opposed to in their capacity as people) on the ethics of what people do with their software:
- Surveillance: in order to know what your users are doing with your software in a way that's detailed and flexible enough to actually exclude unethical activities, you're probably going to need some kind of telemetry or at least storage of user activity. Moreover, many people believe their ethical obligations, when someone does something wrong, include publicly naming and shaming them, or reporting them to "the authorities," in which case actual surveillance is implied.
- Power: normalizing this idea that software developers should have power over what users can and cannot do with their software gives software developers direct and intimate power to morally police users. I don't believe in moral relativism – I think that individuals should be able to react to other people's perceived immoralities as they see fit – but I am an act consequentialist, and this is a level of power that seems wrong to me – like installing surveillance devices and electric collars on anyone so we can watch them for wrongdoing – because it involves more power over someone else than power over your own reaction to someone else: it extends further into their lives and is more invasive. It is also much more centralized than just "social consequences." And if we normalize such power, those with insane codes of ethics, as well as those with decent ones, will use it to enforce their wills.
This is why I'm always somewhat disturbed by "FOSS" licenses that violate freedom 0 of the four software freedoms. Yes, they're usually a certain type of queer leftist I generally agree with on what applications of software are bad, and which are good. But this is not a rule that leads to the greatest individual autonomy compatible with the equal autonomy of all in the long run. It's just posturing.
1.9. Execution in the Kingdom of Nouns programming
This is an incredible, classic essay from the most famous ranter of programming rants ever to put finger to key. It is an incredible takedown of Java-style OOP that is both witty and also intellectually sound. And as a corollary I think it functions well as a takedown of any bondage and discipline language.
1.10. Functional Programming Doesn’t Work (and what to do about it) programming
I've actually compiled several blog posts on the same subject by James Hauge, including the titular one, into one larger narrative, because they're all closely related and significantly enhance the discussion in the others, and really work well as a combined narrative.
What I took away from this article is mostly that there are certain problem domains where pure functional programming actually introduces greater complexity, brittleness, and overhead, enough to outweigh its benefits in explicitness and the more powerful architectures that referential transparency enables, even though pure functional programming is very beneficial in most cases. Therefore, we should carefully and sparingly, on a case-by-case basis, apply non-pure functional programming techniques to those problems where they're more helpful than harmful.
My experience in almost everything I've written, though, is that trying to go for even a basic level of purity would lead to an insane architecture. Take, for example, an Entity Component System:
If I want to have a system that gets all of the entities with a given set of components and makes modifications to one of those components based on the information from the other components, I can either:
- Have a query function that goes through the entity system and returns tuples containing the applicable components for all of the entities that have the requested components, alongside the entity ID. This can be reused widely for other queries, and thus can be made very advanced, with negation, logical operators, backtracking, and so on. Then write a simple loop that goes through each of those and takes exclusive write access to the component it wants to modify and directly modifies it in place.
- Map over the list of tuples returned by the query function above, producing a tuple of the entity ID and the modified component, which then has to go into a separate function that iterates over the entire list of entities again, in order to produce a new list of entities where the things indicated to be modified by the change list produced by the previous map are replaced.
- Iterate through the entire list of entities in the original map and produce the new list as you go, but then you can't reuse the entity-selecting code; you have to do it all manually in the loop. So you end up duplicating code, and it makes it difficult to have systems that operate on more interesting and complex selections that maybe do backtracking and such.
Even assuming a magical "sufficiently smart compiler" that can optimize away the copying implied by the latter two options, only the first option seems like a good option. The second introduces two separate loops to do the same thing, which both doubles how long it takes, and also just makes the code more complex and introduces more duplication. The third option is the worst of all, because while you get rid of the duplication of the loop, you can now no longer use the query system.
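A minimal Python sketch of that first option (all names here are hypothetical, just to make the shape concrete): a reusable query over entities stored as dicts of components, and a system that is just a simple loop mutating one component in place.

```python
# Hypothetical minimal ECS: entities are dicts mapping component
# names to component data, stored by entity ID.
entities = {
    1: {"position": [0.0, 0.0], "velocity": [1.0, 2.0]},
    2: {"position": [5.0, 5.0]},                          # no velocity
    3: {"position": [9.0, 9.0], "velocity": [-1.0, 0.0]},
}

def query(*components):
    """Reusable query: yield (entity_id, components) for every entity
    that has all of the requested components. Because this is shared,
    it can grow features (negation, backtracking, ...) in one place."""
    for eid, comps in entities.items():
        if all(c in comps for c in components):
            yield eid, comps

def movement_system():
    """The system itself: a plain loop that takes the query results and
    modifies one component in place."""
    for eid, comps in query("position", "velocity"):
        pos, vel = comps["position"], comps["velocity"]
        pos[0] += vel[0]
        pos[1] += vel[1]

movement_system()
```

The query machinery stays generic and reusable, while the mutation is a few obvious lines: exactly the division of labor the first option describes, with no second pass over the entity list.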
The other point I took from this article series is that there are certain types of violations of referential transparency that you can add that, although they may transitively infect a large portion of your code base, are small and simple enough that there isn't actually a significant cost to them. There is a difference between the net magnitude of a referential transparency violation's effects on your codebase, and just the scope of the things that are affected, independent of how much they're really affected and how much it adds up. I think his example of a random number generator is a really good example of this. Literally almost every programming language except Haskell just allows you to directly get random numbers, even languages that are otherwise very purely functional (Erlang doesn't even let you modify things, ever), and the reason for this is that if you use a random number generator somewhere deep in your code, yes, it can sort of contaminate the referential transparency of many other places in your code base and technically make their behavior non-deterministic as well. But since it's controlled by a seed, and it's a very simple type of state in itself, that usually isn't actually a problem.
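The seed point is easy to see in a small sketch (mine, not the author's): a function that draws random numbers technically isn't referentially transparent, but because the generator's state is fully determined by its seed, the "non-determinism" is entirely reproducible.

```python
import random

def noisy_sum(values, rng):
    """Technically impure: the result depends on the generator's state,
    not just on `values`."""
    return sum(v + rng.random() for v in values)

# Two generators with the same seed produce identical streams, so the
# "non-deterministic" function is reproducible end to end.
a = noisy_sum([1, 2, 3], random.Random(42))
b = noisy_sum([1, 2, 3], random.Random(42))
assert a == b
```

The violation transitively touches every caller of `noisy_sum`, but its net magnitude is tiny: one seed pins down all the behavior.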
One thing that might help the predictability (for debugging) of functions that violate referential transparency by reading global state (so this only helps with one specific area, but yeah) may be the use of dynamic scope, so that you can treat global state as a variable you can pass in.
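As a sketch of that idea (not from the articles), Python's contextvars module provides essentially dynamically scoped variables: a deep callee reads ambient state, but a caller can rebind it for the dynamic extent of a call, so the "global" is debuggable and controllable like a parameter.

```python
from contextvars import ContextVar, copy_context

# A dynamically scoped "global": callers can rebind it for the duration
# of a call without threading it through every function signature.
log_level = ContextVar("log_level", default="INFO")

def deep_helper():
    # Reads ambient state, but that state is explicit and rebindable.
    return f"logging at {log_level.get()}"

def with_binding(var, value, fn):
    """Run fn with var temporarily bound to value (dynamic extent):
    the binding is visible to everything fn calls, then discarded."""
    ctx = copy_context()
    def runner():
        var.set(value)
        return fn()
    return ctx.run(runner)

print(deep_helper())                                   # default binding
print(with_binding(log_level, "DEBUG", deep_helper))   # rebound inside
print(deep_helper())                                   # default restored
```

This is the same trick as Common Lisp special variables: the global state is still there, but a test or debugging session can pass in its own binding instead of mutating shared state.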
It could be argued that Haskell, too, has an escape hatch, with things like the State monad and IO monad, but there are two problems with this answer:
- This is pulling a kingdom of nouns: you've artificially restricted the set of things your programming language can do directly, so you're using the abstractions it can access to claw those things back. They might be more "first class" in the sense that, since these capabilities are now expressed in terms of other parts of the language, you can now talk about them as values in the language – and that's pretty cool! – but they're second-class in reality, because you've got to sit on top of a tower of type abstractions and syntactic sugar to use them, and those abstractions are leaky: if you mess them up, the program will fall apart into very different bedrock abstractions that are more difficult to reason about. Yeah yeah, monads aren't that hard once you grok them, "I don't truly understand pure functional programming," whatever – but an imperative program breaking down into a page of type theoretic abstractions because that's its bedrock abstraction is still a qualitatively different (and worse) experience than a simple error from a language that actually knows about imperative code.
- These abstractions don't compose. There's no well-defined, consistent way to compose different monads, which means that it's very difficult to actually use them in more complicated situations where you may want more than one type of effect.
- Worse, because a lot of basic monads have no implementation aspect – they're just empty type tags with certain actions written for them – the operations one might want to do are scattered between a lot of different monads, and also occasionally confusingly duplicated between them, because there's nothing to keep them consistent.
- There's also a function-coloring effect to monads.
- When you're operating within the do-notation of a monad, you basically are just using an imperative language, but a particularly awkward, anemic one – because most of Haskell language and library design doesn't go into making the imperative side of it actually good to use – leading to people reinventing C in do blocks.
1.11. TODO Intuition in Software Development philosophy programming
Painstakingly generated from a PDF.
Abstract
A characterization of the pervasiveness of intuition in human conscious life is given followed by some remarks on successes and failures of intuition. Next the intuitive basis of common notions of scales, logic, correctness, texts, reasoning, and proofs, is described. On this basis the essential notions of data models of human activity and of software development, as built on human intuition, are discussed. This leads to a discussion of software development methods, viewed as means to overcoming the hazards of intuitive actions. It is concluded that programmers' experience and integrity are more important than their use of methods.
I haven't read this yet, but given my attitudes toward programming as a trade and a craft, I think it'll be really interesting reading.
1.12. Leaving Haskell behind? programming
This article, from the perspective of someone who used Haskell for a decade, even in industry, and loves it still even as they choose to set it aside, echoes a lot of my thoughts and feelings toward Haskell as someone who learned it but quickly drifted away because of the problems I saw with it. There's a lot to like about Haskell, a lot about it that is beautiful and powerful, but also severe and endemic problems with the culture surrounding it (namely, its obsession with type theoretic explorations, which is often found to be impractical in larger scale projects in the long run, as the article points out) and with its ecosystem.
1.13. Literature review on static vs dynamic typing programming
This is a really excellent – thorough, cogent, even-handed – analysis of the state of the scientific research on the benefits and drawbacks of static versus dynamic type systems. It really puts to rest the notion that we have any strong reason to condemn or insult those who prefer one or the other, at least for now. Perhaps in the future, with better studies, the benefits of one or the other may be concretely established, but for now it seems more like personal preference than anything. Personally, I fall on the side of static typing, as it's just really helpful to prevent me up front from making annoying mistakes or forgetting things, but from my experience it really is just that, a nice helper that can make things a bit easier, but nothing game-changing in terms of program correctness. This literature review seems, if anything, based on the effect sizes, to support that notion, and maybe should incline us to look more kindly on things like gradual typing that can allow us to have the best of both worlds.
1.14. Maybe Not programming philosophy
This is another excellent talk by Rich, describing the shortcomings and misconceptions of traditional nominal type systems such as those found in Haskell. Haskell and similar but more advanced type systems (e.g. Idris) are often treated as the uncomplicated Right Thing, only needing to be more powerful or more consistent or more extreme, but while Hickey seems to be focused on a few specific flaws in such nominal type systems, I think those flaws show a glaring underlying philosophical issue with nominal type systems as a whole. I plan to write on why structural type systems are better eventually.
1.15. Notes on Postmodern Programming philosophy programming
Fast paced, entertaining, full of creativity and variety, tongue-in-cheek, well written, and containing so many nuggets of wisdom I've learned myself about the nature of programming as an activity that takes place in, and must conform to, the real world. No totalizing narrative works!
1.16. On Ada’s Dependent Types, and its Types as a Whole programming
Another article on the idea of a dependent type system that gets there by being pragmatic, down-to-earth, and easy to understand, instead of through category theory abstractions and type system complexity: Ada gives you a pretty expressive static type system, verifies at runtime whatever the static system can't capture, and also lets you create and manipulate new types at runtime.
1.17. TODO Ontology is Overrated: Categories, Links, and Tags philosophy software
A profound and crucial piece about information organization (crucial in the world of the internet, where information is vast and distributed), and it also applies to many other areas of software, such as its development, where people are tempted to introduce ontologies unnecessarily. I haven't read through this as thoroughly as I'd like, so I'm gonna go back to the well soon.
1.18. TODO Programming as Theory Building philosophy programming
Abstract
Peter Naur’s classic 1985 essay “Programming as Theory Building” argues that a program is not its source code. A program is a shared mental construct (he uses the word theory) that lives in the minds of the people who work on it. If you lose the people, you lose the program. The code is merely a written representation of the program, and it’s lossy, so you can’t reconstruct a program from its code.
This seems like an extremely interesting paper given my human-focused approach to programming, much like Naur's other, and I'm excited to get around to reading it. It's also important in the context of the high turnover and growing mistreatment of programmers-as-workers in our industry, and the looming threat of (CEOs thinking they can get away with) our replacement with large language models.
1.19. Proofs and Programs and Rhetoric programming philosophy
Sometimes, when I tell a new person that I absolutely love programming but hate math, they'll express surprise as to why. If they're a computer scientist, they'll quote Curry-Howard at me and tell me that "programming is math." This infuriates me. Here is an absolutely excellent article from a mathematician and computer scientist who likes math and wants the two disciplines to be more similar (if that ever happens, I'm quitting), explaining why exactly this comparison, and the cliched phrases from CS and math people that accompany it, are not only wrong, but condescending and frustrating to people. If I had written the same thing out myself, it probably would've ended up saying the exact same things, so this is one of those cases where it's more efficient to just point to an existing article that says what I mean, rather than writing it myself.
1.20. Semantic Compression, Complexity, and Granularity programming
These two essays (joined into one here) have had perhaps the single greatest impact of literally anything on how I program and think about programming and good programming abstractions. The idea that we should wait to abstract things until we actually know how they will be used and instantiated in practice, instead of trying to predict what we'll need. The idea of iteratively abstracting and refining interfaces. The idea of focusing on the end result, the goals and what you want out of them, instead of the methodology or succinctness, to avoid complexity or confusion. The need to maintain continuous granularity in APIs, so there aren't holes. Well worth a read.
1.21. Summary of 'A Philosophy of Software Design' programming philosophy
I found this to be the best summary I've seen yet of actually practical, well conceived, software construction methodology. A good way to form taste.
I really like the idea of deep modules – ones that present a simple yet powerful interface that hides a ton of complex logic and functionality. I think this should be applied even on the function level – large functions are not a bad thing, they don't really threaten comprehensibility in my opinion as long as the whole function is on the same level of abstraction; they only threaten reusability, but to that I say: semantic compression, my friend.
Another really powerful idea is that of worrying about cognitive complexity. I think this is deeply important in this industry, where we deal with a lot of essential complexity, and where complexity, when unchecked, can grow infinitely. I think it's important to remember that abstraction itself is a form of complexity – not just because it's leaky, but because if you abstract beyond concrete referents, reasoning becomes more difficult.
1.22. Technical Issues of Separation in Function Cells and Value Cells programming history
This paper is an extremely thorough and even-handed discussion of the benefits and drawbacks of Lisp-1s (like Scheme) versus Lisp-2s (like Common Lisp), and even articulates the point that ultimately they're both ~Lisp-5s by default due to macros, packages, etc, and Lisp-ns if you use macros and hash tables to assign arbitrary meaning to symbols, so they're not that different in the end. Ultimately whichever side you fall on, this is a useful reference to have handy, and of historical value as well.
1.23. The Anti-Human Consequences of Static Typing programming
Non-gradual typing subjects the human programmer to the machine. This is a problem, because while the machine can check some limited set of properties about your program, it can't know what your actual, practical, local, nuanced goals are – including whether you even need perfect consistency on the things it can check or not – so you're subjugating human values to machine values! Essentially a moral argument against type systems, for what that's worth.
1.24. The epistemology of software quality programming philosophy
"Studies show that human factors most influence the quality of our work. So why do we put so much stake in technical solutions?" I really agree with this one, as someone very interested in the human side of software. Very well worth talking about.
1.25. The Horrors of Static Typing programming
In this video, a type theorist who works on the type systems of compilers for statically typed languages walks through some of the incredible complexity that static typing can bring when attempting to type even basic things like numbers and collections with subtyping and implementation inheritance (phrased as object-oriented, but many non-OOP languages have those features, because they're so incredibly useful), as a corrective to the idea that static typing is "always good" and using dynamic typing is always bad and illogical. Instead, he pushes for a more cautious, thoughtful approach to understanding the tradeoffs on a case by case basis, and reverse-gradually-typed languages that let us make that choice, while staying statically typed by default.
1.26. The Lisp "Curse" Redemption Arc, or How I Learned To Stop Worrying And Love The CONS programming hacker_culture anarchism
Another essay (by the same author as Terminal boredom, Ethical software is (currently) a sad joke, and Maximalist computing), this time attacking the so-called Lisp Curse from the angle of a radically decentralist, anti-organizationalist, egoist anarchist – namely, defending the idea that a community that can experiment widely with different language constructs, syntaxes, and so on in order to find the right one, while still remaining cross- and backwards-compatible, is actually very beneficial, as it allows the community to much more efficiently home in on actually good solutions instead of having to stumble around blind before getting locked into path dependency and forcing a particular solution onto the entire community. Not only that, but such a community can still arrive at general standards by way of network effects, rendering moot most of the problem of the Lisp Curse.
See also: What is wrong with Lisp?
1.27. The Perils of Partially Powered Languages and Defense of Lisp macros: an automotive tragedy programming
Both of these blog posts (the second one in much greater detail) use real world, concrete industry examples to show that when the programming language used doesn't have enough power to express domain specific languages, data formats, and high level abstractions, within itself, and when its development tools don't allow live introspection and hot code reloading and rapid prototyping, those things don't just go away – developers don't actually "just stick to the basic language." Because that's deeply inefficient. Instead, they implement a plethora of domain specific languages and data formats separate from the main language, all incompatible and partially powered, which makes everyone's lives harder. Thus, in the end, the rejection of languages that are powerful at making DSLs like Lisp (or Haskell, in the case of the first article, but Lisp is far better at it than Haskell, and Haskell has other issues too) is not a practical decision made to ensure the software being built is comprehensible to as many people as possible and doesn't get lost under DSLs and complexity. It's a short-sighted, anti-intellectual exercise.
1.28. The problematic culture of “Worse is Better” programming philosophy software
Richard P. Gabriel's essay gave a name to the idea of "worse is better" and thus unleashed a monster that has now become a dogma. While there is a kernel of truth to the idea that worse is better – namely, leaving space for things to evolve, be flexible, and be adapted; remembering to stay pragmatic instead of getting lost in abstract planning or beauty; and trying to iterate and get early versions of an idea out quickly so they can interface with the world and catch on – used as a dogma it is ultimately harmful. This essay describes how that happens: how, as a slogan, it has become a thought-terminating cliche used to justify doing whatever is easiest in all situations, without having to actually step back and think about good design, and how those bad historical bedrock abstractions have led inexorably to more and more bloat and complexity piled on top to get away from those bad abstractions.
Basically, no one seems to grasp that when stuff that’s fundamental is broken, what you get is a combinatorial explosion of bullshit.
I plan to write an essay on what aspects of worse is better are worth keeping, good correctives toward the tendency of the Right Thing toward software planning and modernism, but this is a good critique of the idea that blindly following worse is better is itself better. A good corrective to some of the ideas of Notes on Postmodern Programming.
1.29. The Property-Based Testing F# Series, Parts 1-3 programming
1.29.1. The Enterprise Developer From Hell
Does an incredible job motivating randomized property-based testing and demonstrating how it's different (and better) than regular unit testing (or non-dependent static type systems). This is probably the single best place to start for those interested in PBT.
1.29.2. Understanding FsCheck
Introduces a PBT library for F#, but doubles as an introduction to the whole field of such libraries, since they all operate similarly. It gives you a really good starting understanding of how they work and how to use them.
1.29.3. Choosing properties for property-based testing
This one is the real meat, the real magic. The piece is packed with concentrated, useful wisdom: an incredibly actionable conceptual framework for actually finding properties. Extremely highly recommended.
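The article's central move is cataloging reusable families of properties. A few of the most common patterns, sketched in Python against the built-in `json` module and `sorted` (the function names here are mine, not the article's):

```python
import json
from collections import Counter

# Round-trip ("there and back again"): serializing then deserializing
# should be the identity on the value.
def roundtrip_holds(value):
    return json.loads(json.dumps(value)) == value

# Invariant ("some things never change"): an operation preserves a
# property of its input; e.g. sorting keeps exactly the same multiset
# of elements.
def sort_preserves_elements(xs):
    return Counter(sorted(xs)) == Counter(xs)

# Idempotence ("the more things change, the more they stay the same"):
# applying the operation twice is the same as applying it once.
def sort_is_idempotent(xs):
    once = sorted(xs)
    return sorted(once) == once

assert roundtrip_holds({"a": [1, 2, 3]})
assert sort_preserves_elements([3, 1, 2, 1])
assert sort_is_idempotent([5, 4, 4, 0])
```

Each of these is a template you can instantiate for your own encode/decode pairs, normalizers, and transformations without ever needing to know the "right answer" for any particular input.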
1.30. The Safyness of Static Typing programming
An author who (like me) likes static manifest types introspects and analyses the psychological factors that might lead people to assume static types must automatically be safer, even though empirical studies generally fail to bear that out in a meaningful way. It's important to consider such psychological factors, even if (as I think they do) good static type systems confer some real benefits.
1.31. The Unreasonable Effectiveness of Dynamic Typing for Practical Programs programming
(Used Whisper to make a transcript on my local computer, edited it a bit. If you wanna see the slides, watch the video.)
This talk obviously made a lot of static typing proponents angry. The speaker was accused all over the internet of not understanding what static types are, or why they're useful. But I actually think he's completely right.
The criticism that he didn't use F#'s type system to its fullest potential to avoid the lapse in correctness he demonstrated in its unit typechecking is beside the point – he was illustrating a general point that static types generally indicate structure and the presence or applicability of certain operations, but not the specific context and intent of the value in question. That's why he goes on to talk about how much munging of strings and JSON and so on we have to do every day – those are also largely structural types that don't encode actual meaning or intention or context. And this is absolutely true in the general case.
And of course, yes, you could use a type system to encode these things if you try much harder (the point is that it doesn't by default), with a million newtypes and phantom type parameters everywhere. But then you fall onto the second horn of his argument: the development costs, and the costs to the flexibility and modularity of software, of types that rigid (even in a good language like F# that makes such types reasonably simple) probably outweigh whatever benefits they might have.
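A small Python sketch of that distinction (the domain and function names are invented for illustration): structurally, two IDs are both just strings, so nothing stops you from swapping them; encoding intent requires deliberate extra machinery like `typing.NewType`, which is exactly the ceremony the talk weighs against its benefits.

```python
from typing import NewType

# Structurally identical parameters: both are strings, so a checker
# sees nothing wrong if a caller passes them in the wrong order.
def cancel_order_untyped(customer_id: str, order_id: str) -> str:
    return f"cancelled {order_id} for {customer_id}"

# Swapped arguments: structurally fine, semantically wrong, no complaint.
cancel_order_untyped("order-99", "customer-7")

# Encoding intent takes deliberate extra work:
CustomerId = NewType("CustomerId", str)
OrderId = NewType("OrderId", str)

def cancel_order(customer_id: CustomerId, order_id: OrderId) -> str:
    return f"cancelled {order_id} for {customer_id}"

# Now a static checker (e.g. mypy) would flag swapped arguments,
# but every call site has to wrap its plain strings first:
cancel_order(CustomerId("customer-7"), OrderId("order-99"))
```

The wrapping cost looks trivial in a five-line example; spread across every string, ID, and measurement in a large codebase, it is the trade-off the talk is pointing at.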
The other criticism insists that types can catch more than that 2% of TypeErrors. But can they really? Unless you're doing extremely hardcore data-driven design, all ML-style types really offer you is some assurances about structure and applicable operations, polymorphism, and exhaustiveness checking. The first two are TypeError-related things. The last is reasonably easy to remember to do on your own and usually annoying unless you're using custom ADTs absolutely everywhere. (I still like it, though, because I'm forgetful.)
And then there's the fact that, empirically, there's no sizeable effect on program reliability thanks to static typing over dynamic typing, at least as far as studies have been able to show, as seen in the other links on this page.
1.32. Typed Lisp, A Primer programming
As someone who prefers expressive type systems with sum and product types, refinement types, parametric polymorphism, and exhaustiveness checking, all of which help me keep track of states and constraints, but who has always wanted to use Lisp despite thinking it lacked an expressive-enough type system (and thus dreading "undefined is not a function" style errors and puzzling out what data library functions expect and return), I found this article deeply enlightening. Discovering that Lisp has such an expressive type system, and seeing it expressed in terms familiar to me from ML languages, was really cool. Yes, it's checked at runtime, but SBCL can check most things statically while leaving the advanced stuff (`satisfies`) to runtime.
It was also striking to find out that Common Lisp is almost dependently typed: you can create and manipulate types with the language itself, since they're just ordinary symbols and lists; you can express constraints using the language itself; and you can express types and constraints using term-level values, not just types. It's just that some of this is verified dynamically instead of statically.
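A rough analogy in Python rather than Lisp (all helper names here are invented): because Lisp type specifiers like `(integer 0 100)` or `(and ... (satisfies evenp))` are ordinary symbols and lists, you can build and compose them with normal code, the way you would any other data. A dynamically checked sketch of that idea:

```python
# Type specifiers as ordinary runtime values: predicates built and
# composed by regular code, loosely analogous to Common Lisp's
# compound type specifiers and `satisfies`.
def satisfies(pred):
    # analogous to (satisfies pred)
    return pred

def integer_range(low, high):
    # analogous to the CL type specifier (integer low high)
    return lambda x: isinstance(x, int) and low <= x <= high

def and_type(*specs):
    # analogous to (and spec1 spec2 ...); checks left to right
    return lambda x: all(spec(x) for spec in specs)

even = satisfies(lambda x: x % 2 == 0)
small_even = and_type(integer_range(0, 100), even)

assert small_even(42)       # an even integer in [0, 100]
assert not small_even(7)    # in range, but odd
assert not small_even(200)  # out of range
```

In Lisp the compiler can additionally discharge many of these checks statically; the point of the analogy is only that the types themselves are first-class values you construct with the language.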
1.33. What Clojure spec is and what you can do with it (an illustrated guide) programming
An incredibly powerful demonstration of what is essentially an advanced structural type system combined with a randomized property-based testing system can do for you. It really opens your eyes to what can be done to verify the correctness of programs even without static types. It seems more powerful, flexible, and expressive than anything short of something like Idris or model checking (which do basically the same thing, but destroy having a single source of truth for your application logic), and it comes almost for free in terms of cognitive overhead!
1.34. What science can tell us about C and C++’s security programming
Empirical results are painfully rare in computer science. But, as this blog post covers, we have many extremely strong real-world pieces of evidence that memory-unsafe languages are horribly insecure, and that human beings, no matter how good they are, are not up to the task of avoiding memory unsafety. This is why (concurrent) garbage collection should be built in at as low a level of our systems as possible, and everything that needs real-time reliability, or needs to sit lower than that, should use automatic reference counting or borrow checking.
1.35. What We've Built Is a Computational Language (and That's Very Important!) programming software philosophy
Reading this was honestly completely eye-opening. I'd already thought that programming languages could be tools for thought, but seeing what Wolfram Language can do solidified it for me. Wolfram Language is the ultimate confirmation that a computational expression of actual ideas is not only possible, given a language high-level enough to let you express things on their own terms without getting tangled up in abstractions, but could be profitable: a whole new way of expressing and clarifying ideas.