A plea for simpler FP

Software development is a terribly young industry in comparison to similar fields like engineering and architecture. In so many ways, we have no idea what we're doing; we can't even agree on basic methodological issues, let alone more complex questions, and we have almost zero empirical evidence one way or the other for most practical debates – not even meaningful case studies, let alone proper scientific studies with decent sample sizes and good design. (Memory safety and static types are really the only things we have empirical evidence in favor of.)

Meanwhile, the theoretical/academic branch most related to our industry, computer science, has almost totally left us to our own devices, preferring to climb the ladder of Platonic abstraction toward ever purer and simpler mathematics. Concrete instantiations of those abstractions do regularly show up in everyday code, but understanding the most abstract category a common thing belongs to is rarely practically useful in everyday life. More generally, the field seems mostly concerned with problems different from those of the day-to-day construction of good software people actually use: computability, models of computation, mathematical proofs, and so on. Most academics in computer science aren't even concerned with how to write software that runs continuously, interacts with users and the external world, and has to adapt to unclear or dynamic requirements, nor with designing programming languages as holistic systems of software, practice, and people; they prefer to work on software that runs once, with a static set of inputs, and produces a final output – or never runs at all.

Even formal methods, the academic discipline that comes closest to offering something like a real engineering practice for software development, has tradeoffs that make it impractical, or at least hard to accept, for working programmers (and, I'd argue, serious methodological flaws besides).

This has left the software development industry, desperate for guidance on how to approach building software, to fall under the sway of an endless parade of absurd fashions, each pitched in breathless terms as the solution to reliable, maintainable code, fewer errors, on-time delivery, and a million and sundry other things. Each of these fashions had some good points – a kernel of truth – but because their ideas were applied dogmatically, their prescriptions treated as sacred, and their doctrines fixed in place, they became more harmful than helpful.

Among all of these, Object-Oriented Programming has probably been the most devastating. The OOP craze had a few crucial features:

  1. It was embraced with religious fervor, an ever-growing set of hard-line activists pushing it as the ultimate sacred law and the secret to good software. Anyone who didn't agree was ridiculed as stupid or blind. Eventually, the industry at large was convinced, and OOP ruled over it as an iron-fisted god-king, its sacred laws absolute and incontrovertible, taught in schools, written into textbooks, and required by managers, specifications, and even regulations.
  2. Its solutions were applied like violence: more was always the answer. If you found it awkward or unwieldy, or ran into problems with its abstractions, it was because you didn't understand it, or because you weren't using it right, or because you simply hadn't done enough of it and needed to do more. Escape hatches were non-existent, or treated as Satanic anathema where they did exist.
  3. It focused on modelling your domain, and the behavior of your program, by restricting you to a highly limited set of concepts that didn't map easily onto the most natural mental model of every problem or domain. It was this restriction of your applied ontology to a tiny set of entities that was supposed to bring all the benefits; somehow, sucking concepts and reality through that thin straw would prevent errors.
  4. As a result of point (2), and in an attempt to achieve things that would have been trivial had point (3) not also been the case, ever more complex and abstract constructs, known as design patterns, were created to compensate. Whole books were written about them. It was often claimed that these design patterns were somehow superior to the more direct constructs other languages or methodologies used to express the same concepts (procedures, first-class functions, multiple dispatch, even careful use of multiple inheritance, to name a few), but whatever benefits they had were vastly outweighed by the added complexity and abstraction.

Nowadays, in the 2020s, we know that OOP doesn't work as a primary methodology – that some of its core concepts were more harmful than helpful (trying to build your software's ontology as a rigid is-a hierarchical taxonomy is unlikely to work out, because things don't work that way in the real world), that its attempt to rigidly limit the vocabulary software developers could use to conceptualize a problem was detrimental, and that it didn't deliver on its promise of fewer bugs.

So that's all well and good, right? We're safe now. No more dogmatic ideologies will come to take over the software industry, because we've learned our lesson?

I hope so, but I'm worried we haven't.

Everyone is so focused on being relieved that we finally left dogmatic OOP behind – keeping only the individual constructs and techniques from it that were legitimately useful, writing retrospective articles explaining why it was bad – that nobody is on the lookout for other, similar fads that could do equal damage to our industry. But I think I see dust clouds approaching on the horizon, and I don't think it's just the wind. See, this typed pure functional programming thing? I think it could easily turn into another OOP. It bears all the same signs, after all:

  1. Dogmatic advocacy as the solution to all our problems.
  2. Solutions applied like violence: it claims to be able to do things in an ideologically pure way that it actually cannot.
  3. A focus on rigidly limiting the set of things your language can express directly (such as side effects and mutation), instead of providing a powerful multiparadigm set of concepts for building a useful ontology.
  4. An ever-higher tower of abstractions used to claw back the features lost to the previous point, in the process producing a library of extremely abstract and often confusing design patterns that everyone must learn just to understand basic code. Whole books are written.

Maybe I'm wrong on this. Maybe, despite the warning signs, this time it's truly different. Maybe pure FP really is the one true ideology that will actually work, and we should all bow our heads.

I don't think so, though. Typed pure functional programming certainly doesn't suffer from exactly the same problems OOP did, but I think we're in for an extremely harsh awakening as an industry if we adopt it wholesale, as sacred law, the way we did OOP. We're going to wake up from our pure-FP bender with a massive hangover and our life savings drained for the second time, and we're going to have to slowly pick up the pieces and revamp our textbooks yet again.

The fundamental reason for this is simple. The core problem in software development is complexity: everything is about figuring out how to reduce accidental complexity and cognitive load to a minimum, and then efficiently and effectively manage what remains. Typed pure functional programming does have useful ideas to offer here – it reduces dependencies between things by sequestering mutable state and side effects, and it can prevent incorrect states from occurring at all. But when purity is taken as an end in itself, it necessarily excludes representing certain ontologies, and certain operations like iteration, mutation, state, and side effects, directly in the base language. This matters because you really do need things like iteration and mutation fairly often, and so you're forced through multiple layers of abstraction and syntactic sugar to express them: abstractions which do not compose, which leak, which are very difficult to explain because they have no concrete referent, and which have a host of other drawbacks. This is just like how Java, lacking multiple dispatch, had to invent the visitor pattern, and, lacking lambdas, had to invent the strategy pattern, and so on. Trying to achieve total purity thus adds a new form of complexity back in, as cognitive load and accidental complexity. Some claim Haskell is the "best imperative language" on the strength of abstractions like these, but they carry all the same drawbacks under the hood (as admitted in that blog post), just with nicer-looking syntax.
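To make that cost concrete, here's a minimal sketch in Scala of what threading one humble counter through a pure program looks like, compared to just mutating it. The State type is hand-rolled here, standing in for Haskell's Control.Monad.State; all the names are illustrative:

```scala
object PurityCost {

  // Direct, impure version: a local var and a while loop.
  // Three lines of intent, nothing extra to learn.
  def sumDirect(n: Int): Int = {
    var total = 0
    var i = 1
    while (i <= n) { total += i; i += 1 }
    total
  }

  // The pure encoding: to "mutate" a counter, we first need a State monad...
  final case class State[S, A](run: S => (A, S)) {
    def flatMap[B](f: A => State[S, B]): State[S, B] =
      State { s => val (a, s2) = run(s); f(a).run(s2) }
    def map[B](f: A => B): State[S, B] =
      flatMap(a => State(s => (f(a), s)))
  }
  object State {
    def get[S]: State[S, S] = State(s => (s, s))
    def set[S](s: S): State[S, Unit] = State(_ => ((), s))
  }

  // ...then express "total += i" as a value in that monad...
  def step(i: Int): State[Int, Unit] =
    for {
      total <- State.get[Int]
      _     <- State.set(total + i)
    } yield ()

  // ...then fold the steps together and finally "run" the whole thing.
  def sumPure(n: Int): Int = {
    val program = (1 to n).foldLeft(State[Int, Unit](s => ((), s))) {
      (acc, i) => acc.flatMap(_ => step(i))
    }
    program.run(0)._2
  }

  def main(args: Array[String]): Unit = {
    assert(sumDirect(100) == sumPure(100)) // same answer: 5050
    println(sumPure(100))
  }
}
```

The pure version isn't wrong, and in a real codebase the State machinery would come from a library, but the point stands: a three-line loop now routes through a type constructor, flatMap, for-comprehension desugaring, and an explicit run step.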

You and I might find concepts like monads, monoids, and applicative functors easy enough to understand when we're reading a Haskell blog post over a lunch break, maybe while sipping coffee if you go for that sort of thing. But the set of abstractions that pure functional programmers need in order to get around purity is not closed – it is continually expanding – and even when we can understand each abstraction under good conditions, it still imposes cognitive load, still takes effort to wade through. Do we really want to pile Haskell type-system shenanigans, debates over typeclass hierarchies, complex type inference, and pages of lambda-calculus-looking type errors on simple imperative code on top of the already-existing complexity of our problem space and our code base? Do you really want to be wading through applicative functors at 9 A.M. when prod is down and you just need to fix a bug?
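Even a toy case shows the tax. Here's a sketch in Scala (map3 is hand-rolled, standing in for cats' mapN or Haskell's applicative operators; Config and every other name is hypothetical) of combining three optional config values applicatively versus directly:

```scala
object NineAM {
  final case class Config(host: String, port: Int, user: String)

  // The applicative combinator you must already know before
  // the intent of any call site becomes visible.
  def map3[A, B, C, D](fa: Option[A], fb: Option[B], fc: Option[C])
                      (f: (A, B, C) => D): Option[D] =
    fa.flatMap(a => fb.flatMap(b => fc.map(c => f(a, b, c))))

  // Applicative style: compact, but opaque until the combinator
  // is part of your vocabulary.
  def parsePure(host: Option[String], port: Option[Int], user: Option[String]): Option[Config] =
    map3(host, port, user)(Config.apply)

  // Direct style: slightly longer, but every step is a plain
  // pattern match you can read at 9 A.M. with prod down.
  def parseDirect(host: Option[String], port: Option[Int], user: Option[String]): Option[Config] =
    (host, port, user) match {
      case (Some(h), Some(p), Some(u)) => Some(Config(h, p, u))
      case _                           => None
    }
}
```

Neither version is hard, but the applicative one only reads smoothly once the combinator is internalized – and the set of combinators to internalize keeps growing.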

Especially when, in many cases, you don't need such abstract concepts to achieve the same goals. You can carefully control mutability – making sure it's demarcated wherever it occurs, avoiding the invisible dependencies between far-flung parts of your program that mutable references create, and keeping the flow of data modification on a linear path through your program – without a State monad, via something like Rust's borrowing model and explicit mutability annotations. You can make side effects explicit through object capability systems, which work not with abstract category-theory types but with simple, straightforward tokens that you pass around and add to your type signatures. You can get rid of boilerplate with Lisp macros instead of do notation (Haskell has no Lisp-style macros), or with carefully chosen, simpler language features (Scala implicit parameters would go nicely with object capabilities – see the sketch below!). It's not even clear you need typeclasses at all if you have basic functional programming features or generic functions with multiple dispatch, although I like them.

Even the need for static types is unclear. Some research suggests that it isn't static types that make your code high quality: in both studies, the most reliable language on the graph is Clojure, which is immutable and functional but not statically typed, and in the second study there is no statistically significant relationship between static typing and reliability. And you can get 80 percent of the benefit of dependent types (which can specify far more than even Haskell's types) through things like Clojure Spec and Malli, which can read your specs, automatically generate and verify test values, and intelligently search the space for the minimal example that violates a spec; or through systems like Common Lisp's type system in SBCL, or Ada's, where a fairly powerful but simple and easy-to-understand type system is checked statically, and more complex properties can be specified in the same way as regular types but asserted at runtime.
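Here's what that capability idea might look like with Scala implicit parameters – a minimal sketch under assumed names (ConsoleCap, FsCap, and friends are made up for illustration, not a real library):

```scala
object OcapSketch {
  // Capability tokens: ordinary values whose presence in a signature
  // both documents and gates an effect. No category theory required.
  final class ConsoleCap private[OcapSketch] ()
  final class FsCap private[OcapSketch] ()

  // Effectful helpers demand the token as an implicit parameter,
  // so the type signature advertises the side effect...
  def log(msg: String)(implicit console: ConsoleCap): Unit =
    println(msg)

  // ...including this one, which is never called below and exists
  // only to show a second, distinct capability.
  def readConfig(path: String)(implicit fs: FsCap): String =
    new String(java.nio.file.Files.readAllBytes(java.nio.file.Paths.get(path)))

  // Pure code simply has no way to call them: without a ConsoleCap
  // in scope, a call to log() does not compile.
  def pureMath(x: Int, y: Int): Int = x + y

  def main(args: Array[String]): Unit = {
    // main is the only place the tokens are minted; everything it
    // calls receives exactly the capabilities it is granted.
    implicit val console: ConsoleCap = new ConsoleCap
    implicit val fs: FsCap = new FsCap
    log("starting up")
    log(s"2 + 2 = ${pureMath(2, 2)}")
  }
}
```

The signature of log tells you it writes to the console, the compiler refuses to let pure code call it, and the only machinery involved is passing a value around.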

My hope is that things will keep on as they have been, with the pure typed functional programmers standing on street corners, foretelling doom and exhorting those on their way to their day-to-day jobs to mend their ways and join their religion. There's a better chance of that than there was with OOP, since OOP was easier to understand and thus easier for people to get enthusiastic about, but I worry sometimes…