Why I'm obsessed with Common Lisp

Although I haven't had much chance to use it yet, one of the languages that I have been utterly obsessed with for years is Common Lisp. This, of course, is not a new phenomenon amongst hackers like me. However, I think it's worth articulating why I personally am interested in Common Lisp, because the programming language landscape has changed drastically since the earliest hacker essays singing its praises were written, which has led to some of the points they make becoming obsolete – or more nuanced – and many more of their points being claimed to be obsolete by those who for one reason or another can't or don't want to use Common Lisp (which is valid by itself, but leads to motivated reasoning).

I think my reasons may also be worth articulating because I come from a different background, and with different preferences, than many of the earlier hackers whose famous essays extolling Lisp have made their rounds on the internet for decades. For one thing, I'm much more familiar with ML-family typed functional programming, and have a strong affinity for it, despite my occasional criticisms of some of that world's excesses. For another, I have a strong affinity for soft-real-time programming like real-time graphics and game engine programming, and experience with Rust, so I do actually care about performance and about being able to access low-level constructs in my languages, and I don't buy into the idea of the "sufficiently smart compiler" with total religious fervour.

Also, it's worth saying a word about my background here. One of the earliest books I cut my teeth on was Land of Lisp, when I was in my early teens, and then I moved on to Realm of Racket, so I have a long history of on-again-off-again fascination with the Lisp family of programming languages. This biases me in the sense that, while I find visually matching deeply nested sets of parens as hard as anyone else, I don't actually find ignoring those parens for the most part and simply reading Lisp code difficult at all, and I actually find the clarity around which expressions end where and how they're nested that S-expressions provide very helpful. I leave dealing with the actual paren-balancing up to electric-pair-mode and puni-mode.

Now, on to the properties of Common Lisp that I personally find unique and fascinating.

Homoiconicity

Much ink has been spilled on this subject, but I think a lot of the conversation is so muddied by ambiguity and people talking past each other that the point gets totally lost in the weeds. Let me try to state the idea as clearly and concisely as I can, specifically with an eye toward explaining how this is different from more traditional languages that may have things like eval and AST manipulation (e.g. Template Haskell), instead of just asserting that it is and then focusing on singing Lisp's praises.

Your language is homoiconic if and only if:

  1. The syntax of your language for any given snippet of code A is identical to the syntax in your language for representing A as:
  2. A tree of symbols and literals, thus carrying structure and differentiation which…
  3. …/is not/ yet an abstract syntax tree (does not have particular semantics tied to any of the elements in the tree, thus not limiting how the tree can be constructed, deconstructed, or manipulated)
  4. …and is represented using one of the most common and well-supported-by-the-standard-library data structures in your language.
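
To make these criteria concrete, here's what they look like in practice. This is ordinary Common Lisp – quote and eval are standard, nothing exotic:

  (+ 1 (* 2 3))                     ; as code: evaluates to 7
  '(+ 1 (* 2 3))                    ; quoted: an ordinary list of symbols and numbers
  (first '(+ 1 (* 2 3)))            ; => the symbol +, a datum like any other
  (eval (list '+ 1 (list '* 2 3)))  ; build the same tree with list functions, run it: 7

The same parenthesized text is both the code and the literal syntax for the list structure representing that code, and that structure is built from the most ordinary data types in the language.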

Thus a language is not homoiconic simply because it has a string type, because while in that case there is technically a data structure in the language that can represent code in that language with identical syntax, in order to do that, you're having to use the least meaningful, least structured data type of them all, an array of bytes. It's not a tree and it makes no distinction between names and concrete values, so it carries no structure and differentiation, and is thus infinitely harder to work with. Having to use strings for metaprogramming makes you do endless error-prone, tedious string munging with concatenation and regular expressions, instead of clear and clean structural manipulations.

Likewise, a language is not homoiconic simply because it has an AST that can be accessed at runtime or compile time. Every parser generates an abstract syntax tree in which each syntactic element of the code is represented by a directly analogous element, but the syntax in the language for representing that AST is invariably different from the syntax that generates it. Moreover, an abstract syntax tree is too rigid: it already has specific semantics encoded into it – not just in the names, but in how the tree can be constructed, deconstructed, and manipulated – which rules out some of the things macros can do.

The benefit of homoiconicity is that it allows you to transform programs:

  1. in an abstract, structural way, instead of awkward byte-munging or text-munging,
  2. using the same features and library functions you use every day for other things, meaning both familiarity and wide support, instead of it being a weird edge case thing,
  3. while retaining the flexibility to modify or even ignore the language's normal syntactic constructs and semantics,
  4. in a way where what syntax will produce which data structures, and the general structure of programs, is extremely clear and unambiguous, which is important for something like metaprogramming.

<<Scheme homoiconicity>>Note that even Scheme fails this definition on the final criterion: in removing slots from symbols in the name of theoretical purity, Scheme lost any way to store debug information on regular symbols, thus necessitating the introduction of syntax objects – a sort of mirror universe of lists, literals, and symbols that do have extra metadata (almost like slots!) attached to them, but which aren't compatible with the functions for manipulating lists, symbols, and literals in the rest of Racket – necessitating either extensive conversions back and forth (made horribly verbose by the fact that Racket lacks generic programming facilities by default) or their own shadow library.

Procedural non-hygienic macros

Leading on from the previous section, the most important part of homoiconicity is really what it enables: macros. I'm not going to spend time explaining what Lisp macros are here, if you're on my website I'm sure you either know what they are and how they work, or can research it for yourself. Instead I'm going to spend some time here defending why Common Lisp macros, in particular, are interesting to me.

First of all, while many languages these days have macros, such as Rust and Haskell (via Template Haskell), without homoiconicity and a language that's built from the ground up to readily support them, these attempts are often hamstrung, awkward, and bolted-on, probably better replaced with something that integrates with their type systems like Zig's comptime. For a brief look at what I mean, I recommend checking out these links:

  1. Haskell doesn't have macros
  2. This walkthrough of procedural macros in Rust that really demonstrates what a horror it is compared to Lisp's procedural macros

Second of all, many languages have syntax that makes it easy to build domain-specific languages without needing macros – which is often claimed to be "80% as good" as having macros. But these DSL facilities tend to rely on extremely ugly and involved internal implementations: obscure language edge cases, nearly accidental syntax rules that probably weren't intended to be used that way, and a ton of hidden machinery and state, all of which is easy to break between language versions, difficult to understand, and prone to bugs – compared to the extremely simple and straightforward concept, as in Lisp, of taking in a tree representing code and returning a tree representing the new code that replaces the macro call. Thus even if these facilities can give you 80% of what you want from macros, they add complexity and confusion, and often require understanding more rather than less, so this doesn't seem like a good 80-20 tradeoff – unlike, for instance, using clever language syntax features to eliminate the need for advanced category theory.
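
To illustrate how simple "tree in, tree out" really is, here's a minimal sketch – the macro name unless* is hypothetical, chosen only to avoid shadowing the built-in cl:unless:

  ;; The macro receives its arguments as unevaluated trees and returns a new tree.
  (defmacro unless* (test &body body)
    `(if ,test nil (progn ,@body)))

  (macroexpand-1 '(unless* (> x 3) (print "small") x))
  ;; => (IF (> X 3) NIL (PROGN (PRINT "small") X))

That's the entire mechanism: no hidden machinery, no parser edge cases, just a function from S-expressions to S-expressions.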

Then there's often the objection that macros are actually bad to have in a language, because they allow you to construct "mini-languages" with different syntax and semantics, thus making your code harder for other people to understand. I have a few responses to this.

At a most basic level, I think this is ultimately a misunderstanding of how code comprehension works. Any system of abstractions is a new language for talking about that thing – whether that system of abstractions is implemented using heavy-handed OOP, functional programming, procedures and structs, or macros – and any new language for thinking and talking about things requires you to learn how to communicate with it – its own syntax, semantics, and rules for operation, what is and is not valid. I don't see the difference between a language introduced via macros and one introduced via other means, except that macros tend to be a less leaky abstraction; you don't have to think about the underlying language machinery as much. They're all just forms of abstraction. Of course, you should use the right tool for the job – the least powerful construct that will let you efficiently and effectively achieve what you need to do, weighing tradeoffs between repetition, performance, developer ergonomics, complexity, and comprehensibility – so macros shouldn't be used all the time, because they are more powerful than many other language features, but I don't think they're a fundamentally different category of thing.

Furthermore, people manifestly do need to create such domain specific languages, even ones that seem to suspend the traditional semantics of the language, very often. Just because you don't use Lisp and don't have macros doesn't mean you're free from that need. Wanting to force-multiply your coding through metaprogramming, to have the compiler automatically deal with common patterns, to move up and down the ladder of abstraction, or to improve your language so it can speak in terms of the domain you're dealing with, is often very necessary. So you don't avoid metaprogramming – you just end up using awkward hacks like templates, Java compiler plugins, decorators, or stringly-typed eval hacks to get where you need to go, or a patchwork of different languages poorly duct-taped together, instead of a consistent, elegant, reliable, and generally easy to understand way of doing the same thing. Now you might argue that there's a benefit to a language making metaprogramming painful, in preventing people from doing it too much, but I argue that's a category error.

Another point is that yes, maybe having a more flexible language with a larger domain-specific vocabulary might make bringing new people onto a project harder. That isn't really a problem with Lisp itself, though – it's a problem with its fit for the modern software industry. And why take the modern software industry's method of treating people as interchangeable deskilled cogs to be swapped in and out, as bodies to be thrown en masse at a problem, instead of valuing small teams of dedicated hackers working for long periods of time on a project and in the process becoming domain experts, as an ultimate good? Maybe the fact that Lisp ill-fits the modern software industry is a basis for an indictment of the modern software industry, not Lisp.

Why am I specifically happy that Common Lisp's macros are unhygienic and procedural, though? Aren't Scheme's macros better? I'm not actually so sure. For one thing, Common Lisp mostly solves the problem of symbol clashes through its package system – all quoted or quasiquoted symbols are automatically implicitly qualified to the current package, so as long as you define your macros in a separate package from where you use them, no clashes should be possible, and if you want to expose a macro-defined variable to the user code inside the macro, the user has to pass in the symbol they want you to use, just like in Scheme's hygienic macros. Even beyond that, it's relatively easy to avoid any other possible clashes through gensyms. Granted, gensyms are pretty awkward by default, but it isn't hard to define a macro that takes care of most of the problems for you (like a gensym-let) or even a reader macro that automatically gensyms any symbol you prefix it to, and stores that in a macro-local table so references to it with the same reader macro prefix elsewhere in the same macro will refer to the same symbol, but ones in other macros won't (this is how Clojure does things). And hell, as a last resort, the fact that Common Lisp is a Lisp-2 reduces the chance of an accidental collision by 50% and the chance of a cross-intent collision (where you get a type error for calling a variable as a function or vice versa) to 0%.
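
For the unconvinced, here's what the standard gensym pattern looks like – a sketch only, since rotatef already does this in real code:

  (defmacro swap (a b)
    (let ((tmp (gensym "TMP")))     ; a fresh, uninterned symbol
      `(let ((,tmp ,a))
         (setf ,a ,b)
         (setf ,b ,tmp))))

Because the gensym names a symbol that cannot appear in user code, even (swap tmp x) expands without capturing anything.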

As to why I don't want Scheme's hygienic macros, my issue with them isn't that you can't do the same thing with them that you can in Common Lisp – such as anaphoric macros – because I'm well aware that you can. The issue is that thanks to the need for hygiene, macro systems in Scheme are much more complex, and have many more intermediate layers of abstraction. In Common Lisp, there's only one kind of macro, and one way to construct them, by default – defmacro – and it's just the basic concept of "a function that runs at compile time that takes some S-expressions, does some things, and then returns some S-expressions", and the full power of the macro system is immediately available at the surface. Meanwhile, Scheme has a whole complex tower of not-quite-orthogonal consecutive abstractions, from syntax-rules, to syntax-case, to using just raw define-syntax and syntax objects directly; and while syntax-rules is simpler and more concise than defmacro, it's not that much simpler and more concise, and it's a lot less powerful; meanwhile using raw define-syntax to get something like defmacro is painfully verbose and awkward, thanks to Scheme's compromised homoiconicity – and that compromised homoiconicity is, in fact, also made necessary by the hygiene (as well as the removal of property lists from symbols).

The reason I don't like this is not just the added number of macros, functions, and semantics you need to learn in order to use macros, but the fact that it requires you to climb this ladder of abstractions to get at the core idea and functionality of macros, and to experience a sudden discontinuity in the concepts when you go from template-based macros to procedural ones, instead of it all being direct, consistent, simple and laid bare from the start, all for very little gain. I willingly admit it isn't all that complicated in the grand scheme of things, but it's just extra ugly abstraction and concepts interposing themselves between me and the beauty of metaprogramming in Lisp. Moreover, the actual details of how hygiene works are often obscure, complex, and difficult to understand – multiple PhD theses have been written just on this subdomain of Scheme implementation, and the details are constantly shifting and getting more complex in most implementations.

Macros aren't just useful for writing domain-specific languages or adding a little syntactic sugar, either. Since they allow you to access the full power of the entire language – the same exact language you know and love, with identical semantics – at compile time, including the ability to change the state of the compiler, load or unload packages, and even run side effects, you can use macros to add entire new features to the type system of the language, introduce new optimizations to the compiler (even ones from other languages), or even write entire new languages with completely different semantics that compile down past Common Lisp to faster assembly code than regular Common Lisp can produce, or add JIT compilation for array computations on highly parallel computer architectures. In Scheme, this is impossible thanks to the phase level system, which also adds still more complexity to the macro system, further illustrating my point.
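
As a small taste of compile-time computation, here's a sketch (the macro name is mine) where a lookup table is computed once, during macroexpansion, and baked into the compiled code:

  (defmacro define-sine-table (name size)
    `(defparameter ,name
       ',(let ((v (make-array size)))       ; this LET runs at compile time
           (dotimes (i size v)
             (setf (aref v i) (sin (/ (* 2 pi i) size)))))))

  (define-sine-table *sines* 256)   ; 256 sines computed at compile time, not runtime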

Despite all this talk about how important it is that macros operate on a tree of data, however, there is one more kind of macro that Common Lisp has that Scheme has nothing like: the reader macro. While regular macros, as I described above, operate at compile time, and see your code after it's already been parsed according to the existing S-expression syntax, reader macros see your program when it's still raw text, and can introduce entirely new syntax, breaking the regularity of S-expressions the same way the quote and quasiquote syntaxes in Common Lisp do – because nothing that CL the language does is off limits to you, the programmer! Obviously, the regularity of Common Lisp's syntax, and how directly it corresponds to its underlying data structures is a huge selling point, so it's best to do this rarely and with extreme care, but it's important not to hold regular syntax as some kind of sacred ideal that you never break either – sometimes it really is helpful to do so, and you can do so without sacrificing the benefits of regular syntax on the whole, something the designers of CL clearly saw themselves. Thus, for example, you can allow CL to have syntactically correct JSON literals if you want.
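
Here's a sketch of how small the mechanism is – the bracket syntax is my own choice, not anything standard:

  ;; Make [a b c] read as (vector a b c), using the standard readtable API.
  (defun bracket-reader (stream char)
    (declare (ignore char))
    `(vector ,@(read-delimited-list #\] stream t)))

  (set-macro-character #\[ #'bracket-reader)
  (set-macro-character #\] (get-macro-character #\))) ; ] now terminates like )

  ;; [1 (+ 1 1) 3] now reads as (VECTOR 1 (+ 1 1) 3), evaluating to #(1 2 3).

A JSON-literal reader macro works the same way, just with a more involved parsing function.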

Thanks to all of this, Common Lisp is the ultimate hackable language.

Image-based development

One of the most compelling, and still to this day completely unique – outside of Smalltalk, anyway – aspects of Common Lisp is that it is image-based. This means that your compiler, interpreter, runtime, program (including dependencies), and parts of your development environment all share the same memory space in your running program, and you can save the running state of all of that to disk and restore it to memory at will, seamlessly, or reproduce it from scratch as needed. Furthermore, Lisp programs (as images) are designed to continue running indefinitely and to have your development environment, including your editor and your REPL, connect to them while they're running, so you can modify them as they run, live, through dynamic code reloading.

This has several amazing properties:

  1. While many complain about the size of Lisp programs distributed as images – specifically, many Unix grognards complain about it – as we've seen with the rise of Docker, it's actually extremely useful to be able to distribute a program along with the exact dependencies and resources it expects, in the exact same environment that it was created by the developer in. (If you're worried about the repeatability of code produced this way, don't worry, there are build systems that let you specify dependencies and what code should be loaded and so on to generate the base image for your program reproducibly).
  2. Being able to save-lisp-and-die means that you can easily create custom versions of the language, that will load your own custom packages and code when you start them up.
  3. Your REPL and editor have full access to your entire program, as well as the state of the runtime and the compiler, all in one coherent universe, which allows for incredibly powerful IDE-like integration that's much smoother than what even the most advanced modern IDEs can offer, and works from first principles instead of being a pile of hacks (such as with LSPs, which are typically full reimplementations of the compilers or interpreters of the languages they serve – since regular compilers aren't suited to language-server work – and which have to recompile your entire program every time you make a change in order to give you feedback).
  4. You can use the compiler to modify your programs live, as they're running – recompile any function, variable definition, class or struct definition, or anything else, and send it to your in-progress running program and watch it change its behavior in real time. And unlike attempts to do similar things in C# or C++, it isn't a buggy half-working mess tacked onto a language that isn't meant to do it, full of exceptions and "yes, but…" potholes. This is true REPL-oriented programming: not what most call a REPL, which is just a tighter iteration on batch-processing – where you write code, submit it to the computer (at which point you can't change it), wait for it to run to completion, see output, and then write a new version and submit that – but dynamically changing the program as it runs and you see errors or have new ideas. Other languages may have "REPLs", but they don't have this. BEAM languages can come close, but only at the level of granularity of a whole module.
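
To make this last point concrete, here's a sketch of the workflow, assuming the bordeaux-threads library (mentioned later in this post) is loaded; the function names are mine:

  (defun tick () (format t "hello~%"))

  (defparameter *running* t)
  (bt:make-thread (lambda ()
                    (loop while *running*
                          do (tick) (sleep 1))))

  ;; Later, from the editor or REPL connected to the running image,
  ;; just recompile the function:
  (defun tick () (format t "bonjour~%"))   ; the running loop picks this up immediately

No restart, no state lost – the loop simply starts calling the new definition.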

The last example here is the one that's the most important to me. I see most coding as a process of reflective equilibrium with the computer – you may have ideas for what you want to implement and how, but software is often so complex, and computers so alien to our natural ways of thinking even for the most experienced of us, that there are too many unknown unknowns to predict from the outset. Furthermore, most software must interface in some way with the external world, and especially with the needs and psychology of human beings, which means the requirements may not always be totally clear up front, because they're inherently fuzzy and ill-specified, meaning that you may only recognize that you have what you need when you stumble upon it. Therefore, being able to write down an idea quickly, test it out, experiment with it, rapidly changing it and molding it like clay on the potter's wheel, is very important to the development process, and there is literally no environment better suited to that than Common Lisp (or Smalltalk).

Type system

It might seem strange for me to reference Common Lisp's type system as a reason to be interested in it, since this aspect of the language is not oft-remarked upon even by its biggest fans. However, I think that's a shame, because it's severely underrated – largely because what exactly the type system means to the compiler isn't specified by the ANSI standard, and so varies widely by implementation. Now that Steel Bank Common Lisp has handily risen to the station of the premier, most advanced, and most actively updated free software implementation of CL – the one that everyone supports and writes documentation for – and since SBCL also happens to make the strongest and most advanced use of the Common Lisp type system, I think it's worth discussing.

The first thing that's interesting about it is its magical balance between simplicity, flexibility, and power. If you don't believe me, please go read "Typed Lisp, A Primer", which is truly excellent, and explains Common Lisp's type system from the perspective of a fan of Haskell. Common Lisp's type system has a rich panoply of simple type specifiers (and you can add more, including simple ADTs), bounded numeric types, bounded and polymorphic arrays, polymorphic vectors, bounded strings, the ability to define (some) polymorphic types, sum types, union, intersection, complement, enumeration, and singleton types, and a lot more. Crucially, though, type specifiers remain just simple symbols and lists for easy compile-time manipulation, and it's all pretty straightforward and easy to understand. SBCL can do fairly precise (including adding bounds!) type inference for all of these types, as well as compile-time type checking for all of them. So right out of the gate, you've got a pretty decent type system to act as guard rails for keeping you out of trouble.
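
A few examples of what these specifiers look like – all standard Common Lisp, with the declaration checked and propagated by SBCL (names are mine):

  (deftype probability () '(real 0 1))                   ; bounded numeric type
  (deftype octet-vector () '(vector (unsigned-byte 8)))  ; typed array
  (deftype maybe-string () '(or null string))            ; union type

  (declaim (ftype (function (string) (or null fixnum)) first-digit))
  (defun first-digit (s)
    (position-if #'digit-char-p s))  ; SBCL checks this body against the declaration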

It gets better though, because, as the SBCL User Manual states, not only are types checked at compile time (and used for very powerful compiler optimizations, which is why they were originally added to the ANSI standard), SBCL also treats these type declarations as assertions at runtime. The reason this is helpful is that there is actually one part of the Common Lisp type system that can't be checked at compile time: satisfies types. The satisfies type specifier allows you to check arbitrary requirements using the full power of Common Lisp. Checking this statically would essentially require dependent types, at huge complexity and abstraction cost, so instead, Common Lisp lets you specify these conditions and just checks them at run time – which it can do with little added complexity. The cool part, though, is that these conditions are not mere assertions; they're still part of your type system. That means there's a lot more you can do with them in theory, with a little macro magic (which I'll get to in a second).
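
Here's a sketch of a satisfies type; the predicate and names are mine:

  (defun palindromep (s)
    (and (stringp s) (string= s (reverse s))))

  (deftype palindrome () '(and string (satisfies palindromep)))

  (declaim (ftype (function (palindrome) string) shout))
  (defun shout (p)
    (string-upcase p))

  ;; (shout "level") => "LEVEL"
  ;; (shout "nope")  => signals a TYPE-ERROR at run time under SBCL's checking

The compiler can't prove the property statically, so SBCL simply inserts the check at the boundary.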

Incredibly, this is essentially the same strategy used by Ada, one of the languages most highly respected for formal assurances and reliability: allow a supremely expressive type system (one of Ada's proudest features is its bounded integers, something CL also has, and can check statically!), and dynamically check whatever can't easily be checked statically.

Even more interestingly, since types are just regular symbols and lists, and type definitions are essentially macros themselves already, and macros allow compile time computation, code generation, and even interaction with the compiler to reject code or create warnings and so on, it seems that the Common Lisp type system integrates extremely well with its macro system in a way that could, if someone wanted, open the door to some very powerful things.

Of course, there are limitations here due to the way CL was designed. Primarily: slots on classes can't be type-checked statically, as they're not really part of the type declaration system (CLOS was added later, in the second draft of Common Lisp); multimethods can't dispatch on built-in types; and there are no traits or parametric polymorphism for classes or non-derived types. The type system also doesn't allow you to specify schemas for general data types such as lists and hashmaps and use them as types, nor do randomized property-based testing based on types like Clojure's spec does. However, as usual, many of these issues can be resolved with a touch of compiler wizardry and macrology:

  • defstar adds a more convenient type declaration syntax.
  • Peltadot adds:

    • a version of generic methods that can dispatch on built-in types as well as classes, while also providing inlining, static dispatch when the argument types are known at compile time, and specialization of the statically dispatched method body with more precise type information when the provided arguments are more specific than the generic function's declared types
    • a Haskell-style trait system (which could already mostly be done using CLOS and mixins, but now integrates with the full type system and lets you define traits / "mixins" for existing types)
    • a powerful system for writing brand new top level parametrically polymorphic types that don't have to reduce to existing types,
    • a parametric polymorphism through type templating system that doesn't have to monomorphize,
    • extensible coerce
  • Lisp Interface Library, which provides an interface-passing style version of parametric and ad hoc polymorphism through passing first-class interfaces, like OCaml
  • Schemata provides schemas, like Clojure's spec, for Common Lisp, which can use Common Lisp types and classes, as well as be used as Common Lisp types, and can automatically create schemas for classes inheriting from the schema-class metaclass. It can also randomly generate data fitting a schema using check-it.
  • Quid Pro Quo, which is an Eiffel-inspired design by contract system for CLOS methods and classes.
  • A myriad of Common Lisp testing frameworks, which you can find a thorough and up-to-date comparison of here.
  • Serapeum's etypecase-of, ecase-of, match-of, and defunion for lightweight ADTs and exhaustiveness checking based on the existing Common Lisp type system, as well as a Haskell-like type definition macro and a version of the special operator the (which lets you specify the type of an expression explicitly, like :: in Haskell) that actually checks the types of things.
  • cl-parametric-types adds parametric polymorphism to Common Lisp classes, structs, and functions using the C++ template model.
  • generic-cl wraps many built-in Common Lisp functions (e.g. equality predicates and sequence operations) with generic versions you can add methods to, to make the language more consistent. Best used in conjunction with static-dispatch for performance.

And finally, if none of this is enough for you – because of the more fundamental limitations of gradual typing – and you need the strength of a full-blown Haskell-style type system, but don't want to give up the other features of Common Lisp, you can check out Coalton: a whole separate Lispy language with a full Haskell-like type system, implemented in Common Lisp macros, which compiles straight down to highly optimized low-level Common Lisp that can end up faster than hand-written code. This lets you gain performance and ML-style static types while keeping conditions and restarts, macros, image-based programming and the dynamic programming environment, and easy interop with the rest of Common Lisp when you want access to more dynamism (and CLOS).

Generic programming (CLOS)

Although it's called the Common Lisp Object System, CLOS is not your typical object-oriented programming system. Instead, it takes the core ideas of object-oriented programming – dynamic dispatch based on the identity of one of the arguments to a method, and encapsulating multiple data slots together under a single identity – and integrates them with Lisp's more functional approach, by detaching methods from classes, allowing them to stand on their own as regular functions (called multimethods), just ones that have specialized implementations (methods) for types or classes (new instances of which you can specify anywhere you need to, not just in the definition of a class). This means that you can use methods in a completely syntactically and semantically consistent way with regular functions – not just in call syntax, but also in other ways; for instance, you can pass methods around to higher-order functions. This also has the knock-on effect of effectively freeing you from the Kingdom of Nouns problem other strongly OOP languages face, by allowing operations to be on the same level as classes as first-class objects, even operations that dispatch on classes. This is a level of integration between object-oriented and (classic) functional programming that very few languages can achieve, which is admirable in my opinion.
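
A sketch of what this looks like – plain CLOS, with hypothetical classes:

  (defclass ship () ())
  (defclass asteroid () ())

  (defgeneric collide (a b))   ; a free-standing generic function

  (defmethod collide ((a ship) (b asteroid))
    (format t "ship crashes into asteroid~%"))

  (defmethod collide ((a asteroid) (b ship))
    (collide b a))             ; dispatches on both arguments

  ;; And it's first-class, like any other function:
  ;; (mapcar #'collide scene-objects other-objects)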

Just this idea of having functions that can specialize on the types of their arguments, and can have additional specializations for new types introduced by code the original implementation never even has to know about, having multiple dispatch without tying the function down to being defined in specific blocks or locations or associated with specific typeclasses, is pretty rare. I think this kind of generic programming is extremely powerful, because it allows you to define implementations of the core expected operations used by existing code (even code you may not have control over) for types that code may never have known about before, allowing that code to seamlessly interact with new things without having to hard-code the interaction between everything. For more on this, see this thread and the talk linked in it. Julia's multiple dispatch and type system were heavily inspired by – some would say they're almost identical to – Common Lisp's.

CLOS also expands dramatically on object orientation in many fascinating ways. For instance, it has an algorithm that can resolve conflicts, eliminate duplicates, and linearize class hierarchies using a topological sort, thus allowing it to have generally safe and comprehensible multiple inheritance, allowing patterns such as mixins and even an Entity-Component system to be trivially represented without needing custom code or language features to do so. It also expands the number of arguments methods can dispatch on from just one – the implicit this or self argument in most languages – to the classes of all arguments (multiple dispatch; basic types are also mirrored in the class hierarchy). Additionally, instead of just having a concept of calling "super" (which it does have, under the guise of call-next-method), which allows only very limited composition between different implementations of the same method, it also has the ability to specify, when a method is declared, how it should compose with other versions of the method. For instance, it can:

primary
Act as the "primary" method, of which there must be at least one, replacing any other primary method if it is more specific in the types it specifies it can operate on than the others, or being overridden if there is another more specific method. (This is the default, and works like override in most other languages).
before
Run, if its types are applicable, before any primary method for this multimethod is called, irrespective of whether its types are more or less specific, and in addition to the primary method. There is actually a stack of these methods: when the overall multimethod is called, all the applicable before methods are selected and run in order from most to least specific before you get to the primary method.
after
Same as above, but runs after the primary method, in the opposite order (least to most specific).
around
Add a method which is run, whenever the multimethod is called with applicable arguments, instead of the primary method, and which gets to conditionally choose whether and how to run the rest of the method chain (via call-next-method) and to filter its inputs and outputs. The applicable around methods form a concentric series of wrappers around the core of the most specific applicable primary method, each receiving control from the previous wrapper in turn.
?
Or use literally any arbitrary function as a composition method, which then receives all the applicable methods as arguments and decides what to do with them.
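
Here's a sketch of the standard qualifiers in action; the class and output are hypothetical, just to show the shape:

  (defclass record () ((id :initarg :id)))

  (defgeneric save (record))

  (defmethod save ((r record))              ; primary: the core behavior
    (format t "writing record~%"))

  (defmethod save :before ((r record))      ; runs first (most specific first)
    (format t "validating~%"))

  (defmethod save :after ((r record))       ; runs last
    (format t "logging save~%"))

  (defmethod save :around ((r record))      ; wraps the whole call
    (format t "begin transaction~%")
    (call-next-method)
    (format t "commit~%"))

  ;; (save (make-instance 'record :id 1)) prints:
  ;; begin transaction / validating / writing record / logging save / commit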

This is another aspect of how to solve the diamond problem: if methods can specify preemptively how they should compose with other methods, then method composition can be encapsulated and intelligently handled, instead of things just clashing and overriding in strange ways. This composition doesn't just solve semantic problems, either; it unlocks massively powerful new horizons for composable, modular program development.

CLOS has many more features than I've covered here. Far too many, in fact – I'd run out of space. And that's not even to mention the Meta-Object Protocol, which is a pseudo-standard many CL implementations support that expands the power of CLOS even further.

Ugly

Criticisms of Common Lisp tend to fall into one of four buckets:

It's too big/complex of a language standard

While the first point may have been accurate for its time, the contemporary languages that Guy Steele had to compare Common Lisp to were things like COBOL, Fortran, BASIC, Pascal, Scheme, Ada (of the time), and C – in other words, languages that we would recognize today as woefully lacking in features, syntactic sugar, and standard library functionality – whereas Common Lisp is much closer to our modern standards. Standards for language design have simply changed significantly as the decades have rolled on and Moore's law has given us more compute power to play with.

So how does CL stack up now? Let's do a little informal comparison of Common Lisp with modern programming languages. I'll be using the latest standard or draft standard for each language I can access, or, where no specification is available, a reference for the language and its standard library – provided that reference appears to be sufficiently complete, formal, and not in any way inclined toward a tutorial or guide. Each source is then converted to plain text using either pandoc or ps2ascii, to extract just the content without any typesetting or typesetting commands, and then run through wc to get the statistics shown.

Obviously, statistics for languages that didn't have a formal standard should be taken with a grain of salt, but I tried to be careful to only select references that, to my untrained eye, looked about the same level of detail, completion, and terseness as a specification, and explicitly stated their intention to be such in their opening introduction (the Racket, Python, and Rust references all do so).

Here are the results:

Language      Version Used                                                     Lines    Words    Chars
Common Lisp   dpANSR3                                                          45029    372985   3659804
C#            Draft standard of 9.0                                            32721    238747   1742402
JavaScript    ECMAScript 2025                                                  52838    292051   2216995
C++           Working Draft of C++26                                           129735   866312   10366237
Ada           Ada 2022                                                         92065    440991   3804499
Ruby          ISO Ruby 2012                                                    10930    98630    836729
Scheme        R6RS + R6RS-lib                                                  9108     96112    1026925
Rust          Rust Reference + Unsafe Rust Reference                           23711    149739   1439574
Java          Java SE 23                                                       31035    260664   2445031
Racket        Racket Reference                                                 132920   620742   7427007
Python        Python Language Reference + Python Standard Library Reference    216310   937237   7113173

It's unfortunate that so few popular, mainstream languages are formally standardized, such that an accurate comparison can be made, but I think this should give you a ballpark idea of the size of Common Lisp relative to the modern world. Undeniably, Common Lisp remains a big language; however, in the context of today's programming language landscape, it isn't unconscionably large. By word count it's somewhere between a quarter and a half larger than JavaScript, Java, and an old version of C# (which has added many features in the versions since the one listed here), smaller than C++ and Ada, and substantially smaller than Racket and Python, two languages nobody complains about the size of. And really, it's not surprising that it's roughly half the size of Racket and well under half the size of Python: while Common Lisp has a reputation for complexity, and Racket and Python a reputation for simplicity, Common Lisp really has relatively few individual language features – those features are just very powerful, such as CLOS, the condition and restart system, the macro system, and the type system – and a positively anemic standard library by the standards of Racket and Python, while Racket and Python have far more individual language features (just look at the tower of macro abstractions Racket comes with!) and much bigger standard libraries.

In fact, the next Scheme standard – the one intended for practical software engineering and development work as opposed to language research, pedagogy, and embedded applications – will probably be much larger than the ANSI Common Lisp standard (this is even according to the R7RS committee), demonstrating, I think, my point that a language with sufficient built in features to allow for practical use is always going to end up pretty large. I think the size of Racket (and Guile, possibly, although its reference wasn't suitable for this comparison) demonstrates this further: although the core Scheme standard was very small, implementations of Scheme that wanted to be usable for practical programming had to expand massively on it, despite coming from the same community that values smallness and simplicity. As Kent Pitman once said, "large languages make small programs and small languages make large programs."

It can't be implemented in a way that's performant

This is simply false, as we'll see in the section below.

It isn't clean/elegant/orthogonal/beautiful

There are a lot of reasons that a language might be ugly:

  1. it might be poorly thought through, having had insufficient design work go into it;
  2. it might have odious or misguided design principles at its heart;
  3. it might simply have grown organically over time away from the clean ideas at its core, or simply have a beautiful core and then a few awkward decisions on top due to path dependency;
  4. or it might be ugly due to practical tradeoffs.

Languages that fall into the first two categories are basically unforgivable in my opinion. Such languages include PHP, Go, Java, and Perl. Languages in the third category are usually tolerable, and often in fact quite awesome, because while there is ugliness to them, if you can look past the surface level ugliness – some strange syntactic or semantic decisions here and there – there are beautiful or powerful ideas locked away within (Erlang, OCaml, Prolog). And languages in the fourth category we may gripe and grumble about, but will ultimately be more useful to us than a beautiful language that makes no compromises with practicality.

It is my contention that Common Lisp is in the third and fourth category, not the first two. It is not the Right Thing in an absolute sense, but it is the best combination of the Right Thing (a beautiful core set of ideas and a set of extremely powerful ideas and concepts built on top) and Worse is Better (a practical workhorse for the here-and-now, made of compromise and organic evolution) that exists in the present moment. If a language with all of its capabilities and benefits existed, without any of the warts, I'd switch in a heartbeat, but there isn't yet… and ultimately, I don't think there can be (maybe Jank will prove me wrong, who knows – I'm praying).

Let me put it this way: when you focus on purity, you get an overly abstract language that's annoying to use for practical work, and an obfuscatory culture like Haskell's (stylistic neophilia, awkward tooling, and constant changes, to quote the second link), filled with undocumented, half-implemented libraries for doing abstract manipulations on types and little else – the dead, littered remains of someone's PhD thesis.

And when you focus on simplicity and beauty, you get something like Scheme: a standard that is completely minimal, totally orthogonal, utterly beautiful… and so impossible to use productively for practical programs that almost every single implementation – Chicken, Chez, Racket, Guile, etc – has had to reinvent a set of multiparadigm language constructs and standard library batteries from the ground up on top of it. This has resulted in a total Balkanization of the community, leading to a dearth of substantial programs and practical libraries compared to Common Lisp, and to very few books or documentation on specific applications of Scheme outside the realm of PLT or just "learning Scheme."

Worse, this focus on beauty and smallness has meant that attempts to standardize enough of these features for Scheme to actually become usable for practical programming have run headlong into the community's obsession with simplicity, smallness, and beauty, leading to the decades-long fiasco that has been R7RS (including Scheme being officially split into two languages with possibly quite different semantics on some fronts, and then the committee for the large version splitting further).

Meanwhile, in Common Lisp's case, it's ugly because it's a compromise between many very powerful predecessor languages – all of them used extensively for serious programming, all designed by smart people deeply invested in Lisp – and because it was designed to be the foundation of an entire industry, to make it possible for everyone to continue doing their work without giving up any of the powerful features they actually used in the pursuit of some abstract purity. And while every member of the working group that standardized ANSI Common Lisp came away claiming that if it had just been them they could have made something far more beautiful, I think the resulting language, while ugly, serves its purpose in exemplary fashion. It has a huge and rich standard library; it is unabashedly multiparadigm – sporting a rich set of concepts with which to write programs – and each paradigm is fully implemented and powerful; every feature is fully rounded out (such as the list comprehensions of the LOOP macro), with all the corner cases accounted for and every peripheral feature thought of and added; and the whole thing is clearly, completely, and unambiguously described in a central reference. This means that while there are many implementations of Common Lisp (SBCL, CCL, Allegro, LispWorks, GCL, CLISP, ABCL, and more), all with their own pros and cons, you can actually port large scale, meaningful programs between them, so the community is much more unified even as it gets the benefit of multiple implementations.

Yet, despite all the compromise and focus on practicality and history, the standardizers of Common Lisp seem to have put an unusual amount of effort into doing the Right Thing – whether that's the package system solving macro hygiene problems, the homoiconicity, or the use of bignums and rationals by default. Even the particularly ugly compromises have their reasons and defenses.

Common Lisp killed Lisp

This one doesn't require much of a response.

Just like the idea that Lisp died out because it was simply too powerful and flexible for programmers to grasp, or for the industry to adopt, or that its power and flexibility led to fatal fragmentation, it's a narrative used to patch up the raw, unpleasant reality that history doesn't have neat lessons like "ugly languages are bad" or "programmers are stupid/Lisp was too good for this world" or "powerful languages can't work in The Industry." Ultimately, Lisp died for three reasons:

  1. Lisp's fate was tied to the first AI boom. It was invented by the same people who were pushing the forefront of artificial intelligence, and soon became their favorite language, so that eventually all the research departments using and improving Lisp depended on AI work for their funding, all the companies selling and supporting Lisp depended on a customer base mostly composed of people doing AI work, and the broader opinion of Lisp was intimately tied to AI instead of the language's own merits. So when the AI Winter came, Lisp was totally wiped off the face of the map.
  2. When the transition from minicomputers to microcomputers came, stock machines were still far too slow to run Lisp, so specialized Lisp machines that were rare and expensive had to be created to run it, limiting its reach. Meanwhile, languages that ran fast on the horribly limited hardware of the day spread everywhere, and everyone started hacking with them. This led to those faster languages, mostly C, coming to form the basis of modern computing infrastructure, and most programmers being familiar with them, and most pedagogy being oriented around them. As a result, when the day came when Lisp was fast enough – more than fast enough! – it was too late: everything was already written in C, everyone was already familiar with C, and so all anyone wanted was C-like languages. So few people were willing to learn Lisp, and worse, a culture of rationalizing and justifying a desire to not learn Lisp as being the result of inherent problems with Lisp sprung up, further scaring people away from it. So programmers stole a few ideas from Lisp for their C-like languages and moved on, and now it's too late to change.
  3. By the time people had started to come around to some of its ideas, it was no longer new. So while it was still very powerful, still largely a superset of the features of similar languages, still the ur-dynamic language from which all others pull features and ideas, it was also somewhat crufty, somewhat held back by past technical debt and a lack of contributions, and most importantly, it just exuded this air of being ancient – it isn't new or hip. All of this made it very difficult to generate any meaningful pop software culture hype around it.

Profitably dead

One of the best aspects of Common Lisp is that it's "dead" – it was standardized once, as the ANSI Common Lisp standard, and has not been updated since, nor is it likely ever to be updated again.

I'll let Steve Losh explain the basics of why:

If you're coming from other languages, you're probably used to things breaking when you "upgrade" your language implementation and/or libraries. If you want to run Ruby code you wrote ten years ago on the latest version of Ruby, it's probably going to take some effort to update it. My current day job is in Scala, and if a library's last activity is more than 2 or 3 years old on Github I just assume it won't work without a significant amount of screwing around on my part. The Hamster Wheel of Backwards Incompatibility we deal with every day is a fact of life in most modern languages, though some are certainly better than others.

If you learn Common Lisp, this is usually not the case. In the next section of this post I'll be recommending a book written in 1990. You can run its code, unchanged, in a Common Lisp implementation released last month. After years of jogging on the Hamster Wheel of Backwards Incompatibility I cannot tell you how much of a relief it is to be able to write code and reasonably expect it to still work in twenty years.

Of course, this is only the case for the language itself — if you depend on any libraries there's always the chance they might break when you update them. But I've found the stability of the core language is contagious, and overall the Common Lisp community seems fairly good about maintaining backwards compatibility.

I'll be honest though: there are exceptions. As you learn the language and start using libraries you'll start noticing some library authors who don't bother to document and preserve stable APIs for their libraries, and if staying off the Hamster Wheel is important to you you'll learn to avoid relying on code written by those people as much as possible.

One of the great aspects of this is that sometimes Common Lisp libraries can just be done: they've fixed all the major or relevant bugs, implemented all the necessary features, and the basic operating system or FFI facilities they rely on aren't going to change out from under them anytime soon (POSIX and C are also extremely backwards-compatible) – and neither are the language, the package manager, or the build system. So they can just… let it be, and you in turn can use the library without worrying about checking GitHub vitals, updates breaking anything, or documentation being out of date. Better yet, many of these libraries are older than entire other languages like Python, meaning that they're well developed and well tested. And on a more ironic note, this ecosystem stability is really important for a language with such a slow-moving, small community, since you don't have to worry as much about whether a library really is dead.

The fact that the language is standardized and hasn't changed since the 1990s might seem like a death sentence, but thanks to Lisp's power and flexibility, it isn't. Because Lisp has non-hygienic macros, reader macros, compiler macros, the meta-object protocol, CLOS, access to the compiler from within the language, and more, it can simply absorb any feature from any other language. Even better, any feature absorbed this way is just a (usually quite portable) package on top of a fully specified and stable standard with multiple conforming implementations. So the language can both evolve with the times however you need and keep a permanently stable, backwards-compatible baseline target that doesn't break your code. And you can compose and mix and match language features, since they're just packages, and do so without messing up the language features used by other packages you want to use – since they can simply not import the macros you're using, or even import their own without clashing with the features you're using – and there's always a common language under the hood that's big enough and stable enough for practical work all by itself.
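
For example, here's what "absorbing a feature" looks like in practice – ML-style pattern matching via the Trivia library, loaded as an ordinary package (assuming Quicklisp is available):

  (ql:quickload "trivia")

  (trivia:match (list 1 2 3)
    ((list x y z) (+ x y z))   ; destructuring pattern => 6
    (_ 'no-match))             ; wildcard fallback

No language release needed; and if you dislike Trivia's semantics, another pattern matcher can coexist in a different package.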

Of course, this might bring to mind the dreaded "Lisp Curse", but here's the thing: the person who wrote that essay was a web designer with no documented experience with Lisp at all, and his essay doesn't really capture the realities of the Common Lisp world at all.

While what he says is somewhat accurate for the horribly fractured and confused Scheme world, where every implementation of Scheme is in effect a totally different incompatible language with its own small sliver of the overall community, it doesn't even really hit home there, since within the world of each Scheme dialect, there seems to actually be a clear and concerted effort to rally that sub-community around a core, unified set of batteries and language feature implementations, and the fact that the community is fractured into multiple dialects isn't the result of Scheme being "too powerful," but instead the result of other problems.

Meanwhile, his critique holds even less for Common Lisp: while the power and flexibility of Common Lisp certainly attracts a certain kind of mind (of which I am one) due to the power it gives individuals to achieve their visions alone, and the flexibility it gives them to mold the language they use in the image of their own ideas, preferences, and thought processes – essentially freeing you from the constraints of being stuck with another's design decisions or having to work with others – and that can lead to cultural problems, that doesn't mean the community can't unify around single solutions to problems eventually. In fact, it has: if you look at the Awesome CL repository and the 2024 Common Lisp Survey you'll notice that in almost every category, the community has unified around a single main implementation of something, with maybe one or two major alternatives, and where there is a major runner-up, it isn't just another incompatible but overlapping 80% solution, but has good independent reason to exist. Yes, there is always a long tail of other half-implemented libraries, but that's true in any healthy language ecosystem.

This freedom to experiment widely, to implement each individual's vision of how something should be done, and then to slowly unite on a single implementation of an idea, or a few meaningfully unique and different solutions that solve different needs, is actually a very good thing. Yes, it may lead to a community that does worse on some metrics, but there are reasons to prefer it, too, such as not having to make the choice between a feature being painfully absent from the language for those who need it right now or adding the feature prematurely, before it's been fully developed and thought through (something greater linguistic experimentation can help with as well, through providing more information), and when a feature is centralized on, that doesn't mean it's locked in – it can still be ditched. Compare this with, for instance, the situation with Async Rust.

Ultimately, I don't think the "Lisp Curse" is what killed Lisp. Instead, I think what killed it is a lot simpler and dumber than that – path dependency, accidents of historical context and development, and cultural issues.

The stability ("deadness") of the Common Lisp standard is great for implementations, too – it means that once an implementation conforms to the standard, it really needs very little work done on it, mostly just small bug-fixes and upgrades to stay able to run on modern hardware and OSs. There isn't a new edition every year, or even every five years, to bring along major new changes to the language that you have to keep up with; as long as you implement this standard once, you'll be good to go, a valid implementation anyone can use. That's why so many people were able to use Clozure CL even for the many years that it was unmaintained. That's also how new implementations of Common Lisp such as SICL and CLASP have a chance at actually working out. Of course, there are other pseudo-standards like MOP (from The Art of the Meta-Object Protocol), CLTL2 (from Common Lisp: the Language, 2nd Edition, which introduces environment reflection capabilities), and threading, but at least there's a fairly large, powerful, flexible, and practical foundational language for programmers to build on that they can be confident will transfer between implementations. Moreover, there are usually libraries (such as cl-environments, bordeaux-threads, closer-mop, uiop, cffi, and so on) that paper over implementation-specific deficiencies or incompatibilities in these pseudo-standards. To see what conformance looks like across the CL ecosystem, you can look here.

The benefit of having multiple implementations should be obvious. While obviously SBCL outshines the rest by a mile in terms of completeness, active maintenance, and most especially performance, and thus the vast majority of Common Lisp users use SBCL, the worst that does is put Common Lisp in the exact same position as any other non-standardized language with only one implementation. And in reality, other implementations shine in different ways:

  • ABCL lets you use CL on the JVM and get great interop with Java.
  • CLASP lets you use CL on top of the LLVM for seamless C++ interop.
  • CLISP has notably fast bignum (arbitrary-precision) arithmetic.
  • CCL compiles extremely fast and has great error messages, so many people use it in conjunction with SBCL.
  • ECL is very easy to embed in C programs (like Lua) and can also compile to C.
  • SICL is intended to be the cleanest and most correct Common Lisp implementation, fully implemented in idiomatic Common Lisp code.
  • LispWorks has a Lisp Machine-inspired GUI development environment, fully implemented in Common Lisp inside their Lisp image, that puts even SLIME to shame, plus a portable GUI toolkit and support for unusual platforms like Android, as well as incredible commercial support.
  • Allegro is extremely fast, has commercial support, advanced symbolic AI and LLM support, and a powerful server and database.

Such diverse yet compatible implementations would not be possible if the language standard were constantly changing and difficult to keep up with. (See LuaJIT vs Lua, or Clojure vs ClojureScript vs Jank, or CPython vs PyPy.)

Performance and systems programming

Traditionally, in the world of programming, you have two choices. If you want to write something that requires performance, then you have to use a systems programming language, accepting a worse development experience and speed, longer compile times, and either far less power or far greater complexity – as well as either far less safety, or much greater difficulty in writing programs (to satisfy static verification systems). If you don't care as much about performance, then your entire world opens up: a huge panoply of pleasant, powerful, safe, and dynamic languages with fast development times, good for experimentation, to choose from.

The problem with this dichotomy is not only that it's unpleasant for those who care about performance, and annoying for those who want to use higher-level languages without the performance penalties (or for users of projects written in those languages) – it's that you often need both properties in the same project. For some parts, you need to buckle down and optimize those tight inner loops into oblivion; for others, you really want the higher-level programming constructs and faster turnaround time. Faced with a problem like that, your only real choices are to pick one horn of the dilemma, or to embed a language, like maybe Lua. The problem there, of course, is that now you have to decide which parts of your program go in which language, constantly spend time building interfaces between them and solving compatibility issues, and occasionally move pieces of your project back and forth between the two sides. Worse still, most embeddable languages are designed only for lightweight scripting – more on the R7RS-small side of the language spectrum – which means you can't actually use them too extensively without running into issues, while more complete dynamic languages are larger, slower, and harder to embed.

Common Lisp largely resolves this dilemma by offering a language that is higher level, more powerful, more dynamic, and better for experimentation than basically any other, that can also seamlessly deal with low level optimizations through features like:

  1. marking functions to be inlined,
  2. using type declarations to eliminate dynamic dispatch and indirection (see the sketch after this list),
  3. adjusting compilation speed and safety settings (can be done on a function-by-function level),
  4. selectively turning off late binding (in SBCL),
  5. manually JIT-compiling code by calling out to the compiler at runtime,
  6. viewing the raw assembly of your code to see how the compiler is optimizing it,
  7. inline assembly (in SBCL),
  8. SIMD support (in SBCL),
  9. arena allocation (in SBCL),
  10. access to pointers through generalized references,
  11. avoiding heap allocation entirely via mmap,
  12. specifying custom compile-time inline replacements for regular functions (compiler macros),
  13. low-level, high-performance threading primitives,
  14. fine-grained control over when allocation occurs through:

    1. forcing values within a scope to be allocated on the stack,
    2. the ability to return multiple values without allocating memory,
    3. faster low-level arrays, vectors, strings, and bit vectors as well as regular ones,
    4. unboxed types (such as fixnums, floats, double-floats, etc),
    5. and explicitly non-consing (destructive) list operations,
  15. an extremely powerful LINQ-like DSL for iteration over various data structures that, unlike mapcar and friends, lets the compiler generate optimized low-level looping code under the hood,
  16. the ability to create complex objects at load time and use them like literals at run time,
  17. a built-in facility for inspecting memory usage, composition, and management (sort of like a built-in Valgrind),
  18. stand-alone executable delivery,
  19. jump ("goto") instructions (lexically scoped, so they're safer to use than classic GOTOs),
  20. dead code elimination (in SBCL),
  21. and a lot more I'm probably not even aware of.
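
To make a few of these concrete, here's a hedged sketch of items 1, 2, 3, 5, and 6 in action – everything below is standard CL (disassemble output is implementation-specific), and the function names are mine:

    ;; Item 1: request inlining; items 2 and 3: declare argument types
    ;; and crank the optimization policy for this one function only.
    (declaim (inline dot2))
    (defun dot2 (ax ay bx by)
      (declare (type double-float ax ay bx by)
               (optimize (speed 3) (safety 0) (debug 0)))
      (+ (* ax bx) (* ay by)))

    ;; Item 6: inspect the machine code the compiler produced.
    (disassemble 'dot2)

    ;; Item 5: invoke the native compiler at run time on freshly
    ;; generated code -- no interpreter involved.
    (funcall (compile nil '(lambda (x) (declare (fixnum x)) (* x x)))
             21) ; => 441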

Of course, memory usage and garbage collection pauses will always be a possible problem if you don't want to have to go full arena allocator, but you can get around this by manually triggering the GC every tick of your program if there's extra time left, using object pools, and using non-standard tricks like temporarily pausing the GC (something SBCL allows you to do).
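
To illustrate the tick-by-tick GC trick, here's a minimal sketch of a frame loop that collects only when there's slack left in the frame budget. run-frame is a hypothetical stand-in for your real per-tick work, and sb-ext:gc is SBCL-specific:

    (defconstant +frame-budget+
      (floor internal-time-units-per-second 60)) ; one 60fps frame

    (defun run-frame ()
      ;; Hypothetical stand-in for real per-tick work.
      (loop repeat 1000 sum (random 10)))

    (defun game-tick ()
      (let ((start (get-internal-real-time)))
        (run-frame)
        ;; If the frame finished with at least half its budget left,
        ;; spend some slack on a quick non-full collection now rather
        ;; than eating a surprise pause mid-frame later.
        (when (< (- (get-internal-real-time) start)
                 (floor +frame-budget+ 2))
          (sb-ext:gc :full nil))))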

All of which makes Common Lisp pretty damn fucking fast for a language that can also seamlessly transition to extremely high-level and dynamic code outside the hot loop, all while providing an unmatched development experience. It's pretty consistently only around 4x off C/C++ on most benchmarks, and in some cases it can be optimized to be just as fast as the same algorithm in C. It can even occasionally beat C++'s performance with clever use of the tools given:

We develop a tiny SQL-like query engine from scratch in Common Lisp which performs on par with a cheating C++ version and handily outruns an even more cheating Go version.

This is possible because CL compilers are competent, blazingly quick, and can be programmatically invoked at runtime over arbitrary just-in-time-generated functions. Generation and native compilation of code specialized for any concrete query is deferred right back to query (run) time. Where other languages must pre-compile a recursive interpreter, CL compiles down a single if-condition.

As for code generation, we have the full power of the language plus whatever we've additionally defined – we show off arguably the most powerful object system in use, the Common Lisp Object System (CLOS). This, combined with the fact that generating code in Lisp is isomorphic to writing "regular" code, makes the solution so simple, concise, and elegant that it's difficult to imagine it does the same thing as those unsung geniuses writing low-level optimizing compilers in all those powerless non-Lisp languages.
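
That "programmatically invoked at runtime" bit is less exotic than it sounds – compile is just a standard function. Here's a minimal sketch (the names are mine) of JIT-compiling a query predicate from a form constructed at run time:

    (defun make-filter (predicate-form)
      ;; Build a lambda form at run time and hand it to the native
      ;; compiler; the result is ordinary compiled machine code.
      (compile nil `(lambda (row)
                      (declare (optimize (speed 3)))
                      ,predicate-form)))

    ;; Hypothetical usage: the predicate arrives at run time, e.g.
    ;; parsed out of a query, and gets natively compiled on the spot.
    (let ((filter (make-filter '(> (first row) 10))))
      (funcall filter '(42 "hello"))) ; => T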

There are also a fair number of high performance libraries for various purposes available for Common Lisp as a result of all this, such as numcl, lparallel and lfarm, Petalisp, cl-async, woo, stmx, and Sento. And thanks to its flexibility and how introspectable the compiler is, new performance optimizations can be added to Common Lisp trivially, such as:

  • specialized-function, which offers Julia-like JIT compilation of type-specific versions of generic methods when they're called;
  • static-dispatch, which statically dispatches calls to generic methods where the types of the arguments are known at compile time;
  • loopus, which can apparently do (its terse README isn't much help)

    • type inference and specialization,
    • loop invariant code hoisting,
    • common subexpression and dead code elimination,
    • automatic vectorization;
  • tree-shaker, which can reduce executable sizes by up to 30%;
  • memory-regions, which offers manual memory management;
  • and trivial-garbage, which offers weak hash tables and vectors, as well as access to finalizers – handlers that run when an object is garbage collected, and more useful than they sound (see the sketch below),

to name a few.
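
For a sense of how lightweight trivial-garbage is to use, here's a minimal sketch (assuming the library is loaded; tg is its package nickname, and how much weakness an implementation supports varies a bit):

    ;; Attach a finalizer: the closure runs after OBJ is collected.
    ;; Note that it must not close over OBJ itself, or the closure
    ;; would keep the object alive forever.
    (let ((obj (make-array 10)))
      (tg:finalize obj (lambda () (format t "~&object collected~%")))
      obj)

    ;; A weak-key hash table: entries disappear once their keys become
    ;; unreachable, so a cache doesn't pin its keys in memory.
    (defvar *cache* (tg:make-weak-hash-table :weakness :key))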

That's not all, either. If Common Lisp-native libraries aren't enough – as is often the case in performance-sensitive work or systems programming, where the vast majority of existing work has been done in C and most major, powerful, fast, and well-maintained libraries are written in it – CL also has excellent C FFI capabilities, through three libraries:

  • CFFI, a Common Lisp library that provides a way to link to and call C shared libraries in an implementation-independent way (using each implementation's specific mechanisms under the hood, but exposing a consistent interface with consistent semantics). You still need to explicitly write bindings by hand for each and every C function you want to access from Lisp, essentially writing a second header file (see the sketch after this list), and you need to manually convert back and forth between Lisp and C types, although an extensive set of foreign types is provided to represent C types on the Lisp side. Moreover, you need to be very careful about how you manage memory at the border between Lisp and C. However, even this level of bindings is superior to what most dynamic languages with managed runtimes can provide – the Java Native Interface requires custom (and verbose) wrapper code to be written on the C side, the Go FFI is slow and awkward to use (1 2 3), and while Python isn't as bad, it requires a special Python script to compile a C extension for your interpreter, which is flimsy and awkward, and doesn't generalize to other implementations like PyPy.
  • cl-autowrap, which can parse header files and automatically generate function bindings for Lisp, letting you call C functions without writing bindings by hand. It can also generate thin, performant wrappers for C structs, unions, and various other C types, so you don't need to write those manually either, along with recursive accessors and the like, and it annotates everything it generates with full type information from the C side. This is already better than anything Rust really has – Rust has autocxx, but it doesn't seem to work as well.
  • cffi-object, which automatically wraps CFFI pointer types in structs that free the corresponding C memory (via finalizers) when the garbage collector collects the Lisp struct. This essentially automates what you typically do by hand in Rust – wrapping C pointers in Rust structs whose Drop impl frees the C memory when the struct is destroyed by RAII – except it's done for you automatically, and it integrates with a full garbage collector instead of just RAII. So it's a fuck ton better.
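
To make that "second header file" workflow concrete, here's a minimal hedged sketch of hand-written CFFI bindings against libc (abs and getenv are standard C functions; the Lisp-side names are mine, and I'm assuming libc's symbols are already visible in the process, as they normally are):

    ;; Each defcfun is one line of the "second header file".
    (cffi:defcfun ("abs" c-abs) :int
      (n :int))

    (cffi:defcfun ("getenv" c-getenv) :string
      (name :string))

    ;; Manual memory management at the Lisp/C border: allocate a
    ;; foreign int, write to it, read it back.
    (cffi:with-foreign-object (buf :int)
      (setf (cffi:mem-ref buf :int) -42)
      (c-abs (cffi:mem-ref buf :int))) ; => 42

    (c-getenv "HOME") ; => e.g. "/home/user"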

To get a sense for the performance of C FFI in Common Lisp, we can take a look at the FFI overhead benchmark.

While they haven't updated their list to show SBCL, they did include an SBCL implementation, so I cloned the project, got SBCL and Go working in a container, and got these times:

Common Lisp via SBCL:
934
934

go:
21186
21053

This implies that SBCL is roughly 22 times faster than Go at interfacing with native code (21186 ÷ 934 ≈ 22.7). For a more complete comparison, let's find the performance scaling factor between my computer and the one the README's results were gathered on, using the common denominator – Go – and then scale the SBCL result by that factor and place it in the list: the README's Go time is 37975 ms against my 21186 ms, a factor of about 1.79, so SBCL's 934 ms scales to roughly 934 × 1.79 ≈ 1674 ms. That ends up looking like this:

Language time in ms
luajit 891
julia 894
c 1182
cpp 1182
zig 1191
d-ldc2 1191
rust 1193
haskell 1197
nim 1330
d 1330
ocamlopt 1634
sbcl 1674
v 1779
csharp-mono 2697
ocamlc 4299
java7 4469
java8 4505
node 9163
wren 14519
node-scoped 15425
elixir 23852
dart 31265
go 37975
dart-scoped 61906

That puts SBCL right alongside the fastest managed languages – only LuaJIT, Julia, Haskell, and native OCaml beat it – and far ahead of the JVM, Node, and Go. I'd say that's plenty fast enough for most tasks. Now, this benchmark uses sb-alien directly instead of the CFFI library, but CFFI is built on top of sb-alien and treats everything as a void pointer for speed, whereas raw sb-alien used properly tries to be safer and is, if anything, supposed to be slightly slower than CFFI – so I think this is a fair comparison.

All this means that while it won't be used for hard-real-time or embedded programming any time soon (due to the memory footprint), it's almost certainly good enough for the vast majority of applications, including games, if you put in some elbow grease – and it's much faster than similarly dynamic, high-level languages like Ruby and Python. And of course, being able to use such a highly dynamic language for performance-oriented code has a lot of benefits.

(The only language that beats Common Lisp in the "dynamic, high-level, but fast" world is Julia, by a factor of 2 or so, but it does so through mandatory just-in-time compilation that's slow to start and warm up and difficult to control or predict, a highly managed runtime, and a general preference for being automatic rather than giving you fine-grained control over optimization. All of that adds up to prioritizing throughput over latency, meaning its performance is applicable in fewer cases than CL's. CL gives you the low-level tools to decide what kind of performance you need.)

This has a few benefits for me:

  1. You can successfully write a much wider array of software with it, including games.
  2. You can worry about performance much later than with something like Python, progressively improving an easy-to-write prototype into an industrial-grade performance beast, and it'll take a lot longer to hit a hard performance wall that forces a rewrite.
  3. In theory, I think it is acceptable to give up some performance in the pursuit of malleable systems (see also).

The condition system

The last feature of Common Lisp that really intrigues me is also the one that stands out the most, as a very singular, unique, and unmatched feature in today's language landscape. Who knows, maybe in another 30 years it will start percolating in half-hearted, poorly-imitated fits and starts into other languages the way macros have been, but for now, no other language has anything like it.

This feature is the Common Lisp condition system. At first glance, the condition system might seem like just another exception system – if you run into an error, you throw an exception, which makes a non-local exit to the nearest enclosing expression that indicates it can catch that type of exception, at which point the code can decide what to do. However, what's unique about Common Lisp conditions is that they don't unwind the stack.

What this means is this: by the time an exception is caught (or makes it to the top level) in a language like Java or Python, it's essentially too late to meaningfully recover without restarting some coarse-grained chunk of the process, like a whole function call, and often far too late to figure out exactly what went wrong. In Common Lisp, when you handle a condition, all of the relevant information – the call stack, the local variables and function definitions currently available, even which part of the current expression was executing when the condition was signaled – is still intact. Combine this with the language's runtime malleability, its ability to generate and execute new code on the fly, and its access to the compiler, and conditions become an incredibly powerful tool for making programs essentially self-repairing: detecting errors and fixing them as they occur.
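
Here's a minimal sketch of what that looks like in practice, using nothing beyond the standard condition system (the condition and restart names are mine). The crucial detail is that the handler runs before any unwinding, while parse-entry's stack frame is still live:

    (define-condition bad-entry (error)
      ((entry :initarg :entry :reader bad-entry-entry)))

    (defun parse-entry (entry)
      ;; Signal an error, but offer named recovery strategies in place.
      (restart-case (if (numberp entry)
                        entry
                        (error 'bad-entry :entry entry))
        (use-value (value)
          :report "Supply a replacement value."
          value)
        (skip-entry ()
          :report "Skip this entry."
          nil)))

    (defun parse-all (entries)
      ;; A caller far up the stack picks the policy -- skip bad
      ;; entries -- without parse-entry ever being unwound first.
      (handler-bind ((bad-entry (lambda (c)
                                  (declare (ignore c))
                                  (invoke-restart 'skip-entry))))
        (remove nil (mapcar #'parse-entry entries))))

    (parse-all '(1 "two" 3)) ; => (1 3)

Evaluate (parse-entry "two") interactively, without the handler-bind, and you land in the debugger with USE-VALUE and SKIP-ENTRY offered as menu options – the interactive recovery menu described next is exactly these restarts.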

This also means that when a condition reaches the top level, instead of the program irretrievably crashing like it would in another language, it can just drop into an interactive menu asking the user or programmer how they want to recover from the condition that was signalled. This lets you literally open a REPL at the exact location and point in time where the condition happened, so you can explore, experiment by running different code, or redefine whatever types, functions, classes, or variables you need to; or you can ask it to replace the expression that signalled the condition with a different expression and rerun just that innermost, most specific part of the program, continuing on from there; or any combination thereof. This actually came in handy during the first mission of NASA's New Millennium program, the flight of Deep Space 1, as Ron Garret tells it:

The Remote Agent software, running on a custom port of Harlequin Common Lisp, flew aboard Deep Space 1 (DS1), the first mission of NASA's New Millennium program. Remote Agent controlled DS1 for two days in May of 1999. During that time we were able to debug and fix a race condition that had not shown up during ground testing. (Debugging a program running on a $100M piece of hardware that is 100 million miles away is an interesting experience. Having a read-eval-print loop running on the spacecraft proved invaluable in finding and fixing the problem. The story of the Remote Agent bug is an interesting one in and of itself.)

The Remote Agent was subsequently named "NASA Software of the Year".

This might sound like the capabilities of a debugger to you, and so it might not seem that novel, but what you have to understand is that this means Common Lisp essentially has an extremely powerful debugger built into the language runtime, active at all times without a significant performance penalty, which can also be programmatically controlled to debug or recover from errors dynamically. This adds a whole new dimension to what can be done to recover from errors, similar to some of the impressive dynamic error recovery feats Erlang can perform, but with even more surgical precision.

Other Lisps

I won't speak too much on this here, since I have a rant about Racket in the works. Instead, I recommend you read these:

  • Common Lisp versus Racket (from a Lisper's perspective)
  • Common Lisp versus Clojure (from a Lisper's perspective), Common Lisp from a Clojurian perspective, part 1 and part 2
  • Switching from Common Lisp to Julia, response, a response to one particular point, A Lisper's first impression of Julia.

    • My own notes: due to the lack of multiple inheritance in Julia's type system, you can't do mixins, which are the natural way to get composable, interface-like polymorphism in languages based on type hierarchies and multiple dispatch, since they let you attach names to particular protocols/sets of behavior. Lacking them makes it very hard, as a function, to specify what behavior you expect from a type in a way that isn't too specific or too broad, and equally hard, as a type, to specify clearly what behavior you actually provide. See more here. I think this, plus the fact that all functions are generic (instead of it being an explicit API choice to make a function generic), plus the lack of aspect-oriented programming (before, after, and around method combinations), is why Julia has such serious correctness and reliability issues.
    • Lack of homoiconicity.
    • It is so fucking fast though.

As well as this essay, which makes some good points about Common Lisp's magic being in its holistic system, not in its pieces, although I'm not totally a fan of the tone (since I respect languages like Clojure).

Another huge benefit to Common Lisp is that it has so many excellent, nay, legendary books written about it, that will also expand your mind and teach you a ton about programming itself in the process. Scheme has SICP and HtDP, which are far better than most of the options Common Lisp has, to be fair, but CL has them beat in number and variety: LISP: 3rd Edition, PAIP, On Lisp, Let Over Lambda, ANSI Common Lisp, The Art of the Meta-Object Protocol, Recursive Functions of Symbolic Expressions and Their Computation by Machine, Common LISP: A Gentle Introduction to Symbolic Computation, CLtL2…

Conclusion

So, while none of the features that Common Lisp has are today as totally unique as they once were, as various languages over time have slowly adopted bits and pieces of them, none of those languages has the same intersection of all of these features, or any of them in as complete and integrated a form. And in some cases, such as homoiconicity, the condition system, and image-based development, I don't think they ever will, because the familiarity tradeoffs are too great for modern languages to make if they hope to gain traction. So even if the list of features that are totally unique to Common Lisp has dwindled, I still think it has a lot to offer.

It is, in this sense, the lost Atlantis of programming languages: it invented so many ideas long before their day would come for the rest of the world, but those ideas were lost to the accidents of history, and the rest of the programming language civilizations are left finding artifacts of that lost civilization and reverse engineering them in bits and pieces, but never so great and powerful as they once were.

Another way to look at it, in view of the fact that Common Lisp is not a beautiful pearl of programming language design like Scheme or Smalltalk, is that it's a terrifying, hideous, chthonic monster that you can barely stand to look at, branded with strange symbols and chanting strange abbreviations, which you can summon from the depths of ancient lost history to grant you untold power to manipulate the fabric of reality.

Just beware: it may drive you mad in the process!

If you're interested in Common Lisp, I recommend you use Emacs with SLY as your development environment, SBCL as your implementation, CIEL as your starter pack (it includes more batteries for Common Lisp), and the Common Lisp Nova Spec and the Common Lisp Technical Reference as your docs.