There's been a lot of discussion over the years about how static languages affect the experience of programming. The summary is that high-performance, statically typed, compiled languages come at the cost of a tight feedback loop and the freedom to experiment as you write code. That's typically a bad thing, because programming is not an act of mere construction following a specification, but an act of design, where we feel out how things should be executed and constructed as we go. There are far too many unknown unknowns prior to actually trying something out, our brains aren't good at simulating computers, and any attempt to plan a piece of software up front in enough detail to remove the need for exploration and experimentation would itself just end up being code we need dynamic experimentation to design anyway.
I'm not here to talk about that, though. What I'm here to talk about is the fact that the use of static, compiled, high-performance programming languages affects how users experience software, not just how developers experience writing it. Such software is, by its very nature, a closed black box to users – difficult to modify, difficult to inspect and understand, difficult to compile. It may offer extension frameworks you can hang off it, but the program will always remain opaque and limited from the user's perspective unless so much of it is implemented in the extension system, and the extension system is so powerful, so extensive, and so intimately involved in everything, that the program essentially becomes a somewhat specialized language runtime and development environment more than an application for a particular purpose – e.g., the Emacs and web browser strategy. This leads to all the shortcomings of UNIX and most other modern software – the shortcomings that drive people like me, who want integration, flexibility, powerful programmability, and malleable environments, to Emacs (or to browsers!).
If we had environments written in a language more like Common Lisp – one that offers runtime dynamism, malleability, and power, along with access to low-level concepts like pointers and manual memory management, and the ability to apply high-performance optimizations when necessary – we could have the extensibility and malleability of things like Emacs for our whole operating system, everywhere. And development would be much faster and more pleasant, too.
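To make the "dynamic by default, fast when necessary" claim concrete, here is a minimal sketch of the gradual optimization style Common Lisp supports (the function names are illustrative, not from any particular codebase). The first function is fully generic and can be redefined live in a running image; the second adds optional type and optimization declarations, which compilers like SBCL use to emit unboxed machine arithmetic:

```lisp
;; Fully dynamic version: works on any numbers (integers, ratios,
;; floats, bignums) and can be redefined at the REPL while the
;; system is running.
(defun sum-list (xs)
  (reduce #'+ xs))

;; The same idea with declarations added. Given these hints, a
;; native-compiling implementation such as SBCL can compile the
;; loop down to raw fixnum machine arithmetic.
(defun sum-fixnums (xs)
  (declare (optimize (speed 3) (safety 1))
           (type list xs))
  (let ((total 0))
    (declare (type fixnum total))
    (dolist (x xs total)
      (declare (type fixnum x))
      (incf total x))))
```

The point is that the optimized version lives in the same image, under the same object model, as the dynamic one – you opt into performance locally instead of committing the whole system to a static black box up front.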
So why don't we?
Generally the argument people make is that Common Lisp isn't fast enough to write operating systems and drivers and browsers and shells in. That was once true, twenty or thirty years ago, when most of the operating systems we use today were being created. That's why we're locked into this statically compiled black-box hellscape, with separately compiled processes communicating via pipes and IPC instead of just calling each other's functions and passing data structures: you can't compile programs in a way that would allow that level of interaction, and back then you couldn't get away with not compiling your programs. But I don't think it's true anymore. Our computational budget has increased incomprehensibly since the late 1990s, to the point where a language once considered horribly slow and inefficient, like Common Lisp, is now actually one of the faster languages out there, and can readily be used for high-performance work.
You might still argue that Lisp is too slow – that we shouldn't accept a 2x performance hit versus C in our most basic bedrock layers, because that would make everything else too slow, especially now that Moore's law is largely dead. But I don't think that's actually a good argument – I think it's penny wise and pound foolish. What has happened in reality is that we've written all those tools in C or C++ or similar for performance, and then, because they're terrible, awful, insufficiently reliable and extensible abstractions, we've written new layers of abstraction on top of them just to get away from them – Electron, web browsers, IDEs, and so on. Those abstractions aren't free: not only do they carry a performance tax simply by virtue of the many extra levels of indirection involved, but they're often written in languages that are both slower and less dynamic, in any way a user could exploit, than Common Lisp and similar languages. In the end, we're probably worse off, performance-wise, than if we had implemented a good set of bedrock abstractions from the start, even somewhat slower ones, and then had to build fewer layers of abstraction and indirection on top just to escape bad abstractions. What if we spent more of our computers' wondrous performance budget on creating better abstractions in safer, more dynamic, and more powerful languages from the ground up, instead of gaining a ton of performance budget and then wasting it all trying to claw back safety, dynamism, and power later on?
Crucially, I want to say that I don't believe in the myth of the "sufficiently advanced compiler": if your programming language's paradigm and fundamental mode of operation are completely contrary to how computers actually compute – for example, if it's pure, immutable, and lazy – and it can't express more directly mapped computations without further layers of abstraction and indirection, such as refs in OCaml or monads (which box their values) in Haskell, then you're never going to be able to make it fast. Nor do I think that any amount of performance loss is acceptable to achieve the goals of a dynamic, malleable computing environment – this is a tradeoff, since no one will use a computing environment that's too slow – so you need a language that's fast enough but also dynamic. As far as I know, only Common Lisp meets these requirements, and maybe eventually something like jank – although exactly how fast a language for what I'm proposing would need to be is an open question.