The Rise of "Whatever" Machines
> Rewrite: refactor to remove all the border properties. If this results in args being merged with an empty :style map, then remove the styled-args variable entirely and have elem-wrapper use args directly instead
I've increasingly used my built-in LLM editor integration to essentially replace Emacs macros and regular-expression transformations with a syntactically and semantically aware fuzzy transformation system, because it's just infinitely more robust (yes, LLMs are stochastic, but so is my ability to make a very complex regex do what I want the first time), and it requires fewer finger contortions and less looking shit up.
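To make that concrete, here's a minimal sketch of the before/after the prompt above describes. The styled-args and elem-wrapper names come from the prompt itself; the surrounding function and the border values are hypothetical:

```clojure
(declare elem-wrapper) ; provided elsewhere in the (hypothetical) codebase

;; Hypothetical "before": border styling gets merged into the caller's
;; :style map before the args reach elem-wrapper.
(defn bordered-elem [args]
  (let [styled-args (update args :style merge
                            {:border-width 1
                             :border-style :solid
                             :border-color "#ccc"})]
    (elem-wrapper styled-args)))

;; "After": with the border properties removed, args would just be merged
;; with an empty :style map, so per the prompt's conditional, styled-args
;; disappears and elem-wrapper takes args directly.
(defn bordered-elem [args]
  (elem-wrapper args))
```

Note that the prompt's second sentence is conditional on the outcome of the first. That kind of instruction is trivial to state in natural language and essentially impossible to encode in a regex or a keyboard macro.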
And before you complain: I'm actually extremely good at regular expressions compared to the average programmer, and I've put in the time to learn them. However:
- they're just not suited to some tasks:
  - sometimes it takes a very long time to find the creative side-step that lets a regex express the transformation you want
  - sometimes doing it requires obscure or non-standard features that aren't in my immediate memory
  - they can often get very complex, which leads to a lot of time wasted debugging them
  - sometimes what you want to do can't be represented by them at all, but needs a combination of regexes, elisp, macros, and manual editing, so then you've got to load all of that into your memory and deal with the impedance mismatches between them and the individual time sink each represents (see the sketch after this list)
  - sometimes it genuinely isn't worth learning the Nth intricacy of something, or, even if you know it, spending the time remembering the specifics and debugging it when it goes wrong
- they're hard to type
- there are a million different, mutually incompatible dialects of regular expressions (POSIX BRE and ERE, PCRE, Emacs's own, JavaScript's, and so on)
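As a sketch of that "can't be represented at all" case, consider converting positional arguments to named keys across a codebase (make-button and the other names here are hypothetical):

```clojure
(declare make-button save! doc) ; stand-ins for the hypothetical codebase

;; Before: a positional call site. Real ones span multiple lines and
;; nest arbitrarily deeply.
(defn toolbar-before []
  (make-button "Save" (fn [] (save! doc)) {:primary true}))

;; After: the same call with named keys. Isolating "the second argument"
;; means balancing the nested parens and braces, which classic regular
;; expressions cannot do in general; a semantically aware transformation
;; handles it without ceremony.
(defn toolbar-after []
  (make-button {:label    "Save"
                :on-click (fn [] (save! doc))
                :opts     {:primary true}}))
```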
For all of these reasons, it can often be a very sound strategic choice, even if you care deeply about your craft as a whole, not to get sucked into the rabbit hole of using them. And where the usual alternative was to make all the edits by hand, LLMs save you that.
So this is not a question of "not wanting to learn", in the abstract, as if I'm lazy and don't care about anything, as it is often disingenuously framed. It's that sometimes it isn't worth learning some particular minutia: it's a distraction from your overall goal — and that overall goal can actually be improving at your craft or making a good thing for people! We have to stop fetishizing Work for the sake of Work in this Puritan way.
Also, this just saved me so much pain and heartache. I spent an entire day trying to solve a problem on my own, and it helped me solve it in like an hour.
It's the world's best rubber duck: instead of saying nothing, it can summarize and rephrase what you've said, and it can add new ideas and insights. Those can be wrong, but they at least jog you off the one track you're stuck in and send you in new directions to explore. It can also search the web, extremely fast and pretty accurately, and extract useful nuggets of information from places I, at least, would never have thought to look or would never have spotted (like offhand React Native dev comments in a GitHub issue).
It's incredible to me how extremely useful this is, compared to the twisted reality people like this portray, based on one or two desultory, hostile attempts to use these things (attempts where they purposely refuse to learn how to use them) and on vague impressions from bad AI applications. It feels extremely disingenuous, because it relies on a set of assumptions, namely:
- that a machine must be perfect and make no mistakes to be useful
- that our use of even deterministic machines is, as a cybernetic feedback system, itself deterministic in achieving even the sub-goals of the original aim
- that it must be an all-or-nothing thing, where either you use the tool to do everything, and no longer care about your craft, or you care about writing every single class and utility function in your codebase equally and maximally
- that if it's bad at one thing — or implemented badly in various places — then it must be universally bad
- that a lot of our work isn't just rote work that can be enhanced by machines that mostly just do pattern matching and remixing
- that it can't be useful at all if it isn't as good or powerful as the most insane hypesters claim it is (that e.g. semantically-aware text transformation driven by fuzzy natural-language instructions isn't a massive breakthrough on its own)
And they apply none of these assumptions to anything else they look at.