More on LLMs and the occult
I recently watched the It Could Happen Here podcast episode "Occulture, William S. Burroughs, and Generative AI." The discussion of occulture itself, William S. Burroughs, the CCRU and early Nick Land, hyperstition, and related ideas was decently informed (better than I can say for most of that group's understanding of subcultures) and somewhat interesting. It might even be worth a listen, although it's quite thinly sketched out.
So I was somewhat hopeful when they got to the GenAI part, but I really shouldn't have gotten my hopes up. As soon as the topic came up, they literally admitted to having a knee-jerk, blinding-rage reaction to anyone even appearing to consider an audience question about generative AI as a valid tool in creative work or occult ritual without immediately and aggressively saying no. They then followed that up by saying that the best answer was instant and unreflective, specifically rehearsed in response to any mention of AI: a "rigorous materialist answer" that just repeated, as if by LLM, all of the stupid left-Canutist bullshit that has been spouted and debunked a thousand times before about the environmental costs of individual generations, the claim that it's "stealing labor," and the claim that it's effectively "just an advertisement" because it can't do anything interesting or new (as if a user can't introduce enough pre- and post-processing when working with these tools to produce something new and interesting, or have them find and collate sources for new ideas, or have new ideas triggered by brainstorming sessions).
There's more, too: an inconsistency I find genuinely funny. The specific context of this question was the Burroughsian concept of the "third mind." As they explain it, the idea is that when two minds come together, along with various texts (and the art and ideas floating around in their heads), and work on something jointly, they can, metaphorically speaking, draw out a "third mind" created by the larger interaction between their ideas and thinking processes, something greater than the sum of its parts, an organic whole. This "third mind" is not something from the outside; it's all inherent in what's put into the process. The collaboration just reveals what was already implicit, but that act of revealing is still helpful and necessary.
I don't know about you, but to me this sounds very much like something that can happen when you use generative AI for brainstorming and research: it can surface new connections and elaborations of your ideas that you wouldn't have seen without the tool, and generate entirely new directions and ideas that weren't obvious before.
The podcasters insist that you can't possibly use AI like this, of course with a lot of vehemence and sincere offense at the very concept, but their reasoning is that they don't think the LLM counts as a mind in the human sense. My issue is that even if it doesn't (because it's not conscious, or because you think something has to be precisely the kind of reasoning, self-aware intelligence that humans are in order to count as a mind), it is still a fantastically complicated, nuanced, interesting pattern-matching and extrapolation machine trained on trillions of words, a huge fraction of all the text humans have ever produced, having learned the patterns of language production from all of it. The patterns it reproduces are abstracted patterns of human thought and ideas. There is knowledge from that enormous body of human writing in there that it can draw on as it sees relevant. It can even hallucinate in interestingly topical ways, not to mention the literal temperature slider. All of which are the product of other minds in some sense. And then there's the fact that, by their own admission, the third mind concept never requires anything brand new to be inserted into the system, only what's already implicitly there being drawn out.
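(For the curious: the "temperature slider" really is just a knob on the sampling distribution over next tokens. Here's a minimal sketch of temperature-scaled sampling; the toy vocabulary and logits are entirely made up for illustration, not from any real model.)

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=np.random.default_rng()):
    # Scale logits by 1/temperature, then softmax and sample.
    # Low temperature -> near-deterministic argmax; high -> more surprising picks.
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Hypothetical toy vocabulary and logits, purely for illustration.
vocab = ["mind", "machine", "ritual", "tarot"]
logits = [2.0, 1.2, 0.3, -0.5]
print(vocab[sample_next_token(logits, temperature=0.2)])  # almost always "mind"
print(vocab[sample_next_token(logits, temperature=2.0)])  # noticeably more varied
```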
Similarly, they mention a talk about an occultist in (I think? I'm not going to listen through again) the 70s who was interested in creating automatic mechanical story-generation machines using tarot, to experiment with inhuman art production. The primary issue he ran into was that to make the stories comprehensible, a human still had to interpret the cards and string them together with human-made connective tissue into a narrative. The question was whether LLMs were basically the completion of this project.
…and they answered no??? Why? Apparently because it's a "people-pleasing, one-word-at-a-time algorithm." As if that means anything. What does either of those features, insofar as they even apply to LLMs as a technology on the whole, have to do with anything? First of all, you can get rid of the people-pleasing. Second, it may generate one word at a time, but it maintains concepts and intentions over a far longer span than that; and even if it didn't, what's the difference between one word at a time and one concept at a time?
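"One word at a time" describes the sampling loop, not the model's horizon: every step is conditioned on the entire preceding sequence, which is exactly how longer-range intent survives word-level choices. A minimal sketch of the decoding loop, with a made-up stand-in model just to show the shape of the thing:

```python
import numpy as np

def generate(model, prompt_tokens, max_new_tokens, rng=np.random.default_rng()):
    # One token per step, but each step sees the ENTIRE sequence so far,
    # so structure that spans many tokens can persist through the loop.
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        logits = model(tokens)               # scores over vocab, given the full prefix
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        tokens.append(int(rng.choice(len(probs), p=probs)))
    return tokens

# Toy stand-in "model" whose prediction depends on the whole prefix
# (here, the running sum of all previous tokens). Purely illustrative;
# a real LLM computes its logits by attending over the full context.
VOCAB = 5
toy_model = lambda toks: np.eye(VOCAB)[sum(toks) % VOCAB] * 4.0
print(generate(toy_model, [1], 8))
```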
The point I'm making here is not that I'm deeply invested in using AI in these occult rituals or techniques or whatever, because I don't practice any occult stuff. The point is that even in supposedly critical, open-minded, exploratory communities, as long as they're left/progressive-adjacent, there's this knee-jerk reactionary mindset, and it's very grating.