Agentic AI outside coding
The level of competence, adaptiveness, and general capability even qwen-code (known to be a pretty bad agent framework/app compared to the best ones) displays is genuinely insane. I can just give it a task, turn on a sandbox and YOLO mode, and it will execute exploratory find, grep, curl, ls, which, etc. commands to figure out the structure of its environment and the data I'm giving it, and then write and execute command-line scripts (or python, and install dependencies, and use curl to access RES...
Are LLMs inherently unethical?
In my view, large language models are just tools. Just like all tools, they can have interesting uses: LLM agents; summarization, even in medical settings; named entity extraction; sentiment analysis and moderation to relieve the burden on the people traumatized by moderating huge social networks; a form of therapy for those who can't access, afford, or trust traditional therapy; grammar checking, like a better Grammarly; simple first-pass writing critique as a beta reader applying provided ...
The loom-maker's luddism
Software developers need a reality check. We are a trade that has spent decades feeling like we were the prime movers of progress through technological automation. Now that we've suddenly realized that we're subject to the same historical forces we applied to everyone else, we shouldn't back down. We cannot demand the benefits of the Industrial Revolution while simultaneously trying to build a software developer guild system to protect our own privileges. I see frequent statements online from de...
Do AI images and video mean "reality is broken?"
A lot of people seem to be angry, upset, or panicking about the advent of generative AI image and especially video models like Veo 3 and Sora 2, claiming that since these systems allow us to effortlessly create photorealistic videos and images completely out of whole cloth, our society is finally completely screwed: we have no way of getting information that we can know comes from actual reality. We can no longer trust anything. Leaving aside the possibility of technological solutions --- such as cr...
Why is machine decision-making bad?
Instead, at most, I believe machines should be used to automate the work of helping human decision-makers gather information and understand it, in order to further human decision-making power. Some key rules for this are:
On Gary Marcus
I have Gary Marcus in my blogroll. I agree with his idea that neuro-symbolic architectures are the way forward for robust AI. Side note: although, unlike him, I don't think merging symbolism with deep learning approaches is "ultimately" the right approach in some philosophical sense; I just think that as things currently stand, given what symbolic and connectionist approaches respectively can and can't achieve, a hybrid is...
Empires of AI by Karen Hao thoughts
Some thoughts on the book as I go through it. This is a book I really have to grapple with, as someone who loves advanced, cutting-edge technology and wants an accelerationist vision of fully automated luxury market anarchism, not an anti-civ, primitivist, or degrowther's vision of returning to the land --- or picking over urban remains --- with a "few nice perks left over," or the common leftist position of desiring to go back to just before the latest technology was invented, not seeing ...
More on LLMs and the occult
I recently watched the It Could Happen Here podcast episode "Occulture, William S. Burroughs, and Generative AI." The discussion of occulture itself, William S. Burroughs, the mentions of the CCRU and early Nick Land, hyperstition, and other things like that were decently comprehended (better than I can say for most of that group's understanding of subcultures) and somewhat interesting. Might even be worth a listen, although it's quite thinly sketched out. So I was somewhat hopeful when they g...
The intellectual property views of traditional artists in the age of NFTs and Generative "AI"
I recently came to a really interesting realization. So, okay. We all remember the huge cultural phenomenon that was NFTs, which appeared for like a couple months and then immediately disappeared again, right? What were NFTs exactly? I'll tell you: they were a way of building a ledger that links specific "creative works" (JPEGs, in the original case, but theoretically others as well -- and yes, most NFTs weren't exactly creative) to specific owners, in a way that was difficult to manipulate and e...
The phenomenology of agentic coding
AI coding agents are important because they fundamentally alter what it is like to program. That is what this essay is about: not whether this transformation is good or bad for programmers as a labor bloc, or economically, or socially; not whether it makes us more or less productive in the odd Taylorist sense that seems prevalent whenever the subject pops up. What interests me, and I believe should interest you, about this whole enterprise is the phenomenology of how this new human-machine asse...
The "dogshit economics" of AI according to Cory Doctorow
This is a response to this post. I'll respond to it point by point, because I think that in his rush to discount something he finds personally distasteful, Doctorow gets his economics and arguments very wrong. First, Doctorow argues that the current excitement around AI is a massive economic bubble, larger than previous bubbles like the dot-com boom or the WorldCom fraud. A significant portion of the stock market is tied up in a few AI companies that are not profitable and have no clear pa...
Two fictional analogies for large language models
When I use large language models to gain knowledge or perform tasks, they remind me of the Library of Babel: they're capable of outputting basically all grammatical assemblages of tokens, and their probability distribution contains (a fuzzily associative, highly compressed copy of) essentially all of human knowledge and thought. Thus contained within it is the complete catalogue of useful, insightful, correct, and wise things a human being might say, and all the wrong, dumb, or plain nonsensic...
What OpenAI should have been
OpenAI is fucking awful. We all know this. But I want to offer a vision of an alternative, better future --- what could have been, had they not been a techno-cult of privileged power-hungry tech bros totally divorced from reality, but instead people genuinely dedicated to the project of making "AGI" that benefits all of humanity. Imagine, if you will, a non-profit foundation, incorporated in a jurisdiction that holds non-profits legally to their charter, with a board representing a wide variety ...
Why am I being so mean to indie artists? Am I a tech bro?
To be perfectly clear, the purpose of this post, and all my other posts on this page expressing frustration at popular views concerning information ownership and "intellectual property," is not to punch down at independent artists and progressive activists. I care a lot about them, because I'm one, and I know many others; I'm deeply sympathetic to their values and goals and their need for a livelihood. The reason I write so much about this topic, directed as often if not more so at independent ar...
Better Offline with Ed Zitron
I started listening to Better Offline pretty soon after it launched. I had tried ChatGPT (3.5-Turbo) once not long before and been singularly unimpressed, been taken in by the talk about AI's inherent energy inefficiency and "huge" global climate impact, and was mad about it, so I was looking for good criticisms of the whole bubble, and BO seemed like just the thing. I was initially a little bit put off by the title --- it seemed to indicate a sort of reactionary anti-technologism (colloquially...
Explaining LLMs to laypeople
This is an attempt to explain large language models (e.g. ChatGPT or DeepSeek) in such a way that a layperson could actually understand what's going on under the hood --- not just in terms of lazy metaphors (whether overhyped or dismissive) but in terms of the actual algorithms they use. I think this is really important, because if we don't understand what AI is, it's very easy to get misled, either by overestimating its abilities, using it for the wrong tasks, or underestimating it --- any of ...
How to use a large language model ethically
Note: There is a caveat to my point on local models, however: datacenter models are more energy- and CO2-efficient than an equivalently sized model run locally. Additionally, datacenters can run larger and much more useful proprietary models, and sometimes there's a certain threshold of capability above which a model is worth the energy and time spent on it, and below which it's completely not worth it; above that threshold, not using it at all will just waste more time and energy --- after all, saving a human some ti...
Tag: ai
17 files