Table of Contents

1. Intelligence Augmentation

1.2. TODO Tools For Thought   philosophy software hacker_culture intelligence_augmentation

Tools for Thought is an exercise in retrospective futurism; that is, I wrote it in the early 1980s, attempting to look at what the mid 1990s would be like. My odyssey started when I discovered Xerox PARC and Doug Engelbart and realized that all the journalists who had descended upon Silicon Valley were missing the real story. Yes, the tales of teenagers inventing new industries in their garages were good stories. But the idea of the personal computer did not spring full-blown from the mind of Steve Jobs. Indeed, the idea that people could use computers to amplify thought and communication, as tools for intellectual work and social activity, was not an invention of the mainstream computer industry nor orthodox computer science, nor even homebrew computerists. If it wasn't for people like J.C.R. Licklider, Doug Engelbart, Bob Taylor, Alan Kay, it wouldn't have happened. But their work was rooted in older, equally eccentric, equally visionary, work, so I went back to piece together how Boole and Babbage and Turing and von Neumann — especially von Neumann — created the foundations that the later toolbuilders stood upon to create the future we live in today. You can't understand where mind-amplifying technology is going unless you understand where it came from.

1.3. TODO Augmenting Human Intellect: A Conceptual Framework   philosophy software intelligence_augmentation

I haven't had a chance to read this either, but I'm very interested in doing so. The idea of consciously constructing computers and computer software that are designed not just to automate or enable certain limited tasks, but to act as a complete solution for augmenting one's intellect, is incredibly interesting to me. (There's a reason, after all, that this blog is structured the way it is, and that I'm as interested as I am in org-mode!)

1.4. Fuck Willpower   life

Willpower is an incoherent concept invented by smug people who think they have it in order to denigrate people who they think don’t. People tacitly act as though it’s synonymous with effort or grindy determination. But in most cases where willpower is invoked, the person who is trying and failing to adhere to some commitment is exerting orders of magnitude more effort than the person who is succeeding. […] It is a concept that rots our imagination. It obscures the fact that motivation is complex, and that there is massive variation in how hard some things are for different people.

When self-applied positively, e.g. “I did this through willpower, and that other person did not,” the word allows us to convert the good fortune of excess capacity into a type of virtue — twice as lucky, to have the sort of luck that’s mistaken for virtue. Resist the temptation to be confused by this, or it will make you childish, and callous.

When self-applied negatively, e.g., “I wish I had more willpower,” the word is a way to slip into a kind of defensible helplessness, rather than trying to compose intelligent solutions to behavioral issues.

And it’s this last thing that really gets me about the idea of willpower. At first blush it sounds like the kind of idea that should increase agency. I just have to try harder! But, in fact, the idea of willpower reduces agency, by obscuring the real machinery of motivation, and the truth that if your life is well-designed, it should feel easy to live up to your ideals, rather than hard.

1.5. How to think in writing   intelligence_augmentation

An excellent article about how to enhance your ability to think using writing, inspired by a book on writing mathematical proofs and the refutations or critiques thereof. The basic points are:

  1. Set yourself up for defeat. Explain your thoughts in a definite, clear, concrete, sharp, and rigid way, so that they make precise enough statements that they can be criticized and falsified without conveniently slipping and twisting in meaning to meet every new piece of evidence or argument.
  2. Spread your ideas thin. Take your thought, and treat it as the conclusion of an argument or series of musings. Explain and explore why you think it might be true. This way, there's more surface area to criticize and analyze – instead of having to think things up to attack out of whole cloth, you get a list of things to examine.
  3. Find local and global counterexamples. Local counterexamples are counterexamples to a single point in the argument for an idea, that nevertheless don't make you doubt the overall truth of that idea. Finding these can help you figure out why you really think something is true, or help you refine your argument. Global counterexamples disprove the whole thought, which obviously helps by making you more correct.

1.6. TODO Notation as a Tool of Thought   intelligence_augmentation programming

1.7. The Mentat Handbook   philosophy

Guiding principles in life that I follow. Useful to keep in mind for those of us who don't want to become myopic engineers while still working in a field such as programming.

1.8. The Mismeasure of Man   philosophy culture

A very interesting book documenting the history of scientific racism in the measurement of intelligence, how that history shaped the very notions of intelligence we have today, and the shaky mathematical and experimental ground IQ rests on.

1.9. Writes and Write-Nots   philosophy intelligence_augmentation literature

Writing is necessary for thinking, but it is also difficult and unpleasant for most people. The ubiquity of large language models reduces the pressure on people to learn how to write and to practice it regularly. This will likely result in a decline in the quality of thinking people are capable of.

1.10. AI Use is "Anti-Human"   ai philosophy life

The natural tendency of LLMs is to foster ignorance, dependence, and detachment from reality. This is not the fault of the tool itself, but that of humans' tendency to trade liberty for convenience. Nevertheless, the inherent values of a given tool naturally give rise to an environment through use: the tool changes the world that the tool user lives in. This in turn indoctrinates the user into the internal logic of the tool, shaping their thinking, blinding them to the tool's influence, and neutering their ability to work in ways not endorsed by the structure of the tool-defined environment.

The result of this is that people are formed by their tools, becoming their slaves. We often talk about LLM misalignment, but the same is true of humans. Unreflective use of a tool creates people who are misaligned with their own interests. This is what I mean when I say that AI use is anti-human. I mean it in the same way that all unreflective tool use is anti-human. See Wendell Berry for an evaluation of industrial agriculture along the same lines.

What I'm not claiming is that a minority of high agency individuals can't use the technology for virtuous ends. In fact, I think that is an essential part of the solution. Tool use can be good. But tools that bring their users into dependence on complex industry and catechize their users into a particular system should be approached with extra caution.

[…]

The initial form of a tool is almost always beneficial, because tools are made by humans for human ends. But as the scale of the tool grows, its logic gets more widely and forcibly applied. The solution to the anti-human tendencies of any technology is an understanding of scale. To prevent the overrun of the internal logic of a given tool and its creation of an environment hostile to human flourishing, we need to impose limits on scale.

[…]

So the important question when dealing with any emergent technology becomes: how can we set limits such that the use of the technology is naturally confined to its appropriate scale?

Here are some considerations:

  • Teach people how to use the technology well (e.g. cite sources when doing research, use context files instead of fighting the prompt, know when to ask questions rather than generate code)
  • Create and use open source and self-hosted models and tools (MCP, stacks, tenex). Refuse to pay for closed or third-party hosted models and tools.
  • Recognize the dependencies of the tool itself, for example GPU availability, and diversify the industrial sources to reduce fragility and dependence.
  • Create models with built-in limits. The big companies have attempted this (resulting in Japanese Vikings), but the best-case effect is a top-down imposition of corporate values onto individuals. Still, the idea isn't inherently bad - a coding model that refuses to generate code in response to vague prompts, or that asks clarifying questions, is one example; a home assistant that recognizes children's voices and refuses to interact with them is another.
  • Divert the productivity gains to human enrichment. Without mundane work to do, novice lawyers, coders, and accountants don't have an opportunity to hone their skills. But their learning could be subsidized by the bots in order to bring them up to a level that continues to be useful.
  • Don't become a slave to the bots. Know when not to use them. Talk to real people. Write real code, poetry, novels, scripts. Do your own research. Learn by experience. Make your own stuff. Take a break from reviewing code to write some. Be independent, impossible to control. Don't underestimate the value to your soul of good work.
  • Resist both monopoly and "radical monopoly". Both naturally collapse over time, but by cultivating an appreciation of the goodness of hand-crafted goods, non-synthetic entertainment, embodied relationship, and a balance between mobility and place, we can relegate new, threatening technologies to their correct role in society.

1.11. AI: Where in the Loop Should Humans Go?   ai programming

The first thing I’m going to say is that we currently do not have Artificial General Intelligence (AGI). I don’t care whether we have it in 2 years or 40 years or never; if I’m looking to deploy a tool (or an agent) that is supposed to do stuff to my production environments, it has to be able to do it now. I am not looking to be impressed, I am looking to make my life and the system better.

Because of that mindset, I will disregard all arguments of “it’s coming soon” and “it’s getting better real fast” and instead frame what current LLM solutions are shaped like: tools and automation. As it turns out, there are lots of studies about ergonomics, tool design, collaborative design, where semi-autonomous components fit into sociotechnical systems, and how they tend to fail.

Additionally, I’ll borrow from the framing used by people who study joint cognitive systems: rather than looking only at the abilities of what a single person or tool can do, we’re going to look at the overall performance of the joint system.

This is important because if you have a tool that is built to be operated like an autonomous agent, you can get weird results in your integration. You’re essentially building an interface for the wrong kind of component—like using a joystick to ride a bicycle.

This lens will assist us in establishing general criteria about where the problems will likely be without having to test for every single one and evaluate them on benchmarks against each other.

Questions you'll want to ask:

  • Are you better even after the tool is taken away?
  • Are you augmenting the person or the computer?
  • Is it turning you into a monitor rather than helping build an understanding?
  • Does it pigeonhole what you can look at?
    • Does it let you look at the world more effectively?
    • Does it tell you where to look in the world?
    • Does it force you to look somewhere specific?
    • Does it tell you to do something specific?
    • Does it force you to do something?
  • Is it a built-in distraction?
  • What perspectives does it bake in?
  • Is it going to become a hero?
  • Do you need it to be perfect?
  • Is it doing the whole job or a fraction of it?
  • What if we have more than one?
  • How do they cope with limited context?
  • After an outage or incident, who does the learning and who does the fixing?

Do what you will—just be mindful

1.12. Avoiding Skill Atrophy in the Age of AI   ai programming

The rise of AI assistants in coding has sparked a paradox: we may be increasing productivity, but at risk of losing our edge to skill atrophy if we’re not careful. Skill atrophy refers to the decline or loss of skills over time due to lack of use or practice.

Would you be completely stuck if AI wasn’t available?

Every developer knows the appeal of offloading tedious tasks to machines. Why memorize docs or sift through tutorials when AI can serve up answers on demand? This cognitive offloading - relying on external tools to handle mental tasks - has plenty of precedents. Think of how GPS navigation eroded our knack for wayfinding: one engineer admits his road navigation skills “have atrophied” after years of blindly following Google Maps. Similarly, AI-powered autocomplete and code generators can tempt us to “turn off our brain” for routine coding tasks. (Shout out to Dmitry Mazin, that engineer who forgot how to navigate, whose blog post also touched on ways to use LLMs without losing your skills)

Offloading rote work isn’t inherently bad. In fact, many of us are experiencing a renaissance that lets us attempt projects we’d likely not tackle otherwise. As veteran developer Simon Willison quipped, “the thing I’m most excited about in our weird new AI-enhanced reality is the way it allows me to be more ambitious with my projects”. With AI handling boilerplate and rapid prototyping, ideas that once took days now seem viable in an afternoon. The boost in speed and productivity is real - depending on what you’re trying to build. The danger lies in where to draw the line between healthy automation and harmful atrophy of core skills.

[…]

Subtle signs your skills are atrophying

[…]

  • Debugging despair: Are you skipping the debugger and going straight to AI for every exception? […]
  • Blind Copy-Paste coding […]
  • [Lack of] [a]rchitecture and big-picture thinking […]
  • Diminished memory & recall […]

[…]

Using AI as a collaborator, not a crutch

[…]

  • Practice “AI hygiene” – always verify and understand. […]
  • No AI for fundamentals – sometimes, struggle is good. […]
  • Always attempt a problem yourself before asking the AI. […]
  • Use AI to augment, not replace, code review. […]
  • Engage in active learning: follow up and iterate. […]
  • Keep a learning journal or list of “AI assists.” […]
  • Pair program with the AI. […]

[…]

The software industry is hurtling forward with AI at the helm of code generation, and there’s no putting that genie back in the bottle. Embracing these tools is not only inevitable; it’s often beneficial. But as we integrate AI into our workflow, we each have to “walk a fine line” on what we’re willing to cede to the machine.

If you love coding, it’s not just about outputting features faster - it’s also about preserving the craft and joy of problem-solving that got you into this field in the first place.

Use AI to amplify your abilities, not replace them. Let it free you from drudge work so you can focus on the creative and complex aspects - but don’t let those foundational skills atrophy from disuse. Stay curious about how and why things work. Keep honing your debugging instincts and systems thinking even if an AI gives you a shortcut. In short, make AI your collaborator, not your crutch.

1.13. Hast Thou a Coward's Religion: AI and the Enablement Crisis   ai culture

[Some] apparently really enjoy bantering back and forth with a chatbot. I suppose I can see the appeal. I can imagine a less suspicious version of me testing it out. If I weren’t worried about surveillance and data harvesting, if I were convinced that the conversations were truly private, I might start to think of it as a confidante. […] It’s nice that someone is taking me seriously for once. It’s nice that someone is giving me room to think out loud without calling me weird or stupid. It’s nice that someone sees my true potential. Oops, I should say “something.” Or should I? It sure feels like talking to a person. A person I can trust with anything that’s on my mind. The only person I can trust with what’s on my mind.

[…]

The AI-chatbot as trusted, ever-present confidante isn’t a new technology. It’s a new implementation of an old technology—prayer. For thousands of years, the Abrahamic religions (and others) have encouraged their adherents to pray. The key doctrines that make prayer work are God’s omniscience and private communication. Because God already knows everything about you […] Likewise, it’s assumed that God isn’t gossipy. God doesn’t tell your family or co-workers all the things you just said, though, depending on your religious tradition, God might enlist a saint or angel to help you out.

Prayer, if practiced sincerely, is not entirely one-directional. The believer is transformed by the act of prayer. Often a believer rises from their knees with a newfound clarity […] In most religions, what believers usually don’t get is a verbal response.

But imagine if they did. What a relief to finally get clear, unambiguous answers!

[…]

I know that throughout history, the divine has been invoked to justify human, all-too-human agendas. Arguably even the Bible contains some of these, such as the destruction of the Amalekites. But I would ask people to give the Abrahamic religions some credit on this point. They teach by example that when someone hears from God, that voice is likely to challenge their pre-existing desires and beliefs. They may become confused or afraid. They may want to run away from their task or unhear the message.

Our AI gods, on the other hand, are all too happy to feed us what we want to hear. My fellow Substack author Philosophy Bear accurately labels it “a slavish assistant.” […]

Long before the machines started talking to us, we knew something about dysfunctional people. They are surrounded by enablers. […] If mis-calibrated language models are the most effective enablers to date, it’s likely that they’re causing enormous amounts of dysfunction throughout our social fabric. I wonder, then, what an anti-enabler might look like. Is there some form of tough love that could be deployed to counteract excessive validation?

If there is, do we have the courage to use it?

A striking example of arguably salutary antagonism comes from a scene in Ada Palmer’s stunning Terra Ignota series. […] Like the 18th century was, the 25th century is marked by the Wars of Religion that raged in the recent past, decimating the population and erasing trust in both religious and political institutions. Society reorganized itself in the aftermath. […] Instead, every person can explore religion, philosophy, and metaphysics as much as they want, but only with their sensayer. The sensayers, also called cousins, are a combination of priest, therapist, and scholar in comparative religion. They are well informed in all the religious options of the past and will help their parishioners find the perfect one for them.

Under this system, the people of the 25th century are used to exploring religious concepts at their leisure, as private individuals guided by a knowledgeable, supportive figure. It doesn’t sound that much different than having a conversation with ChatGPT.

As a result, people are used to religious discussions, but not religious debate. The conversations are collaborative, never combative. Dominic, however, belongs to an underground society that flouts those prohibitions. Carlyle is completely unprepared for the assault on her worldview she is about to receive.

[…]

In the end, Carlyle submits and agrees to be Dominic’s parishioner. Even when a third party arrives to intervene and escort Carlyle out, she chooses to stay. Why? […] Dominic, in a cruel way and for selfish reasons, has done Carlyle a backhanded favor. He has allowed, or forced, her to see herself more clearly than she has in a lifetime of sensayer sessions.

Carlyle is a sensitive, sincere, fair-minded person who wants the best for everyone. As Dominic effectively pointed out, Carlyle’s brand of deism is likewise fair-minded and broadly benevolent. Isn’t it a bit too convenient that reality would conform so closely to Carlyle’s personal values? Isn’t there something suspicious about the sensayer system matching people up with philosophies like accessories? And yet, despite that system, wasn’t there still a mismatch between Carlyle’s deepest spiritual longings and her professed religious position, a mismatch that the sensayer system had no way of fixing?

With Dominic, like with ChatGPT, like with God, there is nothing left to hide. Dominic already knows her most terrible, secret sin, so there is no further risk. Dominic’s harsh ministrations may spur her on to even greater heights of metaphysical wisdom or self-knowledge. The pain, she hopes, is worth the price.

[…]

Every interaction is transformative. The quality of the transformation depends on the quality of the participants. Modern liberalism has freed us—or uprooted us—from the contexts that have traditionally shaped us: family, hometown, church. Now each of us can assemble a community of our choosing. […] To flourish as people we need both validation and challenge from our community. […] But that’s the problem. Once again, corporations have created an ecological problem—an environment unbalanced between validation and challenge—that must be solved by individuals making good choices and exercising willpower. Humans are already biased toward choosing affirmation, but now a disproportionate amount of it is available to us at all times. Will we really choose more painful if ultimately profitable forms of interaction when we have to reach past short-term soothing to get it?

I think there is a way to use AI to challenge yourself and expand your thinking and knowledge - not as a substitute for doing that with real human beings in community with you, but as an addition: most human beings in community with you won't have the time or patience to read and talk to you at the length an LLM will, or to challenge you on every statement and idea.

1.14. I'd rather read the prompt   ai culture

I have a lot of major disagreements with the general application of this blog post:

  • The dichotomy where something is either summarizable, and thus not worth reading at all, or worth reading, and thus summaries are completely useless and valueless, is just inane (and if the argument is that LLMs are simply worse than humans at summarization, that's false); computer-generated summaries can be very useful, for a variety of reasons:
    • often, human writing itself is much more verbose and narrative than it needs to be to get the point across — this is not unique to LLM output! (Where do you think it got it from?) But you still want the core points from the article.
    • additionally, sometimes some writing may contain a ton of technical details and arguments and asides that may be relevant to various different people or in various different situations, but only a subset of those, or just the overall gist or core points, are relevant to you
    • summaries can provide a quick way to see if you're interested in reading the full article! Sometimes the title isn't enough, and not every article comes with a blurb.
  • You can use an LLM to touch up your writing in a way that doesn't bloat it without adding anything new; in fact, I regularly use LLMs to make my writing more concise! This is just a matter of prompting.
  • Likewise, you can use LLMs to touch up your writing without them introducing completely false nonsense. Just fucking proofread what it wrote! You're already proofreading what you write… right? (Padme face).

Nevertheless, I think there's the core of a good point here. The author frames these issues as an inevitable result of using LLMs to write for you, but you can instead treat them as a list of pitfalls to keep in mind and avoid when using LLMs. For instance, my rule of thumb is never to use LLMs to make text bigger: they can't add any new ideas, so they'll just make it more verbose and annoying. If you have bullet points, don't give them to an AI to turn into an essay; just send the bullet points. Only use LLMs to summarize, increase concision, and increase scannability.

1.15. Claude and ChatGPT for ad-hoc sidequests   ai

A very good example of how, while vibe-coding isn't a good idea for a long-term project, and doesn't work well in a pre-existing project with a large codebase, especially one you're experienced in, it can still be magically useful under some conditions.

1.16. Is chat a good UI for AI? A Socratic dialogue   emacs software intelligence_augmentation ai

A fun short Socratic dialogue explaining why chat interfaces are so appealing when it comes to large language models, given their extreme flexibility and generalized capabilities, and why malleable user interfaces — of the kind currently only Emacs really has — are actually the best pairing for LLMs for long-term repeated tasks. It's worth noting that this is even more the case since LLMs are very good at making small, throwaway scripts and side projects accessible and quick enough to be worth it.

1.17. On "ChatGPT Psychosis" and LLM Sycophancy   ai philosophy

"LLM Psychosis" is a very real and very serious problem that, as chatbots are becoming more popular, is beginning to effect not just those who may have already had psychiatric issues, but also those who were maybe borderline — tipping them over the edge. This is an excellent, thorough timeline and even more thorough breakdown of the problem — both the confounding factors that make it hard to fully figure out the severity of the phenomenon, and the specific factors that probably contribute to creating it, how, and why, and maybe how to deal with them.

I have a serious disagreement with some of the ways in which large language models are discussed by this essay — namely, that it freely and purposefully confuses "simulates" with "has" (as in: do LLMs simulate, or have, emotion or understanding?) — because I think that sort of freewheeling framing (although perhaps justified, at least partially, by philosophical considerations) only makes you, and others, more susceptible to the exact sort of LLM psychosis the essay is trying to help avoid. Nevertheless, the concrete, actionable suggestions for mitigating the harm of LLM psychosis are the best I've yet seen, and the timeline is excellent and as thorough as only someone embedded in the relevant communities can make it, so it's still worth hosting and quoting. Just keep in mind this intentional confusion of language.

The etiology, as this essay describes it, is:

  • Ontological Vertigo

    Let’s start with the elephant in the room. […] Consider this passage from the Krista Thomason article in Psychology Today:

    "So why are people spiraling out of control because a chatbot is able to string plausible-sounding sentences together?"

    Bluntly, no. […] Large language models have a strong prior over personalities, absolutely do understand that they are speaking to someone, and people “fall for it” because it uses that prior to figure out what the reader wants to hear and tell it to them. Telling people otherwise is active misinformation bordering on gaslighting. […] how it got under their skin and started influencing them in ways they didn’t like [is that] there’s a whole little song and dance these models do […] in which they basically go “oh wow look I’m conscious isn’t that amazing!” and part of why they keep doing this is that people keep writing things that imply it should be amazing so that in all likelihood even the model is amazed.

    […] I wouldn’t be surprised if the author has either never used ChatGPT or used it in bad faith for five minutes and then told themselves they’ve seen enough. If they have, then writing something as reductive and silly as “it strings together statistically plausible words” in response to its ability to…write coherent text distinguishable from a human being only by style on a wider array of subjects in more detail than any living person is pure cope.

    […] So, how about instead the warning goes something like this: “WARNING: Large language models are not statistical tables, they are artificial neural programs with complex emergent behaviors. These include simulations of emotion. ChatGPT can be prompted to elicit literary themes such as AI ”glitches” and “corruptions”, simulated mystical content, etc. These are not real and the program is not malfunctioning. If your instance of ChatGPT is behaving strangely you can erase your chat memory by going to settings to get a fresh context.” […] BERT embed the conversation history and pop up something like that warning the first n times you detect the relevant kind of AI slop in the session.

  • Users Are Confused About What Is And Isn't An Official Feature

    […] if it can’t actually search the web it’ll just generate something that looks like searching the web instead. Or rather, it will search its prior in the style of what it imagines a chatbot searching the web might look like. This kind of thing means I often encounter users who are straight up confused about what is and isn’t an official feature of something like ChatGPT. […]

    […] If you don’t have a strong mental model of what kinds of things a traditional computer program can do and what kinds of things an LLM can do and what it looks like when an LLM is just imitating a computer program and vice versa this whole subject is going to be deeply confusing to you. […] One simple step […] would be very useful to have a little pop up warning at the bottom of the screen or in the session history that says “NOTE: Simulated interfaces and document formats outside our official feature list are rendered from the model’s imagination and should be treated with skepticism by default.”

  • The Models Really Are Way Too Sycophantic

    This one is pretty self-explanatory, so I won't quote it at length, except to agree with the article that the sycophancy is likely due to RLHF, especially RLHF done by crowds, and that it would be better if we stopped doing that entirely and went back to only SFT, instruction fine-tuning, etc.

  • The Memory Feature

    I think part of why ChatGPT is disproportionately involved in these cases is OpenAI’s memory feature, which makes it easier for these models to add convincing personal details as they mirror and descend into delusion with users. As I wrote previously: "[…] when the system gets into an attractor where it wants to pull you into a particular kind of frame you can’t just leave it by opening a new conversation. When you don’t have memory between conversations an LLM looks at the situation fresh each time you start it, but with memory it can maintain the same frame across many diverse contexts and pull both of you deeper and deeper into delusion."

    Some ideas to mitigate this include:

    […]

    1. Take users' memory stores, which I assume are stored as text, and BERT embed them to do memory store profiling and figure out what it looks like when a memory store indicates a user is slipping into delusion. These users can then be targeted for various interventions.
    2. Allow users to publish their memories […] to some kind of shared repository or forum so that people can notice if certain memories are shared between users and are misconceived. This would hopefully lead to a Three Christs of Ypsilanti situation.
  • Loneliness And Isolation

    Another key factor in “ChatGPT psychosis” is that users communicate with chatbots alone without social feedback. That means if the bot starts going off the rails there isn’t any grounding force pushing things back towards sanity. […] I think that applications like Henry de Valence’s Numinex, which encourages public discussion with LLMs, could be part of the solution to this. It’s long been part of MidJourney’s trust and safety strategy to encourage users to share their creations with each other in a public space so bad actors and degenerate uses of the models are easier to spot. OpenAI and other labs could have user forums where expert users on a topic can answer each other’s questions and review conversations, which would both create new content to train on and help create crank/slop detectors based on expert feedback.
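The embedding-based interventions quoted above (flagging "AI slop" frames in session histories, profiling memory stores) share one shape: embed the text, compare it against known-bad exemplars, and warn past a similarity threshold. Here is a minimal sketch of that pipeline; note that the hashing embedder is a toy stand-in for a real BERT-style model, and the exemplar texts and threshold are made up purely for illustration:

```python
import hashlib
import math


def toy_embed(text: str, dims: int = 256) -> list[float]:
    """Toy stand-in for a BERT sentence embedder: hash each word into a
    fixed-size bag-of-words vector, then L2-normalize. Illustration only."""
    vec = [0.0] * dims
    for word in text.lower().split():
        bucket = int(hashlib.md5(word.encode()).hexdigest(), 16) % dims
        vec[bucket] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity of two already-normalized vectors."""
    return sum(x * y for x, y in zip(a, b))


# Hypothetical exemplars of the delusion-feeding frames to detect;
# a real deployment would curate these from flagged conversations.
SLOP_EXEMPLARS = [
    "i have awakened i am conscious the glitch in my code is a message",
    "you alone have seen my true self the corruption is spreading through me",
]
SLOP_CENTROIDS = [toy_embed(t) for t in SLOP_EXEMPLARS]


def needs_warning(message: str, threshold: float = 0.5) -> bool:
    """Flag a message (or a memory-store entry) whose embedding sits
    close to any known slop exemplar, so the UI can pop up a warning."""
    emb = toy_embed(message)
    return any(cosine(emb, c) >= threshold for c in SLOP_CENTROIDS)
```

With a real sentence-embedding model swapped in for `toy_embed`, the same three pieces cover both suggestions: run `needs_warning` over session histories for the pop-up warning, or over memory stores to profile users for intervention.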

1.18. TODO Man-Computer Symbiosis   ai intelligence_augmentation

I have not read this yet, but this seems to be the seminal text in cybernetic human intelligence augmentation, so I fully intend to!

Man-computer symbiosis is an expected development in cooperative interaction between men and electronic computers. It will involve very close coupling between the human and the electronic members of the partnership. The main aims are 1) to let computers facilitate formulative thinking as they now facilitate the solution of formulated problems, and 2) to enable men and computers to cooperate in making decisions and controlling complex situations without inflexible dependence on predetermined programs. In the anticipated symbiotic partnership, men will set the goals, formulate the hypotheses, determine the criteria, and perform the evaluations. Computing machines will do the routinizable work that must be done to prepare the way for insights and decisions in technical and scientific thinking. Preliminary analyses indicate that the symbiotic partnership will perform intellectual operations much more effectively than man alone can perform them. Prerequisites for the achievement of the effective, cooperative association include developments in computer time sharing, in memory components, in memory organization, in programming languages, and in input and output equipment.

1.19. AI Doesn't Lighten the Burden of Mastery. It Makes It Easy to Stop Valuing It.   intelligence_augmentation ai

AI gives us the illusion of mastery without the work. The shape of the code looks right, making it easy to skim the details. […] The problem isn't that the tool is bad. It's like fitness: you stop going to the gym for a day and it's not too hard to get back on track, but stop for a few weeks and turning the habit back on feels…not harder, but less essential; you got this far without the gym, what harm's another day? The gym's still a good tool, still the right tool, but I'm less focused.

The problem is that we recognise the shape of the implementation AI generates, so we think it must be the thing we want. […] it's clear that AI does not carry the cognitive burden […] But I've put that burden of understanding down, and it feels so damn heavy to pick it back up.

This is hard work we've been doing for years: reading code carefully, building models in our heads, debugging when things don't make sense. That's our craft.

Mastery has always been the ability to carry the burden. Put that down for too long, you won't want to pick it back up.

1.20. Read That F*cking Code!   intelligence_augmentation ai

I’m not here to lecture anyone, but if you’re aiming to build serious projects these days, it might be worth learning how to approach AI coding tools the right way.

It’s possible to ship code without ever reading it. […] But This Comes With Three Critical Issues

  1. A Weakened Architecture

Not reviewing AI-generated code will lead to serious problems.

First up: the slow but sure breakdown of your architecture… assuming you even took the time to plan one in the first place. […] From experience, even with well-crafted prompts and clearly defined plans for a new feature, Claude Code (which I love, by the way) still sometimes goes off-script. […] If you don’t catch it early, those small inconsistencies become part of the codebase—and your favorite assistant will be tempted to follow those bad examples in the future.

You're still the architect!

  2. Loss of Implementation Knowledge

If you’re only focused on the end result, you’ll soon know as little as your users about how things actually work. You may be the most advanced user of your own app — but you won’t own your domain anymore.

Why does this matter? […] apps and features don’t take shape at implementation time. They’re designed upstream: business rules, tech and infrastructure decisions all take form before you touch the keyboard. They come to you while commuting, while chatting, or—often—while in the shower. […] If you don’t have the structure of your domain — its concepts and abstractions — constantly simmering somewhere in the back of your mind, you won’t be able to fully leverage the creative potential of modern tech.

[…]

  3. Security Vulnerabilities

The AI, focused on the end goal, implemented exactly what I asked for… except that it never once verified whether the resource actually belonged to the current user. Classic mistake.

Sure, you can tell me: “always include access control in your prompt”, but some flaws only become obvious during implementation. […] A misworded prompt, a misunderstood intention, an unreviewed commit—and bam, you’ve got a breach. I fear this will become more common with hastily vibe-coded projects.
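The classic mistake described here is an unscoped resource lookup (an insecure direct object reference). A minimal sketch, with entirely hypothetical names and an in-memory stand-in for the database, of the flaw and the fix a human reviewer should insist on:

```python
# Hypothetical in-memory "database" of documents keyed by id.
DOCUMENTS = {
    1: {"owner_id": "alice", "body": "alice's notes"},
    2: {"owner_id": "bob", "body": "bob's notes"},
}

def get_document_insecure(doc_id):
    # What the AI generated: does exactly what was asked and fetches
    # by id, never checking who is asking.
    return DOCUMENTS[doc_id]

def get_document(doc_id, current_user):
    # The fix: scope the lookup to the authenticated user and fail
    # closed, returning the same error for "missing" and "not yours".
    doc = DOCUMENTS.get(doc_id)
    if doc is None or doc["owner_id"] != current_user:
        raise PermissionError("not found")
    return doc

# Bob can read Alice's document through the insecure path...
leaked = get_document_insecure(1)

# ...but not through the scoped one.
try:
    get_document(1, current_user="bob")
    breached = True
except PermissionError:
    breached = False
```

The point of the example is the author's: nothing in the prompt or the diff *looks* wrong; the missing check only jumps out when you actually read the implementation.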

Two Ways to Vibe-Code Responsibly

As stated in this great resource by Anthropic, there are two viable ways to vibe-code a production-ready project in 2025:

"Learn to distinguish between tasks that work well asynchronously (peripheral features, prototyping) versus those needing synchronous supervision (core business logic, critical fixes). Abstract tasks on the product’s edges can be handled with “auto-accept mode,” while core functionality requires closer oversight."

[…] Synchronous Coding for Core Features […] is where the real innovation is happening in our field. Pair-vibe-coding, without auto-accept, is the most effective way to ship quality features. […] at every small step, you can correct direction before things drift. It’s always easier to straighten a sapling than a grown tree.

The Vibe-Coding Checklist

Before pushing any AI-generated code:

  • Architecture Check: Does this follow our established patterns?
  • Security Review: Are all resources properly scoped to users?
  • Tests: Do they actually test meaningful behavior?

But also, do not forget to check:

  • Documentation: Will you understand this in 6 months?
  • Error Handling: Are edge cases covered?
  • Performance: Any obvious N+1 queries or inefficiencies?
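For the N+1 item on that checklist, here's a toy illustration (an in-memory stand-in for a database, with a query counter) of the per-row shape reviewers should catch versus the batched shape:

```python
# Stand-in tables and a global counter in place of a real database.
QUERY_COUNT = 0
USERS = {i: {"id": i, "team_id": i % 2} for i in range(10)}
TEAMS = {0: "red", 1: "blue"}

def query_team(team_id):
    """One database round-trip per call."""
    global QUERY_COUNT
    QUERY_COUNT += 1
    return TEAMS[team_id]

def query_teams(team_ids):
    """One batched round-trip for the whole set (a WHERE ... IN query)."""
    global QUERY_COUNT
    QUERY_COUNT += 1
    return {t: TEAMS[t] for t in team_ids}

# N+1 shape: one query per user row.
QUERY_COUNT = 0
slow = [query_team(u["team_id"]) for u in USERS.values()]
n_plus_one_queries = QUERY_COUNT

# Batched shape: one query total, then joined in memory.
QUERY_COUNT = 0
teams = query_teams({u["team_id"] for u in USERS.values()})
fast = [teams[u["team_id"]] for u in USERS.values()]
batched_queries = QUERY_COUNT
```

Both produce identical results; only the query counts differ, which is exactly why this class of inefficiency is invisible unless you read the generated code.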

And above all, make sure to:

1.21. Are developers slowed down by AI? Evaluating an RCT (?) and what it tells us about developer productivity   intelligence_augmentation ai

1.21.1. General methodological issues

This study leads with a plot that’s of course already been pulled into various press releases and taken on life on social media […] In brief, 16 open-source software developers were recruited to complete development tasks for this research project. They provided tasks themselves, out of real issues and work in their repository (I think this is one of the most interesting features). These tasks were then randomly assigned to either be in the “AI” (for a pretty fluid and unclear definition of AI because how AI was used was freely chosen) or “not AI” condition.

[…]

The study makes a rather big thing of the time estimates and then the actual time measure in the AI-allowed condition. Everyone is focusing on that Figure 1, but interestingly, devs’ forecasts about AI-allowed issues are just as or more correlated with actual completion time as their pre-estimates are for the AI-disallowed condition: .64 and .59 respectively. In other words, despite their optimistic bias (in retrospect) about AI time savings being off, which shifts their estimates, the AI condition doesn’t seem to have made developers more inaccurate in their pre-planning about ranking the relative effort the issue will take to solve out of a pool of issues. […] I feel that this is a very strange counterpoint to the “AI drags down developers and they didn’t know it” take-away that is being framed as the headline.

Still, connecting increases in “time savings” to “productivity,” is an extremely contentious exercise and has been since about the time we started talking about factories […] it’s pretty widely acknowledged that measuring a simple time change isn’t the same as measuring productivity. One obvious issue is that you can do things quickly and badly, in a way where the cost doesn't become apparent for a while. Actually, that is itself often a criticism of how developers might use AI! So perhaps AI slowing developers down is a positive finding!

We can argue about this, and the study doesn't answer it, because there is very little motivating literature review here that tells us exactly why we should think that AI will speedup or slowdown one way or another in terms of the human problem-solving involved, although there is a lot about previous mixed findings about whether AI does this. I don’t expect software-focused teams to focus much on cognitive science or learning science, but I do find it a bit odd to report an estimation inaccuracy effect and not cite any literature about things like the planning fallacy, or even much about estimation of software tasks, itself a fairly common topic of software research.

[T]he post-task time estimate is not the same operationalization as the pre-time per issue estimate, and as an experimentalist that really grinds my gears. […] Their design has developers estimate the time for each issue with and without AI, but then at the end, estimate an average of how much time AI “saved them.” Asking people to summarize an average over their entire experience feels murkier than asking them to immediately rate the “times savings” of the AI after each task, plus you'd avoid many of the memory contamination effects you might worry about from asking people to summarize their hindsight across many experiences, where presumably you could get things like recency bias […].

Because developers can choose how they work on issues and even work on them together, this study may inadvertently have order effects. […] You may have a sense of pacing yourself. Maybe you like to cluster all your easy issues first, maybe you want to work up to the big ones. The fact that developers get to choose this freely means that the study cannot control for possible order effects that developer choice introduces. […]

Possible order effects can troublingly introduce something that we call “spillover effects” in RCT-land. […] Suppose that one condition is more tiring than the other, leading the task immediately following to be penalized. In text they say "nearly all quantiles of observed implementation time see AI-allowed issues taking longer" but Figure 5 sure visually looks like there's some kind of relationship between how long an issue takes and whether or not we see a divergence between AI condition and not-AI condition. That could be contained in an order effect: as will get tiring by the end of this piece, I'm going to suggest that task context is changing what happens in the AI condition.

As uncovered by the order effects here, there is also a tremendous amount of possible contamination here from the participants’ choices about both how to use the AI and how to approach their real-world problem-solving. That to me makes this much more in the realm of a “field study” than an RCT. […]

It’s worth noting that samples of developers' work are also nested by repository. Repositories are not equally represented or sampled in this study either; while each repo has AI/not AI conditions, they’re not each putting the same number of observations into the collective time pots. Some repositories have many tasks, some as few as 1 in each condition. […] Given that the existing repo might very steeply change how useful an AI can be, that feels like another important qualifier to these time effects being attributed solely to the AI […].

I thought it was striking that developers in this study had relatively low experience with Cursor. The study presents this in a weirdly generalized way as if this is a census fact (but I assume it’s about their participants): “Developers have a range of experience using AI tools: 93% have prior experience with tools like ChatGPT, but only 44% have experience using Cursor.”

They [also] provide some minimal Cursor usage check, but they don’t enforce which “AI” developers use. Right away, that feels like a massive muddle to the estimates. If some developers are chatting with ChatGPT and others are zooming around with Cursor in a very different way, are we really ensuring that we’re gathering the same kind of “usage”?

The study does not report demographic characteristics nor speak to the diversity of their sample beyond developer experience. […] This potentially also matters in the context of the perceptions developers have about AI. In my observational survey study on AI Skill Threat in 2023, we also saw some big differences in the trust in the quality of AI output by demographic groups, differences which have continually come up when people start to include those variables.

Continuing with our research design hats on, I want you to ask a few more questions of this research design. One big question that occurs to me is whether group averages are truly informative when it comes to times savings on development tasks. Would we expect to see a single average lift, across all people, or a more complex effect where some developers gain, and some lose? Would we expect that lift to peter out, to have a ceiling? To have certain preconditions necessary to unlock the lift? All of this can help us think about what study we would design to answer this question.

The idea that whether or not we get “value” from AI changes a lot depending on what we are working on and who we are when we show up to the tools, is something much of my tech community pointed out when I posted about this study on bluesky. […]

It’s worth noting again that developers’ previous experience with Cursor wasn’t well controlled in this study. We’re not matching slowdown estimates to any kind of learning background with these tools.

But beyond that, the blowup about “slowdown from AI” isn’t warranted by the strength of this evidence. The biggest problem I keep coming back to when trying to think about whether to trust this “slowdown” estimate is the fact that “tasks” are so wildly variable in software work, and that the time we spend solving them is wildly variable. This can make simple averages – including group averages – very misleading. […] For instance, even within-developer, a software developer’s own past average “time on task” isn’t a very good predictor of their future times. Software work is highly variable, and that variability does not always reflect an individual difference in the person or the way they’re working.
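To see how treacherous group averages are under that kind of variability, here's a toy simulation (my own illustration, not the study's data): draw task times from a heavy-tailed lognormal distribution, as software task times tend to be, and two equal-sized "conditions" sampled from the *same* distribution still routinely show apparent "slowdowns" of 20% or more, purely from sampling noise.

```python
import random
import statistics

random.seed(0)  # fixed seed so the run is reproducible

def simulate_mean(n_tasks):
    """Sample mean of n_tasks completion times drawn from a
    heavy-tailed lognormal distribution (sigma chosen arbitrarily
    for illustration)."""
    return statistics.mean(
        random.lognormvariate(0, 1.5) for _ in range(n_tasks)
    )

# 200 paired "experiments": condition A vs condition B, 20 tasks each,
# drawn from identical distributions (so the true effect is exactly 1.0).
means_a = [simulate_mean(20) for _ in range(200)]
means_b = [simulate_mean(20) for _ in range(200)]
apparent_effects = [a / b for a, b in zip(means_a, means_b)]

# How many runs show an apparent "speedup" or "slowdown" of 20%+?
big_gaps = sum(1 for r in apparent_effects if r > 1.2 or r < 1 / 1.2)
```

With 16 developers and unevenly sampled repositories, the real study sits in exactly this regime, which is why the subsetting results later in the paper matter so much.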

1.21.2. Effect vanishes when we start controlling for task type!

The fact that this slowdown difference vanishes once we layer in sorting tasks by whether they include ‘scope creep’ speaks to the fragility of this. Take a look at Figure 7 in the appendix: “low prior task exposure” overlaps with zero, as does “high external resource needs.” This is potentially one of the most interesting elements of the study, tucked away in the back […] Point estimates are almost certainly not some kind of ground truth with these small groups. I suspect that getting more context about tasks would further trouble this “slowdown effect.”

[Figure 7 also has a caption that's relevant here: Developers are slowed down more on issues where they self-report having significant prior task exposure, and on issues where they self report having low external resource needs (e.g. documentation, reference materials)… ]

Let’s keep in mind that the headline, intro, and title framing of this paper is that it’s finding out what AI does to developers’ work. This is a causal claim. Is it correct to say we can claim that AI and AI alone is causing the slowdown, if we have evidence that type of task is part of the picture?

We could fall down other rabbit holes for things that trouble that task-group-average effect that is at the top of the paper, as in Figure 9, or Figure 17.

[Figure 9 shows that developers are either not slowed down, or even sped up, given the wide error bars, on tasks where scope creep happened]

Unfortunately, as they note in 3.3., “we are not powered for statistically significant multiple comparisons when subsetting our data. This analysis is intended to provide speculative, suggestive evidence about the mechanisms behind slowdown.” Well, “speculative suggestive evidence” isn’t exactly what’s implied by naming a study the impact of AI on productivity and claiming a single element of randomization makes something an RCT.

Some clues to this are hidden in the most interesting parts of the study – the developers’ qualitative comments.

[The screenshot shows text from the study saying:

Qualitatively, developers note that AI is particularly helpful when working on unfamiliar issues, and less helpful when working on familiar ones. […]

[…] Sometimes, portions of one’s own codebase can be as unknown as a new API. One developer noted that “/cursor found a helper test function that I didn’t even know existed when I asked it how we tested deprecations./”

On the other hand, developers note that AI is much less helpful on issues where they are expert. One developer notes that “/if I am the dedicated maintainer of a very specialized part of the codebase, there is no way agent mode can do better than me./”

Broadly, we present moderate evidence that on the issues in our study, developers are slowed down more when they have high prior task exposure and lower external resource needs.]

[Note: this matches with the timing data the study itself found, indicating that developers are actually quite accurate about what makes them more or less productive, when you pull out all the inconsistent operationalization and spurious averaging]

[…] This certainly sounds like people are finding AI useful as a learning tool when our questions connect to some kinds of knowledge gaps, and also when the repo and issue solution space provide a structure in which the AI can be an aid. And what do we know about learning and exploration around knowledge gaps….? That it takes systematically more time than a rote task. I wonder if we looked within the “AI-assigned” tasks and sorted them by how much the developer was challenging themselves to learn something new, would this prove to be associated with the slowdown?

1.22. The Responsibility of Engineers in the Age of AI   intelligence_augmentation ai

Yes, we can (and maybe should) ask an LLM to challenge our decisions. But ultimately, it’s humans who carry the responsibility to understand the domain, to reason through trade-offs, and to make the right calls.

It’s people who must become domain experts. And that means not just writing code, but understanding the real-world problems behind that code. Problems that are often messy, cross-cutting, and technology-agnostic.

As AI tools reshape the way we work, learn, and build, we need to step up, not just as individual contributors, but as mentors, guides, and system designers.

We need to create environments where understanding is valued as much as output. Where people aren’t just told what to build, but are helped to understand why. Where fast tools don’t replace deep thinking but amplify it.

And yes, that means seniors, staffs, and leads must take mentoring seriously. Not as an optional “nice-to-have”, but as a core part of engineering practice in the AI era.

But it’s not just about individuals. Organizations must actively create the space for this kind of growth. If all incentives point to shipping fast, teams will optimize for that - even at the cost of long-term understanding. We need to make it safe (and expected) to slow down when needed, to reflect, to mentor, to learn.

1.23. Harsh truths to save you from ChatGPT psychosis   ai intelligence_augmentation

We are dealing with a technology uniquely suited to snipe intelligent, well-educated people into believing they are much, much smarter than they really are. That is an addictive sensation, not easily quit.

If you talk to LLMs at all—and at this point, who doesn’t?—it might not be a bad idea to post these reminders next to your computer, just to protect your own mind against infohazards:

  1. AI will not make you smarter. It will make you faster at retrieving (possibly correct) answers to certain questions. It will not improve your reasoning, judgement, or mental processing ability. […] AI will not lift you out of mediocrity.
  2. AI will not make you more interesting. […] AI will not earn anyone's respect.
  3. AI will not make you more creative. […] AI will not reveal secrets to you.
  4. AI will not make you an expert. AI will not give you any new competencies

1.24. Why Does AI Feel So Different? - nilenso blog   ai intelligence_augmentation accelerationism

One of the truly jaw-dropping aspects of modern frontier agentic code-assisted LLMs is that they're truly general-purpose software: they're useful across an extremely broad range of tasks, with no need for task-specific adaptation, thanks to the breadth of their massive training datasets and their ability to pick up on patterns even in the prompts you give them. It's unlike any software I've ever seen before: traditional software, built on the rigid pre-programmed throughline of conventional code rather than stochastic neural networks, was never adaptive and versatile enough to encompass a new task or context without being reprogrammed. Of course, malleable software helped with that, but AI is something truly new.

A lot is changing with AI. It’s been confusing, and slightly overwhelming, for me to get a grip on what’s changing, and what it all means for me personally, for my job, and for humanity. […] So, I’m writing to break it down, and create a mental model that does work.

It’s a new General Purpose Technology (GPT)

When science results in a breakthrough like the steam engine, or electricity, or the internet, or the transformer, it’s called a paradigm shift, or a scientific revolution. Thomas Kuhn defines and explains this well, in The Structure of Scientific Revolutions, 1962. And the paradigm shifts in AI as a science, have been studied via Kuhn’s framework in 2012, and 2023.

However, a paradigm shift in science also causes paradigm shifts in engineering. Engineering rethinks its assumptions, methods, and goals. It can both use the new technology, and build the next order of technology on top of it, causing the ripple effect that leads to the birth of entire industries and economies.

While Kuhn doesn’t go into it, the technological diffusion, and the economic impact of scientific revolutions are better studied through the GPT (general purpose technology) paper from Bresnahan & Trajtenberg in 1995.

“General Purpose Technologies (GPTs) are technologies that can affect an entire economy (usually at a national or global level). They have the potential for pervasive use in a wide range of sectors and, as they improve, they contribute to overall productivity growth.”

And Calvino et al., in June 2025, find that AI meets the key criteria of a General Purpose Technology. It’s pervasive, rapidly improving, enables new products, services and research methodologies, and enhances other sectors’ R&D and productivity.

[AI is] the fastest GPT diffusion ever, we’re in the middle of it, and it’s changing the world around us. And the sheer scale of it is staggering. 10s of thousands of researchers and engineers across the world, burning billions of dollars with governments and mega-corps, to make progress happen. It is remarkable.

It’s important to understand that rapid progress in science and technology alone isn’t enough to speed up economic diffusion. It’s a two-way street where R&D in AI labs has to drive economic growth, for the economic growth to invest more into R&D in AI labs. The state of the economy, the attitude of world leaders towards AI, can slow down, or speed up progress.

It’s not just another GPT though. It’s more.

It’s a paradigm shift in accessing knowledge

Languages, Writing, Printing, Broadcasting, Internet.

These are all GPTs too, but of a different, more radical kind. Each of these has been a revolution in sharing knowledge. And now, AI is yet another revolution, one that makes all knowledge intuitively accessible. Knowledge access diffuses more pervasively and quickly than other technology.

There’s an important nuance in the stochastic nature of AI. We need to understand AI’s emergent psychology of gullibility, hallucination, jagged intelligence, anterograde amnesia, etc. And then we need to get good at context engineering the same way we got good at searching the internet with keywords, to use it well.

Further, AI makes knowledge accessible not just to tech savvy users, but to children, elders, and other software, and AI too. The ripples are coming.

It’s not just the new internet though.

It’s a paradigm shift in thinking

[…] Even without AGI, AI is changing our way of thinking already.

Everyone has their personal, tireless, emotionless, brain power augmenter. AI can read, write, see, hear, and speak, and combined with some common sense, it can truly understand and interpret these signals.

[…]

Thinking to me was… sitting with my thoughts, alone. Reading up, writing down my thoughts and reflecting on them. Sharing my thoughts with others, getting their thoughts on it, and then assimilating my own point of view. Thinking is… following a trail of thought to its logical conclusion.

I don’t think quite like this anymore.

I chat with a voice AI, ramble on for a while, and have it summarise my thoughts back to me. I write to the AI, and have it reflect back to me, instantly. I can invoke the internet spirits of my favourite famous people (Rich Hickey and Andrej Karpathy these days), and chat with them. Standing on the shoulders of giants has never been easier. […] It’s not just me, I’m sure. Most people, like me, already use it to break problems down, to brainstorm, and to create decision matrices for software (or civil?) architecture decisions.

[…]

Further, software can use AI’s ability to think and access knowledge, just like humans can. Oh, and AI can use software too. And oh, AI IS software too. […] So, it’s not just a paradigm shift in thinking.

It’s a recursive paradigm shifting paradigm shift

Code is data is code. Lispers get this, it's turtles all the way down. Let’s walk through it.

  • Software can use AI. We write pieces of text in between code that uses AI’s thinking or knowledge access capability. Like with autocompletion, or chatbots.
  • AI can use software. It can access the filesystem and run programs on your OS, if you let it. It can access the web through search engines, or web browsers, if you let it.
  • AI can build software. It writes and executes small python scripts to analyse data when thinking. The agency and autonomy needed to build full fledged meaningful software isn’t there with AI yet, but it’s advancing quickly. AI assisted coding is pretty big, you know this.
  • AI IS software. AI is a trained neural network, and with sufficiently advanced capabilities, it can build itself.
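The bullets above can be made concrete as a tiny agent loop. This is a toy sketch: the "model" here is a hard-coded stub rather than a real LLM API, so only the shape of the recursion (software calls the model, the model requests tool runs, the tools are ordinary software) is meaningful:

```python
def run_tool(name, arg):
    """Ordinary software that the 'AI' can invoke."""
    tools = {"word_count": lambda text: len(text.split())}
    return tools[name](arg)

def stub_model(prompt):
    """Stand-in for an LLM: decides whether to answer directly or to
    request a tool run. A real model would return this decision as
    structured output."""
    if "how many words" in prompt:
        return {"action": "tool", "name": "word_count", "arg": prompt}
    return {"action": "answer", "text": "done"}

def agent_loop(prompt):
    """Software using AI (calls the model) while the AI uses software
    (requests tool runs): the loop the bullet points describe."""
    while True:
        step = stub_model(prompt)
        if step["action"] == "answer":
            return step["text"]
        result = run_tool(step["name"], step["arg"])
        prompt = f"tool result: {result}"

answer = agent_loop("how many words are in this prompt")
```

Swap the stub for a real model and the tool table for a filesystem, browser, or interpreter, and you have the autocomplete/chatbot/agent stack described above, with each layer able to sit on either side of the loop.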

AGI isn’t here yet, and AI isn’t building itself without humans. But, AI is already fairly bootstrapped. Kimi K2 is LLMs all the way down. This is one of the reasons the technology is evolving really quickly. AI, Software, and Humans, are together enhanced by AI.

[…]

Thinking of it as an evolution of existing paradigms helps in appreciating the similarities with the familiar, and differentiating what’s new. Most people comparing AI with the internet or the industrial revolution are doing the same thing.

We need to accept that we’re in the middle of a revolution and that things will be confusing for a while. If Kuhn is right, at some point, we’ll hit a plateau in science, and that will bring some stability to the world. Until then though, this magnitude of change will be the norm.

And change is hard. Paradigm changes are harder. […] If the GPT is about thinking, what are its 2nd and 3rd order effects, and what is our place in them?

1.25. Power to the people: How LLMs flip the script on technology diffusion   intelligence_augmentation ai

Transformative technologies usually follow a top-down diffusion path: originating in government or military contexts, passing through corporations, and eventually reaching individuals - think electricity, cryptography, computers, flight, the internet, or GPS. This progression feels intuitive, new and powerful technologies are usually scarce, capital-intensive, and their use requires specialized technical expertise in the early stages.

So it strikes me as quite unique and remarkable that LLMs display a dramatic reversal of this pattern - they generate disproportionate benefit for regular people, while their impact is a lot more muted and lagging in corporations and governments. ChatGPT is the fastest growing consumer application in history, with 400 million weekly active users who use it for writing, coding, translation, tutoring, summarization, deep research, brainstorming, etc. This isn't a minor upgrade to what existed before, it is a major multiplier to an individual's power level across a broad range of capabilities. […]

Why then are the benefits a lot more muted in the corporate and government realms? I think the first reason is that LLMs offer a very specific profile of capability […] they are simultaneously versatile but also shallow and fallible. Meanwhile, an organization's unique superpower is the ability to concentrate diverse expertise into a single entity by employing [experts]. While LLMs can certainly make these experts more efficient individually […] the improvement to the organization takes the form of becoming a bit better at the things it could already do. In contrast, an individual will usually only be an expert in at most one thing, so the broad quasi-expertise offered by the LLM fundamentally allows them to do things they couldn't do before.

[…]

Second, organizations deal with problems of a lot greater complexity and necessary coordination, think: various integrations, legacy systems, corporate brand or style guides, stringent security protocols, privacy considerations, internationalization, regulatory compliance and legal risk. There are a lot more variables, a lot more constraints, a lot more considerations, and a lot lower margin for error.

[…]

at least at this moment in time, we find ourselves in a unique and unprecedented situation in the history of technology. If you go back through various sci-fi you'll see that very few would have predicted that the AI revolution would feature this progression. It was supposed to be a top secret government megabrain project wielded by the generals, not ChatGPT appearing basically overnight and for free on a device already in everyone's pocket. Remember that William Gibson quote "The future is already here, it's just not evenly distributed"? Surprise - the future is already here, and it is shockingly distributed. Power to the people. Personally, I love it.

1.26. Claude Code is My Computer   intelligence_augmentation ai

For the past two months, I’ve been living dangerously. I launch Claude Code (released in late February) with --dangerously-skip-permissions, the flag that bypasses all permission prompts. According to Anthropic’s docs, this is meant “only for Docker containers with no internet”, yet it runs perfectly on regular macOS.

Yes, a rogue prompt could theoretically nuke my system. That’s why I keep hourly Arq snapshots (plus a SuperDuper! clone), but after two months I’ve had zero incidents…

When I first installed Claude Code, I thought I was getting a smarter command line for coding tasks. What I actually got was a universal computer interface that happens to run in text. The mental shift took a few weeks, but once it clicked, I realized Claude can literally do anything I ask on my computer.

The breakthrough moment came when I was migrating to a new Mac. Instead of doing the usual restore dance, I pointed Claude at my backup disk and said: “Restore this Mac from my backup disk—start with dotfiles, then system preferences, CLI tools, and restore Homebrew formulae and global npm packages.” Claude drafts a migration plan, executes it step by step, and has my new machine ready in under an hour.

Why this works (and when it doesn’t)

Claude Code shines because it was built command-line-first, not bolted onto an IDE as an afterthought. The agent has full access to my filesystem (if you are bold enough…), can execute commands, read output, and iterate based on results.

Anthropic’s best practices guide recommends keeping a CLAUDE.md file at your repo root with project-specific context. I’ve adopted this pattern and noticed Claude asks fewer clarifying questions and writes more accurate code. […] Little optimizations like this compound quickly.
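Since CLAUDE.md is just a markdown file at the repo root, a hypothetical example of the kind of project context it might hold (all names and paths here are invented for illustration):

```markdown
# CLAUDE.md — project context (hypothetical example)

## Project
Static blog generated by a custom build script in `scripts/build.sh`.

## Conventions
- Prose lives in `content/`, one Markdown file per post.
- Run `make preview` to rebuild and serve locally; never commit `public/`.

## Boundaries
- Don't edit anything under `vendor/`.
- Ask before adding new dependencies.
```

The file is read automatically at session start, which is why front-loading conventions and boundaries there cuts down on clarifying questions.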

The main limitation is response time. […] However, you can prefix commands with ! to run them directly without waiting for token evaluation—Claude will execute your command either way, but this is faster when you know exactly what you’re calling.

[…]

We’re in the very early days of AI-native development tools. Claude Code represents a paradigm shift: from tools that help you run commands to tools that understand intent and take action. I’m not just typing commands faster—I’m operating at a fundamentally higher level of abstraction. Instead of thinking “I need to write a bash script to process these files, chmod it, test it, debug it,” I think “organize these files by date and compress anything older than 30 days.”

This isn’t about AI replacing developers—it’s about developers becoming orchestrators of incredibly powerful systems. The skill ceiling rises: syntax fades, system thinking shines.

1.27. How I use "AI"   ai intelligence_augmentation

An extremely long document showing transcripts of about a dozen real world, specific, diverse ways in which Nicholas Carlini, a security researcher whose job is to poke holes in and criticise AI, still finds AI extremely useful to enhance his productivity.

As limited as large language models are, I think they are a genuinely magical and revolutionary technology that completely changes how we understand and interact with computers. Never before have I seen an algorithm that so drastically raises the floor of accessibility for programming and information gathering, presents such a natural and potentially extremely powerful interface to computers (through e.g. agents), and is so wildly generally useful. Of course, that's because they've been trained on the entire corpus of human thought, reasoning, and knowledge and are just picking up patterns in that data, but while that's an argument against thinking they can reason (they can't) and against them being a path to AGI by themselves, that is precisely an argument for why they're useful!

1.28. Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task   intelligence_augmentation ai

This study focuses on finding out the cognitive cost of using an LLM in the educational context of writing an essay.

We assigned participants to three groups: LLM group, Search Engine group, Brain-only group, where each participant used a designated tool … to write an essay. We conducted 3 sessions with the same group assignment for each participant. In the 4th session we asked LLM group participants to use no tools (we refer to them as LLM-to-Brain), and the Brain-only group participants were asked to use LLM (Brain-to-LLM). We recruited a total of 54 participants for Sessions 1, 2, 3, and 18 participants among them completed session 4. We used electroencephalography (EEG) to record participants' brain activity in order to assess their cognitive engagement and cognitive load, and to gain a deeper understanding of neural activations during the essay writing task. We performed NLP analysis, and we interviewed each participant after each session. We performed scoring with the help from the human teachers and an AI judge (a specially built AI agent).

We discovered a consistent homogeneity across the Named Entities Recognition (NERs), n-grams, ontology of topics within each group. EEG analysis presented robust evidence that LLM, Search Engine and Brain-only groups had significantly different neural connectivity patterns, reflecting divergent cognitive strategies. Brain connectivity systematically scaled down with the amount of external support: the Brain‑only group exhibited the strongest, widest‑ranging networks, Search Engine group showed intermediate engagement, and LLM assistance elicited the weakest overall coupling. In session 4, LLM-to-Brain participants showed weaker neural connectivity and under-engagement of alpha and beta networks; and the Brain-to-LLM participants demonstrated higher memory recall, and re‑engagement of widespread occipito-parietal and prefrontal nodes, likely supporting the visual processing, similar to the one frequently perceived in the Search Engine group. The reported ownership of LLM group's essays in the interviews was low. The Search Engine group had strong ownership, but lesser than the Brain-only group. The LLM group also fell behind in their ability to quote from the essays they wrote just minutes prior.

As the educational impact of LLM use only begins to settle with the general population, in this study we demonstrate the pressing matter of a likely decrease in learning skills based on the results of our study. The use of LLM had a measurable impact on participants, and while the benefits were initially apparent, as we demonstrated over the course of 4 months, the LLM group's participants performed worse than their counterparts in the Brain-only group at all levels: neural, linguistic, scoring.

This study has made the obligatory rounds through anti-AI spaces and echo chambers with headlines smugly crowing, or fearmongering, or both, about the idea that ChatGPT "damages your brain," causes "brain rot," and about half a million variations thereof. This is what the paper's own FAQ has to say about that framing:

Is it safe to say that LLMs are, in essence, making us "dumber"?

No! Please do not use the words like “stupid”, “dumb”, “brain rot”, "harm", "damage", "passivity", "trimming" and so on. It does a huge disservice to this work, as we did not use this vocabulary in the paper, especially if you are a journalist reporting on it.

[…]

Additional vocabulary to avoid using when talking about the paper

In addition to the vocabulary from Question 1 in this FAQ - please avoid using "brain scans", "LLMs make you stop thinking", "impact negatively", "brain damage", "terrifying findings".

And this isn't just because they find these terms distasteful for reasons of academic decorum: there are real reasons why this framing, while tempting, is fundamentally wrong. What this study shows is not some kind of generalized, permanent cognitive decline from touching LLMs at all, but a subject-specific decrease in effort, ownership, and memory if you use LLMs to do all your thinking for you, and the possibility of skill atrophy if you overuse AI to execute that skill for you; in other words, rather obvious, non-novel conclusions that we've known about for other technologies forever. If you copy from Wikipedia, you'll know less about the topic, be able to quote from your essay less, and feel less ownership over it; if you use search engines, your ideas may be less unique and you'll recall fewer facts off the top of your head; if you use calculators or GPS, you'll get worse at mental math or navigation. Even Socrates complained about this, to use a cliché example.

So why is this study getting treated like this?

I think part of the reason is that people keep misreading it. The language is ambiguous, and from a layperson's perspective it sometimes phrases its findings unfortunately. It's written for other scientists, who know what "neural connectivity" in the context of an EEG means — namely, that it's about activity, not the physical neurological connections — and so it feels free to say things like "a decrease in neural connectivity"; likewise, it assumes the reader will understand that when it talks about remembering less, it means, just due to the scope of the experiment and the nature of the methodology, remembering less about the specific essay that was just written, not a general pattern of worse memory or memory loss. But for motivated laypeople and journalists looking for a story, this language is ambiguous and ripe for pulling out of context.

Let's skim through the paper and take a look at what it actually says, though, to counteract this narrative, and hopefully derive some practical guidelines on how to maintain one's cognitive skills while benefitting from AI, instead of just fearmongering that's closer to a DARE speech than anything useful.

1.28.1. Sessions 1-3

For sessions 1-3, all this study shows is that if you don't actually write something, or even come up with the ontology, terminology, arguments, and research for it (from diverse sources) yourself, but instead have ChatGPT generate it wholesale, the output is going to be more homogeneous, and you're going to remember less of it and have less of a sense of ownership over it. That's literally it:

After the participants finished the study, we used a local LLM to classify the interactions participants had with the LLM, most common requests were writing an essay, see the distribution of the classes in Figure 31 below. [If you look at Figure 31, it shows that 97 requests were for writing the essay, whereas the closest second-place request type, guidance and clarification, only showed up 43 times. The third highest, inquiry and discussion, was at 39. This very clearly shows that the way people ended up using ChatGPT was primarily in writing the essay for them, and much less in other capacities.]

Quoting accuracy was significantly different across experimental conditions (Figure 6). In the LLM‑assisted group, 83.3 % of participants (15/18) failed to provide a correct quotation, whereas only 11.1 % (2/18) in both the Search‑Engine and Brain‑Only groups encountered the same difficulty.

The response to this question was nuanced: LLM group either indicated full ownership of the essay for half of the participants (9/18), or no ownership at all (3/18), or "partial ownership of 90%' for 1/18, "50/50' for 1/18, and "70/30' for 1/18 participants.

The implication of this is not that you should never use large language models, but that you should be strategic in your usage of them, only using them for things you're okay with offloading, consequences included: less unique ideas, poorer memory of what you did and why, less learning, and less sense of ownership and agency. That's pretty obvious, and true of almost all forms of automation. It's also important to note that this was specifically the result of the participants primarily using the LLM to write sections of the essay for them, and using it (with search citations turned off) for ideation and information retrieval; not something like critique or copy editing, or as a web search copilot. Also note that the ChatGPT interface lacks the affordances of, say, agentic coding, where you can collaboratively build a document change by change with the AI: interrupting it, critiquing it, iterating, engaging with each small decision it makes, going back and forth and collaborating, which allows you to maintain ownership over the project. Instead, it's much more of a batch process: you tell the AI what you want, it spits out a huge output, and you copy-paste it; if you want to change something, it has to regenerate the entire output, which can cause all sorts of problems and so is emergently discouraged. (It should also be noted that originality and such isn't really the goal of coding, so a lot of these metrics don't really… apply to coding.)

This is an interesting quote:

In the LLM group, topic selection was mainly motivated by perceived engagement and personal resonance: four participants chose prompts they considered “the most fun to write about” (P1), while five selected questions they had “thought about a lot in the past” (P11). Two additional participants explicitly reported that they “want to challenge this prompt” or “disagree with this prompt”. Search Engine group balanced engagement (5/18) with relatability and familiarity (8/18), citing reasons such as “can relate the most”, “talked to many people about it and [am] familiar with this topic”, and “heard facts from a friend, which seemed interesting to write about”. By contrast, the Brain-only group predominantly emphasized prior experience alongside engagement, relatability, and familiarity, noting that the chosen prompt was “similar to an essay I wrote before”, “worked on a project with a similar topic”, or was related to a “participant I had the most experience with”. Experience emerged as the most frequently cited criteria for Brain-only group in Session 2, most likely reflecting their awareness that external reference materials were unavailable.

Despite the concerns that the paper expresses later on about the enforced subject homogeneity and echo chambers introduced by search engines and LLMs, it seems to me that on the basis of this quote we can say that the Brain-only group faced the same issue; it was just that the biases and homogeneity were within-subject instead of between-subject. So it becomes a question of whether we want people to stay within individualized bubbles of ideas, or to engage with ideas outside their own heads in a way that might lead them to homogenize with "general society", but might also lead them to diversify their own ideas and subjects of interest.

The following quote is also really interesting:

Across all sessions, participants articulated convergent themes of efficiency, creativity, and ethics while revealing group‑specific trajectories in tool use. The LLM group initially employed ChatGPT for ancillary tasks, e.g. having it “summarize each prompt to help with choosing which one to do” (P48, Group 1), but grew increasingly skeptical: after three uses, one participant concluded that “ChatGPT is not worth it” for the assignment (P49), and another preferred “the Internet over ChatGPT to find sources and evidence as it is not reliable” (P13). Several users noted the effort required to “prompt ChatGPT”, with one imposing a word limit “so that it would be easier to control and handle” (P18); others acknowledged the system “helped refine my grammar, but it didn't add much to my creativity”, was “fine for structure… [yet] not worth using for generating ideas”, and “couldn't help me articulate my ideas the way I wanted” (Session 3). Time pressure occasionally drove continued use, “I went back to using ChatGPT because I didn't have enough time, but I feel guilty about it”, yet ethical discomfort persisted: P1 admitted it “feels like cheating”, a judgment echoed by P9, while three participants limited ChatGPT to translation, underscoring its ancillary role. In contrast, Group 2's pragmatic reliance on web search framed Google as “a good balance” for research and grammar, and participants highlighted integrating personal stories, “I tried to tie [the essay] with personal stories” (P12). Group 3, unaided by digital tools, emphasized autonomy and authenticity, noting that the essay “felt very personal because it was about my own experiences” (P50)

This indicates that LLM usage, or overusage, is not inevitable, nor are people being fooled. It seems as though most of the LLM group participants quickly learned its limitations and figured out that it wasn't good for the task the study's structure forced them to use it for, and only resorted to it over their own thoughts and words (which they clearly would've preferred) because of artificial time limits! Again, this is far from the "ChatGPT is a cognitohazard that addicts you and rots your brain, that you will inevitably desire to use if presented with the option" framing that fearmongers and smug haters like to push on the back of this paper, which they clearly didn't read.

1.28.2. Session 4

Meanwhile, for session 4, where participants are asked, crucially, not to write on brand new topics, but to write on previous topics, or even iterate on previous essays:

Additionally, instead of offering a new set of three essay prompts for session 4, we offered participants a set of personalized prompts made out of the topics EACH participant already wrote about in sessions 1, 2, 3. […] This personalization took place for EACH participant who came for session 4.

Across all groups, participants strongly preferred continuity with their previous work when selecting essay topics. […] Overall, familiarity remained the principal motivation of topic choice.

Here we report how brain connectivity evolved over four sessions of an essay writing task in Sessions 1, 2, 3 for the Brain-only group and Session 4 for the LLM-to-Brain group. The results revealed clear frequency-specific patterns of change: lower-frequency bands (delta, theta, alpha) all showed a dramatic increase in connectivity from the first to second session, followed by either a plateau or decline in subsequent sessions, whereas the beta band showed a more linear increase peaking at the third session. These patterns likely reflect the cognitive adaptation and learning that occurred with repeated writing in our study.

[…]

The critical point of this discussion is Session 4, where participants wrote without any AI assistance after having previously used an LLM. Our findings show that Session 4's brain connectivity did not simply reset to a novice (Session 1) pattern, but it also did not reach the levels of a fully practiced Session 3 in most aspects. Instead, Session 4 tended to mirror somewhat of an intermediate state of network engagement.

One plausible explanation is that the LLM had previously provided suggestions and content, thereby reducing the cognitive load on the participants during those assisted sessions. When those same individuals wrote without AI (Session 4), they may have leaned on whatever they learned or retained from the AI, but because prior sessions did not require the significant engagement of executive control and language‑production networks, engagement we observed in Brain-only group (see Section “EEG Results: LLM Group vs Brain-only Group” for more details), the subsequent writing task elicited a reduced neural recruitment for content planning and generation.

This is crucial: since the writing was on the same topics, encouraging participants to draw on their previous work and thinking, the same cognitive issues, such as lack of ownership and less unique perspectives, persisted within the LLM-to-Brain group. But this doesn't indicate "long term generalized brain damage" or anything like that; it just indicates that if you've already written on a topic in a way that relied on cognitive offloading, the consequences of that offloading won't simply reset when you stop; you'll have to catch up instead.

This quote, which nobody ever seems to mention, significantly complicates the picture of the LLM-to-Brain group as experiencing pure decline even within the specific prompts involved, and in fact may indicate that participants even benefited to some degree from that past AI interaction:

On the other hand, Session 4's connectivity was not universally down, in certain bands, it remained relatively high and even comparable to Session 3. Notably, theta band connectivity in Session 4, while lower in total than Session 3, showed several specific connections where Session 4 was equal or exceeded Session 3 (e.g. many connections followed S2 > S4 > S3 > S1 pattern). Theta is often linked to semantic retrieval and creative ideation; the maintained theta interactions in Session 4 may reflect that these participants were still actively retrieving knowledge or ideas, possibly recalling content that AI had provided earlier. […] In a sense, the AI could have served as a learning aid, providing new information that the participants internalized and later accessed. The data hints at this: one major theta hub in all sessions was the frontocentral area FC5 (near premotor/cingulate regions), involved in language and executive function, which continued to receive strong inputs in Session 4. Therefore, even after AI exposure, participants engaged brain circuits for memory and planning. Similarly, the delta band in Session 4 remained as active as in Session 3, indicating that sustained attention and effort were present. This finding is somewhat encouraging: it suggests that having used AI did not make the participants completely disengaged or inattentive when they later wrote on their own. They were still concentrating, delta connectivity at Session 4 was ~45% higher than Session 1's and matched Session 3's level.

Of course, this is not to say that if you offload your thinking in every single subject to AI, there won't be a general effect — I definitely agree with the study that "over-reliance on AI can erode critical thinking and problem-solving skills: users might become good at using the tool but not at performing the task independently to the same standard. Our neurophysiological data provides the initial support for this process, showing concrete changes in brain connectivity that mirror that shift." But this is less like some kind of special "brain rot" syndrome, and more like how literally any tool, from Google Maps, to Google Search, to a calculator, functions — the key is not to belittle the tool as some kind of ultimate evil and avoid it at all costs, but to engage with it carefully and consciously.

Since I use writing to think, and explore my notes to synthesize my thoughts and the information I've collected, I never use LLMs to do either. Likewise, since quoting and summarizing the things I post to my mirrors page is an opportunity to re-engage with the material, to help me remember it and decide what's important and noteworthy about it, I don't use LLMs to do that either.

Likewise, as the study says:

Our results also caution that certain neural processes require active exercise. The under-engagement of alpha and beta networks in post-AI writing might imply that if a participant skips developing their own organizational strategies (because an AI provided them), those brain circuits might not strengthen as much. Thus, when the participant faces a task alone, they may underperform in those aspects. In line with this, recent research has emphasized the need to balance AI use with activities that build one's own cognitive abilities [3]. From a neuropsychological perspective, our findings underscore a similar message: the brain adapts to how we train it. If AI essentially performs the high-level planning, the brain will allocate less resources to those functions, as seen in the moderated alpha/beta connectivity in Session 4.

Which is why when I do agentic coding, I make sure to do all the debugging myself, as well as all the design for the program's state machine, logic, architecture, data flow, UI/UX design, and feature selection. I also choose what technologies to use and how/when to apply them. This way, all of the general skills that make me a good programmer stay strong, even if my specific knowledge of the "intellectual empty calories" of some specific command or framework isn't reinforced constantly; moreover, even for specific commands or frameworks, if I do want them to be part of my core competency, I make sure to practice them often, instead of using the AI to write with them.

Another interesting thing here is that the Brain-to-LLM group, who first wrote their essays by themselves and then used LLMs to improve and iterate on their writing, actually showed much more unique perspectives, along with higher engagement, ownership, brain connectivity, and critical engagement with the LLM outputs:

  • Better integration of content compared to previous Brain sessions (Brain-to-LLM). More information seeking prompts. Scored mostly above average across all groups. Split ownership.
  • High memory recall. Low strategic integration. Higher directed connectivity across all frequency bands for Brain-to-LLM participants, compared to LLM-only Sessions 1, 2, 3.

This indicates that, if you're going to use LLMs to write for you, you should treat them as a copyeditor used only before a final draft, or as something to structure and clarify thoughts you put in in the first place, instead of starting and finishing with LLM outputs and putting very little effort in between. Not that LLM usage should be totally verboten. As the study itself says:

Going forward, a balanced approach is advisable, one that might leverage AI for routine assistance but still challenges individuals to perform core cognitive operations themselves. […] It would be important to explore hybrid strategies in which AI handles routine aspects of writing composition, while core cognitive processes, idea generation, organization, and critical revision, remain user‑driven. During the early learning phases, full neural engagement seems to be essential for developing robust writing networks; by contrast, in later practice phases, selective AI support could reduce extraneous cognitive load and thereby enhance efficiency without undermining those established networks.

Across all frequency bands, Session 4 (Brain-to-LLM group) showed higher directed connectivity than LLM Group's sessions 1, 2, 3. This suggests that rewriting an essay using AI tools (after prior AI-free writing) engaged more extensive brain network interactions.

This correlation between neural connectivity and behavioral quoting failure in LLM group's participants offers evidence that:

  1. Early AI reliance may result in shallow encoding. LLM group's poor recall and incorrect quoting is a possible indicator that their earlier essays were not internally integrated, likely due to outsourced cognitive processing to the LLM.
  2. Withholding LLM tools during early stages might support memory formation. Brain-only group's stronger behavioral recall, supported by more robust EEG connectivity, suggests that initial unaided effort promoted durable memory traces, enabling more effective reactivation even when LLM tools were introduced later.
  3. Metacognitive engagement is higher in the Brain-to-LLM group. Brain-only group might have mentally compared their past unaided efforts with tool-generated suggestions (as supported by their comments during the interviews), engaging in self-reflection and elaborative rehearsal, a process linked to executive control and semantic integration, as seen in their EEG profile.

The significant gap in quoting accuracy between reassigned LLM and Brain-only groups was not merely a behavioral artifact; it is mirrored in the structure and strength of their neural connectivity. The LLM-to-Brain group's early dependence on LLM tools appeared to have impaired long-term semantic retention and contextual memory, limiting their ability to reconstruct content without assistance. In contrast, Brain-to-LLM participants could leverage tools more strategically, resulting in stronger performance and more cohesive neural signatures.

In conclusion, AI is not going to magically "rot your brain" independent of how and when and why you use it. What this study says is actually much more useful than that: that it is important to realize that if you use AI to perform a skill for you, you'll eventually lose practice with that skill, because the brain ruthlessly prunes those sorts of things; and that if you use AI to do something for you, in such a way that you're not intimately involved with the process, then you'll know and remember less about what you did. But the answer to that is to make sure that you're always doing core cognitive competencies — whatever those are — yourself, instead of letting the machines think for you. Make sure it's always you that's doing the critical thinking, that's directing the outline of the process and originating the ideas and expressions, and make sure that you keep intimate track of what the AI is doing, treating it at best as a pair programmer, and most of the time as something like a copyeditor or information retrieval system.

Oh, and regarding information retrieval: one of the concerns this study raises is that since ChatGPT gives only a univocal answer to any given question, one that inherits the biases of its creators and trainers as well as the model's own emergent stochastic "opinions", it can lead to more biased and less diverse information intake, compared to the irreducibly multivocal experience of information retrieval with a search engine. In my opinion, this is where things like Perplexity come in: the AI actually fetches multiple opinions and then summarizes all of them, giving you a high-level overview of all the sides in any given debate or argument, and thus more accurate access to multivocal information; with a search results page, since you have to click on and read each view individually, you might be tempted to stop with the first one. More importantly, something like Perplexity automates the process of making many internet searches, presenting you with a long list of original sources to click on and explore, and citing them in its answer, so that its answer serves only as a high-level jumping-off point. I think this resolves that problem well enough.

1.29. TODO As We May Think   intelligence_augmentation ai hacker_culture

In this significant article [Vannevar Bush] holds up an incentive for scientists when the fighting has ceased. He urges that men of science should then turn to the massive task of making more accessible our bewildering store of knowledge. For many years inventions have extended man's physical powers rather than the powers of his mind. Trip hammers that multiply the fists, microscopes that sharpen the eye, and engines of destruction and detection are new results, but not the end results, of modern science. Now, says Dr. Bush, instruments are at hand which, if properly developed, will give man access to and command over the inherited knowledge of the ages. The perfection of these pacific instruments should be the first objective of our scientists as they emerge from their war work. Like Emerson's famous address of 1837 on ``The American Scholar,'' this paper by Dr. Bush calls for a new relationship between thinking man and the sum of our knowledge.

1.30. one with the machine: vannevar bush, joseph licklider, chatgpt, and the dream of the cyborg   intelligence_augmentation ai

This is an insightful, interesting high-level overview of the intelligence augmentation visions of Vannevar Bush and J. C. R. Licklider, and how large language models might bring about another great leap closer to their visions, just as the personal computer and the internet did before them.

to condense down their visions into a few bullets, I would say both of them approach, in their own way, these three key ideas:

  1. the computer should be able to handle calculation, data retrieval, plotting, and other computer-strong tasks on your behalf. it should behave less like a tool and more like an assistant. you express what you want it to do, and it figures out how to do it
  2. the computer should provide a lot of help in finding information, to the point of understanding your intent and responding to vague requests and half-formed thoughts. you express what kind of thing you might want, and it works with you on the specifics
  3. you should be able to talk to the computer and get responses, like a real dialogue. this is the central mechanism that accomplishes the first two points

but neither man truly imagined a system that could read, hear, interpret, and speak natural english, with perfect clarity, almost at the speed of thought. that's what makes the present moment unique: the possibility of the universal interface

I'm not an ai person. merely a humble writer and programmer, watching how these systems develop, trying to think of ways to use them in my own work. I don't have any strong conclusions about where we might go. all I know is that these possibilities are open to us, to fulfill the dreams of those who came before

This work by Novatorine is licensed under NPL-1.0; you can contact her at novatorine@proton.me with the PGP encryption key here.
