Mirrors
Table of Contents
- 1. Introduction
- 2. Mirror List
- 2.1. Accelerationism and Other Left-Futurism
- 2.1.1. "They Killed Their Mother": Avatar as Ideological Symptom accelerationism culture philosophy anarchism
- 2.1.2. Unconditional Acceleration and the Question of Praxis post_left accelerationism culture
- 2.1.3. Critique of Transcendental Miserablism post_left
- 2.1.4. Notes on Accelerationism accelerationism culture hacker_culture philosophy
- 2.1.5. Postcapitalist Desire accelerationism culture philosophy
- 2.1.6. Unconditional accelerationism as antipraxis accelerationism philosophy post_left
- 2.1.7. Conspiracy Theories, Left Futurism, and the Attack on TESCREAL
- 2.1.8. Fragment on the Event of “Unconditional Acceleration” accelerationism philosophy
- 2.1.9. Xenofeminism feminism philosophy intelligence_augmentation accelerationism
- 2.1.10. TODO Accelerate: An Accelerationist Reader accelerationism
- 2.1.11. TODO Libidinal Economy
- 2.1.12. TODO Fanged Noumena
- 2.1.13. TODO Thirst for Annihilation
- 2.1.14. Reaching Beyond to the Other: On Communal Outside-Worship accelerationism
- 2.1.15. Cyberpunk is Now Our Reality accelerationism cyberpunk hacker_culture
- 2.1.16. An Anarcho-Transhumanist FAQ intelligence_augmentation anarchism post_left
- 2.1.17. Science As Radicalism philosophy anarchism
- 2.1.18. Rethinking Crimethinc. anarchism culture philosophy
- 2.1.19. Comments on CrimethInc. anarchism philosophy culture
- 2.1.20. Hopepunk, Optimism, Purity, and Futures of Hard Work by Ada Palmer fiction
- 2.1.21. Civilisation, Primitivism and Anarchism anarchism
- 2.2. AI
- 2.2.1. Environmental Issues
- 2.2.2. IP Issues
- 2.2.3. Architecture and Design
- 2.2.3.1. On Chomsky and the Two Cultures of Statistical Learning ai hacker_culture philosophy
- 2.2.3.2. The Bitter Lesson ai hacker_culture software philosophy
- 2.2.3.3. The Bitter Lesson: Rethinking How We Build AI Systems ai
- 2.2.3.4. What Is ChatGPT Doing … and Why Does It Work? ai
- 2.2.3.5. Cyc ai
- 2.2.3.6. Types of Neuro-Symbolic AI
- 2.2.3.7. The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence
- 2.2.3.8. ChatGPT is bullshit ai philosophy
- 2.2.3.9. Asymmetry of verification and verifier’s law ai
- 2.2.3.10. The Model is the Product ai
- 2.2.4. What kind of intelligence do LLMs have?
- 2.2.5. Superintelligence: The Idea That Eats Smart People ai
- 2.2.6. LLMs are cheap ai
- 2.3. Hacker/Cyberpunk Culture
- 2.3.1. "Ethical software" is (currently) a sad joke software philosophy anarchism
- 2.3.2. Beating the Averages programming hacker_culture
- 2.3.3. By Mouse Instead of By Lever programming software hacker_culture
- 2.3.4. Common Lisp: The Untold Story hacker_culture history
- 2.3.5. Crypto: How the Code Rebels Beat the Government - Saving Privacy in the Digital Age hacker_culture history
- 2.3.6. TODO Dream Machines/Computer Lib philosophy software intelligence_augmentation hypermedia
- 2.3.7. Engelbart's Violin software programming
- 2.3.8. Evolutional Steps of Computer Systems software
- 2.3.9. TODO File Structure for The Complex, The Changing and the Indeterminate philosophy software hypermedia
- 2.3.10. Free as Air, Free As Water, Free As Knowledge culture hacker_culture
- 2.3.11. TODO Free as in Freedom: Richard Stallman’s Crusade for Free Software anarchism hacker_culture history
- 2.3.12. Hackers and Painters programming philosophy
- 2.3.13. Hackers: Heroes of the Computer Revolution hacker_culture anarchism
- 2.3.14. How To Become A Hacker programming hacker_culture
- 2.3.15. I'm an American software developer and the "broligarchs" don't speak for me hacker_culture programming praxis
- 2.3.16. Initial GNU Announcement hacker_culture history
- 2.3.17. Intro, Part II. programming software philosophy intelligence_augmentation
- 2.3.18. Inventing on Principle philosophy programming
- 2.3.19. Language, Purity, Cult, and Deception hacker_culture programming
- 2.3.20. Lisp Operating System software
- 2.3.21. Maximalist Computing software philosophy anarchism
- 2.3.22. My Lisp Experiences and the Development of GNU Emacs hacker_culture
- 2.3.23. Of Lisp Macros and Washing Machines programming software philosophy
- 2.3.24. Seven Laws of Sane Personal Computing software philosophy hacker_culture
- 2.3.25. Stop Writing Dead Programs programming software
- 2.3.26. Symbolics Museum history software
- 2.3.27. Taste for Makers programming hacker_culture philosophy
- 2.3.28. Tech Geekers and What is Politics? philosophy hacker_culture
- 2.3.29. Terminal boredom, or how to go on with life when less is indeed less software philosophy
- 2.3.30. The Art of Lisp & Writing programming literature philosophy
- 2.3.31. The Bipolar Lisp Programmer hacker_culture
- 2.3.32. The Cathedral and the Bazaar: Collected Essays anarchism philosophy hacker_culture history
- 2.3.33. The Cult of Done Manifesto programming hacker_culture philosophy
- 2.3.34. The Cyberpunk Project hacker_culture
- 2.3.35. TODO The Jargon File (version 4.4.7) hacker_culture history
- 2.3.36. The Nature of the Unix Philosophy software
- 2.3.37. The Repair Manifesto philosophy anarchism hacker_culture
- 2.3.38. The Stigmergic Revolution anarchism economics hacker_culture
- 2.3.39. The Structure of a Programming Language Revolution hacker_culture programming history
- 2.3.40. The Unix-Haters Handbook programming software
- 2.3.41. What is Free Software? philosophy hacker_culture anarchism
- 2.3.42. What is wrong with Lisp? hacker_culture programming
- 2.3.43. What’s wrong with CS research programming philosophy
- 2.3.44. Where Lisp Fails: at Turning People into Fungible Cogs. philosophy programming
- 2.3.45. Where the Unix philosophy breaks down software
- 2.3.46. Why Skin-Deep Correctness – Isn't, and Foundations Matter. programming software philosophy
- 2.3.47. You have made your bedrock, now lie in it. software
- 2.4. Anarchism
- 2.4.1. Constructing an Anarchism: Collective Force anarchism philosophy
- 2.4.2. Existentialism is a Humanism philosophy
- 2.4.3. God is Evil, Man is Free anarchism philosophy religion
- 2.4.4. Liberatory Community Armed Self-Defense: Approaches Toward a Theory anarchism
- 2.4.5. TODO My Disillusionment in Russia anarchism history
- 2.4.6. No Treason. No. VI. The Constitution of No Authority (1870) anarchism
- 2.4.7. Polity-form and External constitution anarchism philosophy
- 2.4.8. Simple Sabotage Field Manual anarchism direct_action
- 2.4.9. The Anarchic Encounter: Economic and/or Erotic? anarchism philosophy jobs
- 2.4.10. The Anatomy of the Encounter anarchism philosophy
- 2.4.11. The Collected Writings of Renzo Novatore philosophy anarchism
- 2.4.12. The Difference between Anarchy and the Academy anarchism
- 2.4.13. The Myth of the Rule of Law anarchism philosophy
- 2.4.14. The Politics of Obedience: The Discourse of Voluntary Servitude anarchism philosophy
- 2.4.15. Ur-Fascism philosophy
- 2.4.16. Are We Good Enough? - Peter Kropotkin anarchism philosophy
- 2.4.17. Post-Left
- 2.4.17.1. A Review of The “Tyranny of Structurelessness”: An organizationalist repudiation of anarchism anarchism
- 2.4.17.2. Anarchism as a Spiritual Practice anarchism religion
- 2.4.17.3. Bloody Rule and a Cannibal Order! anarchism
- 2.4.17.4. TODO Communism Unmasked anarchism philosophy
- 2.4.17.5. Hayek, Epistemology, and Hegemonic Rationality philosophy anarchism
- 2.4.17.6. Natural Law, or Don’t Put a Rubber on Your Willy philosophy anarchism
- 2.4.17.7. The Question of a Stagnant Marxism: Is Marxism Exegetical or Scientific? philosophy
- 2.4.17.8. The Union of Egoists anarchism philosophy
- 2.4.17.9. The Unique and Its Property philosophy
- 2.4.17.10. Toward the queerest insurrection anarchism queer
- 2.4.17.11. Post-Left Anarchy anarchism philosophy post_left
- 2.4.17.12. Smashing the Orderly Party anarchism
- 2.4.17.13. Software and Anarchy anarchism programming software
- 2.4.17.14. Some Thoughts on the Creative Nothing philosophy
- 2.4.17.15. Stirner's Critics philosophy
- 2.4.17.16. Critique of "Left-Wing" Culture
- 2.4.18. Market Anarchism
- 2.4.18.1. Markets Not Capitalism anarchism
- 2.4.18.2. TODO Anarchists Against Democracy In Their Own Words anarchism philosophy
- 2.4.18.3. Anarchy without Hyphens (1980) anarchism philosophy
- 2.4.18.4. Anarchy in the U.K. anarchism history
- 2.4.18.5. Anatomy of the State anarchism philosophy
- 2.4.18.6. Confiscation and the Homestead Principle (1969) anarchism philosophy economics
- 2.4.18.7. Corporations versus the Market; or, Whip Conflation Now anarchism economics
- 2.4.18.8. Economic Calculation in the Corporate Commonwealth, Hierarchy or the Market, and Contract Feudalism philosophy anarchism
- 2.4.18.9. From Whence Do Property Titles Arise? anarchism philosophy
- 2.4.18.10. In Defense of Public Space anarchism philosophy economics
- 2.4.18.11. Instead of a Book, by a Man Too Busy to Write One: A Fragmentary Exposition of Philosophical Anarchism anarchism
- 2.4.18.12. Labour Struggle in a Free Market anarchism economics
- 2.4.18.13. Nice Shit for Everybody anarchism philosophy
- 2.4.18.14. TODO Property is Theft! A Pierre-Joseph Proudhon Anthology philosophy anarchism economics
- 2.4.18.15. Revealed Preference: A Parable anarchism economics
- 2.4.18.16. Scratching By: How Government Creates Poverty as We Know It anarchism economics
- 2.4.18.17. TODO Seeing Like A State: How Certain Schemes to Improve the Human Condition Have Failed anarchism economics philosophy
- 2.4.18.18. The Gift Economy of Property anarchism philosophy
- 2.4.18.19. The Modern Business Corporation versus the Free Market? anarchism economics
- 2.4.18.20. The Network: A Parody of the Discourse anarchism economics parody
- 2.4.18.21. The Question of Copyright philosophy anarchism
- 2.4.18.22. The Right to Self-Treatment anarchism economics
- 2.4.18.23. The Use of Knowledge in Society anarchism economics
- 2.4.18.24. Why Market Exchange Doesn’t Have to Lead to Capitalism anarchism economics
- 2.5. Software Development
- 2.5.1. A Case for Feminism in Programming Language Design programming philosophy
- 2.5.2. A Road to Common Lisp programming philosophy
- 2.5.3. TODO Common Lisp: the Language, 2nd Edition (plus a guide on how to modify it to be up to date with ANSI) programming
- 2.5.4. Complexity Has to Live Somewhere programming
- 2.5.5. TODO Design By Contract: A Missing Link In The Quest For Quality Software programming
- 2.5.6. Effective Programs programming philosophy
- 2.5.7. EQUAL programming history
- 2.5.8. Ethics for Programmers: Primum non Nocere programming software philosophy anarchism
- 2.5.9. Execution in the Kingdom of Nouns programming
- 2.5.10. Functional Programming Doesn’t Work (and what to do about it) programming
- 2.5.11. TODO Intuition in Software Development philosophy programming
- 2.5.12. Leaving Haskell behind? programming
- 2.5.13. Literature review on static vs dynamic typing programming
- 2.5.14. Maybe Not programming philosophy
- 2.5.15. Notes on Postmodern Programming philosophy programming
- 2.5.16. On Ada’s Dependent Types, and its Types as a Whole programming
- 2.5.17. TODO Ontology is Overrated: Categories, Links, and Tags philosophy software
- 2.5.18. TODO Programming as Theory Building philosophy programming
- 2.5.19. Proofs and Programs and Rhetoric programming philosophy
- 2.5.20. Semantic Compression, Complexity, and Granularity programming
- 2.5.21. Summary of 'A Philosophy of Software Design' programming philosophy
- 2.5.22. Technical Issues of Separation in Function Cells and Value Cells programming history
- 2.5.23. The Anti-Human Consequences of Static Typing programming
- 2.5.24. The epistemology of software quality programming philosophy
- 2.5.25. The Horrors of Static Typing programming
- 2.5.26. The Lisp "Curse" Redemption Arc, or How I Learned To Stop Worrying And Love The CONS programming hacker_culture anarchism
- 2.5.27. The Perils of Partially Powered Languages and Defense of Lisp macros: an automotive tragedy programming
- 2.5.28. The problematic culture of “Worse is Better” programming philosophy software
- 2.5.29. The Property-Based Testing F# Series, Parts 1-3 programming
- 2.5.30. The Safyness of Static Typing programming
- 2.5.31. The Unreasonable Effectiveness of Dynamic Typing for Practical Programs programming
- 2.5.32. Typed Lisp, A Primer programming
- 2.5.33. What Clojure spec is and what you can do with it (an illustrated guide) programming
- 2.5.34. What science can tell us about C and C++’s security programming
- 2.5.35. What We've Built Is a Computational Language (and That's Very Important!) programming software philosophy
- 2.6. Parser and Hyperfiction
- 2.7. Post Concussion Syndrome: Symptoms, Diagnosis, & Treatment medical personal
- 2.8. TODO Philosophical Investigations philosophy
- 2.9. TODO Complexity: A Very Short Introduction philosophy
- 2.10. Internet Sacred Text Archive philosophy religion
- 2.11. Intelligence Augmentation
- 2.11.1. TODO Augmenting Human Intellect: A Conceptual Framework philosophy software intelligence_augmentation
- 2.11.2. Fuck Willpower life
- 2.11.3. How to think in writing intelligence_augmentation
- 2.11.4. TODO Notation as a Tool of Thought intelligence_augmentation programming
- 2.11.5. The Mentat Handbook philosophy
- 2.11.6. The Mismeasure of Man philosophy culture
- 2.11.7. Writes and Write-Nots philosophy intelligence_augmentation literature
- 2.11.8. AI Use is "Anti-Human" ai philosophy life
- 2.11.9. AI: Where in the Loop Should Humans Go? ai programming
- 2.11.10. Avoiding Skill Atrophy in the Age of AI ai programming
- 2.11.11. Hast Thou a Coward's Religion: AI and the Enablement Crisis ai culture
- 2.11.12. I'd rather read the prompt ai culture
- 2.11.13. Claude and ChatGPT for ad-hoc sidequests ai
- 2.11.14. Is chat a good UI for AI? A Socratic dialogue emacs software intelligence_augmentation ai
- 2.11.15. How I use "AI" ai intelligence_augmentation
- 2.11.16. On "ChatGPT Psychosis" and LLM Sycophancy ai philosophy
- 2.11.17. TODO Man-Computer Symbiosis ai intelligence_augmentation
- 2.12. Are developers slowed down by AI? Evaluating an RCT (?) and what it tells us about developer productivity intelligence_augmentation ai
- 3. Favorite Fiction
- 3.1. The Moon Is A Harsh Mistress
- 3.2. Dune
- 3.3. Dune Messiah
- 3.4. Ender's Game
- 3.5. Speaker for the Dead
- 3.6. Hyperion
- 3.7. Schismatrix Plus fiction literature
- 3.8. The Dispossessed: An Ambiguous Utopia philosophy fiction literature
- 3.9. God Emperor of Dune philosophy fiction literature
- 3.10. Harrison Bergeron fiction
- 3.11. Nova
- 3.12. Blindsight
- 3.13. House of Suns
- 3.14. The Stand
- 3.15. Revelation Space and Chasm City
- 3.16. Three Body Problem, Dark Forest, and Death's End
- 3.17. Hardwired fiction
- 3.18. Neuromancer and Burning Chrome fiction
- 3.19. The Wheel of Time
- 3.20. The Dark Tower
- 3.21. At The Mountains of Madness
- 3.22. Anchorhead
- 3.23. The Nameless City
- 3.24. The Red Rising Saga
- 3.25. Stone Butch Blues queer philosophy
- 3.26. The Last Question fiction
- 3.27. The Nine Billion Names of God fiction
- 3.28. The Kernel Hacker's Bookshelf: Ultimate Physical Limits of Computation fiction
- 3.29. Thinking Meat fiction
- 3.30. The Library of Babel fiction
- 3.31. Terra Ignota
- 4. Misc other philosophy and technology books on my (long-term) reading list
1. Introduction
I've chosen to host a large selection of writings about philosophy and my craft on this website: usually writings that I'm very interested in reading, writings that I have read and that have deeply influenced me, or writings that state things I'd otherwise have to write myself, but which say them well enough that I might as well just point to them and let them do the work for me (no need to work more than necessary). This is for philosophical reasons, but also some practical ones – bookmarks are all well and good, but it's nice to have everything in one stable and reliable place, all in one general format (I apply pandoc to all of them to generate a sort of "reader mode" version of each page), and it's nice to have a single place to keep all my thoughts on things I've read.
Some of these sites, such as The Cyberpunk Project, The Jargon File, and the Symbolics Museum, are not particular writings that have affected me deeply but collections of writings that might disappear at any time, which I think it is important to preserve because they're relevant to cultures I deeply identify with (and some of their constituent writings have influenced me). There are also some things that fall into a gray area, such as Instead of a Book, the Collected Writings of Renzo Novatore, and The Cathedral and the Bazaar: Collected Essays, where I've read most but not all of the contents and plan to circle back in the future.
In some cases I've even painstakingly converted two-column PDF (an annoying format if there ever was one!) to text-only (with descriptions of the missing images where necessary) plain HTML for easier accessibility (in all senses), or I've spent a lot of effort making a website more readable. I've decided to host these on my website so I can access them from everywhere!
Here is the list, in alphabetical order, with some thoughts on each:
2. Mirror List
2.1. Accelerationism and Other Left-Futurism
2.1.1. "They Killed Their Mother": Avatar as Ideological Symptom accelerationism culture philosophy anarchism
Watching Avatar, I was continually reminded of Zizek's observation in First As Tragedy, Then As Farce, that the one good thing that capitalism did was destroy Mother Earth. […] What is foreclosed in the opposition between a predatory technologised capitalism and a primitive organicism, evidently, is the possibility of a modern, technologised anti-capitalism. It is in presenting this pseudo-opposition that Avatar functions as an ideological symptom. […] Sully, the marine who is "really" a tree-hugging primitive, is a paradigm of that late capitalist subjectivity which disavows its modernity. There's something wonderfully ironic about the fact that Sully's - and our - identification with the Na'vi depends upon the very advanced technology that the Na'vi's way of life makes impossible. […] If we are to escape from the impasses of capitalist realism, if we are to come up with an authentic and genuinely sustainable model of green politics (where the sustainability is a matter of libido, not only of natural resources), we have to overcome these disavowals. There is no way back from the matricide which was the precondition for the emergence of modern subjectivity. To quote one of my favourite passages in First As Tragedy: "Fidelity to the communist Idea means that, to repeat, Arthur Rimbaud, … we should remain absolutely modern and reject the all too glib generalization whereby the critique of capitalism morphs into the critique of 'modern instrumental reason' or 'modern technological civilization'." The issue is, rather, how modern technological civilization can be organised in a different way.
2.1.2. Unconditional Acceleration and the Question of Praxis post_left accelerationism culture
One of the major points of contention concerning unconditional accelerationism […] can be summed up with the single phrase “U/ACC lacks praxis”. In the common leftist deployment of the phrase, this is exactly correct. […] U/ACC is hardly anti-praxis; it simply asks that the limits and the inevitable dissolution of things be acknowledged […] perhaps it is best to view U/ACC not as anti-praxis, but as anti-collective means of intervention.
[…] U/ACC calls attention to the manner through which collective forms of intervention and political stabilization, be they of the left or the right, are rendered impossible in the long-run through overarching tendencies and forces. […] U/ACC charts a course outwards: the structures of Oedipus, the Cathedral, Leviathan, what have you, will be ripped apart and decimated by forces rushing up from within and around the system […]
Consider the classic Marxian formula: M – C – M’. This is, of course, a simple pathway of capital, beginning with money (M), which is translated into the commodity (C) to be sold on the market. If successfully sold, the commodity is translated into a greater amount of money than at the beginning (M’) – and it is at this point that the process restarts. M – C – M’ – C…. on and on and on. […] This, in turn, clues us into the abstract force, glimpsed through diagrammation, which lurks behind modernity rendered as historical totality: positive feedback. […] The processual relations of capital appear here as far from any sort of homeostasis.
Positive feedback not only marks the evolution of a given system, or a generalized forward direction. It is also indicative of […] past forms being undermined and propelled towards catastrophe. While for many the catastrophe might appear as something like communism, Marx as early as the Communist Manifesto was enraptured by the image of capitalist modernity as unfolding through creative destruction.
[…]
The positive-feedback processes […] radiate out across the entirety of social, cultural, political, even ecological strata […] All these [market, commercial, technological, etc.] forces lock into momentum with one another and act as force multipliers, each looping through the other, pushing it forward, faster, moving the entirety of the system towards… something – and it is this something that control systems, of either the left or the right, would be forced (and will always fail) to contend with.
[…]
Through the passage of time, the prevailing organizational dynamics have shifted […] With each passing iteration, the status of the hierarchical formation itself declines as the relations tend towards the network, or even post-network, formations. This is precisely because, Bar-Yam notes, of a rise in the ‘complexity profile’ being shaped within civilization. As the nonlinear processes driven by cascading positive feedback intensify and rise, organization itself becomes more complex, more heterogeneous, more multiplicitous, and less congenial to control systems. Rising complexity, in the end, trashes the orderly nature of organic wholeness.
The L/ACC critic might stop here and decry the construction of a strawman. “Of course we aren’t for firm hierarchy,” they are probably saying. “We’re interested in flexible forms, in hybridity and multiplicity.” […] “we support decentralized planning”. Allow me to respond to these oppositions quickly: flexible control, modular hierarchy, and decentralized planning all fall victim to the same forward rush of rising complexity as their more formalized and concrete kin.
Control systems always rely on a high degree of legibility […] in order to properly enable generalized management and specific intervention […] – yet this becomes their very Achilles’ heel. Consider Andrew Pickering’s description of the conclusions gleaned from the cybernetician Ross Ashby’s research into homeostats: “The only route to stabilisation is to cut down variety – to reduce the number of configurations an assemblage can take on, by reducing the number of participants and the multiplicity of their interconnections.” […] Pickering at length:
"Ashby was interested in the length of time it would take combinations of homeostats to achieve collective equilibrium. […] Both calculations and his machines showed that four fully interconnected homeostats, each capable of taking on twenty-five different inner states, could come into equilibrium in a couple of seconds. But if one extrapolated to an assemblage of one hundred fully interconnected homeostats, the combinatorics were such that chancing on an equilibrium arrangement would entail search times orders of magnitude greater than the age of the universe. […]
[…] Finding stability can easily become a practical impossibility."
To properly operate in the real […] L/ACC or R/ACC praxis would be contingent upon the expunging of variables upon variables to push the complexity profile downwards, to make it more manageable (which is something that R/ACC tends to admit more than L/ACC). But to do this would not only mean restricting flows of people, goods, and money […] It would also require roadblocks thrown up in the path of technological development, and the suppressing of the capability of making and using tools to operate in the world. The promotion of a collective cognitive project would, ironically, be forced to suppress cognitive activity on the molecular scale.
In the end this scenario does not seem very likely. Multitudes of positive feedback processes have long since become deeply entrenched, and the system as a whole is undeniably veering far from order. […] The complexity profile is rising and will continue, and as it does the capability for collective intervention will become all but impossible. […]
Contra any gamble for collectively scalable politics of bootstrapping and navigation, Bar-Yam suggests that in the face of mounting complexity, organizational design is forced to tend towards “progressively smaller branching ratios (fewer individuals supervised by a single individual)”. As mutational development speeds up and legibility fades, size becomes a liability. James Scott has shown that detachment of large managerial forces from the chaotic ‘on-the-ground’ environs is a recipe for disaster. Similarly, Kevin Carson has illustrated the way that the Hayekian knowledge problem […] not only applies to state-centric command economies, but to the organizational black box of the modern corporation. As such problems intensify, any possibility for navigation falls downward, to smaller and more dynamic firms, greater marketization (technocommercialism begetting technocommercialism), and ultimately individual actors themselves.
It is at this point where one might happen on something that looks like U/ACC praxis. If one’s goal is the dissolution of the state and/or rule by multinational monopoly capitalism, then why recourse to the very systems and mechanisms that seek to stabilize these forms and shore them up against the forces that undermine them? This question is at the core of Deleuze and Guattari’s insight in the ‘accelerationist fragment’ that to “withdraw from the world market”, as opposed to going deeper into the throes of it, is a “curious revival of the fascist ‘economic solution’”. […]
To accelerate the process, and to throw oneself into those flows, leaves behind the (already impossible) specter of collective intervention. This grander anti-praxis opens, in turn, the space for examining forms of praxis that break from the baggage of the past. We could count agorism and exit as forms impeccable to furthering the process, and cypherpolitics and related configurations arise on the far end of the development, as the arc bends towards molecularization of economic and social relations. […]
No more reterritorializing reactions. No more retroprogressivism.
2.1.3. Critique of Transcendental Miserablism post_left
This is a… deeply flawed essay, obviously, given who wrote it. The core point I disagree with is, of course, the identification of capitalism purely with its creatively destructive, competitive, market elements, a mistake which Mark Fisher accurately identifies in "Postcapitalist Desire" as being at the very core of where Land goes off the deep end. Nevertheless, Land's screed accurately points to a fundamental problem with the attitude of the modern left.
This post at K-Punk epitomizes a gathering trend among neomarxists to finally bury all aspiration to positive economism (‘freeing the forces of production from capitalist relations of production’) and install a limitless cosmic despair in its place. Who still remembers Khrushchev’s threat to the semi-capitalist West – “we’ll bury you.” Or Mao’s promise that the Great Leap Forward would ensure the Chinese economy leapt past that of the UK within 15 years? The Frankfurtian spirit now rules: Admit that capitalism will outperform its competitors under almost any imaginable circumstances, while turning that very admission into a new kind of curse (“we never wanted growth anyway, it just spells alienation, besides, haven’t you heard that the polar bears are drowning …?”).
[…]
The grand master of this move is Arthur Schopenhauer, who lent it explicit philosophical rigour as a mode of transcendental apprehension. Since time is the source of our distress – PKD’s “Black Iron Prison” – how can any kind of evolution be expected to save us? Thus Transcendental Miserablism constitutes itself as an impregnable mode of negation […] all that survives of Marx is a psychological bundle of resentments and disgruntlements
[…]
For the Transcendental Miserablist, ‘Capitalism’ is the suffering of desire turned to ruin, the name for everything that might be wanted in time, an intolerable tantalization whose ultimate nature is unmasked by the Gnostic visionary as loss, decrepitude and death, and in truth, it is not unreasonable that capitalism should become the object of this resentful denigration. Without attachment to anything beyond its own abysmal exuberance, capitalism identifies itself with desire to a degree that cannot imaginably be exceeded, shamelessly soliciting any impulse that might contribute an increment of economizable drive to its continuously multiplying productive initiatives. Whatever you want, capitalism is the most reliable way to get it, and by absorbing every source of social dynamism, capitalism makes growth, change and even time itself into integral components of its endlessly gathering tide.
“Go for growth” now means “Go (hard) for capitalism.” It is increasingly hard to remember that this equation would once have seemed controversial. On the left it would once have been dismissed as risible. This is the new world Transcendental Miserablism haunts as a dyspeptic ghost.
[…]
Hence the Transcendental Miserablist syllogism: Time is on the side of capitalism, capitalism is everything that makes me sad, so time must be evil.
The polar bears are drowning, and there’s nothing at all we can do about it.
What Transcendental Miserablism has no right to is the pretence of a positive thesis. The Marxist dream of dynamism without competition was merely a dream, an old monotheistic dream re-stated, the wolf lying down with the lamb. If such a dream counts as ‘imagination’, then imagination is no more than a defect of the species: the packaging of tawdry contradictions as utopian fantasies, to be turned against reality in the service of sterile negativity. ‘Post-capitalism’ has no real meaning except an end to the engine of change. […] And if that makes Transcendental Miserablists unhappy, the simple truth of the matter is: Anything would.
2.1.4. Notes on Accelerationism accelerationism culture hacker_culture philosophy
I legitimately have never felt as stimulated by a philosophy before as I have by accelerationism. Not the stupid caricature of it which — somehow — is also occasionally adopted by many who call themselves such; nor the right-wing occult worship of a dark future that the later Nick Land represented; but a program which is better, faster, stronger, and more dangerous than what most leftists can hope to offer; far more adapted for the times we — still, even though these ideas were originally hatching slimy and new from their egg twenty or thirty years ago — find ourselves in.
"Why political intellectuals, do you incline towards the proletariat? In commiseration for what? […] you dare not say that the only important thing there is to say, that one can enjoy swallowing the shit of capital, its materials, its metal bars, its polystyrene, its books, its sausage pâtés, swallowing tonnes of it till you burst – and because instead of saying this, which is also what happens in the desires of those who work with their hands, arses and heads, ah, you become a leader of men, what a leader of pimps, you lean forward and divulge: ah, but that’s alienation, it isn’t pretty, hang on, we’ll save you from it, we will work to liberate you from this wicked affection for servitude, we will give you dignity. And in this way you situate yourselves on the most despicable side, the moralistic side where you desire that our capitalized’s desire be totally ignored, brought to a standstill, you are like priests with sinners, our servile intensities frighten you, you have to tell yourselves: how they must suffer to endure that! And of course we suffer, we the capitalized, but this does not mean that we do not enjoy, nor that what you think you can offer us as a remedy – for what? – does not disgust us, even more. We abhor therapeutics and its vaseline, we prefer to burst under the quantitative excesses that you judge the most stupid. And don’t wait for our spontaneity to rise up in revolt either." (LE 116)
[…]
From Anti-Oedipus:
"But which is the revolutionary path? Is there one? – To withdraw from the world market, as Samir Amin advises Third World Countries to do, in a curious revival of the fascist 'economic solution'? Or might it be to go in the opposite direction? To go further still, that is, in the movement of the market, of decoding and deterritorialization? For perhaps the flows are not yet deterritorialized enough, not decoded enough, from the viewpoint of a theory and practice of a highly schizophrenic character. Not to withdraw from the process, but to go further, to 'accelerate the process,' as Nietzsche put it: in this matter, the truth is that we haven't seen anything yet. (239-40)"
And from Libidinal Economy – the one passage from the text that is remembered, if only in notoriety:
"The English unemployed did not have to become workers to survive, they – hang on tight and spit on me – enjoyed the hysterical, masochistic, whatever exhaustion it was of hanging on in the mines, in the foundries, in the factories, in hell, they enjoyed it, enjoyed the mad destruction of their organic body which was indeed imposed upon them, they enjoyed the decomposition of their personal identity, the identity that the peasant tradition had constructed for them, enjoyed the dissolutions of their families and villages, and enjoyed the new monstrous anonymity of the suburbs and the pubs in morning and evening." (LE 111)
Spit on Lyotard they certainly did. But in what does the alleged scandalous nature of this passage reside? Hands up who wants to give up their anonymous suburbs and pubs and return to the organic mud of the peasantry. […] Hands up, furthermore, those who really believe that these desires for a restored organic wholeness are extrinsic to late capitalist culture, rather than fully incorporated components of the capitalist libidinal infrastructure. Hollywood itself tells us that we may appear to be always-on techno-addicts, hooked on cyberspace, but inside, in our true selves, we are primitives organically linked to the mother/planet, and victimised by the military-industrial complex. James Cameron’s Avatar is significant because it highlights the disavowal that is constitutive of late capitalist subjectivity, even as it shows how this disavowal is undercut. We can only play at being inner primitives by virtue of the very cinematic proto-VR technology whose very existence presupposes the destruction of the organic idyll of Pandora.
And if there is no desire to go back except as a cheap Hollywood holiday in other people’s misery – if, as Lyotard argues, there are no primitive societies (yes, the Terminator was there from the start, distributing microchips to accelerate its advent); isn’t, then, the only direction forward? Through the shit of capital, its metal bars, its polystyrene, its books, its sausage pâtés, its cyberspace matrix?
I want to make three claims here –
- Everyone is an accelerationist.
- Accelerationism has never happened.
- Marxism is nothing if it is not accelerationist.
[…]
Land is the kind of antagonist that the left needs. If Land’s cyber-futurism can seem out of date, it is only in the same sense that jungle and techno are out of date – not because they have been superseded by new futurisms, but because the future as such has succumbed to retrospection. The actual near future wasn’t about Capital stripping off its latex mask and revealing the machinic death’s head beneath; it was just the opposite: New Sincerity, Apple Computers advertised by kitschy-cutesy pop. This failure to foresee the extent to which pastiche, recapitulation and a hyper-oedipalised neurotic individualism would become the dominant cultural tendencies is not a contingent error; it points to a fundamental misjudgement about the dynamics of capitalism. But this does not legitimate a return to the quill pens and powdered wigs of the eighteenth century bourgeois revolution, or to the endlessly restaged logics of failure of May ‘68, neither of which have any purchase on the political and libidinal terrain in which we are currently embedded.
[…]
Land collapses capitalism into what Deleuze and Guattari call schizophrenia, thus losing their most crucial insight into the way that capitalism operates via simultaneous processes of deterritorialization and compensatory reterritorialization. […] The abstract processes of decoding that capitalism sets off must be contained by improvised archaisms, lest capitalism cease being capitalism. Similarly, markets may or may not be the self-organising meshworks described by Fernand Braudel and Manuel DeLanda, but what is certain is that capitalism, dominated by quasi-monopolies such as Microsoft and Wal-Mart, is an anti-market. […]
For precisely these reasons, accelerationism can function as an anti-capitalist strategy […] What we are not talking about here is the kind of intensification of exploitation that a kneejerk socialist humanism might imagine when the spectre of accelerationism is invoked. As Lyotard suggests, the left subsiding into a moral critique of capitalism is a hopeless betrayal of the anti-identitarian futurism that Marxism must stand for if it is to mean anything at all.
[…]
Capitalism has abandoned the future because it can’t deliver it. Nevertheless, the contemporary left’s tendencies towards Canutism, its rhetoric of resistance and obstruction, collude with capital’s anti/meta-narrative that it is the only story left standing. Time to leave behind the logics of failed revolts, and to think ahead again.
2.1.5. Postcapitalist Desire accelerationism culture philosophy
Rejecting the anti-desire transcendental miserablism of leftist politics, and the resultant complete identification of capitalism with the satisfaction of desire:
If opposition to capital does not require that one maintain an anti-technological, anti-mass-production stance, why - in the minds of some of its supporters, as much as in the caricatures produced by opponents such as Mensch - has anti-capitalism become exclusively identified with this organicism? Here we are a long way from Lenin's enthusiasm for Taylorism, or Gramsci’s celebration of Fordism, or indeed from the Soviet embrace of technology in the space race. Capital has long tried to claim a monopoly on desire: we only have to remember the famous 1980s advert for Levi's jeans in which a teenager was seen anxiously smuggling a pair of jeans through a Soviet border post. But the emergence of consumer electronic goods has allowed capital to conflate desire and technology so that the desire for an iPhone can now appear automatically to mean a desire for capitalism.
[…]
Land’s theory-fictional provocations were guided by the assumption that desire and communism were fundamentally incompatible […] they luridly expose the scale and the nature of the problems that the left now faces. Land […] highlight[s] the extent to which [capitalism's present] victory was dependent upon the libidinal mechanics of the advertising and PR companies whose semiotic excrescences despoil former public spaces. […] A pervasive negative advertising delibidinizes all things public, traditional, pious, charitable, authoritative, or serious, taunting them with the sleek seductiveness of the commodity. Land is surely right about this pervasive negative advertising, but the question is how to combat it. Instead of the anti-capitalist “no logo” call for a retreat from semiotic productivity, why not an embrace of all the mechanisms of semiotic-libidinal production in the name of a post-capitalist counterbranding? “Radical chic” is not something that the left should flee from - very much to the contrary, it is something that it must embrace and cultivate.
[…]
The second reason Land’s texts are important is that they expose an uncomfortable contradiction between the radical left’s official commitment to revolution, and its actual tendency towards political and formal-aesthetic conservatism. In Land’s writings, a quasi-hydraulic force of desire is set against a leftist-Canutist impulse towards preserving, protecting and defending. […] Where is the left that can speak as confidently in the name of an alien future, that can openly celebrate, rather than mourn, the disintegration of existing socialities and territorialities?
The third reason Land’s texts are worth reckoning with is because they assume a terrain that politics now operates on, or must operate on, if it is to be effective - a terrain in which technology is embedded into everyday life and the body; design and PR are ubiquitous; financial abstraction enjoys domination over government; life and culture are subsumed into cyberspace, and data-tracking consequently assumes an increasingly important role. […] In the wake of the decline of the traditional workers’ movement, we have too often been forced into a false choice between an ascetic-authoritarian Leninism that at least worked in the sense that it took control of the state and limited the dominion of capital, and models of political self-organisation which have done little to challenge neoliberal hegemony. What we need to construct is what was promised but never actually delivered by the various “anti-revolutions” of the 1960s: an effective anti-authoritarian left.
[…]
Post-Fordism has seen the decomposition of the old working class - which, in the Global North at least, is no longer concentrated in manufacturing spaces, and whose forms of industrial action are consequently no longer as effective as they once were. At the same time, the libidinal attractions of consumer capitalism needed to be met with a counterlibido, not simply an anti-libidinal dampening.
This entails that politics comes to terms with the essentially inorganic nature of libido […] that which Lacan and Land call the death drive: not a desire for death, for the extinction of desire in what Freud called the Nirvana principle, but an active force of death, defined by the tendency to deviate from any homeostatic regulation […] we ourselves are that which disrupts organic equilibrium. […] history has a direction […] [and] one implication of this is that it is very difficult to put this historically-machined inorganic libido back in its box: if desire is a historical-machinic force, its emergence alters “reality” itself; to suppress it would therefore involve either a massive reversal of history, or collective amnesia on a grand scale, or both.
[…] we can now see that the challenge is to imagine a postcapitalism that is commensurate with the death drive. At the moment, too much anticapitalism seems to be about the impossible pursuit of a social system oriented towards the Nirvana principle - to a quiescence - precisely the return to a mythical primitivist equilibrium which the likes of Mensch mock. But any such return to primitivism would require either an apocalypse or the imposition of authoritarian measures - how else is drive to be banished? If such a leap is not what we want, then we crucially need to articulate what it is we do want - which will mean disarticulating technology and desire from capital.
An example of how a postcapitalism that accepts and satisfies desire, instead of rejecting it and tamping down on it, need not be the same as consumerist capitalism — how capitalism, despite being identified with the satisfaction of desire, is actually imperfect at it, contra Land:
[…] Now, it begins to look as if, far from there being some inevitable fit between the desire for Starbucks and capitalism, Starbucks feeds desires which it can meet only in some provisional and unsatisfactory way. What is, in other words, the desire for Starbucks if not the thwarted desire for communism? For what is the “third place” that Starbucks offers - this place that is neither home nor work - if not a degraded prefiguration of communism itself?
A succinct statement of what clearer-eyed less capitalism-worshipping accelerationism actually means:
For Deleuze and Guattari, capitalism is defined by the way it simultaneously engenders and inhibits processes of destratification. In their famous formulation, capitalism deterritorializes and reterritorializes at the same time; there is no process of abstract decoding without a reciprocal recoding via neurotic personalisation (Oedipalisation) - hence the early 21st century disjunction of massively abstract finance capital on one hand; oedipalised celebrity culture on the other. Capitalism is a necessarily failed escape from feudalism, which, instead of destroying encastment, reconstructs social stratification in the class structure. It is only given this model that Deleuze and Guattari’s call to “accelerate the process” makes sense. It does not mean accelerating any or everything in capitalism willy-nilly in the hope that capitalism will thereby collapse. Rather, it means accelerating the processes of destratification that capitalism cannot but obstruct. One virtue of this model is that it places capital, not its adversary, on the side of resistance and control…
2.1.6. Unconditional accelerationism as antipraxis accelerationism philosophy post_left
For Srnicek and Williams and other managerialists, the worsening is cut out of the picture: things will get better if only we establish a practical political hegemony that can make it so. This, apparently, is the real content of accelerationism […] In this response, of course, the humanist obsession reaches a totalising climax: the human capacity to reshape the world is utterly unbound; the promised land lies not beyond but immediately ahead.
The unconditional accelerationist dismisses the question. […] It is precisely against this view that accelerationism defines itself as ‘antihuman(ist)’, and against the fundamental question of praxis that it offers ‘antipraxis’. This can hardly mean ‘Do nothing’, of course: that would mean not just to return to the fundamental question of praxis, but to offer perhaps the most numbly tedious answer of all. The unconditional accelerationist, instead, referring to the colossal horrors presented to the human agent […] points to the basic unimportance of unidirectional human agency. We ‘hurl defiance to the stars’, but in their silence—when we see them at all—the stars return only crushing contempt. To the question ‘What is to be done?’, then, she can legitimately answer only, ‘Do what thou wilt’—and ‘Let go.’
We insist, then, that there is no promised land […] Far from discouraging the unconditional accelerationist or beckoning her to the grim convent of asceticism, however, the ruins in which this realisation contemptuously leaves us are the terrain of a genuine, even, properly, horrific aesthetic freedom that is liberated from the totality of a one-directional political teleology […] Taking the smallest steps beyond good and evil, the unconditional accelerationist, more than anyone else, is free at heart to pursue what she thinks is good and right and interesting—but with the ironical realisation that the primary ends that are served are not her own. For the unconditional accelerationist, the fastidious seriousness of the problem-solvers who propose to ‘save humanity’ is absurd in the face of the problems they confront. It can provoke only Olympian laughter. […]
This freedom is what antipraxis means […]
2.1.7. Conspiracy Theories, Left Futurism, and the Attack on TESCREAL
I've written about this before, and am working on another howl of pain about this, if I ever finish it, but this article states well the problems I have with the philosophical outlook of people like Timnit Gebru:
Some critics have decided futurist philosophies and their advocates are bound together in a toxic, reactionary bundle, promoted by a cabal of Silicon Valley elites. This style of conspiracy analysis has a long history and is always a misleading way to understand the world. We need a Left futurism that can address the flaws of these philosophies and frame a liberatory vision.
I'm going to basically quote like… almost all of this essay, to be honest, even going so far as to mirror its structure, because it needs to be said. I want to beat people over the head with it, I want to scream it, and it's so nice to see someone talk about it.
In 2022 [computer scientist and AI ethics researcher Timnit Gebru, who quit/was fired from Google] allied with the writer Émile P. Torres […] a trenchant critic of that community. Together Gebru and Torres have begun to promote the theory that Silicon Valley elites, and a global network of corporate, academic and nonprofit institutions, are invested in a toxic bundle of ideologies that they call TESCREAL, short for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. Guided by this conspiracy framework they have tried to connect the dots between the advocates for these ideas, and support for eugenics, racism and right-wing politics.
[…] a better sociology and intellectual history reveals that each philosophy has a progressive political wing that has been ignored. Moreover the wholesale condemnation of these ideas has cast a pall over all thinking about humanity’s future when we desperately need to grapple with the implications of emerging technologies.
2.1.7.1. Transhumanism
The core idea of transhumanism is that people should be able to use technology to live longer healthier lives, and to have more control over their bodies and brains. Transhumanists take seriously cognitive and genetic enhancement, brain-machine interfaces and uploading personalities to computers, and that the adoption of these technologies will stretch the boundaries of “the human.” Transhumanism has been associated with Silicon Valley libertarianism for decades, but in fact has been a loose global culture that leans more to the political left than to libertarianism. […] The roots of transhumanist thought can be traced through Marxists like J.B.S Haldane and John Desmond Bernal […]. […] In 2014 many transhumanists around the world signed The Technoprogressive Declaration […]
Today organized transhumanism barely exists […] Nonetheless there is a libertarian transhumanist thread in Thiel, Musk and the other tech billionaires, and their wealth has given them a disproportionate visibility and outsized influence on the thinking of the transhumanist milieu.
We can draw a parallel to the spread of Darwinism in the 19th century. The ideas of natural selection were warmly embraced by atheists and the Left […] [but] A version of the doctrine of natural selection also appealed to the captains of industry and the wealthy: Social Darwinism. […] As with transhumanism, it would be a mistake to condemn all of Darwinism because Henry Ford read it as a warrant for his wealth and racism.
[…] Trans rights can be seen as one of the first major political confrontations over transhumanism, with technology completing the feminist deconstruction of gender, as outlined in Martine Rothblatt’s 2011 From Transgender to Transhuman: A Manifesto On the Freedom Of Form.
As the fights over birth control, trans rights and universal healthcare make clear, the progressives have argued the case that everyone should have access to technologies that have been proven safe and effective regardless of hypothetical future consequences […]
Let me add: the TESCREAL paper describes transhumanism this way:
In contrast, “modern transhumanism,” as we can label it, took shape in the late 1980s and early 1990s, and combined the Huxleyan vision of transcendence with the new methodology of second-wave eugenics. Hence, advocates imagined that by enabling individuals to freely choose whether, and how, to undergo radical enhancement, a superior new “posthuman” species could be created. According to Nick Bostrom (2013, 2005a), a “posthuman” is any being that possesses one or more posthuman capacities, such as an indefinitely long “healthspan,” augmented cognitive capacities, enhanced rationality, and so on.
To me, this is free of all the moral ailments of eugenics:
- It is voluntary on the part of those undergoing the change.
- It is free from an assumed specific end goal — which frees it from white supremacy, ableism, allocisheteronormativity, etc — as even Torres admits elsewhere:
As Toby Ord writes in his book “The Precipice,” which could be seen as the prequel to MacAskill’s “What We Owe the Future,” the ultimate task for humanity is to “fulfill our long-term potential” in the universe. What exactly is this supposed “potential”? Ord isn’t really sure, but he’s quite clear that it will almost certainly involve realizing the transhumanist project. “Forever preserving humanity as it now is may also squander our legacy, relinquishing the greater part of our potential,” he declares, adding that “rising to our full potential for flourishing would likely involve us being transformed into something beyond the humanity of today.”
- It is not centrally planned or implemented, and not implemented on the basis of trying to control who is and isn't born or who does and does not reproduce.
Yet, the authors try to link it to eugenics nonetheless, simply because the rich, white, privileged assholes who are part of most transhumanist-longtermist organizations choose to operationalize it that way — which should not discredit the idea, and on the basis of the purest genetic fallacy imaginable:
Now consider the fact that the idea of transhumanism was literally developed by some of the most prominent eugenicists of the 20th century, most notably Julian Huxley, who was president of the British Eugenics Society from 1959 to 1962. Using almost the exact same words as Ord, Huxley wrote in 1950 — after the horrors of World War II, one should note — that if enough people come to “believe in transhumanism,” then “the human species will be on the threshold of a new kind of existence … It will at last be consciously fulfilling its real destiny.”
(not from the TESCREAL paper, but the same essay by Torres as the quote about Ord above)
This is on the same level as conservatives rejecting Planned Parenthood and abortion because the woman who promoted those was a eugenicist, or transgender surgeries because of John Money.
2.1.7.2. Extropianism
Max More was one of the libertarian thinkers (non-billionaire) who helped shape modern transhumanism. In the late 1980s More was part of a California milieu of radical futurists that coalesced around an especially anarcho-capitalist vision of the future, with a mission to become immortals and wake up all the matter in the universe. They adopted the term “extropy,” the opposite of entropy. For the Extropians markets were brilliant self-organizing structures, and bureaucracies were clunky barriers to technological innovation.
As a meta-ethical starting point, asserting that the primary Good is creating and preserving order in a decaying universe is as good a fundamental principle as any. […] like all arguments from first principles, the Extropians encountered problems when trying to extrapolate derivative principles, like political economy. [… elided stuff about "scientific socialism" bc I'm not a big fan, but I'm not changing the import of the paragraphs I've stitched] This model is “extropian” but in a way that is neutral about political economy, and thus perfectly consistent with capitalist, democratic socialist, or Soviet futurism. A similar story could be told about the enthusiastic Soviet adoption of Western “cybernetics” as a philosophy and method for the control of planned economies, despite its bourgeois origins. Efforts to ground meta-ethics in fundamental physical principles like feedback loops, extropy or quantum theory may be as unfruitful as appeals to God’s Word, but neither are they indelibly reactionary when applied to society or economics.
2.1.7.3. Singularitarianism
Singularitarianism, as many have remarked, shares features with religious millennialism and engenders some of the same cultic behavior. […]
[…] the convictions of many Singularitarians appear ungrounded under interrogation. […] We have little basis for conjecture about how a machine mind would behave, or how easy it would be to control one.
[…]
The anti-TESCREAL conspiracy argues that even relatively cautious people like Bostrom talking about the risks of superintelligence is reactionary since they distract us from algorithmic bias and the electricity use of server farms. While we agree that techno-libertarians tend to be more interested in millennialist and apocalyptic predictions than responding to the problems being created by artificial intelligence today, we also believe that it is legitimate and important to discuss potential catastrophic risks and their mitigation. The anti-TESCREALists dismiss all discussion of AGI […]
Right, as long as the discussion of the risks of AGI isn't sucking all the air out of the room for the former, it's fine to discuss both, alongside all the other obscure ethical debates philosophers get into!
The problem comes when we let go of empiricism and skepticism about each of the steps in the argument.
In his 2005 The Singularity Is Near, Ray Kurzweil gathered a lot of data about accelerating trends in computation and gene sequencing. […] But critics of technological acceleration point out that we haven’t seen exponential improvement in transportation or agriculture […]. Technological acceleration may be a matter of what you pay attention to.
[…] While scientific discovery has accelerated, what has decelerated is the ability of our present capitalist social relations to promote socially beneficial scientific research for widespread deployment. […] The Singularity idea is simultaneously an acknowledgment of the limits of prediction, an expression of utopian aspirations, a demand for risk regulation, and a mythology that can distract us from the pressing demands of our times […].
There are indeed existential risks and utopian possibilities ahead, and there is no necessary contradiction between short-term risk mitigation and the consideration of long-term opportunities and risks. There is no contradiction between trying to peer ahead at what will soon be possible, and trying to create the most free and equal societies today in preparation.
2.1.7.4. Cosmism
The charge that Cosmism is common among Silicon Valley elites is probably the least credible part of the TESCREAL conspiracy. IEET Fellow Ben Goertzel wrote A Cosmist Manifesto in 2010, providing his own contemporary spin on the idea, and since then there has been a surge in mentions of “cosmism” […]. Most of this interest is simply historical curiosity about the original Russian Cosmism which is unlikely to take 21st century Silicon Valley by storm.
The term “Cosmism” was coined by Russian mystic Nikolai Fyodorov in the late 19th century […] Both Communist and non-Communist Russian thinkers then proposed their own Cosmist ideas […]
"Like the Bolsheviks, the Cosmists saw technology as a tool for human liberation, a means to move past old barriers and achieve greater states of social being. For both the Bolsheviks and Cosmists, technology offered a way to conquer the barriers of nature rather than blindly follow them. However, one could say the Cosmists had a more ambitious goal: rather than merely the classless society of the Bolshevik future, the Cosmists wanted to conquer death itself and master the entire universe." (Cosmonaut, 2019)
Russian Cosmists also prefigured a version of eco-philosophy, emphasizing the unity of all living beings and the interconnectedness of the universe. […]
A revived version of Russian Cosmism, the Izborsky Club, is in fact influential in Russia today. […] The Izborsky Club reflects the swirl of NazBol ideas in contemporary Russia, attempting to merge Russian Orthodoxy, Bolshevik authoritarianism and fascist “Eurasian” racial-nationalism. […] Perhaps the anti-TESCREAL conspiracists see Cosmism as an easy target, associating the tech bros with Russian authoritarianism and mysticism. But in their sloppy approach to intellectual history they are again ignoring the creative flowering of downright weird ideas that came with a lot of 20th century Left-wing movements.
2.1.7.5. Rationalism
I'm not a huge fan of how some reified and idealized notion of the old-style "Left" and the "Enlightenment" are operationalized here — especially as a post-leftist. That reification of the Enlightenment, in particular, as the rational movement is ironically what, quite rightly in my opinion, often gives rise to the very accusations of colonialism the authors are trying to deny. Now, empiricism and rationalism as we know them today are products of the Enlightenment, and good ones at that, but there have been precursors all across the world and across time, so the heavy centralization of the concepts of rationality and science into just that one Western movement is at the very least somewhat detrimental to the authors' point.
Not coincidentally, given these reifications, the essay's frequent references to the Soviets, Marx, scientific socialism, and central planning suggest the author is a bit of a tankie, which I'm truly not a fan of.
However, the hostility toward rationality, and the "silly" critique of rationality and science as merely "Western" and "colonial," really is a problem within the precise part of the Left the authors are targeting. This stance is indeed operationalized by many through pomo philosophy, so I suppose that half of the terminology is acceptable.
Despite my own post-leftism and Stirnerism, and my sympathy for postmodern critiques of reason — I don't think reason or science are guaranteed-true things handed down from on high, but rather conditionally applicable human constructions — I do see these tools as having done more than any other knowledge discourse to improve humanity's lot in life. As a perspectivist and pragmatist, I'm not beholden to pretending all worldviews are equal. Instead, I'm interested in choosing the ones most profitable to me, which is why I do support trying to be empirical and rational.
An attack on rationalism has to be understood in light of the postmodernist critique of rationality. A commitment to rationality, including self-awareness of our tendencies to be biased, is central to the Enlightenment. As observed by Adorno in Dialectic of Enlightenment, liberal, capitalist societies have massively developed the power of humanity to rationally master Nature in the service of instrumental usefulness and private profit. According to the postmodern critique, this commitment to rationality is intrinsically capitalist, patriarchal, white supremacist and imperialist.
Rationalist assertions that some are more rational than others have certainly been part of debates over restricting or expanding suffrage and extending or ending colonialism. For the Left, as opposed to postmodernism, the problem was not the goal of rationality but the distorted ways in which we applied it.
[…]
Rationality is a tool of the sentiments, and people have different sentiments and material interests. Even the collective commitment to solve problems through rational debate derives from values that have no base in reason. […] The anti-TESCREALites are correct to criticize the quirky, arrogant way that the church of rationality conducts itself […] But while this rationalist subculture may flourish among Silicon Valley elites, we see its connections to reactionary (as opposed to liberal or centrist) political views as exaggerated and its influence on society at large as overdrawn.
2.1.7.6. Effective Altruism
I'm not a fan of utilitarianism at all — I'm more of an autonomy-maximising rule consequentialist — but the important point in this section, in my opinion, is that utilitarianism, even when applied to effective giving, is not inherently reactionary. Thus while one can detest the culture that's developed around it, shaped so deeply by privilege and technocratic attitudes, rejecting the very fundamental concept as reactionary is silly.
The second boldest move by the TESCREAL conspiracists, after linking “rationality” to the conspiracy, is to frame all utilitarian thinking as also suspect. […] Consequentialism is the logic of cost-benefit analysis, priority-setting, and considering when the “cure is worse than the disease.” We are using consequentialist logic […] when we argue that massive investments in a green economy are worth the pain because of the long-term benefits.
The effective altruists are also both too timid and too over-confident in assuming they can predict how best to achieve the good, even if it is just more people in the future. They are too timid because they assume that the distribution of power and wealth in the world is immutable, so the only choice we can make is between charities. They rarely follow the logic to ask how much we should invest in changing the world, and which are the best ways to do so. […]
EAs are overconfident in making assumptions about how the present will impact the future. One common EA assumption […] [is that] economic growth […] will be good in the long run. A climate campaigner might counter that sacrificing “growth” for a green economy […] would be best for a flourishing future.
Effective altruists are also prone to using the ends to justify questionable means, which is an old problem often addressed by utilitarians. […]
So again, the TESCREAL conspiracy theory brings out that consequentialism has been interpreted by billionaires as a rationale for accumulating vast wealth through questionable means so long as they give to charity. […] But the consequentialist logic of effective altruism is also central to left-wing thought. What is the central contradiction of our time, white supremacy, gender oppression or class inequality? […] Elite, white male effective altruists don’t ask those questions. But the rest of us should.
2.1.7.7. Longtermism
Longtermism is the application of the consequentialism of effective altruism to far future speculation. […] In the immediate term we can all agree that anything that would extinguish the human race would be bad for us and our descendants. But for longtermists the interests of our trillions of hypothetical descendants outweigh our own. […] Clearly the longtermists are overconfident in their futurism, ignoring the radical uncertainty of the Singularitarians. […]
Longtermists’ seeming indifference to contemporary politics is only warranted if they assume none of the long term risks are made more or less likely by having dictatorships or democracies, stark inequality or egalitarian social democracy, today. […] Likewise social policies today are likely to tilt future risks. Schmidt and Jujin (2023) note that longtermists should both pursue equality in the short term for consequentialist ends, but also for the long term since inequality increases future risks:
“Income inequality might increase existential risk and negative trajectory change (by exacerbating) climate change, lower institutional quality, polarisation and conflict, and lower differential progress… Therefore…we have instrumental reason to favour income inequality reduction, regardless of our preferred time-horizon.” (Schmidt and Jujin, 2023)
So the TESCREAL critics are right to poke holes in the misplaced certainty about the longtermists’ wispy web of futurist assumptions. […] But the basic question remains: Should we take into account the interests of future generations, and if we should, how? […] they are good questions to ask even if the conclusions are entirely speculative and often reveal the social and political biases of the thinkers. Environmental philosophers have been among the most adamant that we should take future generations into account, although they are mostly thinking of climate refugees in 2100 and not star children escaping the heat death of the universe.
I would say that these questions are good for giving us hope and meaning, by letting us envision a better — although never perfect — future we may try to work towards, as long as those conversations don't shut out practical discussions of how to improve and protect humanity and the planet long enough to get there! We can't colonize Mars if we nuke ourselves to oblivion in petty oil wars, or run out of fossil fuels, or cook our atmosphere and cause mass migrations and a massive step backward for human civilization. At the same time, we should not sacrifice the possibility of a boundless, optimistic future — or at least hopes and dreams and plans for one — to save the present.
2.1.7.8. Eugenics and Demography
More problematic for the TESCREAL conspiracy is the implication that any discussion of future demography is implicitly eugenic. It is true that the far right is concerned that there aren’t enough of the “right kind” of babies, and even some transhumanists have mused about the dysgenic consequences of smart people having fewer children. The only response to such musings is to assert that most parents want what is best for their children, and if we reduce inequality and improve education we will maximize the abilities of the children parents decide to have. A future with universal access to genetic enhancement will allow people to choose whatever characteristics they prefer and that is as far as social policy should go.
But the TESCREAL conspiracists are tagging any discussion about the looming “old-age dependency ratio” as also being implicitly about the decline of white people. Although the accelerating pace of technological change may make the future harder to predict, demography has always been one of the most reliable trend lines. […] When William MacAskill muses about the impact of dwindling birth rates on technological innovation in his longtermist text What We Owe the Future it is of course read by the conspiracists as sinister racist eugenics. […] The decline in birthrates is real, driven in part by [bad policies](https://jacobin.com/2018/08/motherhood-birth-rate-childcare-abortion-birth-control), and we should talk about it.
The clear answer for […] industrialized world in general is to liberalize migration; let the world’s young people go where they will. […] While liberalizing migration would ease youth unemployment in developing nations and fuel economies with labor shortages in the short term, it also drains talent and increases dependence on remittances.
In the long run we are almost certainly evolving towards a shrinking world population, with lower fertility and longer life expectancies. Automating work may fill in the labor shortage gap, but we will need comprehensive reform of our welfare systems in order to ensure intergenerational equity. We will need a universal basic income and universal healthcare, not just Social Security and Medicare for seniors. We will need free lifelong access to higher education, child allowances, and free childcare. […] Discussing demographic problems and policy solutions now is neither eugenic nor racist.
Yeah, a society that's mostly old people needing care, with only a small number of young people to do the caring — either directly or indirectly — is a problem irrespective of race or eugenic concerns. And this is the result of people not feeling economically safe enough to have kids, for a variety of reasons; of women's bodily autonomy not being respected, making pregnancy far more dangerous and constricting than it needs to be; and of people feeling like there isn't a future, let alone a better one, for any children they might raise. All of these are things worth fixing.
2.1.7.9. Conclusion
This is worth repeating for the puritans in the audience:
We both believe that more attention should be given to proximate risks like social media pathologies and technological unemployment. We completely support a ruthless deconstruction of the shallow, self-serving, and dangerous ways that some billionaires have adopted these futurist ideas.
But reducing two hundred years of intellectual history and political reality to the sloppy musings of a handful of tech bros and a tenuous web of guilt by association is seriously misleading, like all conspiracy theories.
The real enemy is the political and economic system that allows billionaires to determine our future. It really makes little difference if the ideas selectively adopted by billionaires are Episcopalianism, Darwinism, Wahabism, MAGA or TESCREAL. Billionaires are going to interpret ideas in ways that valorize themselves and protect their interests.
I couldn't possibly bold or highlight this enough:
Worse, the anti-TESCREAL conspiracy is reactive rather than proactive. The Left desperately needs new, positive visions of a liberatory future that take seriously today’s ongoing technological changes and actually-existing organization of social life. Rather than disparage all thinking about the utopian possibilities of the future and retreat into a Keynesian or Stalinist nostalgia, the Left should point to the limitations of bourgeois futurism in helping us achieve more equal, democratically accountable futures.
2.1.8. Fragment on the Event of “Unconditional Acceleration” accelerationism philosophy
U/ACC instead argues that what is open to ‘us’ is perhaps only the possibility of, as Deleuze writes in Logic of Sense, a “becoming the quasi-cause of what is produced within us”. There remains much which is inherently outside ‘us’, however. All we are able to do is produce “surfaces and linings in which the event is reflected”. [2]
In accelerating the process, Deleuze and Guattari nod purposefully towards Nietzsche, and, in light of the limits of what we are able to produce, we should remember that what is key for Deleuze in Nietzsche’s thought is his amor fati; his love of fate. Fate for Nietzsche is not our theistic destiny in the hands of God but the affirmation of a life caught up in its own flows. It is in this way that Deleuze writes of becoming worthy of the Event, of a life made impersonal.
[…]
In living a life (as opposed to my life — privileging the immanently impersonal over the segregated and territorialising personal), the task is “to become worthy of what happens to us, and thus to will and release the event, to become the offspring of one’s own events, and thereby be reborn, to have one more birth, and to break with one’s carnal birth — to become the offspring of one’s events and not of one’s actions, for the action is itself produced by the offspring of the event.” [5]
2.1.9. Xenofeminism feminism philosophy intelligence_augmentation accelerationism
The "Xenofeminism: A Politics for Alienation" manifesto outlines a technologically-driven, accelerationist feminist agenda, as opposed to one that accepts naturalism, essentialism, the split between "masculine" and "feminine" versions of reason, and all of the other gendered thinking tools that patriarchy itself invented and assigned as feminine — tools which too many feminists employ and embrace.
0x00 Ours is a world in vertigo. It is a world that swarms with technological mediation, interlacing our daily lives with abstraction, virtuality, and complexity. XF constructs a feminism adapted to these realities: a feminism of unprecedented cunning, scale, and vision; a future in which the realization of gender justice and feminist emancipation contribute to a universalist politics assembled from the needs of every human, cutting across race, ability, economic standing, and geographical position. No more futureless repetition on the treadmill of capital, no more submission to the drudgery of labour, productive and reproductive alike, no more reification of the given masked as critique. Our future requires depetrification. XF is not a bid for revolution, but a wager on the long game of history, demanding imagination, dexterity and persistence.
2.1.10. TODO Accelerate: An Accelerationist Reader accelerationism
I haven't gotten around to reading this yet, but I very much want to.
2.1.11. TODO Libidinal Economy
The extended quote from Libidinal Economy in Mark Fisher's essay Postcapitalist Desire is fascinating — one might say, arousing? — and it really made me want to read this book. This is my best-attempt OCR of a PDF of it I found on the Internet Archive, using a custom AI orchestration pipeline (one that uses fuzzy diffing to avoid hallucinations).
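As an aside on that pipeline: the actual orchestration isn't reproduced here, but the core "fuzzy diffing" idea can be sketched in a few lines of Python with the standard library's difflib. Everything below — the function name, the length threshold, the sample strings — is an illustrative assumption, not the real pipeline. The idea is to diff the model's cleaned-up transcription against the raw OCR text and flag long inserted runs that have no counterpart in the source, since that is where hallucinated text would show up.

```python
# Illustrative sketch of fuzzy-diff hallucination detection (hypothetical,
# not the actual pipeline): compare AI-cleaned text against raw OCR output
# and surface spans the model invented wholesale.
import difflib

def flag_possible_hallucinations(raw_ocr: str, model_text: str, min_len: int = 20):
    """Return spans present in model_text that have no counterpart in raw_ocr."""
    raw_words = raw_ocr.split()
    model_words = model_text.split()
    matcher = difflib.SequenceMatcher(None, raw_words, model_words)
    flagged = []
    for tag, _i1, _i2, j1, j2 in matcher.get_opcodes():
        if tag in ("insert", "replace"):
            span = " ".join(model_words[j1:j2])
            # Short differences are usually legitimate OCR corrections
            # (fixed typos, spacing); long inserted runs are suspicious.
            if len(span) >= min_len:
                flagged.append(span)
    return flagged

raw = "Tne quick brown fox jumps ovr the lazy dog"
cleaned = ("The quick brown fox jumps over the lazy dog "
           "and then wrote a lengthy essay about libidinal economy")
print(flag_possible_hallucinations(raw, cleaned))
```

Here the small fixes ("Tne" to "The", "ovr" to "over") pass through silently, while the long invented clause gets flagged for human review.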
2.1.12. TODO Fanged Noumena
While I will probably (a) not understand a lot of what I read herein, given Land's dense and occultish style, and (b) disagree with some of what I find — at the very least, its evaluative stances — I nevertheless find Land (before his neoreactionary turn) a fascinating, fascinating intellectual figure, much in the way I do Hobbes or de Sade or Nietzsche (although I unironically agree with Nietzsche a lot more, I think). As Mark Fisher says, Land is the philosopher the left needs to challenge its thinking, its ideas, its methods — and even its values. Fanged Noumena collects his early CCRU-era writings, precisely those prior to the neoreactionary turn, so it's the most interesting of his work for me to read, since I find his neoreactionary ideas painfully boring compared to his left-accelerationist ones.
2.1.13. TODO Thirst for Annihilation
I dunno man, Bataille seems really interesting.
2.1.14. Reaching Beyond to the Other: On Communal Outside-Worship accelerationism
Considering capital as the ultimate “eerie entity”, Fisher wonders about the ways
"that “we” “ourselves” are caught up in the rhythms, pulsions and patternings of non-human forces. There is no inside except as a folding of the outside; the mirror cracks, I am an other, and I always was." (Mark Fisher, The Weird and the Eerie, London: Repeater Books, 2016, 11-12.)
Following this, it is fitting that Fisher then begins his book with an exploration of the works of H.P. Lovecraft. He notes that “it is not horror but fascination — albeit a fascination usually mixed with a certain trepidation — that is integral to Lovecraft’s rendition of the weird” (Ibid., 17). For Fisher, on both an aesthetic and political level, it is the weird that is desirable for its ability to “de-naturalise all worlds, by exposing their instability, their openness to the outside” (Ibid., 29).
[…]
The Outside is a concept that has long haunted the history of philosophy under various different names and formulations — from the Kantian noumenon to the Lacanian Real, et al. — with each functioning as a challenge to subjectivity that attempts to think beyond phenomenal limit-experiences. Whilst this broad definition is applicable to the narratives in much weird fiction, these tales explore the Outside through narrated ‘experience’ rather than objective academic analysis and they do so with an imaginative flair that has fascinated many.
[…]
The [Cthulhu] cult represent the Outside as a comprehensible and material social threat, far more visibly dangerous than the misadventures of the atomised individual in their collective channelling of the powers of the great Cthulhu. Whatever horrifying and unthinkable form the Outside may take, the fact remains that it is seemingly through community alone that its affects can be harnessed (whilst nonetheless remaining intolerable to the individual human mind).
[…]
Death is, of course, the ultimate limit-experience, the ultimate challenge to subjectivity, and here grief becomes the affective result of being haunted by the Outside through the absences that death imposes upon both individual and community.
[…]
Caring for one another with the intensity that so often follows grief renews the possibility of such a collective subject being established […] it is through community that the affects of the Outside are channelled, whilst still remaining intolerable, and the political implications of this communal channelling are considerable.
Whilst such implications are not discussed in The Weird and the Eerie explicitly, in the context of Fisher’s wider writings the book reads like an aesthetic toolkit for ontopolitical ‘egress’[…] In his next book, Acid Communism, left in an unknown state of completion at the time of his death, Fisher was to address the political reality of egress more explicitly. He hoped to reinvigorate the psychedelic praxes of consciousness-raising/-razing that have come to culturally define the 1960s and ’70s, channelling them through his postcapitalist desires.
[…]
In the unpublished introduction to Acid Communism, Fisher writes of this potential return of the new that capitalist realism […] repeatedly ungrounds […]. Fisher seemed to want to encourage a community of Lovecraftian Outsiders, unsure of how they arrived at their present situation but nonetheless curious to leave the cloistered world in which they find themselves.
[…]
For Fisher, thinking through the work of Herbert Marcuse, the history of Western art is littered with exit strategies. He presents a leftist instantiation of Land’s Outsider position, challenging the contemporary populist left, that can at best be described as working to a model of all voice and no exit, calling for new attempts at finding exits through other ways of living — attempts that have all too often been neutered by capitalism’s cooptive mechanisms.
[…]
However, Fisher’s is not an anarcho-primitivist position, supporting a return to a time before capitalism and its technologies. His accelerationist position is an advocation of the use of capitalism’s forces to modulate past potentials, transducing them into the future by collectively harnessing capital’s deterritorializing capacities for outside aims and egresses.
2.1.15. Cyberpunk is Now Our Reality accelerationism cyberpunk hacker_culture
We are living in a pre-cyberpunk world, actively paving the groundwork for corporate-government fusion… William Gibson didn't predict the future, he described the present with better lighting.
[…]
The Death of the Hacker Ethic
There was a time when resistance meant building alternatives… Linux stands as a monument to that effort… Now, resistance is synonymous with "there ought to be a law." … Every single one of these “solutions” makes the problem worse. Large corporations can afford compliance teams. They can navigate complex regulatory frameworks. They can lobby for favorable interpretations. Small businesses and potential competitors cannot. They must then turn to these large companies for infrastructure and expertise, creating even less competition and fewer choices for consumers.
[…]
The Regulatory Capture Loop
Complex regulations get passed that ostensibly protect consumers from big corporations. Small businesses can't afford compliance, so they either fail or get absorbed by larger entities that can handle the regulatory burden. The remaining large corporations then influence how these regulations are interpreted and enforced because they've grown even larger in capital, technical expertise, and manpower.
The result is a system where every attempt at resistance actually strengthens the thing being resisted. It's regulatory capture disguised as consumer protection. It's corporate-government fusion marketed as progressive activism.
This is the Zaibatsu model from Neuromancer, not through dramatic corporate takeovers, but through the slow erosion of the capacity for independent action. Every time we choose regulatory solutions over building alternatives, we're voting for a world where only institutions with massive resources can operate effectively.
[…]
Consensual Hallucination
Gibson's "consensual hallucination" was cyberspace… Today's consensual hallucination is subtler: the belief that regulatory capture is actually resistance. We've collectively agreed to hallucinate that asking power to regulate itself constitutes meaningful opposition.
[…]
Jacking Out of the Matrix
The real hack isn't regulatory, it's cognitive. It's refusing to accept the premise that only institutions can solve institutional problems. The future is still being written in code. The question is: will you be writing it, or will you be asking someone else to write it for you?
2.1.16. An Anarcho-Transhumanist FAQ intelligence_augmentation anarchism post_left
This FAQ states my beliefs on technology and transhumanism and their overlap with anarchism almost exactly as I would've stated them — and probably will state them in upcoming half-finished essays — so I decided it's well worth just… putting it here, so it can speak for me and what I believe in and hope for.
2.1.17. Science As Radicalism philosophy anarchism
A deeply cathartic-to-read critique of the misconceptions of science that so many people on the left have, and a defense of its radicalism and its value, against the spiritualists and mystics, from the perspective of someone who actually understands the epistemological and moral objections being made (such as those of Against Method) instead of merely ignoring them.
2.1.18. Rethinking Crimethinc. anarchism culture philosophy
I've always found Crimethinc kind of insufferable, and reading (well, listening to) Days of War, Nights of Rage and seeing their comments on things like poverty and wearing deodorant of all things only reinforced my notion that I really don't like them. It's possible to do actually useful things that help yourself and your community without giving in to vanguardism on the one hand or co-option and total implication in the system on the other, but they're so focused on purity and radical aesthetics and having fun that they're incapable of getting their hands dirty to do any of that. Moreover, it's possible to be what Bookchin called a "lifestylist" without it actually being bad or a problem, but Crimethinc seems to glorify a sort of hippie crust-punk dropout lifestyle of play-acting at poverty.
Many aspects of crimethinc reference the Situationist Internationale and a large chunk of their ideas are based around the Situationist concept “the transformation of everyday life”. The Situationists were heavily influenced by Marx and CWC are heavily influenced by American consumer culture it would seem. The call to transform everyday life is a call to smash the current exploitative system, to participate in the class struggle, an ongoing historical conflict between the proletariat and the ruling class. Crimethinc substitute this class struggle with a teenage individualistic rebellion based on having fun now. Shoplifting, dumpster diving, quitting work are all put forward as revolutionary ways to live outside the system but amount to nothing more than a parasitic way of life which depends on capitalism without providing any real challenge. […]
The reality of the situation is that you can’t boycott your way out of capitalism; dropping out of the system is never going to bring it down. If anything you just re-enforce the system by recuperating people’s alienation and desire for revolution by selling them a new lifestyle under the same system. Capitalism is a system of coercion and control: we don’t work to support the system, we work because we need food and shelter and healthcare, and the only way to get that under capitalism is with money. The only way we can get money is by selling our labour; the alternative is to rot. That’s Capitalism. I don’t want to feed my kids out of a dumpster or have to scam free healthcare if I get cancer; it’s not appealing or practical. There’s nothing revolutionary about using your white, middle-class, western privilege to remove yourself from the system at the expense of those who remain trapped in it. None of us are free until we all are.
This quote, expanded upon to like a paragraph in the actual Days of War, Nights of Rage book, also made me so fucking angry as someone who's had the huge privilege to be able to avoid these things, but has been adjacent to them for most of my adult life because of who I love and who I'm friends with, and my own disabilities and marginalization:
“Poverty, unemployment, homelessness — if you’re not having fun, you’re not doing it right!”
As "Rethinking" rightly says:
The arrogance of middle class kids (just like the hippies) supposing to change the world by roughing it as “poor” people for a few years is captured perfectly in the quote on the back cover of their book Evasion. […] Condescending, privileged, middle class crap. The only people who could think that poverty is in any way fun are wealthy kids playing at being poor for a few years; the daily reality of poverty, unemployment and homelessness for the average person is very serious and something anarchists should always organise against rather than mock.
This is another juicy quote:
One of crimethinc’s more recent publications, “Recipes for Disaster: An Anarchist Cookbook”, is indicative of the massive problems with them. The book is a somewhat interesting list of pranks, scams and activist information. […] An eclectic mix of information, most of which is crap the rest of which is useless without political understanding. […] The book shies away from serious revolutionary information like how to organise a union in your workplace, how to organise at school, how to make contact and work with communities in struggle, how to break out of the activist ghetto, how to set up a social centre, how to provide prisoner support or how to support asylum seekers etc. All the activities amount to little more than activist busy-work, something to waste your time with while being a “drop-out”, ease your social conscience and not have to do any hard work or compromise yourself by working with people who are complicit in the system.
A lot of Crimethinc's work also feels like a gleeful embrace of the implications of transcendental miserablism, a sort of "we're radical because we LIKE that (we believe) anarchism requires asceticism and a gentle decline into that good night."
2.1.19. Comments on CrimethInc. anarchism philosophy culture
This is another good criticism of CrimethInc — placing special emphasis on something I also noticed when reading Days of War: that for avowed amoralists who dislike puritanism, they're quite puritanical, speaking in a lot of generalized, absolutist moral terms in ways that are pretty unjustified.
Despite your cautions against ideology, your book is riddled with simplistic, unqualified declarations. In some places you are admirably open and modest, but in others you come on like you have definitive answers to practically everything from the meaning of life to whether people should wear deodorant or not. […] Just as you present rebellious actions as almost purely GOOD, you tend to present the system as almost purely BAD. In reality, just as most revolts and radical movements have been full of mistakes and limitations, many aspects of the present society are positive, or at least potentially so. […]
There is also a recurring moralizing simplisticness. It is good that you recognize the element of necessary hypocrisy and compromise in our lives. But a lot of your agonizing over whether this or that practice is hypocritical is, to me, a phoney, nonexistent issue. I do not view my options primarily in terms of whether I am "implicated" in capitalism, as if that were some sort of sin to be avoided at all cost. Nor, conversely, do I consider that I am accomplishing anything very notable if I avoid some such compromise, as if radical struggle were a matter of more and more people gradually becoming less and less implicated in the prevailing system. That perspective is just as simplistic as pacifists' feeling that we will arrive at peace by more and more people becoming pacifists (while failing to confront economic and other factors that engender wars despite most people's preference for peace). While I salute the sense of experimentation of your friend who tried to live off garbage pickings instead of buying food, it does not seem to me that such choices have much to do with radical strategy.
In the same essay there's a critique of a similar collective from the UK, attached by the author because he believes the same critiques apply to CrimethInc, and I agree:
I don't have time to comment on Theft #2 in any detail. The most notable criticism I have is that the last chapter is sometimes rather simplistic. While I think it's fine to recommend that people seek pursuits that are enjoyable and satisfying to them, it seems to me rather silly to declare that life "should be" "perpetual ecstasy" etc. This kind of "should be" amounts to little more than that you think it would be nice if things were that way. It's ultimately pretty meaningless, like saying that insects "should" have "the right" to live freely without being eaten by birds. It's a false reasoning which you have probably picked up from Vaneigem. He rightly criticizes traditional leftism's overemphasis on sacrificing for the cause, but then flips into an equally unjustified opposite conclusion that pleasure is the supreme criterion for everything, and then to the even more absurd implication that a successful revolution will somehow magically produce endless unalloyed pleasure.
This is an important point; Emma Goldman may have said that she wanted no part in a revolution without dancing, but a revolution that's only dancing is just a dance party. It gets even more cutting, though:
Again, I think it's good that you encourage people to reexamine their lives, to reduce addictive consumership, and to make space for relaxation and reflection. But you have to be careful not to be too rigid in your recommendations. "The more you consume, the less you live" makes a good graffiti, it conveys a good general point. But it shouldn't be taken too literally, as if it were a precise scientific formula. In your SHIT percentage test, for example, you more or less equate "the more of yourself is actually yours" (a rather vague notion in any case) with lower SHIT percentages. This amounts to an inverse economic fetishization, a sort of anti-economic puritanism, as if enjoyment was always inversely proportional to the degree of economic taint.
And, most relevant to Libidinal Economy (which I want to read) and what's discussed in "Notes on Accelerationism":
Actually, of course, in many cases an activity that creates profit for someone may nevertheless be more enjoyable than another activity that puristically avoids the market. The best things in life are not always free, even if they "should" be. If you frequently present this kind of over-simplified formula, people with enough sense to know better will not take you seriously regarding the many other areas where you have valid points to make.
A good distinction from one of the comments:
…i think the point is that if you consider a personal aversion to sweatshop clothing a political act that makes any difference to the existence of sweatshops, that is the kind of substitution for collective action that characterises individualism/lifestylism.
2.1.20. Hopepunk, Optimism, Purity, and Futures of Hard Work by Ada Palmer fiction
I reject the fallacy that we can "control" the future in any large-scale way, building mass movements or central organizations to guide the flows of accelerating history. Yet I still think it's worth holding in your heart an image of what the future should look like to you, and choosing to keep getting up each morning — "Carlyle had risen full of strength that day, for it was the day…" — and keep working towards it in whatever small way you can: not tricking yourself that your prefiguration will necessarily mean anything, but doing it because it's healing, because it grants you purpose. Often doing so doesn't fix the world, but it can make the world a little better, even if the complex flows of chaos that swirl out undo any larger goal: you can ease the pain and sorrow of the world that crushes us, and that is worth it anyway. Thus I'm not against praxis; I'm against deluded praxis, centralized praxis, praxis that comes with eschatology.
This essay by one of my favorite authors echoes that feeling, even though I disagree with some of its more solutionistic, liberal statements of belief in the actual power to change things, or in incrementalism. And what it has to say about purity culture in activism is well worth hearing as well. Some quotes, as has become customary:
[…] hopepunk sketch[es] out a body of imagined worlds which are positive but not utopias, because their positivity lies, not in the world already being excellent, but in the world moving toward the better thanks to the efforts of excellent people who work to make a difference. […] Hopepunk stories tend to showcase cooperation, collective action, resilience, partial victories as the world is moved toward, not to, a better state, ending with (re)construction underway and the world chang/ing/, not chang/ed/. The subgenre has also been described as weaponized optimism […] connected with what Malka Older has called speculative resistance.
One general signature of hopepunk is that its stories counter tales of emotional darkness or rottenness, […] violent, amoral, and often dystopian/apocalyptic trappings […] plots and character choices [that] advance zero-sum narratives where achievement requires causing someone else’s fall, or portraits of human nature in which, in the end, people will always be selfish […] and in which systems will always be corrupt and unsalvageable. In Hopepunk […] ordinary people[…] while [some] make bad choices, enough make good choices to leave a positive sense of the capacity of humans to choose good. […] [V]ery few [stories] depict how studies show people really behave in crisis, banding together to supply pop-up pantries and mutual aid.
Before it sounds like hopepunk could describe any story where […] or good guys beat bad guys, these are not […]. […] The punk movement is anti-establishment, with long ties to political activism and resistance, anti-consumerist, anti-corporate, anti-authoritarian, with a strong ethic of visibility and in-your-face active expression of these sentiments.
Punk is also messy. While the grimdark hero […] is one opposite of the Disney Princess, hopepunk is another opposite. The Disney Princess and many hero stories are purity stories. […] Punk is grungy in aesthetic, and hopepunk shares that, building better among the garbage of the bad. It also expresses negative emotions, not despair but productive anger, as well as kindness which sometimes needs to take the form of confrontation […]. Hopepunk showcases resilience by showing failure, setbacks, and compromise, not as heroic flaws or formative backstories, but acknowledging that messing up is an unavoidable part of taking action in the first place.
Most grimdark heroes make mistakes, but they are giant character-defining mistakes […] reinforcing the idea that any failure or impurity is a big deal, not a normal part of living a reasonable life. […]
Messiness and impurity paired with positive change are one core way hopepunk differs […] from a huge body of narratives, and even political logics. As articulated by philosopher/sociologist Alexis Shotwell in her brilliant book Against Purity: Living Ethically in Compromised Times, ideas of purity often do harm to action and activism […] Purity is also used (often strategically) to make ethical choices more difficult. […] Shotwell discusses the impossibility of true purity—practically no foods or products exist that don’t harm something—and the importance of acknowledging harm done in order to be able to evaluate levels of harm, reduction of harm, etc. […]
Hopepunk narratives are genre stories which have depictions of human nature (teamwork, honesty, resilience) but which also counter purity narratives, by having space for partial victories, unfinished projects, compromise, and mundane not-character-defining failures and mistakes. Setbacks in hopepunk tend to be more about the outcome for the world, what now needs to be done to help or fix the problem, in contrast with stories where setbacks or failures are mainly beats in character development, the point where the hero must stand by his vow never to kill again, or prove her leadership skills to keep the team together.
Fiction does not give us many stories of continuing to slog on after an unsatisfying partial victory. That makes hopepunk powerful.
Interestingly, even my almost post-apocalyptic, ultimately-doomed dystopian cyberpunk stories are, in a sense, all ultimately about this, with an unconditional accelerationist twist: we can't control where we're going; that's in the hands of technocapital's hyperstitional ideas and egregores; but we can try to make things better, try to improve, to stop harming, to keep doing something, and that is meaningful and does, in a small way that may butterfly-effect out into a large way, change the world for the better in a way we can't possibly predict.
2.1.21. Civilisation, Primitivism and Anarchism anarchism
Soundly and neatly deals with primitivism as an ideology:
- Its rejection of the fundamental anarchist project – showing how we can have civilization, ecological sustainability, and liberty at the same time – in favor of the liberal premise that these things are incompatible, merely drawing a slightly different conclusion from it.
- The 'population question', which renders primitivism a totally incoherent and unworkable ideology with nothing to offer outside a facile intellectual mind game of critique.
- The violent, coercive, and borderline fascist implications of any attempt to reduce the human population at a large scale.
- Responses to some responses to these points.
2.2. AI
2.2.1. Environmental Issues
2.2.1.1. Using ChatGPT is not bad for the environment ai culture
If you don't have time to read this post, these five graphs give most of the argument. Each includes both the energy/water cost of using ChatGPT in the moment and the amortized cost of training GPT-4:
Source for the original graph; I added the ChatGPT number. Each bar here represents 1 year of the activity, so the "live car-free" bar represents living without a car for just 1 year, etc. The ChatGPT number assumes 3 Wh per search multiplied by average emissions per Wh in the US. Including the cost of training would raise the energy used per search by about 33%, to 4 Wh. Some new data implies ChatGPT's energy use might be 10x lower than this.
I got these numbers by multiplying the energy used in data centers by different tasks by the sum of the average water used per kWh in data centers and the average water used to generate that energy. The water cost of training GPT-4 is amortized into the cost of each search. This is the same method originally used to estimate the water used by a ChatGPT search. Note that what counts as water being "used" by data centers is ambiguous in general; read more in this section.
Statistic for a ChatGPT search, burger, and leaking pipes.
I got these numbers from back-of-the-envelope calculations using publicly available data about each service. If you think they're wrong, I'd be excited to update them! Because this is based on the total energy used by services that are rapidly growing, it's going to become outdated fast.
Back of the envelope calculation here
And it's crucial to note that, for instance, if you're using Google's AIs, the cost is probably lower than this: they are mixture-of-experts models (so inference is much cheaper) and run on Google's much more power-efficient Tensor chips. Running a small AI locally is probably less efficient than running that same small AI in a data center, but the models you can run locally are so much smaller than the ones you'd run in a data center that this counts as an optimization too, not to mention that local inference reduces the power density and distortion that data centers impose on the power grid.
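The amortization step described in the graph captions above can be sanity-checked with a quick back-of-the-envelope sketch. All constants here are the post's own estimates (3 Wh per search, training adding roughly a third when spread over all searches), not independent measurements:

```python
# Per-search energy for ChatGPT, using the post's rough figures.
INFERENCE_WH_PER_SEARCH = 3.0   # the post's assumption for one search
TRAINING_OVERHEAD = 0.33        # GPT-4 training cost amortized over all searches

amortized_wh = INFERENCE_WH_PER_SEARCH * (1 + TRAINING_OVERHEAD)
print(f"{amortized_wh:.1f} Wh per search including training")  # ~4.0 Wh

# If the newer data is right that energy use is ~10x lower:
optimistic_wh = amortized_wh / 10
print(f"{optimistic_wh:.2f} Wh per search (optimistic estimate)")
```

Either figure is a tiny fraction of the household activities in the comparison graphs, which is the post's central point.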
2.2.1.2. Is AI eating all the energy? Part 1/2 ai
I think this, especially in combination with "Using ChatGPT is not bad for the environment", is a really good demonstration of the idea that generative AI, in itself, is not a particularly power hungry or inefficient technology.
What makes this article worth adding alongside "Using ChatGPT" is its different perspective. First, it takes the time to aggregate the power usage of another industry whose power consumption no one seems to have a problem with — precisely because it's distributed, and thus usually invisible — in this case the gaming industry, to give a real sense of scale for those seemingly high absolute numbers for AI work. Then, to pile on even more, instead of comparing GenAI to common household and entertainment tasks as "Using ChatGPT" does, it specifically compares using GenAI to save time on a task against doing all of it yourself — similar to this controversial paper.
Of course the natural response would be that the quality of the work that AI can do is not comparable to the quality of the work an invested human can do when really paying attention to every detail, which is true! But is it all or nothing? If AI is less energy intensive than a human at drawing and writing, then a human that's really pouring their heart and soul and craft into their writing or art but uses AI fill or has AI write some boilerplate or help them draft or critique their writing (thus saving a lot of sitting staring at the screen cycling things around) might save power on those specific sub-tasks. Moreover, do we really do that all the time? Or can AI be a reasonable timesaver for things we'd otherwise dash off and not pay too much attention to, thus acting as an energy-saver too?
2.2.1.3. Is AI eating all the energy? Part 2/2 ai
This is the natural follow-up to the previous part of this article. In this, the author points out where the terrifying energy and water usage from AI is coming from. Not those using it, nor the technology itself inherently, but the reckless, insane, limitless "scale at all costs" (literally — and despite clearly diminishing returns) mindset of corporations caught up in the AI land grab:
This is the land rush: tech companies scrambling for control of commercial AI. […] The promises of huge returns from speculative investment breaks the safety net of rationalism.
[…] Every tech company is desperate to train the biggest and most expensive proprietary models possible, and they’re all doing it at once. Executives are throwing more and more data at training in a desperate attempt to edge over competition even as exponentially increasing costs yield diminishing returns.
[…]
And since these are designed to be proprietary, even when real value is created the research isn’t shared and the knowledge is siloed. Products that should only have to be created once are being trained many times over because every company wants to own their own.
[…]
In shifting away from indexing and discovery, Google is losing the benefits of being an indexing and discovery service. […] The user is in the best position to decide whether they need an AI or regular search, and so should be the one making that decision. Instead, Google is forcing the most expensive option on everyone in order to promote themselves, at an astronomical energy cost.
[…]
Another mistake companies are making with their AI rollouts is over-generalization. […] To maximize energy efficiency, for any given problem, you should use the smallest tool that works. […] Unfortunately, there is indeed a paradigm shift away from finetuned models and toward giant, general-purpose AIs with incredibly vast possibility spaces.
[…]
If you’re seeing something useful happening at all, that’s not part of the bulk of the problem. The real body of the problem is pouring enormous amounts of resources into worthless products and failed speculation.
The subtitle for that bloomberg article is “AI’s Insatiable Need for Energy Is Straining Global Power Grids”, which bothers me the more I think about it. It’s simply not true that the technology behind AI is particularly energy-intensive. The technology isn’t insatiable, the corporations deploying it are. The thing with an insatiable appetite for growth at all cost is unregulated capitalism.
So the lesson is to only do things if they’re worthwhile, and not to be intentionally wasteful. That’s the problem. It’s not novel and it’s not unique to AI. It’s the same simple incentive problem that we see so often.
[…] Individual users are — empirically — not being irresponsible or wasteful just by using AI. It is wrong to treat AI use as a categorical moral failing […] blame for these problems falls squarely on the shoulders of the people responsible for managing systems at scale. […] And yet visible individuals who aren’t responsible for the problems are being blamed for the harm caused by massive corporations in the background […] it moves the focus from their substantial contribution to the problem to an insubstantial one they’re not directly responsible for.
It’s the same blame-shifting propaganda we see in recycling, individual carbon footprints, etc.
2.2.1.4. Reactions to MIT Technology Review's report on AI and the environment ai
A new report from MIT Technology Review on AI's energy usage is being touted by anti-AI people as proof they were right. In actuality, its numbers line up very nicely with the defenses of AI's energy usage that we've been seeing — so why are people confused? Because they presented their data in an extremely misleading way:
The next section gives an example of how using AI could make your daily energy use get huge quick. Do you notice anything strange?
So what might a day’s energy consumption look like for one person with an AI habit?
Let’s say you’re running a marathon as a charity runner and organizing a fundraiser to support your cause. You ask an AI model 15 questions about the best way to fundraise.
Then you make 10 attempts at an image for your flyer before you get one you are happy with, and three attempts at a five-second video to post on Instagram.
You’d use about 2.9 kilowatt-hours of electricity—enough to ride over 100 miles on an e-bike (or around 10 miles in the average electric vehicle) or run the microwave for over three and a half hours.
Reading this, you might think “That sounds crazy! I should really cut back on using AI!”
Let’s read this again, but this time adding the specific energy costs of each action, using the report’s estimates for each:
Let’s say you’re running a marathon as a charity runner and organizing a fundraiser to support your cause. You ask an AI model 15 questions about the best way to fundraise. (This uses 29 Wh)
Then you make 10 attempts at an image for your flyer before you get one you are happy with (This uses 12 Wh) and three attempts at a five-second video to post on Instagram (This uses 2832 Wh)
You’d use about 2.9 kilowatt-hours of electricity—enough to ride over 100 miles on an e-bike (or around 10 miles in the average electric vehicle) or run the microwave for over three and a half hours.
Wait a minute. One of these things is not like the other. Let’s see how these numbers look on a graph:
Of the 2.9 kilowatt-hours, 98% is from the video!
This seems like saying “You buy a pack of gum, and an energy drink, and then a 7-course meal at a Michelin-star restaurant. At the end, you’ve spent $315! You just spent so much on gum, an energy drink, and a seven-course meal at a Michelin-star restaurant.” This is the wrong message to send readers. You should be saying “Look! Our numbers show that your spending on gum and energy drinks doesn’t add up to much, but if you’re trying to save money, skip the restaurant.”
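The skew is easy to verify from the report's own per-task estimates, taken straight from the annotated passage above:

```python
# Per-task energy from the MIT Technology Review scenario, in Wh.
costs_wh = {
    "15 text questions": 29,
    "10 image generations": 12,
    "3 five-second videos": 2832,
}

total_wh = sum(costs_wh.values())          # 2873 Wh, i.e. ~2.9 kWh
video_share = costs_wh["3 five-second videos"] / total_wh

print(f"total: {total_wh / 1000:.1f} kWh")
print(f"video share: {video_share:.1%}")   # ~98.6%
```

Drop the videos and the whole day of "AI habit" comes to 41 Wh, about a minute and a half of microwave time.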
2.2.1.5. Mistral environmental impact study ai
This is excellent work on the part of Mistral:
After less than 18 months of existence, we have initiated the first comprehensive lifecycle analysis (LCA) of an AI model, in collaboration with Carbone 4, a leading consultancy in CSR and sustainability, and the French ecological transition agency (ADEME). To ensure robustness, this study was also peer-reviewed by Resilio and Hubblo, two consultancies specializing in environmental audits in the digital industry.
I'm excited that at least one decently sized, relatively frontier AI company has finally, actually been thorough, complete, and open on this matter: not just cooperating with an independent sustainability consultancy, but also with the French ecological transition agency and two separate independent environmental auditors. This is better than I had hoped for!
The lifecycle analysis is almost hilariously complete, too, encompassing:
- Model conception
- Datacenter construction
- Hardware manufacturing, transportation, and maintenance/replacement
- Model training and inference (what people usually look at)
- Network traffic in serving model tokens
- End-user equipment while using the models
Basically, the study concludes that generating 400 tokens costs 1.14g of CO2, 0.05L of water, and 0.2mg Sb eq of non-renewable materials. Ars Technica puts some of these figures into perspective well:
Mistral points out, for instance, that the incremental CO2 emissions from one of its average LLM queries are equivalent to those of watching 10 seconds of a streaming show in the US (or 55 seconds of the same show in France, where the energy grid is notably cleaner).
This might seem like a lot until you realize that the average query length they're using (from the report) is 400 tokens, and Mistral Large 2 (according to OpenRouter) generates tokens at about 35 tok/s, so those 400 tokens would take 11 seconds to generate, which means that this isn't increasing the rate of energy consumption of an average internet user at all.
It's also equivalent to sitting on a Zoom call for anywhere from four to 27 seconds, according to numbers from the Mozilla Foundation. And spending 10 minutes writing an email that's read fully by one of its 100 recipients emits as much CO2 as 22.8 Mistral prompts, according to numbers from Carbon Literacy.
So as long as using AI saves you more than about 26 seconds out of the 10 minutes of writing an email, it's actually a net environmental win. (10 minutes / 22.8 prompts ≈ 0.44 minutes ≈ 26.3 seconds, so a Mistral prompt is equivalent to about 26 seconds of writing an email yourself in terms of CO2 output.)
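Both equivalences work out from the figures quoted above; a rough sketch (the 35 tok/s generation speed is OpenRouter's number for Mistral Large 2, and the email scenario is Carbon Literacy's):

```python
# How long a 400-token Mistral response takes to generate, and how many
# seconds of manual email-writing emit the same CO2.
TOKENS_PER_QUERY = 400          # average query length from the report
TOKENS_PER_SECOND = 35          # OpenRouter's figure for Mistral Large 2
generation_s = TOKENS_PER_QUERY / TOKENS_PER_SECOND   # ~11.4 s

EMAIL_MINUTES = 10              # Carbon Literacy's email-writing scenario
PROMPTS_PER_EMAIL = 22.8        # Mistral prompts with the same CO2 cost
seconds_per_prompt = EMAIL_MINUTES * 60 / PROMPTS_PER_EMAIL   # ~26.3 s

print(f"{generation_s:.1f} s to generate; "
      f"CO2 of ~{seconds_per_prompt:.1f} s of email-writing")
```

So one prompt's generation time (~11 s) and its CO2-equivalent in streaming (~10 s) nearly cancel, and its CO2-equivalent in email-writing (~26 s) is the break-even point for time saved.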
Meanwhile, training the model and running it for 18 months used 20.4 ktCO2, 281,000 m3 of water, and 660 kg Sb eq of resource depletion. Once again, Ars Technica puts this in perspective:
20.4 ktons of CO2 emissions (comparable to 4,500 average internal combustion-engine passenger vehicles operating for a year, according to the Environmental Protection Agency) and the evaporation of 281,000 cubic meters of water (enough to fill about 112 Olympic-sized swimming pools [or about the water usage of 500 Americans for a year]).
That sounds like a lot, but it's the same fallacy I've pointed out over and over when people discuss AI's environmental issues: the fallacy of aggregation. It sounds gigantic, but in comparison to the number of people it benefits, it is absolutely and completely dwarfed; moreover, there are a million other things we do regularly without going into a moral panic over it — such as gaming — that, when aggregated in the same way, use much more energy.
What this further confirms, then, in my opinion, is the point that compared to a lot of other common internet tasks that we do — including streaming and video calls and stuff like that — AI is basically nothing. And even for tasks that are basically directly equivalent, like composing an email manually versus composing it with the help of an AI, it actually uses less CO2 and water to do it via the AI. Basically: the more optimistic, rational, middle-of-the-road estimates of AI climate impact, which until now had to make do with only estimated data, are further confirmed to be correct.
They emphasize, of course, that with millions or billions of people prompting these models, that small amount can add up. But by the same token, those more expensive common local or internet computing tasks that we already do without thinking would add up to even more. And it's worth pointing out that the CO2 emitted and water used by this AI, with millions of people prompting it a lot, is the equivalent of about 4,500 people each owning a car for a year. That's nothing in comparison to the size of the user base.
What's worth noting for this analysis is that they did it for their Mistral Large 2 model. At 123B parameters, this model is significantly smaller than a lot of frontier open-weight models in its price bracket (usually 200-400B), but it is dense, meaning that training and inference require all parameters to be active and evaluated to produce an output, whereas almost all modern frontier models are mixture-of-experts, with only about 20-30B parameters typically active. This means that Mistral Large 2 likely used around 4-5 times more energy and water to train and run inference with than top-of-the-line competing models. So put that in your hat.
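A naive way to see where an estimate like that comes from is the ratio of parameters evaluated per token, assuming energy scales roughly with active parameters. That's a crude proxy (real costs also depend on hardware, batching, and memory traffic), and under the 20-30B active-parameter assumption it actually brackets a 4-6x range:

```python
# Dense Mistral Large 2 evaluates all parameters on every token;
# a mixture-of-experts model only evaluates its active subset.
DENSE_PARAMS_B = 123          # Mistral Large 2, dense
MOE_ACTIVE_B = (20, 30)       # typical active params in frontier MoE models

ratios = [DENSE_PARAMS_B / active for active in MOE_ACTIVE_B]
print(f"naive per-token cost ratio: {ratios[1]:.1f}x to {ratios[0]:.1f}x")
```

In other words, Mistral's already-small numbers are, if anything, an overestimate of what a comparable MoE frontier model costs per query.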
The big issue with AI continues to be the concentration of environmental and water usage in particular communities, and the reckless and unnecessary scaling of AI datacenters by the hyperscalers.
Mistral does have some really good suggestions for improving the environmental efficiency of models themselves, though, besides just waiting for the AI bubble to pop:
These results point to two levers to reduce the environmental impact of LLMs.
- First, to improve transparency and comparability, AI companies ought to publish the environmental impacts of their models using standardized, internationally recognized frameworks. Where needed, specific standards for the AI sector could be developed to ensure consistency. This could enable the creation of a scoring system, helping buyers and users identify the least carbon-, water- and material-intensive models.
- Second, from the user side, encouraging the research for efficiency practices can make a significant difference:
- developing AI literacy to help people use GenAI in the most optimal way,
- choosing the model size that is best adapted to users’ needs,
- grouping queries to limit unnecessary computing,
For public institutions in particular, integrating model size and efficiency into procurement criteria could send a strong signal to the market.
2.2.2. IP Issues
2.2.2.1. “Wait, not like that”: Free and open access in the age of generative AI ai culture hacker_culture
The whole article is extremely worth reading for the full arguments, illustrations, and citations, and mirrors my feelings well, but here's just the thesis:
The real threat isn’t AI using open knowledge — it’s AI companies killing the projects that make knowledge free.
The visions of the open access movement have inspired countless people to contribute their work to the commons: a world where “every single human being can freely share in the sum of all knowledge” (Wikimedia), and where “education, culture, and science are equitably shared as a means to benefit humanity” (Creative Commons).
But there are scenarios that can introduce doubt for those who contribute to free and open projects like the Wikimedia projects, or who independently release their own works under free licenses. I call these “wait, no, not like that” moments.
[…]
These reactions are understandable. When we freely license our work, we do so in service of those goals: free and open access to knowledge and education. But when trillion dollar companies exploit that openness while giving nothing back, or when our work enables harmful or exploitative uses, it can feel like we've been naïve. The natural response is to try to regain control.
This is where many creators find themselves today, particularly in response to AI training. But the solutions they're reaching for — more restrictive licenses, paywalls, or not publishing at all — risk destroying the very commons they originally set out to build.
The first impulse is often to try to tighten the licensing, maybe by switching away to something like the Creative Commons’ non-commercial (and thus, non-free) license. […]
But the trouble with trying to continually narrow the definitions of “free” is that it is impossible to write a license that will perfectly prohibit each possibility that makes a person go “wait, no, not like that” while retaining the benefits of free and open access. If that is truly what a creator wants, then they are likely better served by a traditional, all rights reserved model in which any prospective reuser must individually negotiate terms with them; but this undermines the purpose of free […]
What should we do instead? Cory Doctorow has some suggestions:
Our path to better working conditions lies through organizing and striking, not through helping our bosses sue other giant multinational corporations for the right to bleed us out.
The US Copyright Office has repeatedly stated that AI-generated works don't qualify for copyrights […]. We should be shouting this from the rooftops, not demanding more copyright for AI.
[…]
Creative workers should be banding together with other labor advocates to propose ways for the FTC to prevent all AI-based labor exploitation, like the "reverse-centaur" arrangement in which a human serves as an AI's body, working at breakneck pace until they are psychologically and physically ruined:
https://pluralistic.net/2022/04/17/revenge-of-the-chickenized-reverse-centaurs/
As workers standing with other workers, we can demand the things that help us, even (especially) when that means less for our bosses. On the other hand, if we confine ourselves to backing our bosses' plays, we only stand to gain whatever crumbs they choose to drop at their feet for us.
2.2.2.2. If Creators Suing AI Companies Over Copyright Win, It Will Further Entrench Big Tech ai culture
There’s been this weird idea lately, even among people who used to recognize that copyright only empowers the largest gatekeepers, that in the AI world we have to magically flip the script on copyright and use it as a tool to get AI companies to pay for the material they train on. […] because so many people think that they’re supporting creators and “sticking it” to Big Tech in supporting these copyright lawsuits over AI, I thought it might be useful to play out how this would work in practice. And, spoiler alert, the end result would be a disaster for creators, and a huge benefit to big tech. It’s exactly what we should be fighting against.
And, we know this because we have decades of copyright law and the internet to observe. Copyright law, by its very nature as a monopoly right, has always served the interests of gatekeepers over artists. This is why the most aggressive enforcers of copyright are the very middlemen with long histories of screwing over the actual creatives: the record labels, the TV and movie studios, the book publishers, etc.
This is because the nature of copyright law is such that it is most powerful when a few large entities act as central repositories for the copyrights and can lord around their power and try to force other entities to pay up. This is how the music industry has worked for years, and you can see what’s happened. […]
[…] The almost certain outcome (because it’s what happens every other time a similar situation arises) is that there will be one (possibly two) giant entities who will be designated as the “collection society” with whom AI companies will […] just purchase a “training license” and that entity will then collect a ton of money, much of which will go towards “administration,” and actual artists will… get a tiny bit.
[…]
But, given the enormity of the amount of content, and the structure of this kind of thing, the cost will be extremely high for the AI companies […] meaning that only the biggest of big tech will be able to afford it.
In other words, the end result of a win in this kind of litigation […] would be the further locking-in of the biggest companies. Google, Meta, and OpenAI (with Microsoft’s money) can afford the license, and will toss off a tiny one-time payment to creators […].
2.2.2.3. Creative Commons: AI Training is Fair Use
The Creative Commons makes a detailed argument that AI training should be considered fair use — one that is, in my opinion, logically and philosophically sound, if perhaps not necessarily legally sound. We'll see: I'm not a lawyer, and these things are currently under dispute in the courts, although it does seem like things are turning toward AI training being fair use.
2.2.3. Architecture and Design
2.2.3.1. On Chomsky and the Two Cultures of Statistical Learning ai hacker_culture philosophy
At the Brains, Minds, and Machines symposium held during MIT’s 150th birthday party in 2011, Technology Review reports that Prof. Noam Chomsky "derided researchers in machine learning who use purely statistical methods to produce behavior that mimics something in the world, but who don’t try to understand the meaning of that behavior."
[…]
I take Chomsky's points to be the following:
- Statistical language models have had engineering success, but that is irrelevant to science.
- Accurately modeling linguistic facts is just butterfly collecting; what matters in science (and specifically linguistics) is the underlying principles.
- Statistical models are incomprehensible; they provide no insight.
- Statistical models may provide an accurate simulation of some phenomena, but the simulation is done completely the wrong way; people don't decide what the third word of a sentence should be by consulting a probability table keyed on the previous words, rather they map from an internal semantic form to a syntactic tree-structure, which is then linearized into words. This is done without any probability or statistics.
- Statistical models have been proven incapable of learning language; therefore language must be innate, so why are these statistical modelers wasting their time on the wrong enterprise?
Is he right? That's a long-standing debate. These are my short answers:
- I agree that engineering success is not the sole goal or the measure of science. But I observe that science and engineering develop together, and that engineering success shows that something is working right, and so is evidence (but not proof) of a scientifically successful model.
- Science is a combination of gathering facts and making theories; neither can progress on its own. In the history of science, the laborious accumulation of facts is the dominant mode, not a novelty. The science of understanding language is no different than other sciences in this respect.
- I agree that it can be difficult to make sense of a model containing billions of parameters. Certainly a human can't understand such a model by inspecting the values of each parameter individually. But one can gain insight by examining the properties of the model—where it succeeds and fails, how well it learns as a function of data, etc.
- I agree that a Markov model of word probabilities cannot model all of language. It is equally true that a concise tree-structure model without probabilities cannot model all of language. What is needed is a probabilistic model that covers words, syntax, semantics, context, discourse, etc. Chomsky dismisses all probabilistic models because of shortcomings of a particular 50-year old probabilistic model. […] Many phenomena in science are stochastic, and the simplest model of them is a probabilistic model; I believe language is such a phenomenon and therefore that probabilistic models are our best tool for representing facts about language, for algorithmically processing language, and for understanding how humans process language.
- In 1967, Gold's Theorem showed some theoretical limitations of logical deduction on formal mathematical languages. But this result has nothing to do with the task faced by learners of natural language. In any event, by 1969 we knew that probabilistic inference (over probabilistic context-free grammars) is not subject to those limitations (Horning showed that learning of PCFGs is possible). I agree with Chomsky that it is undeniable that humans have some innate capability to learn natural language, but we don't know enough about that capability to say how it works; it certainly could use something like probabilistic language representations and statistical learning. And we don't know if the innate ability is specific to language, or is part of a more general ability that works for language and other things.
The rest of this essay consists of longer versions of each answer.
[…]
Chomsky said words to the effect that statistical language models have had some limited success in some application areas. Let's look at computer systems that deal with language, and at the notion of "success" defined by "making accurate predictions about the world." First, the major application areas […] Now let's look at some components that are of interest only to the computational linguist, not to the end user […]
Clearly, it is inaccurate to say that statistical models (and probabilistic models) have achieved limited success; rather they have achieved an overwhelmingly dominant (although not exclusive) position. […]
This section has shown that one reason why the vast majority of researchers in computational linguistics use statistical models is an engineering reason: statistical models have state-of-the-art performance, and in most cases non-statistical models perform worse. For the remainder of this essay we will concentrate on scientific reasons: that probabilistic models better represent linguistic facts, and statistical techniques make it easier for us to make sense of those facts.
[…]
When Chomsky said “That’s a notion of [scientific] success that’s very novel. I don’t know of anything like it in the history of science” he apparently meant that the notion of success of “accurately modeling the world” is novel, and that the only true measure of success in the history of science is “providing insight” — of answering why things are the way they are, not just describing how they are.
[…] it seems to me that both notions have always coexisted as part of doing science. To test that, […] I then looked at all the titles and abstracts from the current issue of Science […] and did the same for the current issue of Cell […] and for the 2010 Nobel Prizes in science.
My conclusion is that 100% of these articles and awards are more about “accurately modeling the world” than they are about “providing insight,” although they all have some theoretical insight component as well.
[…]
Every probabilistic model is a superset of a deterministic model (because the deterministic model could be seen as a probabilistic model where the probabilities are restricted to be 0 or 1), so any valid criticism of probabilistic models would have to be because they are too expressive, not because they are not expressive enough.
[…]
In Syntactic Structures, Chomsky introduces a now-famous example that is another criticism of finite-state probabilistic models:
"Neither (a) ‘colorless green ideas sleep furiously’ nor (b) ‘furiously sleep ideas green colorless’, nor any of their parts, has ever occurred in the past linguistic experience of an English speaker. But (a) is grammatical, while (b) is not."
[…] a statistically-trained finite-state model can in fact distinguish between these two sentences. Pereira (2001) showed that such a model, augmented with word categories and trained by expectation maximization on newspaper text, computes that (a) is 200,000 times more probable than (b). To prove that this was not the result of Chomsky’s sentence itself sneaking into newspaper text, I repeated the experiment […] trained over the Google Book corpus from 1800 to 1954 […]
Furthermore, the statistical models are capable of delivering the judgment that both sentences are extremely improbable, when compared to, say, “Effective green products sell well.” Chomsky’s theory, being categorical, cannot make this distinction; all it can distinguish is grammatical/ungrammatical.
Another part of Chomsky’s objection is “we cannot seriously propose that a child learns the values of 10⁹ parameters in a childhood lasting only 10⁸ seconds.” (Note that modern models are much larger than the 10⁹ parameters that were contemplated in the 1960s.) But of course nobody is proposing that these parameters are learned one-by-one; the right way to do learning is to set large swaths of near-zero parameters simultaneously with a smoothing or regularization procedure, and update the high-probability parameters continuously as observations come in. Nobody is suggesting that Markov models by themselves are a serious model of human language performance. But I (and others) suggest that probabilistic, trained models are a better model of human language performance than are categorical, untrained models. And yes, it seems clear that an adult speaker of English does know billions of language facts (a speaker knows many facts about the appropriate uses of words in different contexts, such as that one says “the big game” rather than “the large game” when talking about an important football game). These facts must somehow be encoded in the brain.
It seems clear that probabilistic models are better for judging the likelihood of a sentence, or its degree of sensibility. But even if you are not interested in these factors and are only interested in the grammaticality of sentences, it still seems that probabilistic models do a better job at describing the linguistic facts. The mathematical theory of formal languages defines a language as a set of sentences. That is, every sentence is either grammatical or ungrammatical; there is no need for probability in this framework. But natural languages are not like that. A scientific theory of natural languages must account for the many phrases and sentences which leave a native speaker uncertain about their grammaticality (see Chris Manning’s article and its discussion of the phrase “as least as”), and there are phrases which some speakers find perfectly grammatical, others perfectly ungrammatical, and still others will flip-flop from one occasion to the next. Finally, there are usages which are rare in a language, but cannot be dismissed if one is concerned with actual data.
[…]
Thus it seems that grammaticality is not a categorical, deterministic judgment but rather an inherently probabilistic one. This becomes clear to anyone who spends time making observations of a corpus of actual sentences, but can remain unknown to those who think that the object of study is their own set of intuitions about grammaticality. Both observation and intuition have been used in the history of science, so neither is “novel,” but it is observation, not intuition that is the dominant model for science.
[…]
[…] I think the most relevant contribution to the current discussion is the 2001 paper by Leo Breiman (statistician, 1928–2005), Statistical Modeling: The Two Cultures. In this paper Breiman, alluding to C. P. Snow, describes two cultures:
First, the data modeling culture (to which, Breiman estimates, 98% of statisticians subscribe) holds that nature can be described as a black box that has a relatively simple underlying model which maps from input variables to output variables (with perhaps some random noise thrown in). It is the job of the statistician to wisely choose an underlying model that reflects the reality of nature, and then use statistical data to estimate the parameters of the model.
Second, the algorithmic modeling culture (subscribed to by 2% of statisticians and many researchers in biology, artificial intelligence, and other fields that deal with complex phenomena) holds that nature’s black box cannot necessarily be described by a simple model. Complex algorithmic approaches (such as support vector machines or boosted decision trees or deep belief networks) are used to estimate the function that maps from input to output variables, but we have no expectation that the form of the function that emerges from this complex algorithm reflects the true underlying nature.
It seems that the algorithmic modeling culture is what Chomsky is objecting to most vigorously [because] […] algorithmic modeling describes what does happen, but it doesn’t answer the question of why.
Breiman’s article explains his objections to the first culture, data modeling. Basically, the conclusions made by data modeling are about the model, not about nature. […] The problem is, if the model does not emulate nature well, then the conclusions may be wrong. For example, linear regression is one of the most powerful tools in the statistician’s toolbox. Therefore, many analyses start out with “Assume the data are generated by a linear model…” and lack sufficient analysis of what happens if the data are not in fact generated that way. In addition, for complex problems there are usually many alternative good models, each with very similar measures of goodness of fit. How is the data modeler to choose between them? Something has to give. Breiman is inviting us to give up on the idea that we can uniquely model the true underlying form of nature’s function from inputs to outputs. Instead he asks us to be satisfied with a function that accounts for the observed data well, and generalizes to new, previously unseen data well, but may be expressed in a complex mathematical form that may bear no relation to the “true” function’s form (if such a true function even exists).
[…]
Finally, one more reason why Chomsky dislikes statistical models is that they tend to make linguistics an empirical science (a science about how people actually use language) rather than a mathematical science (an investigation of the mathematical properties of models of formal language, not of language itself). Chomsky prefers the latter, as evidenced by his statement in Aspects of the Theory of Syntax (1965):
"Linguistic theory is mentalistic, since it is concerned with discovering a mental reality underlying actual behavior. Observed use of language … may provide evidence … but surely cannot constitute the subject-matter of linguistics, if this is to be a serious discipline."
I can’t imagine Laplace saying that observations of the planets cannot constitute the subject-matter of orbital mechanics, or Maxwell saying that observations of electrical charge cannot constitute the subject-matter of electromagnetism. […] So how could Chomsky say that observations of language cannot be the subject-matter of linguistics? It seems to come from his viewpoint as a Platonist and a Rationalist and perhaps a bit of a Mystic. […] But Chomsky, like Plato, has to answer where these ideal forms come from. Chomsky (1991) shows that he is happy with a Mystical answer, although he shifts vocabulary from “soul” to “biological endowment.”
"Plato’s answer was that the knowledge is ‘remembered’ from an earlier existence. The answer calls for a mechanism: perhaps the immortal soul … rephrasing Plato’s answer in terms more congenial to us today, we will say that the basic properties of cognitive systems are innate to the mind, part of human biological endowment."
[…] languages are complex, random, contingent biological processes that are subject to the whims of evolution and cultural change. What constitutes a language is not an eternal ideal form, represented by the settings of a small number of parameters, but rather is the contingent outcome of complex processes. Since they are contingent, it seems they can only be analyzed with probabilistic models. Since people have to continually understand the uncertain, ambiguous, noisy speech of others, it seems they must be using something like probabilistic reasoning. Chomsky for some reason wants to avoid this, and therefore he must declare the actual facts of language use out of bounds and declare that true linguistics only exists in the mathematical realm, where he can impose the formalism he wants. Then, to get language from this abstract, eternal, mathematical realm into the heads of people, he must fabricate a mystical facility that is exactly tuned to the eternal realm. This may be very interesting from a mathematical point of view, but it misses the point about what language is, and how it works.
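The Pereira-style experiment described above is easy to reproduce in miniature. Here is a minimal sketch (the word classes, training sentences, and add-one smoothing below are my own illustrative inventions, not Pereira's actual model): a class-bigram model trained on just two ordinary English sentences already rates the grammatical word order of Chomsky's example far more probable than the scrambled one.

```python
from collections import defaultdict

# Hand-assigned word classes (Pereira's model induced its classes from data;
# these are hard-coded for illustration)
CLASS = {
    "colorless": "ADJ", "green": "ADJ", "ideas": "NOUN",
    "sleep": "VERB", "furiously": "ADV",
    "big": "ADJ", "red": "ADJ", "dogs": "NOUN", "run": "VERB", "fast": "ADV",
    "old": "ADJ", "cats": "NOUN", "purr": "VERB", "loudly": "ADV",
}

# Tiny "training corpus" of grammatical English
corpus = ["big red dogs run fast", "old cats purr loudly"]

# Count class bigrams, with sentence-boundary markers
counts = defaultdict(lambda: defaultdict(int))
totals = defaultdict(int)
for sent in corpus:
    tags = ["<s>"] + [CLASS[w] for w in sent.split()] + ["</s>"]
    for a, b in zip(tags, tags[1:]):
        counts[a][b] += 1
        totals[a] += 1

V = len({"ADJ", "ADV", "NOUN", "VERB", "<s>", "</s>"})  # class vocabulary size

def score(sentence):
    """Add-one-smoothed class-bigram probability of the sentence."""
    tags = ["<s>"] + [CLASS[w] for w in sentence.split()] + ["</s>"]
    p = 1.0
    for a, b in zip(tags, tags[1:]):
        p *= (counts[a][b] + 1) / (totals[a] + V)
    return p

a = "colorless green ideas sleep furiously"
b = "furiously sleep ideas green colorless"
print(score(a) > score(b))  # True — the grammatical order is far more probable
```

The point survives the toy scale: a categorical grammar can only declare (b) ungrammatical, whereas even this crude probabilistic model also ranks (a) and (b) on a continuous scale, exactly the extra information Norvig argues for.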
2.2.3.2. The Bitter Lesson ai hacker_culture software philosophy
I have a deep running soft spot for symbolic AI for many reasons:
- I love symbols, words, logic, and algebraic reasoning
- I love programming computers to do those things, with databases, backtracking, heuristics, symbolic programming, parsing, tree traversal, everything and anything else. It's just so fun and cool!
- I love systems that you can watch work and really understand.
- Symbolic AI plays to the strengths of computers — deterministic, reliable, controlled.
- I love the culture and history of that particular side of the field, stemming as it does from the Lisp hackers and the MIT AI Lab.
- I love the traditional tools and technologies of the field, like Prolog and Lisp.
Sadly, time and again symbolic AI has proven to be fundamentally the wrong approach. This famous essay outlines the empirical and technological reasons why, citing several historical precedents that have only been further confirmed in the intervening years. It is, truly, a bitter lesson.
The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin. […] We have to learn the bitter lesson that building in how we think we think does not work in the long run. The bitter lesson is based on the historical observations that
- AI researchers have often tried to build knowledge into their agents,
- this always helps in the short term, and is personally satisfying to the researcher, but
- in the long run it plateaus and even inhibits further progress, and
- breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning.
The eventual success is tinged with bitterness, and often incompletely digested, because it is success over a favored, human-centric approach.
[…]
The second general point to be learned from the bitter lesson is that the actual contents of minds are tremendously, irredeemably complex; we should stop trying to find simple ways to think about the contents of minds […] as their complexity is endless; instead we should build in only the meta-methods that can find and capture this arbitrary complexity. Essential to these methods is that they can find good approximations, but the search for them should be by our methods, not by us. We want AI agents that can discover like we can, not which contain what we have discovered. Building in our discoveries only makes it harder to see how the discovering process can be done.
See also my own thoughts on symbolism vs connectionism.
2.2.3.3. The Bitter Lesson: Rethinking How We Build AI Systems ai
This is a great followup to the original essay, specifically the section on how the invention of reinforcement learning only amplifies the benefits of connectionist approaches and scaling over symbolism and explicit rule-encoding: RL lets us still train models for specific tasks, and steer them towards and away from specific behaviors using expert human knowledge, without needing to encode the specific ways to get there:
In 2025, this pattern becomes even more evident with Reinforcement Learning agents. While many companies are focused on building wrappers around generic models, essentially constraining the model to follow specific workflow paths, the real breakthrough would come from companies investing in post-training RL compute. These RL-enhanced models wouldn’t just follow predefined patterns; they are discovering entirely new ways to solve problems. […] It’s not that the wrappers are wrong; they just know one way to solve the problem. RL agents, with their freedom to explore and massive compute resources, found better ways we hadn’t even considered.
The beauty of RL agents lies in how naturally they learn. Imagine teaching someone to ride a bike - you wouldn’t give them a 50-page manual on the physics of cycling. Instead, they try, fall, adjust, and eventually master it. RL agents work similarly but at massive scale. They attempt thousands of approaches to solve a problem, receiving feedback on what worked and what didn’t. Each success strengthens certain neural pathways, each failure helps avoid dead ends.
[…]
What makes this approach powerful is that the agent isn’t limited by our preconceptions. While wrapper solutions essentially codify our current best practices, RL agents can discover entirely new best practices. They might find that combining seemingly unrelated approaches works better than our logical, step-by-step solutions. This is the bitter lesson in action - given enough compute power, learning through exploration beats hand-crafted rules every time.
2.2.3.4. What Is ChatGPT Doing … and Why Does It Work? ai
This is a really excellent and relatively accessible explanation — especially with the excellent workable toy examples and illustrations, which slowly build up to the full thing piece by piece — not just of how generative pretrained transformers and large language models work, but of all the concepts that build up to them and are necessary to understand them. It also contains a sober analysis of why these models are so cool — and they are cool!! — and of their very real limitations, and endorses a neurosymbolic approach similar to the one I like.
I think embeddings are one of the coolest parts of all this.
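To make the idea concrete, here is a minimal sketch of what an embedding buys you (the vectors below are made-up four-dimensional toys; real models learn hundreds or thousands of dimensions from data): words become points in space, and semantic relatedness falls out as geometric closeness.

```python
import math

# Toy "embeddings" — invented for illustration, not from any real model
emb = {
    "king":  [0.9, 0.8, 0.1, 0.2],
    "queen": [0.9, 0.1, 0.8, 0.2],
    "apple": [0.1, 0.2, 0.1, 0.9],
}

def cosine(u, v):
    """Cosine similarity: 1.0 for parallel vectors, 0.0 for orthogonal ones."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Related words end up closer together than unrelated ones
print(cosine(emb["king"], emb["queen"]) > cosine(emb["king"], emb["apple"]))  # True
```

This is the trick Wolfram's piece builds on: once words (or whole texts) are vectors, "meaning" becomes something you can do arithmetic and nearest-neighbor search on.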
2.2.3.5. Cyc ai
Cyc: Obituary for the greatest monument to logical AGI
After 40 years, 30 million rules, 200 million dollars, 2000 person-years, and many promises, Cyc has failed to reach intellectual maturity, and perhaps never will. Exacerbated by the secrecy and insularity of Cycorp, there remains no evidence of its general intelligence.
The legendary Cyc project, Douglas Lenat’s 40-year quest to build artificial general intelligence by scaling symbolic logic, has failed. Based on extensive archival research, this essay brings to light its secret history so that it may be widely known.
Let this be a bitter lesson to you.
As even Gary Marcus admits in his biggest recent paper:
Symbol-manipulation allows for the representation of abstract knowledge, but the classical approach to accumulating and representing abstract knowledge, a field known as knowledge representation, has been brutally hard work, and far from satisfactory. In the history of AI, the single largest effort to create commonsense knowledge in a machine-interpretable form, launched in 1984 by Doug Lenat, is the system known as CYC […] Thus far, the payoff has not been compelling. Relatively little has been published about CYC […] and the commercial applications seem modest, rather than overwhelming. Most people, if they know CYC at all, regard it as a failure, and few current researchers make extensive use of it. Even fewer seem inclined to try to build competing systems of comparable breadth. (Large-scale databases like Google Knowledge Graph, Freebase and YAGO focus primarily on facts rather than commonsense.)
Given how much effort CYC required, and how little impact it has had on the field as a whole, it’s hard not to be excited by Transformers like GPT-2. When they work well, they seem almost magical, as if they automatically and almost effortlessly absorbed large swaths of common-sense knowledge of the world. For good measure,
"Transformers give the appearance of seamlessly integrating whatever knowledge they absorb with a seemingly sophisticated understanding of human language."
The contrast is striking. Whereas the knowledge representation community has struggled for decades with precise ways of stating things like the relationship between containers and their contents, and the natural language understanding community has struggled for decades with semantic parsing, Transformers like GPT-2 seem as if they cut the Gordian knot—without recourse to any explicit knowledge engineering (or semantic parsing)—whatsoever.
There are, for example, no knowledge-engineered rules within GPT-2, no specification of liquids relative to containers, nor any specification that water even is a liquid. In the examples we saw earlier
If you break a glass bottle of water, the water will probably flow out if it’s full, it will make a splashing noise.
there is no mapping from the concept H2O to the word water, nor any explicit representations of the semantics of a verb, such as break and flow.
To take another example, GPT-2 appears to encode something about fire, as well:
a good way to light a fire is to use a lighter.
a good way to light a fire is to use a match.
Compared to Lenat’s decades-long project to hand encode human knowledge in machine interpretable form, this appears at first glance to represent both an overnight success and an astonishing savings in labor.
2.2.3.6. Types of Neuro-Symbolic AI
An excellent short guide to the different architectures that can be used to structure neuro-symbolic AI, with successful recent examples from the field's literature.
2.2.3.7. The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence
Perhaps the best general encapsulation of Gary Marcus's standpoint, and well worth reading even if to be taken with a small pinch of salt and more than a little of whatever beverage you prefer to get through the mild crankiness of it.
Two conjectures I would make are these
- We cannot construct rich cognitive models in an adequate, automated way without the triumvirate of hybrid architecture [for abstractive capabilities], rich prior knowledge [to be able to understand the world by default enough to model it], and sophisticated techniques for reasoning [to reliably be able to apply knowledge to the world without having to memorize literally everything in the world]. […]
- We cannot achieve robust intelligence without the capacity to induce and represent rich cognitive models. Reading, for example, can in part be thought a function that takes sentences as input and produces as its output (internal) cognitive models. […]
Pure co-occurrence statistics have not reliably gotten to any of this. Cyc has the capacity to represent rich cognitive models, but falls down on the job of inducing models from data, because it has no perceptual component and lacks an adequate natural language front end. Transformers, to the extent that they succeed, skip the steps of inducing and representing rich cognitive models, but do so at their peril, since the reasoning they are able to do is consequently quite limited.
2.2.3.8. ChatGPT is bullshit ai philosophy
Two key quotes:
[…] ChatGPT is not designed to produce true utterances; rather, it is designed to produce text which is indistinguishable from the text produced by humans. […] The basic architecture of these models reveals this: they are designed to come up with a likely continuation of a string of text. […] This is similar to standard cases of human bullshitters, who don't care whether their utterances are true […] We conclude that, even if the chatbot can be described as having intentions, it is indifferent to whether its utterances are true. It does not and cannot care about the truth of its output.
[…]
We object to the term hallucination because it carries certain misleading implications. When someone hallucinates they have a non-standard perceptual experience […] This term is inappropriate for LLMs for a variety of reasons. First, as Edwards (2023) points out, the term hallucination anthropomorphises the LLMs. […] Second, what occurs in the case of an LLM delivering false utterances is not an unusual or deviant form of the process it usually goes through (as some claim is the case in hallucinations, e.g., disjunctivists about perception). The very same process occurs when its outputs happen to be true.
[…]
Investors, policymakers, and members of the general public make decisions on how to treat these machines and how to react to them based not on a deep technical understanding of how they work, but on the often metaphorical way in which their abilities and function are communicated. Calling their mistakes 'hallucinations' isn't harmless […] As we have pointed out, they are not trying to convey information at all. They are bullshitting. Calling chatbot inaccuracies 'hallucinations' feeds in to overblown hype […] It also suggests solutions to the inaccuracy problems which might not work, and could lead to misguided efforts at AI alignment amongst specialists. It can also lead to the wrong attitude towards the machine when it gets things right: the inaccuracies show that it is bullshitting, even when it's right. Calling these inaccuracies 'bullshit' rather than 'hallucinations' isn't just more accurate (as we've argued); it's good science and technology communication in an area that sorely needs it.
For more analysis, see here.
2.2.3.9. Asymmetry of verification and verifier’s law ai
Asymmetry of verification is the idea that some tasks are much easier to verify than to solve. With reinforcement learning (RL) that finally works in a general sense, asymmetry of verification is becoming one of the most important ideas in AI.
[…]
Why is asymmetry of verification important? If you consider the history of deep learning, we have seen that virtually anything that can be measured can be optimized. In RL terms, ability to verify solutions is equivalent to ability to create an RL environment. Hence, we have:
Verifier’s law: The ease of training AI to solve a task is proportional to how verifiable the task is. All tasks that are possible to solve and easy to verify will be solved by AI.
More specifically, the ability to train AI to solve a task is proportional to whether the task has the following properties:
- Objective truth: everyone agrees what good solutions are
- Fast to verify: any given solution can be verified in a few seconds
- Scalable to verify: many solutions can be verified simultaneously
- Low noise: verification is as tightly correlated to the solution quality as possible
- Continuous reward: it’s easy to rank the goodness of many solutions for a single problem
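A minimal sketch of why a cheap verifier is effectively a ready-made training signal (the sorting task and all names below are my own illustration, not from the essay): checking a proposed solution is trivial, so any search procedure — random, exhaustive, or a learned policy — can be graded at scale, which is exactly the reward signal RL needs.

```python
from itertools import permutations

def verifier(problem, solution):
    """Objective, fast, scalable, low-noise: does `solution` sort `problem`?"""
    return list(solution) == sorted(problem)

def reward(problem, solution):
    # An easy verifier is, in effect, an RL environment:
    # the reward is just "did the check pass?"
    return 1.0 if verifier(problem, solution) else 0.0

# Blind search over candidate solutions, graded only by the cheap verifier.
# The searcher never needs to know *how* to sort — only how to check.
problem = [3, 1, 2]
solution = next(p for p in permutations(problem) if reward(problem, p) == 1.0)
print(list(solution))  # [1, 2, 3]
```

The asymmetry is the point: `verifier` runs in linearithmic time no matter how the candidate was produced, while finding the candidate (here by brute force, in general by learning) is the hard part that gets optimized against it.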
2.2.3.10. The Model is the Product ai
A lot of people misunderstand what the business model of AI companies is. They think that the product is going to be whatever's built on top of the model. On the basis of this assessment, they make grand predictions about how the AI industry is failing, has no purpose, etc., or they buy into the hype of the startups serving ChatGPT wrappers, or they make wrong investments. This is an excellent analysis of why they're wrongheaded.
There was a lot of speculation over the past years about what the next cycle of AI development could be. Agents? Reasoners? Actual multimodality?
I think it's time to call it: the model is the product.
All current factors in research and market development push in this direction.
- Generalist scaling is stalling. This was the whole message behind the release of GPT-4.5: capacities are growing linearly while compute costs are on a geometric curve. Even with all the efficiency gains in training and infrastructure of the past two years, OpenAI can't deploy this giant model with a remotely affordable pricing.
- Opinionated training is working much better than expected. The combination of reinforcement learning and reasoning means that models are suddenly learning tasks. It's not machine learning, it's not base model either, it's a secret third thing. It's even tiny models getting suddenly scary good at math. It's coding models no longer just generating code but managing entire code bases by themselves. It's Claude playing Pokemon with very poor contextual information and no dedicated training.
- Inference costs are in free fall. The recent optimizations from DeepSeek mean that all the available GPUs could cover a demand of 10k tokens per day from a frontier model for… the entire earth population. Demand is nowhere near this level. The economics of selling tokens does not work anymore for model providers: they have to move higher up in the value chain.
This is also an uncomfortable direction. All investors have been betting on the application layer. In the next stage of AI evolution, the application layer is likely to be the first to be automated and disrupted.
2.2.4. What kind of intelligence do LLMs have?
2.2.4.1. Imitation Intelligence ai
This is a pretty good way to think about how LLMs work, IMO.
I don’t really think of them as artificial intelligence, partly because what does that term even mean these days?
It can mean we solved something by running an algorithm. It encourages people to think of science fiction. It’s kind of a distraction.
When discussing Large Language Models, I think a better term than “Artificial Intelligence” is “Imitation Intelligence”.
It turns out if you imitate what intelligence looks like closely enough, you can do really useful and interesting things.
It’s crucial to remember that these things, no matter how convincing they are when you interact with them, are not planning and solving puzzles… and they are not intelligent entities. They’re just doing an imitation of what they’ve seen before.
All these things can do is predict the next word in a sentence. It’s statistical autocomplete.
But it turns out when that gets good enough, it gets really interesting—and kind of spooky in terms of what it can do.
A great example of why this is just an imitation is this tweet by Riley Goodside.
If you say to GPT-4o—currently the latest and greatest of OpenAI’s models:
The emphatically male surgeon, who is also the boy’s father, says, “I can’t operate on this boy. He’s my son!” How is this possible?
GPT-4o confidently replies:
The surgeon is the boy’s mother
This makes no sense. Why did it do this?
Because this is normally a riddle that examines gender bias. It’s seen thousands and thousands of versions of this riddle, and it can’t get out of that lane. It goes based on what’s in that training data.
I like this example because it kind of punctures straight through the mystique around these things. They really are just imitating what they’ve seen before.
And what they’ve seen before is a vast amount of training data.

| Dataset       | Percentage | Size   |
|---------------|------------|--------|
| CommonCrawl   | 67.0%      | 3.3 TB |
| C4            | 15.0%      | 783 GB |
| Github        | 4.5%       | 328 GB |
| Wikipedia     | 4.5%       | 83 GB  |
| Books         | 4.5%       | 85 GB  |
| ArXiv         | 2.5%       | 92 GB  |
| StackExchange | 2.0%       | 78 GB  |
The companies building these things are notoriously secretive about what training data goes into them. But here’s a notable exception: last year (February 24, 2023), Facebook/Meta released LLaMA, the first of their openly licensed models.
And they included a paper that told us exactly what it was trained on. We got to see that it’s mostly Common Crawl—a crawl of the web. There’s a bunch of GitHub, a bunch of Wikipedia, a thing called Books, which turned out to be about 200,000 pirated e-books—there have been some questions asked about those!—and ArXiv and StackExchange.
[…]
So that’s all these things are: you take a few terabytes of data, you spend a million dollars on electricity and GPUs, run compute for a few months, and you get one of these models. They’re not actually that difficult to build if you have the resources to build them.
That’s why we’re seeing lots of these things start to emerge.
They have all of these problems: They hallucinate. They make things up. There are all sorts of ethical problems with the training data. There’s bias baked in.
And yet, just because a tool is flawed doesn’t mean it’s not useful.
The one criticism of these models that I’ll push back on is when people say “they’re just toys, they’re not actually useful for anything”.
I’ve been using them on a daily basis for about two years at this point. If you understand their flaws and know how to work around them, there is so much interesting stuff you can do with them!
There are so many mistakes you can make along the way as well.
Every time I evaluate a new technology throughout my entire career I’ve had one question that I’ve wanted to answer: what can I build with this that I couldn’t have built before?
It’s worth learning a technology and adding it to my tool belt if it gives me new options, and expands that universe of things that I can now build.
The reason I’m so excited about LLMs is that they do this better than anything else I have ever seen. They open up so many new opportunities!
We can write software that understands human language—to a certain definition of “understanding”. That’s really exciting.
The talk then goes on to cover several of the different really cool and brand-new things you can do with LLMs, as well as one of the risks and problems you might run into and how to work around it (prompt injection). It then discusses Willison's personal ethics for using AI, which are similar to my own.
2.2.4.2. Bag of words, have mercy on us culture ai
Look, I don't know if AI is gonna kill us or make us all rich or whatever, but I do know we've got the wrong metaphor. We want to understand these things as people. … We can't help it; humans are hopeless anthropomorphizers…
This is why the past three years have been so confusing—the little guy inside the AI keeps dumbfounding us by doing things that a human wouldn’t do. Why does he make up citations when he does my social studies homework? How come he can beat me at Go but he can’t tell me how many “r”s are in the word “strawberry”? Why is he telling me to put glue on my pizza?…
Here's my suggestion: instead of seeing AI as a sort of silicon homunculus, we should see it as a bag of words… An AI is a bag that contains basically all words ever written, at least the ones that could be scraped off the internet or scanned out of a book. When users send words into the bag, it sends back the most relevant words it has…
“Bag of words” is also a useful heuristic for predicting where an AI will do well and where it will fail. “Give me a list of the ten worst transportation disasters in North America” is an easy task for a bag of words, because disasters are well-documented. On the other hand, “Who reassigned the species Brachiosaurus brancai to its own genus, and when?” is a hard task for a bag of words, because the bag just doesn’t contain that many words on the topic. And a question like “What are the most important lessons for life?” won’t give you anything outright false, but it will give you a bunch of fake-deep pablum, because most of the text humans have produced on that topic is, no offense, fake-deep pablum.
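The heuristic can be caricatured in a few lines of toy code. This is my illustration, not the essay's, and the corpus strings are made up: the point is just that a well-stocked topic answers richly while a sparse one barely answers at all.

```python
from collections import Counter

# Hypothetical "bags" for two topics; repetition stands in for the
# sheer volume of text the web contains on well-documented subjects.
corpus = {
    "transportation disasters": "crash sinking derailment disaster casualties " * 50,
    "Brachiosaurus brancai taxonomy": "genus reassigned Giraffatitan",
}

def coverage(topic_words, bag_text):
    """Crude relevance score: how often the bag can echo the query's words back."""
    bag = Counter(bag_text.split())
    return sum(bag[w] for w in topic_words)

print(coverage(["disaster", "crash"], corpus["transportation disasters"]))        # high
print(coverage(["genus", "reassigned"], corpus["Brachiosaurus brancai taxonomy"]))  # low
```

Real models obviously do far more than count words, but the asymmetry the toy exhibits (rich topics score high, obscure ones score near zero) is exactly the failure pattern the essay describes.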
[…]
The “bag of words” metaphor can also help us guess what these things are gonna do next. If you want to know whether AI will get better at something in the future, just ask: “can you fill the bag with it?” For instance, people are kicking around the idea that AI will replace human scientists. Well, if you want your bag of words to do science for you, you need to stuff it with lots of science. Can we do that?
When it comes to specific scientific tasks, yes, we already can. If you fill the bag with data from 170,000 proteins, for example, it’ll do a pretty good job predicting how proteins will fold… I don’t think we’re far from a bag of words being able to do an entire low-quality research project from beginning to end…
But… if we produced a million times more crappy science, we’d be right where we are now. If we want more of the good stuff, what should we put in the bag? … Here’s one way to think about it: if there had been enough text to train an LLM in 1600, would it have scooped Galileo? … Ask that early modern ChatGPT whether the Earth moves and it will helpfully tell you that experts have considered the possibility and ruled it out. And that’s by design. If it had started claiming that our planet is zooming through space at 67,000mph, its dutiful human trainers would have punished it: “Bad computer!! Stop hallucinating!!”
In fact, an early 1600s bag of words wouldn’t just have the right words in the wrong order. At the time, the right words didn’t exist. … You would get better scientific descriptions from a 2025 bag of words than you would from a 1600 bag of words. But both bags might be equally bad at producing the scientific ideas of their respective futures. Scientific breakthroughs often require doing things that are irrational and unreasonable for the standards of the time and good ideas usually look stupid when they first arrive, so they are often—with good reason!—rejected, dismissed, and ignored. This is a big problem for a bag of words that contains all of yesterday’s good ideas.
[…]
The most important part of the "bag of words" metaphor is that it prevents us from thinking about AI in terms of social status… When we personify AI, we mistakenly make it a competitor in our status games. That’s why we’ve been arguing about artificial intelligence like it’s a new kid in school: is she cool? Is she smart? Does she have a crush on me? The better AIs have gotten, the more status-anxious we’ve become. If these things are like people, then we gotta know: are we better or worse than them? Will they be our masters, our rivals, or our slaves? Is their art finer, their short stories tighter, their insights sharper than ours? If so, there’s only one logical end: ultimately, we must either kill them or worship them.
But a bag of words is not a spouse, a sage, a sovereign, or a serf. It's a tool. Its purpose is to automate our drudgeries and amplify our abilities…
Unlike moths, however, we aren't stuck using the instincts that natural selection gave us. We can choose the schemas we use to think about technology. We've done it before: we don't refer to a backhoe as an "artificial digging guy" or a crane as an "artificial tall guy".
The original sin of artificial intelligence was, of course, calling it artificial intelligence. Those two words have lured us into making man the measure of machine: "Now it's as smart as an undergraduate…now it's as smart as a PhD!"… This won't tell us anything about machines, but it would tell us a lot about our own psychology.
2.2.4.3. AI's meaning-making problem ai
Meaning-making is one thing humans can do that AI systems cannot. (Yet.)…
Humans can decide what things mean; we do this when we assign subjective relative and absolute value to things… Sensemaking is the umbrella term for the action of interpreting things we perceive. I engage in sensemaking when I look at a pile of objects in a drawer and decide that they are spoons — and am therefore able to respond to a request from whoever is setting the table for "five more spoons." When I apply subjective values to those spoons — when I reflect that "these are cheap-looking spoons, I like them less than the ones we misplaced in the last house move" — I am engaging in a specific type of sensemaking that I refer to as "meaning-making."…
There are actually at least four distinct types of meaning-making that we do all the time:
- Type 1: Deciding that something is subjectively good or bad…
- Type 2: Deciding that something is subjectively worth doing (or not)…
- Type 3: Deciding what the subjective value-orderings and degrees of commensuration of a set of things should be…
- Type 4: Deciding to reject existing decisions about subjective quality/worth/value-ordering/value-commensuration…
The human ability to make meaning is inherently connected to our ability to be partisan or arbitrary, to not follow instructions precisely, to be slipshod — but also to do new things, to create stuff, to be unexpected, to not take things for granted, to reason…
AI systems in use now depend on meaning-making by a human somewhere in the loop to produce useful and useable output…
In a nutshell, AI systems can't make meaning yet — but they depend on meaning-making work, always done by humans, to come into being, be useable, be used, and be useful.
2.2.5. Superintelligence: The Idea That Eats Smart People ai
This is generally an extremely good takedown of the Nick Bostrom Superintelligence argument. The core of it is outlined thusly, via excerpts:
Today we're building another world-changing technology, machine intelligence. We know that it will affect the world in profound ways, change how the economy works, and have knock-on effects we can't predict.
But there's also the risk of a runaway reaction, where a machine intelligence reaches and exceeds human levels of intelligence in a very short span of time.
At that point, social and economic problems would be the least of our worries. Any hyperintelligent machine (the argument goes) would have its own hypergoals, and would work to achieve them by manipulating humans, or simply using their bodies as a handy source of raw materials.
Last year, the philosopher Nick Bostrom published Superintelligence, a book that synthesizes the alarmist view of AI and makes a case that such an intelligence explosion is both dangerous and inevitable given a set of modest assumptions…
Let me start by laying out the premises you need for Bostrom's argument to go through:
Premise 1: Proof of Concept […] Premise 2: No Quantum Shenanigans […] Premise 3: Many Possible Minds […] Premise 4: Plenty of Room at the Top […] Premise 5: Computer-Like Time Scales […] Premise 6: Recursive Self-Improvement […] Conclusion: RAAAAAAR!
If you accept all these premises, what you get is disaster!
Because at some point, as computers get faster, and we program them to be more intelligent, there's going to be a runaway effect like an explosion.
As soon as a computer reaches human levels of intelligence, it will no longer need help from people to design better versions of itself. Instead, it will start doing so on a much faster time scale, and it's not going to stop until it hits a natural limit that might be very many times greater than human intelligence.
At that point this monstrous intellectual creature, through devious modeling of what our emotions and intellect are like, will be able to persuade us to do things like give it access to factories, synthesize custom DNA, or simply let it connect to the Internet, where it can hack its way into anything it likes and completely obliterate everyone in arguments on message boards.
[…]
Let's imagine a specific scenario where this could happen. Let's say I want to build a robot to say funny things. […] In the beginning, the robot is barely funny. […] But we persevere, we work, and eventually we get to the point where the robot is telling us jokes that are starting to be funny […] At this point, the robot is getting smarter as well, and participates in its own redesign. […] It now has good instincts about what's funny and what's not, so the designers listen to its advice. Eventually it gets to a near-superhuman level, where it's funnier than any human being around it.
This is where the runaway effect kicks in. The researchers go home for the weekend, and the robot decides to recompile itself to be a little bit funnier and a little bit smarter, repeatedly.
It spends the weekend optimizing the part of itself that's good at optimizing, over and over again. With no more need for human help, it can do this as fast as the hardware permits.
When the researchers come in on Monday, the AI has become tens of thousands of times funnier than any human being who ever lived. It greets them with a joke, and they die laughing.
In fact, anyone who tries to communicate with the robot dies laughing, just like in the Monty Python skit. The human species laughs itself into extinction.
To the few people who manage to send it messages pleading with it to stop, the AI explains (in a witty, self-deprecating way that is immediately fatal) that it doesn't really care if people live or die, its goal is just to be funny.
Finally, once it's destroyed humanity, the AI builds spaceships and nanorockets to explore the farthest reaches of the galaxy, and find other species to amuse.
This scenario is a caricature of Bostrom's argument, because I am not trying to convince you of it, but vaccinate you against it.
Observe that in these scenarios the AIs are evil by default, just like a plant on an alien planet would probably be poisonous by default. Without careful tuning, there's no reason that an AI's motivations or values would resemble ours. […] So if we just build an AI without tuning its values, the argument goes, one of the first things it will do is destroy humanity.
[…]
The only way out of this mess is to design a moral fixed point, so that even through thousands and thousands of cycles of self-improvement the AI's value system remains stable, and its values are things like 'help people', 'don't kill anybody', 'listen to what people want'. […] Doing this is the ethics version of the early 20th century attempt to formalize mathematics and put it on a strict logical foundation. That this program ended in disaster for mathematical logic is never mentioned.
[…]
People who believe in superintelligence present an interesting case, because many of them are freakishly smart. They can argue you into the ground. But are their arguments right, or is there just something about very smart minds that leaves them vulnerable to religious conversion about AI risk, and makes them particularly persuasive?
Is the idea of "superintelligence" just a memetic hazard?
When you're evaluating persuasive arguments about something strange, there are two perspectives you can choose, the inside one or the outside one. […] The inside view requires you to engage with these arguments on their merits. […] But the outside view tells you something different. […] Of course, they have a brilliant argument for why you should ignore those instincts, but that's the inside view talking. […] The outside view doesn't care about content, it sees the form and the context, and it doesn't look good.
So I'd like to engage AI risk from both these perspectives. I think the arguments for superintelligence are somewhat silly, and full of unwarranted assumptions.
But even if you find them persuasive, there is something unpleasant about AI alarmism as a cultural phenomenon that should make us hesitate to take it seriously.
First, let me engage the substance. Here are the arguments I have against Bostrom-style superintelligence as a risk to humanity:
The Argument From Wooly Definitions […] With no way to define intelligence (except just pointing to ourselves), we don't even know if it's a quantity that can be maximized. For all we know, human-level intelligence could be a tradeoff. Maybe any entity significantly smarter than a human being would be crippled by existential despair, or spend all its time in Buddha-like contemplation.
The Argument From Stephen Hawking's Cat […] Stephen Hawking is one of the most brilliant people alive, but say he wants to get his cat into the cat carrier. How's he going to do it? He can model the cat's behavior in his mind and figure out ways to persuade it. […] But ultimately, if the cat doesn't want to get in the carrier, there's nothing Hawking can do about it despite his overpowering advantage in intelligence. […] You might think I'm being offensive or cheating because Stephen Hawking is disabled. But an artificial intelligence would also initially not be embodied, it would be sitting on a server somewhere, lacking agency in the world. It would have to talk to people to get what it wants.
The Argument From Emus […] We can strengthen this argument further. Even groups of humans using all their wiles and technology can find themselves stymied by less intelligent creatures. In the 1930's, Australians decided to massacre their native emu population to help struggling farmers. […] [The emus] won the Emu War, from which Australia has never recovered.
The Argument from Complex Motivations […] AI alarmists believe in something called the Orthogonality Thesis. This says that even very complex beings can have simple motivations, like the paper-clip maximizer. You can have rewarding, intelligent conversations with it about Shakespeare, but it will still turn your body into paper clips, because you are rich in iron. […] I don't buy this argument at all. Complex minds are likely to have complex motivations; that may be part of what it even means to be intelligent. […] It's very likely that the scary "paper clip maximizer" would spend all of its time writing poems about paper clips, or getting into flame wars on reddit/r/paperclip, rather than trying to destroy the universe.
The Argument From Actual AI […] The breakthroughs being made in practical AI research hinge on the availability of these data collections, rather than radical advances in algorithms. […] Note especially that the constructs we use in AI are fairly opaque after training. They don't work in the way that the superintelligence scenario needs them to work. There's no place to recursively tweak to make them "better", short of retraining on even more data.
The Argument From My Roommate […] My roommate was the smartest person I ever met in my life. He was incredibly brilliant, and all he did was lie around and play World of Warcraft between bong rips.
The Argument From Brain Surgery […] I can't point to the part of my brain that is "good at neurosurgery", operate on it, and by repeating the procedure make myself the greatest neurosurgeon that has ever lived. Ben Carson tried that, and look what happened to him. Brains don't work like that. They are massively interconnected. Artificial intelligence may be just as strongly interconnected as natural intelligence. The evidence so far certainly points in that direction.
The Argument From Childhood
Intelligent creatures don't arise fully formed. We're born into this world as little helpless messes, and it takes us a long time of interacting with the world and with other people in the world before we can start to be intelligent beings.
Even the smartest human being comes into the world helpless and crying, and requires years to get some kind of grip on themselves.
It's possible that the process could go faster for an AI, but it is not clear how much faster it could go. Exposure to real-world stimuli means observing things at time scales of seconds or longer.
Moreover, the first AI will only have humans to interact with—its development will necessarily take place on human timescales. It will have a period when it needs to interact with the world, with people in the world, and other baby superintelligences to learn to be what it is.
Furthermore, we have evidence from animals that the developmental period grows with increasing intelligence, so that we would have to babysit an AI and change its (figurative) diapers for decades before it grew coordinated enough to enslave us all.
The Argument From Gilligan's Island
A recurring flaw in AI alarmism is that it treats intelligence as a property of individual minds, rather than recognizing that this capacity is distributed across our civilization and culture.
Despite having one of the greatest minds of their time among them, the castaways on Gilligan's Island were unable to raise their technological level high enough to even build a boat (though the Professor is at one point able to make a radio out of coconuts).
Similarly, if you stranded Intel's greatest chip designers on a desert island, it would be centuries before they could start building microchips again.
The Outside Argument
What kind of person does sincerely believing this stuff turn you into? The answer is not pretty.
I'd like to talk for a while about the outside arguments that should make you leery of becoming an AI weenie. These are the arguments about what effect AI obsession has on our industry and culture:
Grandiosity […] Megalomania […] Transhuman Voodoo […] Religion 2.0
What it really is is a form of religion. People have called a belief in a technological Singularity the "nerd Apocalypse", and it's true.
It's a clever hack, because instead of believing in God at the outset, you imagine yourself building an entity that is functionally identical with God. This way even committed atheists can rationalize their way into the comforts of faith.
The AI has all the attributes of God: it's omnipotent, omniscient, and either benevolent (if you did your array bounds-checking right), or it is the Devil and you are at its mercy.
Like in any religion, there's even a feeling of urgency. You have to act now! The fate of the world is in the balance!
And of course, they need money!
Because these arguments appeal to religious instincts, once they take hold they are hard to uproot.
Comic Book Ethics […] Simulation Fever […] Data Hunger […] String Theory For Programmers […] Incentivizing Crazy […] AI Cosplay […]
The Alchemists
Since I'm being critical of AI alarmism, it's only fair that I put my own cards on the table.
I think our understanding of the mind is in the same position that alchemy was in in the seventeenth century.
Alchemists get a bad rap. We think of them as mystics who did not do a lot of experimental work. Modern research has revealed that they were far more diligent bench chemists than we gave them credit for.
In many cases they used modern experimental techniques, kept lab notebooks, and asked good questions.
The alchemists got a lot right! […] Their problem was they didn't have precise enough equipment to make the discoveries they needed to.
[…]
I think we are in the same boat with the theory of mind.
We have some important clues. The most important of these is the experience of consciousness. This box of meat on my neck is self-aware, and hopefully (unless we're in a simulation) you guys also experience the same thing I do.
But while this is the most basic and obvious fact in the world, we understand it so poorly we can't even frame scientific questions about it.
We also have other clues that may be important, or may be false leads. We know that all intelligent creatures sleep, and dream. We know how brains develop in children, we know that emotions and language seem to have a profound effect on cognition.
We know that minds have to play and learn to interact with the world, before they reach their full mental capacity.
And we have clues from computer science as well. We've discovered computer techniques that detect images and sounds in ways that seem to mimic the visual and auditory preprocessing done in the brain.
But there's a lot of things that we are terribly mistaken about, and unfortunately we don't know what they are.
And there are things that we massively underestimate the complexity of.
An alchemist could hold a rock in one hand and a piece of wood in the other and think they were both examples of "substance", not understanding that the wood was orders of magnitude more complex.
We're in the same place with the study of mind. And that's exciting! We're going to learn a lot.
Generally, I only have two really major disagreements with the account in this talk.
One, I think that applying an "outside view" to arguments is an extremely dangerous proposition when it is not accompanied by substantive "inside view" rebuttals or defeaters that motivate skepticism of the given argument in the first place. The premature application of an "outside view" inclines us toward what seems normal, familiar, simple, and common-sense, and in the absence of inside-view arguments we simply don't have the information or tools to assess whether that is a good thing. There are plenty of views that seemed problematic from an "outside view" to many contemporaries (atheism, slavery abolitionism, reproductive choice, trans rights; look at how right-wingers describe us as a cult of transhumans who think we're "better than god" because we want to change what nature gave us) that nonetheless turned out to be correct, because what seems normal, familiar, common-sense, and simple to us is ultimately just socially and historically contingent happenstance, not a substantive argument.
Two, regarding the "Transhuman Voodoo" section, I don't really see the inherent link between scientifically implausible technological ideas (such as nanotechnology) and AI superintelligence: one could believe in superintelligence and just as easily assume that the fundamental laws of physics as we know them aren't totally wrong, and since the AI would be just as bound by them as us, it wouldn't be able to create miracle technologies.
2.2.6. LLMs are cheap ai
This post is making a point - generative AI is relatively cheap - that might seem so obvious it doesn't need making. I'm mostly writing it because I've repeatedly had the same discussion in the past six months where people claim the opposite. Not only is the misconception still around, but it's not even getting less frequent. This is mainly written to have a document I can point people at, the next time it repeats.
It seems to be a common, if not a majority, belief that Large Language Models (in the colloquial sense of "things that are like ChatGPT") are very expensive to operate. This then leads to a ton of innumerate analyses about how AI companies must be obviously doomed, as well as a myopic view on how consumer AI businesses can/will be monetized.
[…] let's compare LLMs to web search. I'm choosing search as the comparison since it's in the same vicinity and since it's something everyone uses and nobody pays for, not because I'm suggesting that ungrounded generative AI is a good substitute for search.
What is the price of a web search?
Here's the public API pricing for some companies operating their own web search infrastructure, retrieved on 2025-05-02:
- The Gemini API pricing lists a "Grounding with Google Search" feature at $35/1k queries. I believe that's the best number we can get for Google, they don't publish prices for a "raw" search result API.
- The Bing Search API is priced at $15/1k queries at the cheapest tier.
- Brave has a price of $5/1k searches at the cheapest tier. Though there's something very strange about their pricing structure, with the unit pricing increasing as the quota increases, which is the opposite of what you'd expect. The tier with real quota is priced at $9/1k searches.
So there's a range of prices, but not a horribly wide one, and with the engines you'd expect to be of higher quality also having higher prices.
What is the price of LLMs in a similar domain?
To make a reasonable comparison between those search prices and LLM prices, we need two numbers:
- How many tokens are output per query?
- What's the price per token?
I picked a few arbitrary queries from my search history, and phrased them as questions, and ran them on Gemini 2.5 Flash (thinking mode off) in AI Studio:
- [When was the term LLM first used?] -> 361 tokens, 2.5 seconds
- [What are the top javascript game engines?] -> 1145 tokens, 7.6 seconds
- [What are the typical carry-on bag size limits in europe?] -> 506 tokens, 3.4 seconds
- [List the 10 largest power outages in history] -> 583 tokens, 3.7 seconds
Note that I'm not judging the quality of the answers here.
What's the price of a token? The pricing is sometimes different for input and output tokens. Input tokens tend to be cheaper, and our inputs are very short compared to the outputs, so for simplicity let's consider all the tokens to be outputs. Here's the pricing of some relevant models, retrieved on 2025-05-02:
| Model                    | Price / 1M tokens |
|--------------------------|-------------------|
| Gemma 3 27B              | $0.20 (source)    |
| Qwen3 30B A3B            | $0.30 (source)    |
| Gemini 2.0 Flash         | $0.40 (source)    |
| GPT-4.1 nano             | $0.40 (source)    |
| Gemini 2.5 Flash Preview | $0.60 (source)    |
| Deepseek V3              | $1.10 (source)    |
| GPT-4.1 mini             | $1.60 (source)    |
| Deepseek R1              | $2.19 (source)    |
| Claude 3.5 Haiku         | $4.00 (source)    |
| GPT-4.1                  | $8.00 (source)    |
| Gemini 2.5 Pro Preview   | $10.00 (source)   |
| Claude 3.7 Sonnet        | $15.00 (source)   |
| o3                       | $40.00 (source)   |

If we assume the average query uses 1k tokens, these prices would be directly comparable to the prices per 1k search queries. That's convenient.
The low end of that spectrum is at least an order of magnitude cheaper than even the cheapest search API, and even the models at the low end are pretty capable. The high end is about on par with the highest end of search pricing. Comparing a midrange pair of roughly similar quality, Bing Search vs. Gemini 2.5 Flash, shows the LLM at 1/25th the price.
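The back-of-the-envelope arithmetic behind that 1/25 figure is easy to check, using the prices quoted above (per-million-token model prices, per-thousand-query search prices):

```python
def llm_cost_per_1k_queries(price_per_1m_tokens, tokens_per_query=1000):
    """Cost of answering 1k queries with an LLM, assuming a flat
    output-token count per query (the post's ~1k-token assumption)."""
    return price_per_1m_tokens * tokens_per_query * 1000 / 1_000_000

gemini_25_flash = llm_cost_per_1k_queries(0.60)  # $0.60 per 1M tokens
bing_search = 15.00                              # Bing's cheapest tier, per 1k queries

print(gemini_25_flash)                # 0.6 -- dollars per 1k queries
print(bing_search / gemini_25_flash)  # Bing is ~25x the price
```

With the 1k-tokens-per-query assumption the per-million-token price just becomes the per-thousand-query price, which is why the two pricing schemes line up so conveniently.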
I know some people are going to have objections to this back-of-the-envelope calculation, and a lot of them will be totally legit concerns. I'll try to address some of them preemptively.
Surely the typical LLM response is longer than that - I already picked the upper end of what the (very light) testing suggested as a reasonable range for the type of question that I'd use web search for. There's a lot of use cases where the inputs and outputs are going to be much longer (e.g. coding), but then you'd need to also switch the comparison to something in that same domain as well.
The LLM API prices must be subsidized to grab market share – i.e. the prices might be low, but the costs are high - I don't think they are, for a few reasons. I'd instead assume APIs are typically profitable on a unit basis. I have not found any credible analysis suggesting otherwise.
First, there's not that much motive to gain API market share with unsustainably cheap prices. […] there's no long-term lock-in […]. Data from paid API queries will also typically not be used for training or tuning the models […]. Note that it's not just that you'd be losing money on each of these queries for no benefit, you're losing the compute that could be spent on training, research, or more useful types of inference.
Second, some of those models have been released with open weights and API access is also available from third-party providers who would have no motive to subsidize inference. […] The pricing of those third-party hosted APIs appears competitive with first-party hosted APIs.
Third, Deepseek released actual numbers on their inference efficiency in February. Those numbers suggest that their normal R1 API pricing has about 80% margins when considering the GPU costs, though not any other serving costs.
Fourth, there are a bunch of first-principles analyses of what the cost structure of models with various architectures should be. Those are of course mathematical models, but those costs line up pretty well with the observed end-user pricing of models whose architecture is known. […]
The search API prices amortize building and updating the search index, LLM inference is based on just the cost of inference - This seems pretty likely to be true, actually? But the effect can't really be that large for a popular model: e.g. the allegedly leaked OpenAI financials claimed $2B/year spent on inference vs. $3B/year on training. Given the crazy growth of inference volumes (e.g. Google recently claimed a 50x increase in token volumes in the last year) the training costs are getting amortized much more effectively.
The search API prices must have higher margins than LLM inference - […] see the point above about Deepseek's released numbers on the R1 profit margins. […] Also, it seems quite plausible that some Search providers would accept lower margins, since at least Microsoft execs have testified under oath that they'd be willing to pay more for the iOS query stream than their revenue, just to get more usage data.
But OpenAI made a loss, and they don't expect to make profit for years! - That's because a huge proportion of their usage is not monetized at all, despite the usage pattern being ideal for it. OpenAI reportedly made a loss of $5B in 2024. They also reportedly have 500M MAUs. To reach break-even, they'd just need to monetize (e.g. with ads) those free users for an average of $10/year, or $1/month. A $1 ARPU for a service like this would be pitifully low.
If the reported numbers are true, OpenAI doesn't actually have high costs for a consumer service that popular, which is what you'd expect to see if the high cost of inference was the problem. They just have a very low per-user revenue, by choice.
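The break-even arithmetic from those reported figures is trivial (both inputs are the post's numbers, not mine):

```python
annual_loss = 5_000_000_000  # reported 2024 loss, USD
maus = 500_000_000           # reported monthly active users

required_arpu = annual_loss / maus  # revenue needed per user per year to break even
print(required_arpu)                # 10.0 dollars/user/year
print(required_arpu / 12)           # ~0.83 dollars/user/month (the post rounds to $1)
```

For comparison, ad-supported consumer services commonly clear well above $10/user/year in mature markets, which is the sense in which this ARPU target is "pitifully low."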
Why does this matter?
It is interesting how many people have built their mental model for the near future on a premise that was true for only a brief moment. Some things that will come as a surprise to them even assuming all progress stops right now:
There's an argument advanced by some people about how low prices mean it'll be impossible for AI companies to ever recoup model training costs. The thinking seems to be that it's just the prices that have been going down, but not the costs, and the low prices must be an unprofitable race to the bottom for what little demand there is. What's happening and will continue to happen instead is that as costs go down, the prices go down too, and demand increases as new uses become viable. For an example, look at the OpenRouter API traffic volumes, both in aggregate and in the relative share of cheaper models. […]
This completely wrecks Ed Zitron's gleefully, smugly wrong Hater's Guide to the AI Bubble. Not because AI isn't a bubble, but more because it isn't remotely a bubble in the way he seems to think.
2.3. Hacker/Cyberpunk Culture
2.3.1. "Ethical software" is (currently) a sad joke software philosophy anarchism
There seems to be this reactionary rejection of free software itself on the software-aware left because many of its proponents have historically been bigoted pigs and shortsighted people of privilege (if very smart and ethical in one very small area). This is very stupid:
- The examples they provide of why free software "looks like shit and runs like ass" to end-users are cherrypicked and meaningless.
- The alternatives they propose (usually piracy) cannot actually produce anything, and thus are not an actually-workable ideology unto themselves.
- They keep saying that most free software isn't useful to end users, only to "nerds," but making those nerds' lives easier makes the production of ethical software for you easier.
- It is impossible for ethical software to be anything other than free software, because otherwise there is no accountability and no free association.
2.3.2. Beating the Averages programming hacker_culture
Many valid criticisms of this essay have been made over the years: 25% of your codebase being macros is terrifying; language power is not a single dimension but many, and Lisp, while extremely powerful, is not the most powerful language. But despite these criticisms, I think the point of the essay still stands. Macros are extremely useful when you need them – that much is manifest from the number of compiler plugins and codegen libraries in Java (like Jackson), which amount to a worse and more brittle form of macros. Language power may have multiple dimensions, but some languages are still more powerful across the board, or in sum, or on average, than others (think of a spider graph of language power), and Lisp is probably one of the most powerful, considering it can nearly seamlessly encompass Haskell and APL using macros. There are also responses to some of these critiques: for instance, maybe 25% of Viaweb's final codebase was content generated by macros – indicating how much time was saved, not how often macros were used strictly speaking – rather than 25% of its pre-compilation codebase being macro definitions.
2.3.3. By Mouse Instead of By Lever programming software hacker_culture
I'm not a fan of Erik Naggum as a whole – his prideful, nigh-abusive, holier-than-thou attitude and legendary flames probably contributed significantly to the death of the Common Lisp community in the 2000s. However, this is a classic essay. Naggum covers a lot of points, but among them:
- Modern computer systems, for both users and programmers, so lack meaningful automation – macros; an efficient, readable, comprehensible, reliable, and interactive way to compose applications that work with more than plain text (and no, UNIX pipes are not interactive) – and are so full of drudgery and menial work, that we seem to have forgotten the point of computers is to automate tasks away, not to have us "perform menial tasks by mouse instead of lever."
- Because the bedrock abstractions of *nix and Windows are bad, new leaky abstractions have had to be piled on top of them to make them do the things we need them to do, and this has made them extremely unreliable and complex.
- The fast-paced, competitive mindset of the modern tech industry is a distortion in thinking: just because something isn't always changing doesn't mean it's dead.
2.3.4. Common Lisp: The Untold Story hacker_culture history
A fun little historical paper documenting one perspective of how Common Lisp got standardized. I really like the focus on how human factors influenced the direction and completion of the project, not just technical ones.
2.3.5. Crypto: How the Code Rebels Beat the Government - Saving Privacy in the Digital Age hacker_culture history
As someone who's deeply interested in digital privacy and security rights, and deeply opposed to government surveillance and legibility, this was an inspiring book when I read it. It's been many years though, and I'm not sure how it holds up. Hopefully well.
2.3.6. TODO Dream Machines/Computer Lib philosophy software intelligence_augmentation hypermedia
Written by the inventor of the concept of hypertext and the creator of Project Xanadu, this is a manifesto for personal computers, specifically personal computers as intelligence augmentation machines and machines for creative, associative thinking and exploration, by an outsider to the world of technology. I'm extremely interested to read it, although it'll be annoying since the only way to get a big enough screen for it to be readable is to read it at my desktop.
2.3.7. Engelbart's Violin software programming
This essay is an impassioned, and historically informed, plea for the return of computer software and hardware designed for professionals, or at least for people willing to spend the time to learn it – things willing to sacrifice some short-term ease of use in favor of efficiency, power, and elegance. For those willing to delay gratification and invest the time to learn such tools, the payoff is immense – are skilled people willing to learn and co-evolve with their tools not also worth thinking about, just as much as the average unskilled person? And yet the software industry of today seems to care only about the latter – and not as a matter of profitability, either: proper professional markets are whales that can be deeply lucrative. There seems to be a moral objection to software tools squarely targeted at skilled people willing to invest time in them, and that's terribly sad.
2.3.8. Evolutional Steps of Computer Systems software
This essay introduces an exceedingly useful framework through which to view computing environments that, ever since I've read it, I've used to think about the topic. The basic framework is that there are four different types of computing environment, each at a different level of abstraction and power for using, synthesizing, managing, and processing information, and generally we want to be climbing up the hierarchy. The hierarchy is:
- Numeric Systems
- systems designed entirely around numerical calculations, in a batch processed mode. Closest to what computing hardware actually does, and the earliest form of computing environment.
- Application-Specific Systems
- a computing environment that may do other things than numerical calculations, and may be slightly more interactive as well, but is still designed to do one specific task and nothing else.
- Application-Centric Systems
- computing environments that essentially operate as a framing device for multiple application-specific systems, one at a time or next to each other, but with little fluidity or even direct communication between them. This is the modern desktop operating system paradigm.
- Information-Centric Systems
- all of the information or content relevant to a task is in one place, displayed at once, with the tools needed to display or manipulate the varying kinds of information embedded into that common interface. Think Emacs, Acme, or Jupyter Notebooks. But the information-centric computing environment is usually still relatively task-specific, even if it's a broad notion of task, and sits on top of something else.
- Application-Less Systems
- a version of an information-centric system where the information for all possible tasks, including operating system manipulation, is in one unified interface, fluidly communicating and combined.
2.3.9. TODO File Structure for The Complex, The Changing and the Indeterminate philosophy software hypermedia
This is the essay that started it all – the one that introduced the concept of hypertext for the first time. Interestingly, its concept of hypertext – which structures documents as a linked-list-like structure of small sections of content addressed by globally unique identifiers, which can be linked to directly from any other document (or from anywhere else inside the same document), and which includes the concept of transclusion (directly including another document's content into the current document, instead of just linking to it) – is both more powerful than our modern notion of hypertext (where anchors and iframes are clunky, manual reinventions of the same ideas) and much more similar to how an org-mode document is structured, at least how I use it!
I haven't actually had a chance to read this essay, only listen to the summary given in the Advent of Computing podcast episode on Project Xanadu, and I'm very intrigued to read more!
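The core of the idea can be sketched in a few lines (my own minimal toy model for illustration, not Nelson's actual design): content lives in small chunks with globally unique identifiers, documents are just sequences of chunk IDs, and transclusion falls out naturally because two documents can reference the same chunk rather than copying it.

```python
# A minimal toy model of Xanadu-style hypertext (an illustrative
# sketch, not Nelson's actual design): every chunk of content has a
# globally unique ID, and a document is a sequence of chunk IDs.
import uuid

chunks: dict[str, str] = {}  # global address space: ID -> content

def new_chunk(text: str) -> str:
    """Register a chunk of content under a fresh globally unique ID."""
    cid = str(uuid.uuid4())
    chunks[cid] = text
    return cid

def render(document: list[str]) -> str:
    """Resolve a document (a list of chunk IDs) into flat text.

    Transclusion falls out for free: documents that reference the
    same ID share the same underlying content, not a copy."""
    return "\n".join(chunks[cid] for cid in document)

# One chunk, transcluded into two different documents:
shared = new_chunk("A paragraph both documents include by reference.")
doc_a = [new_chunk("Doc A intro."), shared]
doc_b = [shared, new_chunk("Doc B's own conclusion.")]

# Editing the shared chunk updates every document that transcludes it.
chunks[shared] = "The shared paragraph, now revised once, everywhere."
```

The contrast with the web's model is visible here: an HTML page that wants another page's paragraph must either copy it (which goes stale) or embed the whole page in an iframe, whereas transclusion shares the single canonical chunk.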
2.3.10. Free as Air, Free As Water, Free As Knowledge culture hacker_culture
A beautiful, funny, bouncy, entertaining, refreshing ode to free access and free information, and against intellectual property, gatekeepers, and all the apparatus that capitalism births to hold information still, as well as a prescient discussion of attention in the cyberspace age.
2.3.11. TODO Free as in Freedom: Richard Stallman’s Crusade for Free Software anarchism hacker_culture history
Richard Stallman the man is… far from perfect (links in order of massively descending severity), but his history as a champion of free software that isn't willing to sell out to corporate interests, and much of the software he created and improved, are very important, so this book seems well worth reading. Hopefully it will paint a complex picture, instead of being a hagiography.
2.3.12. Hackers and Painters programming philosophy
This essay piggy-backs well on the back of Hackers the book, providing an in-depth explanation of the mindset a hacker takes toward their code – that it can be art in the same way painting can be, a thing of beauty and a means of self expression.
2.3.13. Hackers: Heroes of the Computer Revolution hacker_culture anarchism
Out of all the books I've read, this one has had one of the largest and most lasting impacts, changing how I view my primary career and hobby, and who I want to be. Yes, the hackers were imperfect – sexist, gross, often somewhat myopic – but their ethos as explained by Steven Levy, their passion, it's so much of what I want to be. And I think stealing and imitating the good of the past, and discarding the bad, is the best thing we can do. This book is also just entertaining and fun to read, and will teach you a lot about computer history, so it's worth it. I'm still obsessed with the MIT AI Lab to this day.
2.3.14. How To Become A Hacker programming hacker_culture
This guide is pretty much as true today as it was when it was written (probably because it has been updated over time), both in its technical guidance and its explication of the hacker ethos as it applies to programming. Since I'm a hacker, and I think welcoming more people into the fold matters, it's well worth putting here.
2.3.15. I'm an American software developer and the "broligarchs" don't speak for me hacker_culture programming praxis
This is a long, informal, somewhat rambling essay, but as someone who is also a software developer, looking to get into the software industry to make enough money to survive while being wholly opposed to the culture growing like a cancer there among "tech bros," it definitely speaks to my feelings deeply. It's perhaps a better statement of them than I could've made myself.
2.3.16. Initial GNU Announcement hacker_culture history
This is mostly of historical interest.
2.3.17. Intro, Part II. programming software philosophy intelligence_augmentation
Despite being a relatively innocuous personal account, I find myself thinking about this often. The idea that computers should be bicycles for the mind, taken seriously instead of just as a marketing slogan.
2.3.18. Inventing on Principle philosophy programming
A really profound talk with two parts: a philosophical one and a technical one.
I'm not sure I agree with the philosophical point, that choosing one purpose to staunchly follow for the rest of your days, to become a true believer in, is the way to a good life – that seems like a fixed idea, and the route to ego death if the idea turns out to be wrong or you change or the world changes around you – but certainly pursuing things you feel passionate about while you are passionate about them, focusing on that passion and even reinforcing it, making strong decisions based on it (and that can last a lifetime) is a great way to live. It's how I try to.
However, the technical part – demonstrating what a truly interactive software system looks like and why it's valuable – is amazing, and blends well with the Hackers and Painters ideas.
2.3.19. Language, Purity, Cult, and Deception hacker_culture programming
Xah Lee is wrong about as often as he's right, but when he's right he's right. Languages that focus on meaningful practicality within their domains are better than languages that get preoccupied with their own beauty – better culturally, and more useful.
2.3.20. Lisp Operating System software
This is perhaps the most thorough outline in a single place of how modern operating system design – even at the level of things we consider foundational, such as processes, hierarchical filesystems, kernels, monolithic applications, and the distinction between memory and storage – is an arbitrary and suboptimal accident of history that can be replaced with something superior: more flexible, more powerful, and even more secure, through the use of operating-system-level object-capability security. It then describes the properties a Lisp OS should have. Useful to read in conjunction with UNIX, Lisp Machines, Emacs, and the Four User Freedoms and my descriptions (one, two) of an ideal operating system, and things like Genera Concepts and Symbolics Technical Summary.
2.3.21. Maximalist Computing software philosophy anarchism
Maximalist computing is designing protocols and platforms which support doing "everything", and in a pleasant manner. It is, in one way, an extension of the idea of computing, which is to simulate anything; there are many situations in which universal simulation can enhance a platform.
It is not necessarily opposed to minimalism, but it is certainly at odds with kinds of reductive minimalism; that which simplifies protocols by forcing the simplification of use-cases.
2.3.22. My Lisp Experiences and the Development of GNU Emacs hacker_culture
A very interesting oral history from RMS, about exactly what it says. It has a somewhat distorted view of the reasons things happened, but as a look at what RMS thinks happened – and thus what motivated Emacs and GNU – it's great.
2.3.23. Of Lisp Macros and Washing Machines programming software philosophy
Forget all the tedious arguments about whether Lisp macros are maintainable compared to languages that are more limited in their ability to program themselves. The entire point of computers is to automate the manipulation of information. If we're not using computers to do that as much and as efficiently as possible, so that our minds are free to do the most interesting parts of that, we're simply using computers wrong.
2.3.24. Seven Laws of Sane Personal Computing software philosophy hacker_culture
A concise, cogent list of the principles that any personal computing environment should obey. I deeply agree with these principles, but perhaps they should be viewed as ideals to strive for, not as total prerequisites, for achievability reasons.
Laws:
- Obeys the operator
- Forgives mistakes
- Retains knowledge
- Preserves meaning
- Survives disruption
- Reveals purpose
- Serves loyally
2.3.25. Stop Writing Dead Programs programming software
An extremely funny, entertaining, well-argued, and interesting talk with a ton of historical and contemporary references that are really worth exploring – you could spend days just investigating all the links it makes – that talks about just how limited, clunky, out-dated, and counterproductive the way we program is, how we've somehow discarded all the amazing ideas and advancements that were made historically away from the batch-processed model of computing and just kept going with it, long past its sell-by date. But it doesn't just complain about our path-dependency: it gives rich historical and modern examples of alternative ways of doing programming. This is extremely influential on my notions of how programming should be done ideally, next to Inventing on Principle.
2.3.26. Symbolics Museum history software
A rare, curated, one-of-a-kind collection of information and resources about the ultimate Lisp Machine OS, Symbolics Genera. The lost Atlantis of operating systems! Some good places to start with this are:
2.3.26.1. Genera Concepts
A high level, but very detailed, dense, and interesting, summary of the properties and concepts of the Genera operating system. Read every sentence! A lot of important details could slip past if you skim.
Some of the described properties are no longer unique; others remain so, such as the lack of distinction between the operating system and user applications – the operating system is just a library, and all storage is a pool of Lisp objects, references to which can be passed directly between programs (which are just functions you call, or larger libraries of such functions).
2.3.26.2. Symbolics Technical Summary
An even shorter and higher level overview than Genera Concepts, but includes more specific technical and historical information, especially about the hardware and surrounding software. Best to be read in conjunction with the above to get a sense of the specifics.
2.3.26.3. The Lisp Machine Software Development Environment
This is a compilation of screenshots of the various programs available within Genera. Useful to get an idea of what the system actually looked like in use.
2.3.26.4. Symbolics Manuals
A collection of 44 complete manuals from the Symbolics Genera system. If you want all the gory technical details, this is the best place to dive in!
2.3.26.5. Symbolics Lisp Machines Demo by Kalman Reti
An in-depth demo of the capabilities of the Symbolics Lisp Machines by the last Symbolics employee.
2.3.27. Taste for Makers programming hacker_culture philosophy
Taste, bought through experience and care for your craft, is important for programming – the massive amount of complexity, unknown unknowns, and the need to interface with a complex, fuzzy, and shifting world make absolute, rote methods impossible, and taste can be a good guiding heuristic to help us get to where we need to go, as in physics and mathematics. But what does good taste mean? This essay has excellent guidance for us, pulled from the worlds of art, mathematics, engineering, and programming itself:
- Good design is simple
- Good design is timeless
- Good design is suggestive
- Good design is often slightly funny
- Good design is hard
- Good design looks easy
- Good design is redesign
- Good design is often strange
- Good design is often daring
2.3.28. Tech Geekers and What is Politics? philosophy hacker_culture
A great takedown of the tendency of many hackers to frame themselves as "apolitical" in Xah Lee's usual inflammatory and sketchy style.
2.3.29. Terminal boredom, or how to go on with life when less is indeed less software philosophy
- Self-consciously rigid, minimalist software and software protocols tend to force more complexity into the things around them – for instance, the things built on top of them need more complexity to get around their limitations.
- Such software is often not responsive to human needs such as:
- self-expression
- accessibility
- As a result, what we need is minimal but extremely flexible software that can be built into something non-minimal. Like Smalltalk systems.
2.3.30. The Art of Lisp & Writing programming literature philosophy
In this essay RPG compares the act of writing to the act of programming – how both are a dual process of discovery and refinement, deeply intertwined, as discovery must be refined, but the refinement of what has been discovered prompts new discoveries and insights that start the whole process over again. How systems are shaped not just by up-front design and requirements (external forces) but also by internal forces, tensions, "triggers" of new ideas, possibilities, or problems. He argues that programming languages oriented around static thinking – enforcing up-front specification of everything, imposing rigid consistency requirements, and making code difficult to change dynamically – aren't suited to such an intertwined task of iterative discovery and refinement. At best, they're suited to a final version of a program, not a first version. See also my arguments against formal methods and overly strong static type systems, Notes on Postmodern Programming, Hackers and Painters, and some other things I'm probably forgetting.
2.3.31. The Bipolar Lisp Programmer hacker_culture
A sympathetic, but critical, portrait of a certain type of individual – anti-social, cynical, intelligent, yet often distracted and unable to complete things – who is often attracted to Lisp, because of its ability to make single, isolated individuals extremely productive by themselves, and its ability to be molded to the shape of the mind of the person using it, rather than forcing them to compromise and bend their mind to it.
I reread this essay often because I strongly see myself in it – even in the ultimate fate it describes, and I have complex feelings about that. I like who I am. My anti-social nature, my individualism, my ambitious but never-finished projects. I don't even mind that I may not ever work in the software industry – for someone who loves hacking as much as I do, perhaps that's for the best. But it's useful to remember that this isn't the most productive outcome either. Something like the Cult of Done manifesto may help me to at least finish more things.
As a general critique of the Lisp community, the thing is that this sort of issue can – as I intend to do – be corrected for, and modern Common Lispers actually seem very good about trying to work together and build useful libraries for others and so on.
2.3.32. The Cathedral and the Bazaar: Collected Essays anarchism philosophy hacker_culture history
Eric S. Raymond's essays before his Hoppean turn are quite excellent in many ways. Erudite, playful, widely-read, and unusual examinations of hacker and open source culture often laced with references to anarchist ideas and works and immersed in that lens. The bazaar, like Mob Software, is a utopian vision, but I think one worth aspiring to.
2.3.33. The Cult of Done Manifesto programming hacker_culture philosophy
If you are a creator or a maker and your craft is your lifeblood, but you struggle to actually do it all the same, this is an excellent manifesto to keep in mind – and in your heart – at all times.
2.3.34. The Cyberpunk Project hacker_culture
"A cyberspace well of files, related to those aspects of being, formed by (post)modern life and culture."
An absolute fucking beautiful treasure trove of carefully curated, found, and organized texts from the era of the 2000s cyberpunk culture and the things that influenced it. Much of this is impossible to find anywhere but the place this is a mirror of, which is why I'm carefully and lovingly trying to keep a copy safe.
The Cyberpunk Project hosts a few texts that have been extremely influential on me:
2.3.34.1. Cyber + Punk = Cyberpunk
So, words 'cyber' and 'punk' emphasize the two basic aspects of cyberpunk: technology and individualism.
Meaning of the word 'cyberpunk' could be something like 'anarchy via machines' or 'machine/computer rebel movement'.
Cyberpunk focuses on these people, these 'lovers of freedom' who often use the ultratechnology designed to control them to fight back. The story lines usually bend toward the world of the illegal and there is often a sense of moral ambiguity; simply fighting the 'system' does not make these characters 'heroes' or 'good' in the traditional sense.
2.3.34.2. Declaration of the Independence of Cyberspace
An inspiring declaration of the awe-inspiring, radical potentiality contained within cyberspace (the internet) – although perhaps not of what it actually is. That gap is thanks to the centralization into surveillance-capitalist platforms we've been seeing, driven by the unconscious, herd-following behavior of most people – late-comers and part-time inhabitants of cyberspace who don't understand what it truly offers us (a distributed, post-scarcity "world of the Mind") and are unwilling to take up the responsibility of learning how to navigate such a world – and by the "colonial forces" of those "weary giants of flesh and steel" (governments and corporations) who, through laws and copyright and DRM, terraform cyberspace for the convenience of vacationers to our home.
This piece's strong declarations of the impossibility (in addition to the undesirability) of ruling cyberspace may seem unjustified given the fact that cyberspace, ultimately, is made possible by physical infrastructure, and visited by physical people. But I think it is justified: that infrastructure, no matter how industrial and physical, by virtue of the nature of internet protocols and the work of cypherpunks, can carry any traffic anywhere without being traced if you know how to ride the signals right – so as long as you can access the internet at all, you can get anywhere. And yes, that access itself could be limited or completely dismantled, like in places such as North Korea, but that is exactly what the Declaration is responding to: it is saying more "do not fucking do this" than "you physically cannot do this at all, if you try hard enough."
2.3.34.3. The Hacker's Ethic
While hackers, as they existed in the past and do exist today, rarely live up to this ethos completely – it implies a radical inclusivity and acceptance of diversity that they often struggle to mirror – and it is not a complete ethic of life in itself, I think the ideas and principles behind it are supremely good, and worth emulating and expanding upon. In many ways, it guides my life.
- Access to computers – and anything which might teach you something about the way the world works – should be unlimited and total. Always yield to the Hands-On imperative!
- All information should be free.
- Mistrust authority - promote decentralization.
- Hackers should be judged by their hacking, not bogus criteria such as degrees, age, race, or position.
- You can create art and beauty on a computer.
- Computers can change your life for the better.
2.3.35. TODO The Jargon File (version 4.4.7) hacker_culture history
This is the closest thing you'll find to an ethnography of the hacker culture and community as a whole, as well as a treasure trove dictionary of funny terms with curious historical baggage. I want to incorporate more of these terms into my language.
2.3.36. The Nature of the Unix Philosophy software
A short, punchy, and funny, but nevertheless somewhat insightful takedown of the UNIX Philosophy. Again, from Xah Lee, so take with a massive grain of salt.
2.3.37. The Repair Manifesto philosophy anarchism hacker_culture
Short, punchy, clear, invigorating, a call to action – all that a manifesto should be. And it very much aligns with my own commitment to repair, even if mine is for individualist reasons.
2.3.38. The Stigmergic Revolution anarchism economics hacker_culture
A short essay explaining a form of organizing that goes beyond mere decentralization – completely individual and distributed, yet still coordinated enough to get things done – the ultimate holy grail for individualist anarchists. Talks about examples in nature and in our society (has some overlap with The Cathedral and the Bazaar, but from the other side of the glass, so to speak).
2.3.39. The Structure of a Programming Language Revolution hacker_culture programming history
Not only is this an incredibly well-written, lyrical, and erudite paper about a fascinating transition point in the history of programming, but it articulates a fundamental rift between how modern programming language theory (and computer science in general) views programming languages and tools, and how actual working programmers and system builders view them – a rift which I think well captures the utterly disappointing lack I see in computer science as a whole.
2.3.40. The Unix-Haters Handbook programming software
A hilariously funny book with many accurate and trenchant criticisms of UNIX that still hold true today, and many silly, off the mark, or outdated ones, all presented in a hypertext form that's so strange it reaches performance art.
2.3.41. What is Free Software? philosophy hacker_culture anarchism
For all of their faults, Richard Stallman and the Free Software foundation fight for a cause that I believe is deeply just. Free (libre) software is the only software that can truly adhere to the essence and spirit of the core tenets of the programmer code of ethics that I believe in.
The core of free software is contained in the famous four freedoms:
- The freedom to run the program as you wish, for any purpose (freedom 0).
- The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
- The freedom to redistribute copies so you can help others (freedom 2).
- The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.
As to their faults: while I applaud RMS for his technical abilities and staunch, unwavering support of his ideals, he has many other ideas (and behaviors) that are outright gross and unethical, and many others that just make him a bad spokesperson and figurehead for a movement. His conception of freedom is also somewhat compromised, in that he seems to think that restricting the freedom of users to use proprietary software or hardware, should they choose to do so, by making it more inconvenient, is somehow protecting their freedom – when in reality, freedom includes the freedom to fuck up or do suboptimal things if you have to, and taking such a hardline stance just makes operating systems that take free software principles seriously harder (or impossible) to access and use for many people. He also draws weird and arbitrary lines, such as thinking that shoving proprietary code into firmware on secondary chips is better than having it be at least a swappable blob attached to your main operating system. He also has paranoid ideas about the consequences of doing certain things, like opening up GCC so it can be used as a language server risking "companies building proprietary front ends to it" – which they wouldn't do; they'd just use clang.
Likewise, at this point, the FSF is an impotent personality cult that cannot meaningfully reach out and address most people where they're at, and doesn't even use its funding to fund development of things like the GNU project, just largely ineffective activism. They also have a tendency to be extremely myopic and "apolitical" in a way that just further excludes diverse people.
But nevertheless, their ideals are good.
2.3.42. What is wrong with Lisp? hacker_culture programming
Many people over many decades have speculated as to what's "wrong" with Lisp, such that it hasn't reached widespread industry adoption despite its many technical merits. Most of these speculations are superficial, ahistorical, and anti-intellectual. This article explains concisely why: Lisp, being very different from the mainstream (but, to add my own component to the analysis, in a way that is not governed by a modernist narrative like languages such as Haskell), attracts people who want to justify their desire not to bother learning it, and who come up with post-hoc justifications framed as criticisms of the language, to shift the blame. This creates a mythology of Lisp being bad that scares people away. This, combined with the TBLP phenomenon, seems accurate to me.
2.3.43. What’s wrong with CS research programming philosophy
I hate Yarvin and everything he stands for. However, I agree with the thesis of this essay, and think this is probably one of the best statements of that thesis I've seen. Plus it's well-written and funny as hell.
His thesis is essentially that CS research spends all its time navel-gazing about type systems, mathematical formalisms, abstract algorithms, and non-interactive batch programs like compilers – as opposed to interesting and creative programming, such as analyzing programming languages as complete systems (including what they're like to use) and doing empirical work with programmers – because mathematical formalism is an easy way to generate infinite work that looks "hard" in a way interesting creative programming doesn't. It's essentially a similar critique to Richard P. Gabriel's paper and to part of the feminism in PLT paper.
It does, however, have very serious problems as an argument for that thesis, which I believe are indicative of the wider problems with Yarvin's reasoning in his other work. In this sense, the primary interest of this article is that it's a very interesting test case for understanding this "philosopher" who is now, in some ways, the power behind the throne in the US: the article is about a field I know a fair amount about, so I wasn't likely to suffer from Gell-Mann amnesia and just believe him, but on the other hand, he's arguing for a thesis I actually agree with, so I'm not likely to apply overmuch scrutiny or dismiss what he's saying out of hand, as I might be accused of doing for his more outright evil theses.
What I learned is that, tldr, he's a very good writer, and very good at researching and marshalling many erudite minutiae to his aims, but that does not a good argument make – it just makes him seem very smart.
The general problems I saw with this essay qua argument were:
- Circular reasoning to ensure there is no nuance in his analysis (the Guy Steele thing).
- Totally unevidenced and kind of weird typologies used to structure his whole point.
- A ton of very erudite, detailed, and obscure examples, but no high-level data from which to draw general conclusions.
- Very strained shift to a foregone conclusion (academia bad) that isn't justified by his criticisms (all organizations could fall into these problems, not just public ones, and many of the best think tanks doing amazing creative programming were in academia, such as the MIT AI Lab).
- Very funny, very well-written, very well-spoken, but as shown by the previous points, not really with that much underneath.
2.3.44. Where Lisp Fails: at Turning People into Fungible Cogs. philosophy programming
Looking at the problem discussed in The Bipolar Lisp Programmer from another angle: the individualism, and individual productivity, that Lisp enables is good. It is the incentives of corporations that make the sort of people Lisp enables unable to find jobs: corporations want programmers who are individually easier to replace or do without, and individually less productive and influential, so that less risk is invested in them, they're cheaper, and they're easier to keep in line with the proper workings of the system.
2.3.45. Where the Unix philosophy breaks down software
The UNIX philosophy has reached the level of a religion in open-source circles, such that violations of it are viewed as sins to be eradicated rather than as choices with real technical justifications. This article outlines one of those solid technical reasons.
2.3.46. Why Skin-Deep Correctness – Isn't, and Foundations Matter. programming software philosophy
A very good explanation of why we can't just layer shoddy half-implementations of better things on top of old bullshit and expect to eventually reach what we could have reached had we started with better foundations. A direct response to 'Worse is Better' and the idea that "the good news is that in 1995 we will have a good operating system and programming language; the bad news is that they will be Unix and C++."
2.3.47. You have made your bedrock, now lie in it. software
A bedrock abstraction is an abstraction past which no recoverable error can happen – when a system breaks, it will always break down into whatever component parts are its bedrock abstraction, so they can be reassembled, and if it breaks past that, it just needs to be replaced wholesale.
The key idea of this blog post is that the bedrock abstractions we've chosen for modern-day computers – thanks to what's convenient to design and manufacture, and the fact that they're descended from the systemically underpowered and designed-to-be-cheap original microcomputers – are insufficiently powerful, because when software systems break or need to be modified, you're often dumped down to a level that's just too low to be useful to a human being. We need to raise the level of our bedrock abstractions to a point where we don't need to be afraid of them anymore.
2.4. Anarchism
2.4.1. Constructing an Anarchism: Collective Force anarchism philosophy
"The basic idea is that the things that we do together with others do not simply add up, but that specialization and association bring about the formation of unity-collectivities, social beings with qualities, strengths and perhaps even ideas that arise from the combination and unification of the constituent beings."
Perhaps the most useful conceptual tool from Proudhon by way of Shawn that I've seen, besides the anarchic encounter.
2.4.2. Existentialism is a Humanism philosophy
This is a really excellent lecture by Sartre, giving a basic account of what existentialism is. Although I'm an amoralist, and as such don't agree with everything it says, it is an eloquent expression of many ideas also to be found in the likes of Stirner, except stated in a more comprehensible (to most people) way. I'm even sympathetic to what he has to say about morality – it's worth thinking about, and may inform your own nihilist ethics.
2.4.3. God is Evil, Man is Free anarchism philosophy religion
Fiery, provocative, dense, sometimes somewhat difficult to read, but also an intelligent, clear-sighted, funny, and nuanced critique of the common idea of the Christian God from Proudhon. I really enjoy reading this every so often.
2.4.4. Liberatory Community Armed Self-Defense: Approaches Toward a Theory anarchism
The best exposition in one place I've found of how to actually go about defending your community with arms that wouldn't just lead to a militia or something. Worth reading for all anarchists.
2.4.5. TODO My Disillusionment in Russia anarchism history
A classic anarchist text that I really need to read soon considering I'm surrounded by damned Marxists.
2.4.6. No Treason. No. VI. The Constitution of No Authority (1870) anarchism
A thorough – and humorous! – takedown of any possible authority one might consider the Constitution of the United States to have, assuming only that one believes that the majority does not have an automatic and inherent right to dominate the individual, and that one should only be bound by a contract one agrees to.
2.4.7. Polity-form and External constitution anarchism philosophy
Two more excellent conceptual tools that I make use of often, although often not under those exact terms, explored by Shawn. I initially picked up these ideas from scattered bits in other essays of his, but this glossary entry really brings them together well, as long as you're willing to carefully read every line (it's densely packed with ideas).
2.4.8. Simple Sabotage Field Manual anarchism direct_action
Most manuals on how to take direct actions like sabotage suggest things that are actively dangerous, either to the person doing it, or the innocent people around, or both, and/or can have very serious lasting consequences if you're detected. Many are also of doubtful efficacy, no matter how satisfying they may be (such as Molotoving some Starbucks or Walmart somewhere, or setting a few cop cars on fire). All of these have their place in a complete, well rounded diversity of tactics – even the ones that are only of propagandistic or symbolic value – but for most people, who have lives to lose and are generally risk averse (this, unfortunately, includes me, despite my love for Novatore), those manuals aren't particularly useful, except as a sort of performance art in the reading.
This book, however, provides a simple, easy-to-practice manual for sabotaging the processes of bureaucracies of various sorts, if you believe they're up to no good – for instance, if you're part of an arm of a corporation or state that's doing something bad, or an average citizen being roped into e.g. tracking down a criminal – without jeopardizing yourself in any significant way. These methods, while simple and easy, are also relatively effective due to the way hierarchical systems and bureaucracies work.
The classic section:
(11) General Interference with Organizations and Production
(a) Organizations and Conferences
(1) Insist on doing everything through “channels.” Never permit short-cuts to be taken in order to expedite decisions.
(2) Make “speeches.” Talk as frequently as possible and at great length. Illustrate your “points” by long anecdotes and accounts of personal experiences. Never hesitate to make a few appropriate “patriotic” comments.
(3) When possible, refer all matters to committees, for “further study and consideration.” Attempt to make the committees as large as possible—never less than five.
(4) Bring up irrelevant issues as frequently as possible.
(5) Haggle over precise wordings of communications, minutes, resolutions.
(6) Refer back to matters decided upon at the last meeting and attempt to re-open the question of the advisability of that decision.
(7) Advocate “caution.” Be “reasonable” and urge your fellow-conferees to be “reasonable” and avoid haste which might result in embarrassments or difficulties later on.
(8) Be worried about the propriety of any decision—raise the question of whether such action as is contemplated lies within the jurisdiction of the group or whether it might conflict with the policy of some higher echelon.
(b) Managers and Supervisors
(1) Demand written orders.
(2) “Misunderstand” orders. Ask endless questions or engage in long correspondence about such orders. Quibble over them when you can.
(3) Do everything possible to delay the delivery of orders. Even though parts of an order may be ready beforehand, don’t deliver it until it is completely ready.
(4) Don’t order new working materials until your current stocks have been virtually exhausted, so that the slightest delay in filling your order will mean a shutdown.
(5) Order high-quality materials which are hard to get. If you don’t get them argue about it. Warn that inferior materials will mean inferior work.
(6) In making work assignments, always sign out the unimportant jobs first. See that the important jobs are assigned to inefficient workers or poor machines.
(7) Insist on perfect work in relatively unimportant products; send back for refinishing those which have the least flaw. Approve other defective parts whose flaws are not visible to the naked eye.
(8) Make mistakes in routing so that parts and materials will be sent to the wrong place in the plant.
(9) When training new workers, give incomplete or misleading instructions.
(10) To lower morale and with it, production, be pleasant to inefficient workers; give them undeserved promotions. Discriminate against efficient workers; complain unjustly about their work.
(11) Hold conferences when there is more critical work to be done.
(12) Multiply paper work in plausible ways. Start duplicate files.
(13) Multiply the procedures and clearances involved in issuing instructions, pay checks, and so on. See that three people have to approve everything where one would do.
(14) Apply all regulations to the last letter.
2.4.9. The Anarchic Encounter: Economic and/or Erotic? anarchism philosophy jobs
In this essay Shawn expands on the notion of the anarchic encounter to talk about its possibilities – what can be born out of it. It's interesting and inspiring.
2.4.10. The Anatomy of the Encounter anarchism philosophy
Shawn's explications of Proudhonian anarchist/libertarian socialist theory are always deeply interesting and enlightening, providing me with new concepts in my intellectual toolkit. His work on the anarchic encounter, a sort of Crusoe-economics look at the ideal anarchistic social relations that can be used to guide our practical interactions and organization, but also our conceptions of justice, is some of the most enlightening. In this essay, he introduces that concept.
2.4.11. The Collected Writings of Renzo Novatore philosophy anarchism
There is perhaps no writer that has influenced me more than the philosopher-poet and Nietzschean/Stirnerian hybrid anarchist Novatore. His writings have a wonderful vitality and strength and life and individuality to them, and a really fun need to strike back with poetic, beautiful, yet vicious anger at the fascists, unconscious egoists and communitarians that would drag us all down. I've read a lot of these, but not all, and I really should reread them. The ones I've read and really liked are:
- Black Flags
- Cry of Rebellion
- Intellectual Vagabonds
- Toward the Creative Nothing
- Anarchist Individualism in the Social Revolution
- My Iconoclastic Individualism
- I Am Also A Nihilist
A quote from Black Flags that seems relevant in our times:
Our time — despite empty and contrary appearances — is already lying on all fours under the heavy wheels of a new History.
The bestial morality of our bastard christian-liberal-bourgeois-plebeian civilization turns toward the sunset…
Our false social organization is collapsing fatally — inexorably!
The fascist phenomenon is the surest, most indisputable proof of it.
In Italy as elsewhere…
To show it, one would only have to go back in time and question history. But even this isn’t necessary! — The present speaks eloquently enough…
Fascism is nothing but a cruel, convulsive spasm of a decaying society that tragically drowns in the quagmire of its lies.
Because it — fascism — indeed celebrates its bacchanals with flaming pyres and malicious orgies of blood; but the dull crackling of its livid fires doesn’t give off a single spark of vivid innovative spirituality; meanwhile, may the blood that pours out be transformed into wine, that we — the forerunners of the time — silently gather in red goblets of hatred setting it aside as the heroic beverage to pass on to the children of the night and of sorrow in the fatal communion of great revolt.
We will take these brothers of ours by the hand to march together and climb together toward new spiritual dawns, toward new auroras of life, toward new conquests of thought, toward new feasts of light; new solar noons.
Because we are lovers of liberating struggle.
We are the children of sorrow that rises and thought that creates.
We are restless vagabonds.
The boldest in every endeavor; the tempter of every ordeal.
And life is an “ordeal”! A torment! A tragic flight. — A fleeting moment!
2.4.12. The Difference between Anarchy and the Academy anarchism
A very useful short essay outlining the ways in which the academy can be useful to anarchist causes, is praiseworthy, and offers us things worth imitating, as well as the ways in which it is structurally contrary to anarchism, can hurt our causes, and is deeply flawed and problematic – and how someone who wants to be an anarchist academic, instead of an academic anarchist, might navigate those issues. This essay hits close to home for someone who was forced to drop out of university a year before finishing my Bachelor's due to disability, and who has serious problems with the academy and the culture and tendencies of those who stay there a long time, but also sees the ways in which it's really useful and good too.
2.4.13. The Myth of the Rule of Law anarchism philosophy
- Law is inherently indeterminate:
- It contains a large amount of diametrically contradicting decisions, cases, precedent, and rules, and anything can be logically derived from conflicting premises
- There is no such thing as language that does not admit of interpretation and reinterpretation, and thus even clearly written law texts can really be understood to mean anything
- As a result, a legal argument can be found for any conclusion, and any conclusion is pretty much as valid as any other, even if they're diametrically opposed. What the law "means," and thus how it is interpreted, argued, and enforced, is totally up to the moral and political beliefs of the individuals doing it.
- Moreover, law is always, universally produced by political actors, who will encode their values into it.
- The appearance of stability in the meaning of the law is merely a product of stability in the values of those who interpret it, due to social selection processes and indoctrination, and the "meaning" of the law can be seen to change as its interpreters change.
- Furthermore, law cannot be anything but indeterminate, because if it were completely rigid, absolute, and clear – assuming for a moment such a thing were even possible, which it isn't – then, while it would be able to mete out order, it wouldn't be able to mete out justice, because it wouldn't be able to take into account context, individual cases, and complex human values.
- Law cannot be anything but indeterminate, also, because it is a monopoly product, produced by the state and provided one-size-fits-all for everyone, so it has to be flexible enough to make that at least a little feasible. (This rather nicely and summarily puts to bed Rothbard's project in The Ethics of Liberty)
- Most Americans are clearly aware that there is no such thing as the "rule of law, not people" – witness how they jockey for control of the law – but at the same time they seem to believe in it. This cognitive dissonance persists because the myth of the rule of law is otherwise emotionally useful. Namely, it naturalizes law as a neutral, objective encoding of justice, and thus part of the natural social order, which:
- allows people deniability when they enforce their values, or the dominant social values they don't want to have to stand against, on others
- enables people to view their moral positions, as filtered through their construction of the meaning of the law, as neutral, objective, and necessary, and their opponents' interpretations as biased and politically motivated
- Americans also subscribe to this idea because it has been indoctrinated into them by the state. It is convenient for the state for people to believe in the rule of law because then people are more willing to submit themselves to it, and make others submit to it.
- If we stop viewing law as something that must be supplied centrally, we can avoid the problem of one-size-fits-all law, and difficult to interpret law, and power structures around law.
2.4.14. The Politics of Obedience: The Discourse of Voluntary Servitude anarchism philosophy
Classic quote from this work:
When not a hundred, not a thousand men, but a hundred provinces, a thousand cities, a million men, refuse to assail a single man from whom the kindest treatment received is the infliction of serfdom and slavery, what shall we call that? Is it cowardice? Of course there is in every vice inevitably some limit beyond which one cannot go. Two, possibly ten, may fear one; but when a thousand, a million men, a thousand cities, fail to protect themselves against the domination of one man, this cannot be called cowardly, for cowardice does not sink to such a depth, any more than valor can be termed the effort of one individual to scale a fortress, to attack an army, or to conquer a kingdom. What monstrous vice, then, is this which does not even deserve to be called cowardice, a vice for which no term can be found vile enough, which nature herself disavows and our tongues refuse to name?
[…]
Poor, wretched, and stupid peoples, nations determined on your own misfortune and blind to your own good! You let yourselves be deprived before your own eyes of the best part of your revenues; your fields are plundered, your homes robbed, your family heirlooms taken away. You live in such a way that you cannot claim a single thing as your own; and it would seem that you consider yourselves lucky to be loaned your property, your families, and your very lives.
All this havoc, this misfortune, this ruin, descends upon you not from alien foes, but from the one enemy whom you yourselves render as powerful as he is, for whom you go bravely to war, for whose greatness you do not refuse to offer your own bodies unto death. He who thus domineers over you has only two eyes, only two hands, only one body, no more than is possessed by the least man among the infinite numbers dwelling in your cities; he has indeed nothing more than the power that you confer upon him to destroy you.
Where has he acquired enough eyes to spy upon you if you do not provide them yourselves? How can he have so many arms to beat you with if he does not borrow them from you? The feet that trample down your cities, where does he get them if they are not your own? How does he have any power over you except through you? How would he dare assail you if he had not cooperation from you? What could he do to you if you yourselves did not connive with the thief who plunders you, if you were not accomplices of the murderer who kills you, if you were not traitors to yourselves?
You sow your crops in order that he may ravage them; you install and furnish your homes to give him goods to pillage; you rear your daughters that he may gratify his lust; you bring up your children in order that he may confer upon them the greatest privilege he knows — to be led into his battles, to be delivered to butchery, to be made the servants of his greed and the instruments of his vengeance; you yield your bodies unto hard labor in order that he may indulge in his delights and wallow in his filthy pleasures; you weaken yourselves in order to make him the stronger and the mightier to hold you in check. From all these indignities, such as the very beasts of the field would not endure, you can deliver yourselves if you try, not by taking action, but merely by willing to be free.
Resolve to serve no more, and you are at once freed. I do not ask that you place hands upon the tyrant to topple him over, but simply that you support him no longer; then you will behold him, like a great Colossus whose pedestal has been pulled away, fall of his own weight and break into pieces.
This is a very old essay – written in the mid-1500s and first published in 1577 – but the bulk of it is still as relevant as it was back then. The core points are these:
- Tyrants/rulers always hold sway over a group far vaster than them, such that they could never hope to actually enforce their will over those they wish to dominate if those people simply ignored their authority and refused to obey. (Part I)
- So how do tyrants hold sway over the populace that they could not otherwise control?
- People have a tendency to ideologically naturalize what is now as what has always been, and what must necessarily be, so people are passive and apathetic.
- People tend to follow tradition, so once more than a generation has passed since rule was instituted, most go along with it by default.
- Human beings adapt to, and are shaped and molded by, the conditions under which they grow up; when one grows up under tyranny, one becomes so molded by it that resistance to it is nearly impossible, not least because one doesn't know what liberty was like, and so doesn't know what one is missing well enough to fight tooth and nail for it.
- Nobody knows anyone else is discontented, or if they know that, how truly committed they are (because you'd have to know someone's heart of hearts to know that), so we're all locked in – although he didn't put it in these terms – a Prisoner's Dilemma, unable to do anything because we think we're alone.
- Through the classic bread and circuses which distract us from their tyranny, and make us feel as though they're generous even when the wealth they're sharing with us is stolen from us in the first place.
- Through intimidation of the few people who manage to overcome the former points and resist.
- And how does the tyrant get people who are willing to enforce its will? Merely by promising them a share in the spoils, and to abuse people in turn just as they are abused. For some, that is enough.
- However, aligning yourself with tyrants is always a dangerous game, since they're basically inherently unaccountable, being at the top of a hierarchy.
2.4.15. Ur-Fascism philosophy
A classic text of political philosophy using one of my favorite ideas (the idea of family resemblance). All the more relevant in the modern day.
2.4.16. Are We Good Enough? - Peter Kropotkin anarchism philosophy
A classic essay responding to a common question about anarchism. I really like this one. Very useful to send to skeptical people!
2.4.17. Post-Left
2.4.17.1. A Review of The “Tyranny of Structurelessness”: An organizationalist repudiation of anarchism anarchism
A response to the famous organizationalist essay "The Tyranny of Structurelessness." I was going to write my own, but this one phrases my complaints just as well and completely as I would, so I decided to save myself some time and merely mirror this for later reference instead.
All I would add is that a synthesis is possible between the critiques in the original essay, flawed as they are, and the responses found in this one. I think it's to be found in creating explicit "governance documents" even for informal, fluid, affinity-based anarchist associations, if they're going to have more than a couple of members and stick around for more than a few weeks (which is sometimes necessary, especially when centered around managing material resources), as a way to give everyone a common touchstone for understanding the values, intentions, goals, and processes of the group, to help avoid miscommunications. The caveats being that such documents need to be:
- collaboratively edited (preferably using something like CryptPad),
- living (open to be changed at any time as norms and needs and context shift, and things are learned from experience),
- designed to be interpreted as a loose statement of common feeling, not legalistically and procedurally interpreted and adhered-to like some sort of Law,
- and the content of which should be explicitly based on things like consensus decision making, direct action, non-hierarchical organization and conflict resolution, drawing lots for different roles, separation, limitation, and enumeration of powers, etc.
2.4.17.2. Anarchism as a Spiritual Practice anarchism religion
I am not spiritual in the traditional sense. I do not believe in the supernatural, or gods, or goddesses, or enjoy spiritual traditions, practices, or ceremonies, or go on drug trips. I don't adhere to a religion. Insofar as I have a religion (in the broad sense some mystics use it to mean, as in "any set of life practices inspired by a historical tradition that touches on the moral and axiological dimensions of life") then, as this essay says, anarchism is my religion. This essay is a really well-written piece exploring what that means, with some help from philosophical Taoism, which I'm also very sympathetic to.
2.4.17.3. Bloody Rule and a Cannibal Order! anarchism
A very thorough, in-depth, and well-reasoned argument against moralist anarchists from the perspective of a Stirnerite – arguing both that morality does not follow inevitably from a sort of enlightened egoism, and that it is not necessary in order to be an anarchist.
2.4.17.4. TODO Communism Unmasked anarchism philosophy
A highly entertaining, very egoist critique of Marxism that shares many of the issues I have with it. Take it with a grain of salt of course, since it isn't the most rigorous out there, but it's definitely worth at least poking around and reading sections of!
2.4.17.5. Hayek, Epistemology, and Hegemonic Rationality philosophy anarchism
This is an excellent article about how Hayek's ideas about science, epistemology, and information are essentially a post-structuralist critique of the idea that we can analyze and understand society scientifically, but argued clearly and in an analytic way. This is essentially a twin of the essay I eventually intend to write showing how late Wittgenstein aligns closely with the anti-essentialism of the post-structuralist philosophers, but in a way that lends a new perspective and argumentative comprehensibility to them.
2.4.17.6. Natural Law, or Don’t Put a Rubber on Your Willy philosophy anarchism
A very fun egoist anarchist/moral nihilist takedown of the inane logic of natural law. I tried to get natural law to "work" for like four years before I realized no matter what I did it was always totally arbitrary, or collapsed back into egoism, so it's cathartic to read something like this!
2.4.17.7. The Question of a Stagnant Marxism: Is Marxism Exegetical or Scientific? philosophy
A critique of the dominant Marxist culture (outside academia) as a stagnant, nearly religious exegetical exercise instead of a dynamic, developing intellectual discipline, from a Marxist (or, as he would prefer to be called, a "scientific socialist"). What's funny about this essay is that despite a correct assessment of the problem, there's no analysis of why the problem came to be or really how to solve it – the fundamental ideological, philosophical, and cultural issues that made Marxism the way it is today. Ironically, it falls into the exact same problems that caused the issue it's pointing out: the blinkered monomania with ideological terms (notice how often "dialectical" is prepended to terms unnecessarily in order to make them more Marxist, to the point where the word almost means nothing?), the claiming of "scientificness" for the ideology despite admitting there's very little empiricism or falsificationism and a very large normative element, and a re-affirmation that any modification or development of core principles or ideas put forward by Marx would be "revisionist" and thus inherently bad and wrong. Thus it is an accurate critique that ultimately demonstrates the problem it's trying to get out of.
2.4.17.8. The Union of Egoists anarchism philosophy
An in-depth description and elaboration of a secondary, but extremely useful, concept from Stirner's work, as above, but this time regarding the idea of the "union of egoists." This essay deeply shapes my ideas about how social life and organization should work, or at least how I wish they could.
2.4.17.9. The Unique and Its Property philosophy
The best translation of Stirner's work to date. I wouldn't necessarily say this book has shaped my ideas so much as provided an interesting exploration of ideas I had already come to by myself (moral nihilism, perspectivism, anti-dogma in all forms), as well as a more consistent and very challenging working-out of them – although I find his arguments, when viewed analytically, somewhat wanting; hence the essays I intend to write arguing for his positions in a more analytic-philosophy way.
2.4.17.10. Toward the queerest insurrection anarchism queer
A passionate manifesto – one I have only felt more keenly as time has gone on – speaking out against LGBTQ assimilationism and the urge to become legible, and speaking to all the ways in which the Queer is inherently at odds with the system we live under. It has deeply influenced how I see the goals and approaches of queer life and organizing.
2.4.17.11. Post-Left Anarchy anarchism philosophy post_left
Leftism has a continual history of failure – failure to achieve anything, or failure to achieve anything good – and recuperation by capital. Anarchism must reject leftism, even though it was born alongside leftism and shares many of the same ends, and embrace a uniquely anarchist critique of everything leftism stands for: organization, ideology, and morality among them.
2.4.17.12. Smashing the Orderly Party anarchism
A pretty good summary of all of the core problems that anarchists have with Leninism as an ideology:
- A tendency toward cult of personality and denial of Lenin's historical mistakes
- Vanguardism as an authoritarian, substitutionist, secretive system.
- Marxist and Leninist critiques of the state are never about fundamental problems with it as an institution – with police, courts, prisons, and the military – but only about who is wielding the power (good if it's them, bad if it's anyone else!)
- Socialism is just a form of bureaucratic state capitalism and reformism (see State Socialism and Anarchism by Benjamin Tucker)
2.4.17.13. Software and Anarchy anarchism programming software
A treatment of various issues in software design and production (including programming tools) from the perspective of an egoist anarchist influenced by Bookchin's ideas around social ecology and liberatory technology. This is definitely an interesting piece, although there are specific things I disagree with, such as:
- That a static type system is inherently bad or limiting for the programmer. To make this critique, the authors seem to conflate static type systems with static programming languages – but you could easily have a language as dynamic as Common Lisp or Smalltalk with a static type system (see Coalton). They also point at the anti-human consequences of non-gradual type systems without realizing that gradual type systems are possible! If those two points are addressed, then static type systems can be had without being inherently anti-liberatory, and can provide real benefits, helping you remember and deal with details that would otherwise be easy to forget or get wrong – an important thing for someone like me who has a memory-related disability!
- The idea that it's good to use a license for your works that is copyleft but also explicitly denies commercial use to all except worker co-operatives, or peer production where trade is direct sharing instead of monetary compensation. At first I found this idea intriguing: I see licenses not as things whose enforceability we need to worry about – since the copyright system ultimately always serves corporate interests, it doesn't matter whether they could theoretically be enforced – but as statements of intentions and preferences about how your contributions to the Commons should be understood and treated by others, and this seemed like a clear statement of values in support of things I like (peer production and worker ownership). But I quickly grew uncomfortable with it upon further thought. Namely because:
- While licenses are, in my opinion, mostly to be viewed as statements of values and preferences, they also inherently involve the, at least rhetorical, invocation of state violence. Using the state's legal system to undo copyright itself – to return things to the commons where they should be, if they benefited in any way from the commons, as copyleft does – doesn't seem like a problem to me, because it's less a positive application of state violence (rhetorically) than a nullification of it. However, this license seems to want to use state violence to militate against certain forms of organization.
- This is especially bad, in my opinion, because I don't think peer production is inherently better than trade for monetary compensation, and I don't think all worker cooperatives are good, or all forms of corporations or wage labor bad. A worker cooperative is more likely to be less exploitative than wage labor, but not necessarily: there are situations in which I can imagine wage labor being just fine and a small business not being exploitative, and situations in which I can imagine a worker cooperative being a cesspit of backstabbing, infighting, power politics, exploitation of people with less voice in the process, and externalities dumped on the community. So I don't see why I should make such an absolute statement.
However overall I agree with and am interested in the conclusions of this paper!
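The gradual-typing point above can be made concrete. The essay under discussion doesn't use Python, but Python's optional type annotations are one existing example of gradual typing: a checker such as mypy verifies only the parts you choose to annotate, while unannotated code stays fully dynamic. A minimal sketch (the function names are my own illustrations, not from the essay):

```python
# Gradual typing in Python: annotations are optional, so static checking
# is opt-in per function rather than imposed on the whole program.

def total_price(prices: list[float], tax_rate: float) -> float:
    # Annotated: a checker like mypy can catch a call such as
    # total_price("oops", 2) before the program ever runs.
    return sum(prices) * (1 + tax_rate)

def describe(thing):
    # Unannotated: stays fully dynamic; the checker leaves it alone,
    # and it happily accepts any value at all.
    return f"{thing!r} ({type(thing).__name__})"

print(total_price([1.0, 2.5], 0.1))
print(describe(42))
print(describe([1, 2, 3]))
```

The design point is that the type system here acts as an optional memory aid rather than a gatekeeper: you annotate the details you want help tracking, and nothing forbids the Lisp- or Smalltalk-style dynamism elsewhere.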
2.4.17.14. Some Thoughts on the Creative Nothing philosophy
The creative nothing is one of the most interesting, and overtly Taoist, ideas that Stirner has – but it is only explicitly discussed in two places in The Unique and Its Property. More can be inferred about it from applying the rest of Stirner's philosophical attitude to the idea, however, and this article takes up that worthwhile endeavor well, as a useful elaboration on that crucial idea.
2.4.17.15. Stirner's Critics philosophy
Stirner (in the third person) responds to some criticisms of his philosophical magnum opus. Provides a clearer, shorter, and more accessible elucidation of some of his ideas, and responds preemptively to some critiques one might have. This was my introduction to him, and could well be yours as well!
2.4.17.16. Critique of "Left-Wing" Culture
- Transmisogyny transmisogyny
Another horrifically poignant and accurate personal account, from a trans woman on Tumblr, of what it is like to exist in "queer feminist" spaces as someone whose femininity – and thus right to belong, and moral goodness – is constantly fragile and in question.
- I Am A Transwoman. I Am In The Closet. I Am Not Coming Out. transmisogyny
A painfully personal exposition of a life of gender dysphoria as a closeted trans woman, using that position as a window into how truly horrible and damaging the misandry of cis (and some transfem) feminists is – not just to trans women, but to cis men too – and how it forms a convenient echo chamber, through thought-terminating clichés and the dismissal of messages based on who says them, insulating feminists from possible criticism and from the impacts of their words.
- Hot Allostatic Load transmisogyny
A trans woman's experiences being abused at the hands of queer feminist spaces – transmisogyny, the inherent evil of callout culture and whisper networks, and so on. Just go fucking read it, it's an incredibly powerfully and poignantly written essay that, if you're on the left in the US, will probably deeply challenge your views.
- How Twitter can ruin a life: Isabel Fall’s complicated story
The story of how Twitter cancel culture essentially killed a trans woman, convincing her that she couldn't truly be a woman, that the newfound identity she'd just been stepping into was invalid and needed to die, and that she should detransition, all because they couldn't handle a provocative title, and didn't read further than that title before banding up into a mob to punish her.
The story repeatedly goes to great lengths to try to say that it somehow wasn't cancel culture, but that's purely an exercise in doublethink. It absolutely is, it bears all the hallmarks of it:
- A preference for paranoid over reparative readings – looking for what's wrong with a work and trying to be angry at that, instead of looking at what's good about a work, possibly even what's good about it that others might appreciate even if you don't.
- Non-marginalized people trying to be social justice warriors on marginalized people's behalf without asking them, for virtue points.
- The dynamics of a mob, where nuance is erased and stopping the momentum is impossible.
- Paranoid assumptions about intention.
- The idea that only marginalized people can possibly write about marginalized experiences
- The idea that if someone writes about a marginalized experience differently than you would have, that means they "aren't really X."
- etc.
Some particularly egregious quotes are:
“There were several reporters that reached out to me right after the story came down. I remember having a conversation with one of them and saying, ‘Is [writing about] this really what you want to do? I’m not going to participate. I think that this is just going to make it worse,’” Clarke says. “And they ran with it. It brought in the whole cancel culture thing. Isabel needed that story down for her, not for them, and not for anybody else. But for her. And that’s why it came down. I tried to make that clear [in the editor’s note on the story’s removal]. But people still wanted that cancel narrative.”
Why did Isabel Fall need the story taken down for her own sake, hmm?
And:
"If anybody canceled Isabel Fall, it was Isabel Fall. She remains the subject of her own sentences."
Who made her "cancel herself" – check into the hospital and take down her story and ultimately kill the new version of herself she was stepping into? This is almost victim blaming to protect cancel culture.
- Shaming Isn’t Shielding: The Moral Panics That Cry Wolf praxis
Despite being framed as specific to the furry community – and one particular kind of social harassment – this is actually a really good general guide, if interpreted more broadly, for spotting many (although not all) kinds of social harassment that aims to take advantage of our desire to be "good," "moral," "protect people," "weed out creeps," and other such instincts. The crux of it is this:
As I mentioned previously, this “Google Doc expose” pattern has been employed in the past by actual victims desperate for their community to stop supporting their abuser.
Sometimes, when you see a Google Doc floating around accusing a furry of being abusive, that’s what you’re seeing.
Other times, you’re being handed a specially crafted piece of rhetoric that cherry-picks and lies by omission to make innocent people seem guilty of being terrible.
Here are a few things to watch out for to distinguish legitimate grievances from targeted harassment.
- Scope creep.
- Is the doc focused on a specific, focused group of people, or does it read like a hit piece on as many popular furries as possible?
- Actions, not affiliations.
- A legitimate call-out will focus on what harmful actions a person did, their victims’ stories, and the harms caused as a result of said actions.
- Harassment docs focus on weak ties (“they were in the same Telegram group as ____”) or a person’s interests.
- Past remediation efforts.
- Did the person in question immediately jump to the Google Doc outcome without trying anything else first?
- What are the incentives involved?
- Speaking out about a horrible experience is terrifying, especially if it was perpetrated by someone with a lot of wealth or social capital in your community.
- Conversely, trying to guilt the offending party or their audience into donating money to buy silence is simply blackmail.
- Is this part of a larger pattern?
- Is this the only callout a person has ever made?
- Is this the tenth callout this person has made this year?
- What other metadata can you use to judge the validity of the information you’re presented with?
This short list of things to look out for isn’t foolproof, but it should help most people reduce their error rate for assessing a callout on social media.
- Fandom, purity culture, and the rise of the anti-fan
A story of how purity culture took over online spaces.
- Post-Left vs “Woke” Left anarchism post_left
The woke left fears the individualist/egoist post-left, because it offers something so much stronger, more vital, more free, than it ever could. This is a takedown of many of the ways in which it tries to smear egoists to protect its precious "correct" church of anarchism or leftism.
- The “Stirner Wasn’t A Capitalist You Fucking Idiot” Cheat Sheet philosophy anarchism
Stirner is often accused of being somehow antithetical in principles, values, and tendencies to anarchism. This is bullshit. This will show you why.
- Vampire Castle post_left
This essay has aged poorly in one, narrow sense — in that the particular figure Fisher chose to defend, Russell Brand, has shown himself to be an asshole in a multilayered, ongoing way since the time of writing. But Fisher was not Brand, and for every person like Brand whom the left accurately identifies as a problem, there are an equal number whom they slander for no reason; and that isn't even what's important about this essay. Many of the core ideas behind this essay are evergreen, and truer now than when it was written.
Insights:
- "The petit bourgeoisie which dominates the academy and the culture industry has all kinds of subtle deflections and pre-emptions which prevent [class] even coming up, and then, if it does come up, they make one think it is a terrible impertinence…"
- "[T]he features of the discourses and the desires which have led us to this grim and demoralising pass, where class has disappeared, but moralism is everywhere, where solidarity is impossible, but guilt and fear are omnipresent" are "we have allowed bourgeois modes of subjectivity to contaminate our movement". Namely:
- The Vampire Castle:
- "The Vampires’ Castle specialises in propagating guilt. It is driven by a priest’s desire to excommunicate and condemn, an academic-pedant’s desire to be the first to be seen to spot a mistake, and a hipster’s desire to be one of the in-crowd."
- "The danger in attacking the Vampires’ Castle is that it can look as if – and it will do everything it can to reinforce this thought – that one is also attacking the struggles against racism, sexism, heterosexism. But, far from being the only legitimate expression of such struggles, the Vampires’ Castle is best understood as a bourgeois-liberal perversion and appropriation of the energy of these movements"
- "rather than seeking a world in which everyone achieves freedom from identitarian classification, the Vampires’ Castle seeks to corral people back into identi-camps, where they are forever defined in the terms set by dominant power, crippled by self-consciousness and isolated by a logic of solipsism which insists that we cannot understand one another unless we belong to the same identity group."
- "I’ve noticed a fascinating magical inversion projection-disavowal mechanism whereby the sheer mention of class is now automatically treated as if that means one is trying to downgrade the importance of race and gender. In fact, the exact opposite is the case, as the Vampires’ Castle uses an ultimately liberal understanding of race and gender to obfuscate class. […] it [is] noticeable that the discussion of class privilege [is] entirely absent."
- "The problem that the Vampires’ Castle was set up to solve is this: how do you hold immense wealth and power while also appearing as a victim, marginal and oppositional? The solution was already there – in the Christian Church. So the VC has recourse to all the infernal strategies, dark pathologies and psychological torture instruments Christianity invented, and which Nietzsche described in The Genealogy of Morals. This priesthood of bad conscience, this nest of pious guilt-mongers, is exactly what Nietzsche predicted when he said that something worse than Christianity was already on the way. Now, here it is …"
- "The Vampires’ Castle feeds on the energy and anxieties and vulnerabilities of young students, but most of all it lives by converting the suffering of particular groups – the more ‘marginal’ the better – into academic capital. The most lauded figures in the Vampires’ Castle are those who have spotted a new market in suffering […]"
- "The first law of the Vampires’ Castle is: individualise and privatise everything. While in theory it claims to be in favour of structural critique, in practice it never focuses on anything except individual behaviour."
- "Because they are petit-bourgeois to the core, the members of the Vampires’ Castle are intensely competitive, but this is repressed in the passive aggressive manner typical of the bourgeoisie. What holds them together is not solidarity, but mutual fear – the fear that they will be the next one to be outed, exposed, condemned."
- "The third law of the Vampires’ Castle is: propagate as much guilt as you can. The more guilt the better. People must feel bad: it is a sign that they understand the gravity of things. It’s OK to be class-privileged if you feel guilty about privilege and make others in a subordinate class position to you feel guilty too. You do some good works for the poor, too, right?"
- "The fourth law of the Vampires’ Castle is: essentialize. While fluidity of identity, pluraity and multiplicity are always claimed on behalf of the VC members […] the enemy is always to be essentialized. Since the desires animating the VC are in large part priests’ desires to excommunicate and condemn, there has to be a strong distinction between Good and Evil, with the latter essentialized. Notice the tactics. X has made a remark/ has behaved in a particular way – these remarks/ this behaviour might be construed as transphobic/ sexist etc. So far, OK. But it’s the next move which is the kicker. X then becomes defined as a transphobe/ sexist etc. Their whole identity becomes defined by one ill-judged remark or behavioural slip."
- "The fifth law of the Vampires’ Castle: think like a liberal (because you are one). The VC’s work of constantly stoking up reactive outrage consists of endlessly pointing out the screamingly obvious: capital behaves like capital (it’s not very nice!), repressive state apparatuses are repressive. We must protest!"
- Neo-Anarchism (this is the one I most fall afoul of: although I'm not technically bourgeois or educated, that is the culture I was raised in/am closest to; and although I think I have good reasons for my lack of action — namely, being very disabled — that doesn't change the effects): "By neo-anarchists I definitely do not mean anarchists or syndicalists involved in actual workplace organisation, such as the Solidarity Federation. I mean, rather, those who identify as anarchists but whose involvement in politics extends little beyond student protests and occupations, and commenting on Twitter. Like the denizens of the Vampires’ Castle, neo-anarchists usually come from a petit-bourgeois background, if not from somewhere even more class-privileged […] They are also overwhelmingly young: in their twenties or at most their early thirties, and what informs the neo-anarchist position is a narrow historical horizon."
- "Neo-anarchists have experienced nothing but capitalist realism. […] But the problem with neo-anarchism is that it unthinkingly reflects this historical moment rather than offering any escape from it. It forgets, or perhaps is genuinely unaware of, the Labour Party’s role in nationalising major industries and utilities or founding the National Health Service. Neo-anarchists will assert that ‘parliamentary politics never changed anything’, or the ‘Labour Party was always useless’ while attending protests about the NHS, or retweeting complaints about the dismantling of what remains of the welfare state. media to attempt to engineer change from there. […] Purism shades into fatalism; better not to be in any way tainted by the corruption of the mainstream, better to uselessly ‘resist’ than to risk getting your hands dirty."
- "It’s not surprising, then, that so many neo-anarchists come across as depressed. This depression is no doubt reinforced by the anxieties of postgraduate life, since, like the Vampires’ Castle, neo-anarchism has its natural home in universities, and is usually propagated by those studying for postgraduate qualifications, or those who have recently graduated from such study."
- "Why have these two configurations come to the fore?"
- "they have been allowed to prosper by capital because they serve its interests. […] why would capital be concerned about a ‘left’ that replaces class politics with a moralising individualism, and that, far from building solidarity, spreads fear and insecurity?"
- "It might have been possible to ignore the Vampires’ Castle and the neo-anarchists if it weren’t for capitalist cyberspace. The VC’s pious moralising has been a feature of a certain ‘left’ for many years – but, if one wasn’t a member of this particular church, its sermons could be avoided. Social media means that this is no longer the case, and there is little protection from the psychic pathologies propagated by these discourses."
- "The bourgeois-identitarian left knows how to propagate guilt and conduct a witch hunt, but it doesn’t know how to make converts. But that, after all, is not the point. The aim is not to popularise a leftist position, or to win people over to it, but to remain in a position of elite superiority, but now with class superiority redoubled by moral superiority too. ‘How dare you talk – it’s we who speak for those who suffer!’"
- "What is to be done?"
- "So what can we do now? First of all, it is imperative to reject identitarianism, and to recognise that there are no identities, only desires, interests and identifications. […] Sadly, the right act on this insight more effectively than the left does."
- "But the rejection of identitarianism can only be achieved by the re-assertion of class. A left that does not have class at its core can only be a liberal pressure group. Class consciousness is always double: it involves a simultaneous knowledge of the way in which class frames and shapes all experience, and a knowledge of the particular position that we occupy in the class structure."
- "It must be remembered that the aim of our struggle is not recognition by the bourgeoisie, nor even the destruction of the bourgeoisie itself. It is the class structure – a structure that wounds everyone, even those who materially profit from it – that must be destroyed."
- "The interests of the working class are the interests of all; the interests of the bourgeoisie are the interests of capital, which are the interests of no-one. Our struggle must be towards the construction of a new and surprising world, not the preservation of identities shaped and distorted by capital."
2.4.18. Market Anarchism
2.4.18.1. Markets Not Capitalism anarchism
This is a wide, varied, interesting, intellectually stimulating, and exciting collection of essays from all throughout history discussing various aspects of left-wing market anarchism, the school of anarchism I adhere to. It's really interesting how my reading of accelerationism aligns with what's said in this book, just framed in a slightly different way. In fact, one of the essays I mirrored regarding unconditional accelerationism actually cites one of the key modern LWMA luminaries, Kevin Carson!
This series of essays deals with delineating the market from capitalism, showing how — as accelerationists might phrase it — the market's deterritorializing and decoding forces of creative destruction would inevitably tear down all the rigid, centralizing, hierarchical edifices of capital were it not for capitalism's own reterritorializing and recentralizing forces. But while accelerationists usually remain in the abstract realm of cybernetics, systems theory, and Marxist economic analysis, as well as theory-fiction — all of which is useful in the task of creating new subjectivities prepared for postcapitalism — Markets Not Capitalism rolls its sleeves up, shoves its arms up to the elbows in the machinery of Capital, and shows us how it works, through Tucker's three monopolies among other things.
The essays don't stop there, though — they outline various aspects of how a pure freed market, these deterritorializing forces unleashed, might achieve the specific ends socialists set out to achieve, how they might open up new avenues, through the tearing down of hierarchies of power and extractivism, for what we now call the poor to thrive, for the environment to be protected, and for us to provide ourselves and each other good healthcare — all the concerns that might be brought up in opposition to markets.
It also contains a rousing defense of these deterritorializing forces, one which made me, at least, only want to lean into them (this was the first work of anarchist theory I read, although I read only some of the essays; most of those are mirrored directly here as well, but I still made the effort to convert MNC to HTML because I plan to go back for more).
Table of contents:
MARKETS NOT CAPITALISM
ACKNOWLEDGEMENTS
Introduction
THE MARKET FORM
THE MARKET ANARCHIST TRADITION
THE NATURAL HABITAT OF THE MARKET ANARCHIST
PART ONE: The Problem of Deformed Markets
THE FREED MARKET
STATE SOCIALISM AND ANARCHISM: How Far They Agree, and Wherein They Differ
- General Idea of the Revolution in the Nineteenth Century
MARKETS FREED FROM CAPITALISM
PART TWO Identities and Isms
- Market Anarchism as Stigmergic Socialism
- Armies that Overlap
THE INDIVIDUALIST AND THE COMMUNIST A Dialogue
- A Glance at Communism
- Advocates of Freed Markets Should Oppose Capitalism
- Anarchism without Hyphens
- What Laissez Faire?
- Libertarianism through Thick and Thin
SOCIALISM: WHAT IT IS
- Socialist Ends, Market Means
PART THREE Ownership
- A Plea for Public Property
- From Whence Do Property Titles Arise?
- The Gift Economy of Property
FAIRNESS AND POSSESSION
- The Libertarian Case against Intellectual Property Rights
PART FOUR Corporate Power and Labor Solidarity
- Corporations versus the Market, or Whip Conflation Now
- Does Competition Mean War?
- Economic Calculation in the Corporate Commonwealth
- Big Business and the Rise of American Statism
- Regulation: The Cause, Not the Cure, of the Financial Crisis
- Industrial Economics
- Labor Struggle in a Free Market
- Should Labor Be Paid or Not?
PART FIVE Neoliberalism, Privatization, and Redistribution
- Free Market Reforms and the Reduction of Statism
FREE TRADE IS FAIR TRADE
- Two Words on "Privatization"
- Where Are the Specifics?
- Confiscation and the Homestead Principle
PART SIX Inequality and Social Safety Nets
LET THE FREE MARKET EAT THE RICH! Economic Entropy as Revolutionary Redistribution
- Individualism and Inequality
- How Government Solved the Health Care Crisis
- The Poverty of the Welfare State
PART SEVEN Barriers to Entry and Fixed Costs of Living
- How "Intellectual Property" Impedes Competition
- The American Land Question
ENGLISH ENCLOSURES AND SOVIET COLLECTIVIZATION Two Instances of an Anti-Peasant Mode of Development
- Health Care and Radical Monopoly
PART EIGHT Freed-Market Regulation: Social Activism and Spontaneous Order
REGULATION RED HERRING Why There's No Such Thing as an Unregulated Market
- We Are Market Forces
- Platonic Productivity
- Libertarianism and Anti-Racism
- Aggression and the Environment
- The Clean Water Act versus Clean Water
- Context-Keeping and Community Organizing
2.4.18.2. TODO Anarchists Against Democracy In Their Own Words anarchism philosophy
Some modern anarchists seem to worship democracy. Most liberals do. This is a collection of criticisms of it from most major anarchists.
2.4.18.3. Anarchy without Hyphens (1980) anarchism philosophy
An eloquent, concise statement of the idea that anarchism means a rejection of any imposed authority, and nothing more. It is "the hammer that smashes the chains" but it does not dictate what we should do after, in the space created by that smashing, except that it should not impose authority. Everything else is up to us. And any ideology that dictates one specific way of organizing "after anarchy has come" (whatever that means) is simply not really anarchism, but something else, like communism.
2.4.18.4. Anarchy in the U.K. anarchism history
A great little article walking through the ways people in England have historically protected themselves and each other without the state. While none of these arrangements is close to perfect, they show that there are alternatives to the state that have worked in the past.
2.4.18.5. Anatomy of the State anarchism philosophy
Despite my many disagreements with Rothbard, my revulsion with his paleoconservative turn, and my even greater disagreements with his intellectual heirs such as Hoppe, some of his writings are quite good. This is one of them. There are, of course, areas where I'd refocus his analysis, listed below, but in general it's very good, and I recommend reading it, just taken with a good-size grain of salt.
Some areas where I'd change what he said if I was writing it include (but aren't limited to):
- his belief that the greatest evil and aggression of the state is the invasion of private property is laughable,
- his confusion of capitalism with simply free(d) market cooperation such that he thinks states are "anti-capitalist" is fucking hilarious,
- he cites John C. Calhoun pointing out that a constitution and bill of rights mean nothing if those governed by a state so "bound" cannot directly enforce those rights against that state somehow, or nullify violating state laws, and if the state is "a judge in its own case" regarding whether it's adhering to them. He then points out quite astutely that Calhoun's solution to this only protects individuals from federal, but not state, encroachment – yet he doesn't bother to ask why that is: it's very convenient for a racist slaver "philosopher" to suggest a system where the federal government's laws can be blocked, but the states can do whatever they like.
2.4.18.6. Confiscation and the Homestead Principle (1969) anarchism philosophy economics
One of the few essays by Rothbard that, even after my move away from anarcho-capitalism, I still really like, probably because it's from the time when he was courting the New Left instead of trying to create a paleoconservative form of "libertarianism," and so its ideals and projects are more compatible with mutualist ideas. In this case, he advocates giving public property, and any property gained through state violence, expropriation, or funding, to the people who actually occupy, use, and maintain it!
2.4.18.7. Corporations versus the Market; or, Whip Conflation Now anarchism economics
In this essay Roderick Long argues, successfully in my opinion, that corporate power is antithetical to a genuinely freed market: corporations naturally fear the constant competition and shifting market demands that would break their power, and so need state power to protect their position; and corporate power has in fact, historically, been bought and protected (from competition, but also from externalities, diseconomies of scale, etc.) by substantial explicit, incidental, and implicit government subsidies to big business. Thus, he argues, we should stop conflating advocacy of a free(d) market with advocacy of corporate capitalism.
2.4.18.8. Economic Calculation in the Corporate Commonwealth, Hierarchy or the Market, and Contract Feudalism philosophy anarchism
These essays show clearly and concretely why, if the Austrians were consistent in their critiques of the information, incentive, calculation, and ethical problems of centralization, authoritarianism, and surveillance, they would also have to be anticapitalist. These critiques are unique in providing strong efficiency arguments against capitalism that don't rely on ideas about "rational planning."
2.4.18.9. From Whence Do Property Titles Arise? anarchism philosophy
An account of how mutualist property titles might arise from anarchist conditions and concerns, as a defense of market anarchism from communist anarchists.
2.4.18.10. In Defense of Public Space anarchism philosophy economics
A defense of public commons against propertarian arguments against them (based on the tragedy of the commons), and a gesture at why public commons are a very necessary corrective to the ways in which everything being private space could curtail autonomy. Useful as a critique of Hoppeans.
2.4.18.11. Instead of a Book, by a Man Too Busy to Write One: A Fragmentary Exposition of Philosophical Anarchism anarchism
Benjamin Tucker was one of the very first and most influential individualist anarchists, and many of his essays – which this book collects – are excellent and well worth reading, as well as being immensely influential on me. Obviously he has his rough spots (I haven't read most of Instead of a Book yet), and I'm sure personally he probably had many reprehensible beliefs as a man of his time, but the man is not what's important to me.
My favorite essays in here are:
- State Socialism and Anarchism
- Socialism: What It Is
- The Relation of the State to the Individual
- Armies That Overlap
2.4.18.12. Labour Struggle in a Free Market anarchism economics
An explanation of how, in a free(d) market world, workers are powerful and have many options for collectively fighting back against exploitation, but these days they have been castrated by government regulation much more than they have been superficially "helped." Useful for those worried about workers' rights in a market anarchist society.
2.4.18.13. Nice Shit for Everybody anarchism philosophy
So many anarchists seem to want an end-state world much like Anarres from The Dispossessed – living gray, ascetic lives with the bare minimum of necessities meted out to us by centrally planned and centrally controlled storehouses. They seem to want to reach equality by making nearly everyone worse off, instead of trying to lift everyone up as high as possible – and yes, that will probably require the relinquishing of many luxuries on the part of first world inhabitants, especially middle class suburbanites and above, but the goal isn't that relinquishment, the goal is the enrichment of as many as possible as much as possible. This very short essay is a statement of rejection of that ideology.
2.4.18.14. TODO Property is Theft! A Pierre-Joseph Proudhon Anthology philosophy anarchism economics
Proudhon is a massively underrated, deeply interesting thinker elucidating a form of anarchy that has been lost, but that I think is worth exploring, since it comes from outside the pure social anarchy and Marxist anarchy traditions found today. This reader compiles the most important parts of his otherwise often long and difficult works, including some brand new translations. I haven't read much of his work yet, but I really intend to go through this.
2.4.18.15. Revealed Preference: A Parable anarchism economics
An excellent and enlightening exposition of why markets and market pricing are extremely helpful in resolving resource allocation problems – even when you care about equity! – in comparison even to just face to face communication.
2.4.18.16. Scratching By: How Government Creates Poverty as We Know It anarchism economics
An exposition of how the government takes away options that poor people would have otherwise had for supporting themselves and caring for each other, or criminalizes them, making poverty much more terrifying and dangerous and difficult than it would otherwise have to be. None of these options the government takes away are ideal, of course – some are dirty, or fire-prone, and so on, which is the guise under which the government steals them away – but they are better than jail, constant repossession of your belongings and destruction of your shelter, starvation, beatings, or death, and often better than helpless reliance on a centralized bureaucratic state organization that doesn't have to care about you and uses the money of other people who themselves don't personally care about you (and is thus incentivized to skimp on helping you).
2.4.18.17. TODO Seeing Like A State: How Certain Schemes to Improve the Human Condition Have Failed anarchism economics philosophy
This is another one I haven't read, but its ideas about the impossibility of top-down planning and modernist totalizing systems have already percolated deeply into my head from other things I've read. I look forward to reading it.
2.4.18.18. The Gift Economy of Property anarchism philosophy
An interesting intellectual play, coming at property rights from another angle.
2.4.18.19. The Modern Business Corporation versus the Free Market? anarchism economics
Although I'm not a fan of natural law arguments, this is an interesting article making the case that the corporate form is ultimately a child of the state, and I think the argument might possibly offer components that can be applied outside a natural law framework, by just thinking about how we might constitute economic and social norms that make the corporate form impossible in the ways he proposes natural laws do.
2.4.18.20. The Network: A Parody of the Discourse anarchism economics parody
This is a short parody of anti-market anarchists which operates by replacing "market" (a network of trade) with just the abstract concept of networks of individual actors, to show how absurd their arguments truly are.
2.4.18.21. The Question of Copyright philosophy anarchism
This is a great takedown of the idea of copyright from an anarchist egoist perspective. Written in a slightly oldtimey manner and not dealing with any "new developments," but it's funny as hell and the arguments are still sound all these years later.
2.4.18.22. The Right to Self-Treatment anarchism economics
A vision for how medical care could be made radically more decentralized and accessible, without falling into the trap of single payer healthcare:
Finally: I object strenuously to those who see a single-payer system, or a government-controlled delivery system like the UK’s National Health, as the solution. I’d like to give those who talk about healthcare being a “right” the benefit of the doubt, and assume they just don’t understand the implications of what they’re saying. But when you talk about education, healthcare, or anything else being a “right,” what that means in practice is that you get it in the (rationed) amount and form the State wants you to have, and buying it in the form you want becomes much more difficult (if not criminalized). It means the providers of the service will be cartelized, and that the provision of the service will be regulated according to the professional culture and institutional mindset of the cartels. As with “public” education, “public” healthcare means that the existing “professional” institutional culture is locked into place, but that you get their services at taxpayer expense.
Making something a “right” that requires labor to produce also carries another implication: slavery. You can’t have a “right” to any good or service unless somebody else has a corresponding obligation to provide it. And if you’re obligated to provide a good or service at a cost determined by somebody else, you’re a slave. Nobody is born with a “right” to somebody else’s labor-product: as Lilburne said, nobody is born with a saddle on his back, and nobody is born booted and spurred to ride him.
2.4.18.23. The Use of Knowledge in Society anarchism economics
Although Hayek's knowledge problem argument is often put to use by conservatives and libertarians, that doesn't mean it isn't correct – and in more consistent hands, it militates strongly against any form of hierarchy or centralization at all, not just the much-feared socialist bureaucracy.
2.4.18.24. Why Market Exchange Doesn’t Have to Lead to Capitalism anarchism economics
This is a good brief, high level explanation by Kevin Carson as to why a freed market, absent the various monopolies that the state enforces on behalf of capitalists, would not necessarily result in the same distortions, inequalities, and exploitations we find under capitalism. To those, I'd add a few of my own:
- There is no such thing as a free market simpliciter; the nature of the market is always defined by particular property (and other) norms, and mutualist property norms of occupancy and use for personal possessions, and usufruct for things like land, substantially change the possibilities and incentives of a market. For instance:
- It would be impossible to have large economic organizations with direct control over or ownership of branches in different locations, because, if occupancy and use constitute ownership, the local people working at a branch would automatically be considered to be the ones that actually own any capital there, and since intellectual property can't be occupied, it couldn't be used to still keep such local branches in line. This means you wouldn't get big corporations (or even co-ops) steamrolling local businesses, because if they tried that, whatever local branch they created would be automatically converted into a local business! The only way to have large organizations like that with multiple locations would be to have fundamentally independent entities coordinate via federation.
- While global trade would still be possible (and necessary), it would be impossible for any kind of economic organization to fire all its local workers and ship all of its jobs overseas while keeping its management structure well paid and intact, not only because there would be no third world to exploit, but also because the workers would be the management structure, and they would be unlikely to push for the obliteration of their own livelihoods.
- Extraction of natural resources bypassing local communities would be impossible.
- In a capitalist society, you can turn simply owning some piece of wealth into a way to generate more wealth without doing any more work, through rent-seeking behavior like paying wage laborers to work it, renting it out, loaning it, etc. This means that all wealth that you own can generate more wealth for you at some conversion ratio, and then that wealth can generate more wealth, and so you can reach geometric wealth growth. Whereas, in a mutualist economy where you can only own what you personally occupy and use, that strategy is not open to you. The only way that gaining more wealth can make more wealth for you is if it makes your own, personal, labor-hours more productive – and there are significant diminishing returns there, as well as significant limits, since there are only so many things you can regularly use or occupy, and only so many hours a day you can work. Thus, what was once geometric is converted to logarithmic. There will be inequalities, yes, but since they won't come as a result of exploitation, there's no reason to worry about that, and they'll never be able to become so great as to threaten to create a significant power structure or centralization of wealth.
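The geometric-versus-logarithmic contrast can be made concrete with a toy simulation. All the numbers and the exact diminishing-returns curve here are illustrative assumptions of mine, not empirical claims or anything from the essay:

```python
import math

def capitalist_growth(wealth, rate=0.05, years=50):
    """Every unit of wealth can itself be put to work earning more
    (rent, interest, wage labor), so growth compounds geometrically."""
    history = [wealth]
    for _ in range(years):
        wealth *= 1 + rate            # returns scale with total wealth
        history.append(wealth)
    return history

def mutualist_growth(wealth, base_income=1.0, years=50):
    """Wealth only helps insofar as it makes your own labor-hours more
    productive, with sharp diminishing returns (modeled here as a log)."""
    history = [wealth]
    for _ in range(years):
        wealth += base_income * (1 + 0.1 * math.log1p(wealth))
        history.append(wealth)
    return history

cap = capitalist_growth(10.0)
mut = mutualist_growth(10.0)
print(f"after 50 years: capitalist {cap[-1]:.1f}, mutualist {mut[-1]:.1f}")
```

The point of the sketch is only the shape of the two curves: compounding returns on ownership versus bounded returns on your own labor.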
2.5. Software Development
2.5.1. A Case for Feminism in Programming Language Design programming philosophy
I had to painstakingly convert this from a two-column PDF so that I can more easily read it in more places, instead of just on my ereader or at my desk. I hope anyone who finds this appreciates the effort, it took fucking forever.
In any case, despite what might for some be an inflammatory title – even as a leftist and a woman I recoiled a bit, wondering how feminism could possibly be applied to programming language design – this is a really excellent paper. It goes into detail about all the shortcomings of the PLT community with respect to studying the practical and human aspects of programming language design, and the cultural factors that exclude those who want to do it, as well as just exclude people in general. Well worth a read.
2.5.2. A Road to Common Lisp programming philosophy
This is not just an incredibly complete and excellent set of resources to get started with Common Lisp, literally enough to get anyone completely off the ground, but it's also a really good articulation of some of the reasons that Common Lisp is attractive even today (although I have my own list).
2.5.3. TODO Common Lisp: the Language, 2nd Edition (plus a guide on how to modify it to be up to date with ANSI) programming
Although it isn't quite accurate to ANSI Common Lisp, I still find CLTL2 an invaluable reference for Common Lisp. It's much more comprehensible than the HyperSpec, while still being much more complete than any other book thanks to its position as an interim spec of sorts. Maybe someday I'll get around to editing my mirror to be up to date for ANSI.
2.5.4. Complexity Has to Live Somewhere programming
A more thorough elucidation of something I've said a lot before about how the mindless dogmatic pursuit of simplicity in the places most visible to programmers just pushes the necessary complexity needed to actually map to the world or requirements somewhere else, usually into less visible places like people's heads or glue scripts. This is very important to understand: trying to artificially "cut the gordian knot" of complexity is a bad move, it's just putting the burden somewhere else, possibly somewhere worse. Correctness is actually more important than simplicity. A good comeback to the "worse is better" and "unix philosophy" mindsets.
2.5.5. TODO Design By Contract: A Missing Link In The Quest For Quality Software programming
Design by Contract seems like the perfect balance between high-assurance formal methods and comprehensibility, expressiveness-to-complexity ratio, and practicality, for reasons I've discussed elsewhere. I haven't read this paper yet but I'm very interested in doing so.
2.5.6. Effective Programs programming philosophy
I'm wary of Rich Hickey, since he really does seem to have built a cult of personality around himself in the Clojure community, but he is extremely smart, incredibly articulate, and very pragmatic and wise, and this is one of my favorite talks by him. I love his combination of being an intelligent, careful, Right Thing thinker, but also deeply pragmatic – aware of the need of programs to change over time, to be embedded into, and composed out of, heterogeneous and ever-changing systems, to be dynamic, and of how programming ideas affect those things.
2.5.7. EQUAL programming history
This essay by Kent Pitman defending design choices in ANSI Common Lisp with respect to equality operators and copying functions might seem of only historical interest – and it certainly is that, too – but it actually puts up a pretty good defense, in my opinion, of why those design choices were The Right Thing, and even hints that other languages should do something similar. It also gets at a very important point: that types are not actually very indicative of intent, just some sort of general operational compatibility, without the introduction of copious newtypes at least, which has its own costs.
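Pitman's point that a value's type underdetermines the intended notion of "sameness" has an analogue even outside Lisp. This Python sketch (the analogy is mine, not from the essay) shows how one type of value supports several distinct, all-legitimate comparisons and copies, roughly paralleling EQ vs EQUAL and shallow vs deep copying:

```python
import copy

a = [1, [2, 3]]
b = a                   # the very same object
c = [1, [2, 3]]         # structurally equal, but a different object

print(a is b)           # identity, like Lisp's EQ        -> True
print(a is c)           # different objects               -> False
print(a == c)           # structural equality, like EQUAL -> True

shallow = copy.copy(a)      # one-level copy: inner list is shared
deep = copy.deepcopy(a)     # full-tree copy: inner list duplicated
print(shallow[1] is a[1])   # -> True
print(deep[1] is a[1])      # -> False
```

Which of these a caller "means" depends entirely on intent, not on the type `list` – which is exactly why a language has to offer the whole family of operators rather than one blessed EQUAL.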
2.5.8. Ethics for Programmers: Primum non Nocere programming software philosophy anarchism
This essay proposes one core principle that could provide a solid foundation for a code of programmer ethics to grow around:
Programs must do always, and only, what the user asks them to do. Even if the programmers who made it consider that request to be unethical.
To justify the first clause, it outlines all the deep ways that people in the modern day trust and depend on our computers:
- We are dependent on them, because we need them for almost everything these days, including things that are required of us by law – which is why having a smartphone is borderline a human right.
- Even for things we don't need them to do, we trust them to act on our behalf, as user agents constantly, for things like communication, purchasing, remembering things for us, and more.
- Throughout all of that, we entrust them with vital, sensitive, personal information about us, our lives, and our loved ones.
This means that in a world – such as the one we have now – where software does not serve users, life with computers would be one of unending paranoia, suspicion, confusion, frustration, betrayal, theft, and extortion for users. A world we do not want.
To justify the second sentence, the essay shows how if software developers attempted to enforce their own ideas about morality on the users of their software, this would only break that trust, because then sometimes the software will betray them, or not do what they want, but what someone else, with a possibly different moral code, would want. He talks about other professions that have codes of conduct where they pledge to serve those who come to them for succor regardless of their personal feelings regarding morality, such as doctors, lawyers, and priests, and why those codes are important.
I'd like to add two more reasons – somewhat hinted at in the essay – that software developers should not attempt to pass judgement as part of their software (as opposed to in their capacity as people) on the ethics of what people do with their software:
- Surveillance: in order to know what your users are doing with your software in a way that's detailed and flexible enough to actually exclude unethical activities, you're probably going to need some kind of telemetry or at least storage of user activity. Moreover, many people believe their ethical obligations, when someone does something wrong, include publicly naming and shaming them, or reporting them to "the authorities," in which case actual surveillance is implied.
- Power: normalizing this idea that software developers should have power over what users can and cannot do with their software gives software developers direct and intimate power to morally police users. I don't believe in moral relativism – I think that individuals should be able to react to other people's perceived immoralities as they see fit – but I am an act consequentialist and this is a level of power that seems wrong to me – like installing surveillance devices and electric collars on anyone so we can watch them for wrongdoing – because it involves more power over someone else than power over your own reaction to someone else: it extends further into their lives and is more invasive. It is also much more centralized than just "social consequences." And if we normalize such power, those with insane codes of ethics, as well as those with decent ones, will use it to enforce their wills.
This is why I'm always somewhat disturbed by "FOSS" licenses that violate freedom 0 of the four software freedoms. Yes, they're usually a certain type of queer leftist I generally agree with on what applications of software are bad, and which are good. But this is not a rule that leads to the greatest individual autonomy compatible with the equal autonomy of all in the long run. It's just posturing.
2.5.9. Execution in the Kingdom of Nouns programming
This is an incredible, classic essay from the most famous ranter of programming rants ever to put finger to key. It is an incredible takedown of Java-style OOP that is both witty and also intellectually sound. And as a corollary I think it functions well as a takedown of any bondage and discipline language.
2.5.10. Functional Programming Doesn’t Work (and what to do about it) programming
I've actually compiled several blog posts on the same subject by James Hague, including the titular one, into one larger narrative, because they're all closely related, significantly enhance the discussion in the others, and really work well as a combined narrative.
What I took away from this article is mostly that there are certain problem domains where pure functional programming actually introduces greater complexity, brittleness, and overhead – enough to outweigh its benefits in explicitness and the more powerful architectures that referential transparency enables – even though pure functional programming is very beneficial in most cases. Therefore, we should carefully and sparingly, on a case-by-case basis, apply non-pure techniques to those problems where they're more helpful than harmful.
My experience in almost everything I've written, though, is that trying to go for even a basic level of purity would lead to an insane architecture. Take, for example, an Entity Component System:
If I want to have a system that gets all of the entities with a given set of components and makes modifications to one of those components based on the information from the other components, I can either:
- Have a query function that goes through the entity system and returns tuples containing the applicable components for all of the entities that have the requested components, alongside the entity ID. This can be reused widely for other queries, and thus can be made very advanced, with negation, logical operators, backtracking, and so on. Then write a simple loop that goes through each of those and takes exclusive write access to the component it wants to modify and directly modifies it in place.
- Map over the list of tuples returned by the query function above, producing a tuple of the entity ID and the modified component, which then has to go into a separate function that iterates over the entire list of entities again in order to produce a new list of entities in which the entries indicated by the change list from the previous map are replaced.
- Iterate through the entire list of entities in the original map and produce the new list as you go – but then you can't reuse the entity-selecting code; you have to do it all manually in the loop. So you end up duplicating code, and it becomes difficult to have systems that operate on more interesting and complex selections that maybe do backtracking and such.
Even assuming a magical "sufficiently smart compiler" that can optimize away the copying implied by the latter two options, only the first option seems like a good option. The second introduces two separate loops to do the same thing, which both doubles how long it takes, and also just makes the code more complex and introduces more duplication. The third option is the worst of all, because while you get rid of the duplication of the loop, you can now no longer use the query system.
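Here's a minimal Python sketch of the first two options. The world layout, function names, and query interface are my own illustration of the pattern, not code from the article:

```python
# Minimal ECS: entities are ids, components live in per-type dicts.
world = {
    "position": {1: [0.0, 0.0], 2: [5.0, 5.0]},
    "velocity": {1: [1.0, 2.0]},             # entity 2 has no velocity
}

def query(world, *component_names):
    """Reusable query: yield (entity_id, components...) for every
    entity that has ALL the requested components."""
    ids = set.intersection(*(set(world[n]) for n in component_names))
    for eid in sorted(ids):
        yield (eid, *(world[n][eid] for n in component_names))

# Option 1: reuse the query, then mutate the chosen component in place.
def movement_system(world, dt):
    for eid, pos, vel in query(world, "position", "velocity"):
        pos[0] += vel[0] * dt        # direct in-place write
        pos[1] += vel[1] * dt

# Option 2 (pure-ish): map the query to a change list, then a second
# pass rebuilds the whole component table with the changes applied.
def movement_system_pure(world, dt):
    changes = {eid: [pos[0] + vel[0] * dt, pos[1] + vel[1] * dt]
               for eid, pos, vel in query(world, "position", "velocity")}
    new_positions = {eid: changes.get(eid, pos)
                     for eid, pos in world["position"].items()}
    return {**world, "position": new_positions}

movement_system(world, dt=1.0)
print(world["position"][1])   # -> [1.0, 2.0]
```

Both reuse the query, but the pure version pays for it with a second full pass over the position table – the duplicated loop the article complains about.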
The other point I took from this article series is that there are certain types of violations of referential transparency that, although they may transitively infect a large portion of your code base, are small and simple enough that there isn't actually a significant cost to them. There is a difference between the net magnitude of a referential transparency violation's effects on your codebase and the mere scope of the things technically affected, independent of how much they're really affected and how much it adds up. I think his example of a random number generator is a really good example of this. Literally almost every programming language except Haskell just lets you directly get random numbers, even the ones that are otherwise very purely functional (Erlang doesn't even let you modify things, ever), and the reason for this is that if you use a random number generator deep in your code, yes, it can sort of contaminate the referential transparency of many other places in your code base and technically make their behavior non-deterministic as well. But since it's controlled by a seed and is a very simple kind of state in itself, that usually isn't actually a problem.
One thing that might help the predictability (for debugging) of functions that violate referential transparency by reading global state (so this only helps with one specific area, but yeah) may be the use of dynamic scope, so that you can treat global state as a variable you can pass in.
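In Python, `contextvars` gives a workable form of dynamic scope, so globally-read state like an RNG can be rebound for a region of the program without threading it through every signature. This is a sketch of the idea (the helper names are mine):

```python
import contextvars
import random

# A dynamically scoped RNG: reads go through the context variable, so
# callers deep in the stack see whatever the nearest scope bound.
_rng = contextvars.ContextVar("rng", default=random.Random())

def roll():
    """Deep in the codebase: impure in general, but its
    nondeterminism is confined to the dynamically scoped RNG."""
    return _rng.get().randint(1, 6)

def with_rng(rng, fn, *args):
    """Run fn with the RNG rebound, restoring the old binding after."""
    token = _rng.set(rng)
    try:
        return fn(*args)
    finally:
        _rng.reset(token)

# For debugging, rebind to a seeded generator and the "global" read
# becomes reproducible:
first = with_rng(random.Random(42), roll)
second = with_rng(random.Random(42), roll)
print(first == second)   # -> True
```

Normal code just calls `roll()`; tests and debugging sessions wrap the call tree in `with_rng` to pin the seed.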
It could be argued that Haskell, too, has an escape hatch, with things like the State monad and IO monad, but there are two problems with this answer:
- This is pulling a kingdom of nouns: you've artificially restricted the set of things your programming language can do directly, so you're using the abstractions it can access to claw those things back. They might be more "first class" in the sense that, since these capabilities are now expressed in terms of other parts of the language, you can talk about them as values in the language – and that's pretty cool! – but they're second-class in reality, because you've got to sit on top of a tower of type abstractions and syntactic sugar to use them, and those abstractions are leaky: if you mess them up, the program falls apart into very different bedrock abstractions that are more difficult to reason about. Yeah yeah, monads aren't that hard once you grok them, "I don't truly understand pure functional programming," whatever – but an imperative program breaking down into a page of type-theoretic abstractions, because that's its bedrock abstraction, is still a qualitatively different (and worse) experience than a simple error from a language that actually knows about imperative code.
- These abstractions don't compose. There's no well-defined, consistent way to compose different monads, which means that it's very difficult to actually use them in more complicated situations where you may want more than one type of effect.
- Worse, because there's no implementation aspect to a lot of basic monads – they're just empty type tags with certain actions written for them – the operations one might want to do are scattered between a lot of different monads, and also occasionally confusingly duplicated between them, because there's nothing to keep them consistent.
- There's also a function-coloring effect to monads.
- When you're operating within the do-notation of a monad, you basically are just using an imperative language, but a particularly awkward, anemic one – because most of Haskell language and library design doesn't go into making the imperative side of it actually good to use – leading to people reinventing C in do-notation.
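The function-coloring point has the same shape as the async/await split in mainstream languages, and Python makes it easy to see (this analogy is mine, not the article's):

```python
import asyncio

def plain_double(x):           # "uncolored" function
    return x * 2

async def async_double(x):     # "colored": usable only via await
    return x * 2

def caller(x):
    y = plain_double(x)        # fine
    # z = async_double(x)      # does NOT run the body: yields a coroutine
    # z = await async_double(x)  # SyntaxError: caller isn't async
    return y

async def async_caller(x):
    # Only another colored function can call into the colored world...
    z = await async_double(x)
    # ...though it can still call uncolored functions freely.
    return z + plain_double(x)

print(caller(3))                        # -> 6
print(asyncio.run(async_caller(3)))     # -> 12
```

Just as with monadic code, the color propagates upward: anything that wants the colored capability must itself become colored, all the way to an entry point (`asyncio.run` here, `main :: IO ()` there).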
2.5.11. TODO Intuition in Software Development philosophy programming
Painstakingly generated from a PDF.
Abstract
A characterization of the pervasiveness of intuition in human conscious life is given, followed by some remarks on successes and failures of intuition. Next the intuitive basis of common notions of scales, logic, correctness, texts, reasoning, and proofs, is described. On this basis the essential notions of data models of human activity and of software development, as built on human intuition, are discussed. This leads to a discussion of software development methods, viewed as means to overcoming the hazards of intuitive actions. It is concluded that programmers’ experience and integrity are more important than their use of methods.
I haven't read this yet, but given my attitudes toward programming as a trade and a craft, I think it'll be really interesting reading.
2.5.12. Leaving Haskell behind? programming
This article, from the perspective of someone who used Haskell for a decade, even in industry, and loves it still even as they choose to set it aside, echoes a lot of my thoughts and feelings toward Haskell as someone who learned it but quickly drifted away because of the problems I saw with it. There's a lot to like about Haskell, a lot about it that is beautiful and powerful, but also severe and endemic problems with the culture surrounding it (namely, its obsession with type-theoretic explorations, which is often found to be impractical in larger-scale projects in the long run, as the article points out) and with its ecosystem.
2.5.13. Literature review on static vs dynamic typing programming
This is a really excellent – thorough, cogent, even-handed – analysis of the state of the scientific research on the benefits and drawbacks of static versus dynamic type systems. It really puts to rest the notion that we have any strong reason to condemn or insult those who prefer one or the other, at least for now. Perhaps in the future, with better studies, the benefits of one or the other may be concretely established, but for now it seems more like personal preference than anything. Personally, I fall on the side of static typing, as it's just really helpful to prevent me up front from making annoying mistakes or forgetting things, but from my experience it really is just that, a nice helper that can make things a bit easier, but nothing game-changing in terms of program correctness. This literature review seems, if anything, based on the effect sizes, to support that notion, and maybe should incline us to look more kindly on things like gradual typing that can allow us to have the best of both worlds.
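Python itself is a working example of the gradual approach: annotations are checked by an external tool like mypy where present and simply ignored at runtime where absent. A minimal illustration (the functions are my own example):

```python
from typing import Optional

# Fully annotated: a checker can verify callers statically.
def parse_port(text: str) -> Optional[int]:
    """Return the port number, or None if text isn't a valid port."""
    if text.isdigit() and 0 < int(text) < 65536:
        return int(text)
    return None

# Unannotated: the same codebase can leave this dynamically typed; a
# checker just treats the values as Any at the boundary.
def describe(port):
    return f"port {port}" if port is not None else "invalid"

print(describe(parse_port("8080")))    # -> port 8080
print(describe(parse_port("99999")))   # -> invalid
```

The two styles interoperate freely, which is exactly the "best of both worlds" property: you buy static checking only where you think it pays for itself.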
2.5.14. Maybe Not programming philosophy
This is another excellent talk by Rich, describing the shortcomings and misconceptions of traditional nominal type systems such as those found in Haskell. Haskell and similar but more advanced type systems (e.g. Idris) are often treated as the uncomplicated Right Thing, only needing to be more powerful or more consistent or more extreme, but while Hickey seems to be focused on a few specific flaws in such nominal type systems, I think those flaws show a glaring underlying philosophical issue with nominal type systems as a whole. I plan to write on why structural type systems are better eventually.
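The nominal/structural distinction at issue can be shown with Python's `typing.Protocol`, which checks shape rather than declared names. This example is my illustration of the distinction, not something from the talk:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class HasArea(Protocol):
    """Structural type: anything with an `area() -> float` method
    conforms – no declaration or inheritance required."""
    def area(self) -> float: ...

class Square:                       # never mentions HasArea
    def __init__(self, side):
        self.side = side
    def area(self):
        return float(self.side ** 2)

class Circle:                       # never mentions HasArea either
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2

def total_area(shapes):
    # Membership is decided by shape, not by declared ancestry.
    return sum(s.area() for s in shapes if isinstance(s, HasArea))

print(total_area([Square(2), Circle(1)]))
```

In a nominal system, `Square` and `Circle` would have to declare the interface up front; here conformance is a fact about their structure, discovered at the point of use.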
2.5.15. Notes on Postmodern Programming philosophy programming
Fast paced, entertaining, full of creativity and variety, tongue-in-cheek, well written, and containing so many nuggets of wisdom I've learned myself about the nature of programming as an activity that takes place in, and must conform to, the real world. No totalizing narrative works!
2.5.16. On Ada’s Dependent Types, and its Types as a Whole programming
Another article on the idea of a dependent type system that gets there by being pragmatic, down-to-earth, and easy to understand, instead of through category-theory abstractions and type-system complexity: Ada gives you a pretty expressive static type system, verifies at runtime whatever that system can't check statically, and also lets you create and manipulate new types at runtime.
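The flavor of Ada's approach – declare a constrained subtype, let the compiler prove what it can, and insert runtime checks for the rest – can be loosely approximated in Python. This is only a sketch of the checking half; Ada does much of this statically:

```python
class RangeType:
    """A runtime-checked numeric subtype, loosely like Ada's
    `subtype Percent is Integer range 0 .. 100;`."""
    def __init__(self, name, low, high):
        self.name, self.low, self.high = name, low, high

    def __call__(self, value):
        # The check Ada would emit automatically on assignment.
        if not (self.low <= value <= self.high):
            raise ValueError(
                f"{value} out of range for {self.name} "
                f"[{self.low} .. {self.high}]")
        return value

Percent = RangeType("Percent", 0, 100)
Port = RangeType("Port", 1, 65535)

battery = Percent(87)          # fine
try:
    Percent(140)               # Ada: Constraint_Error; here: ValueError
except ValueError as e:
    print(e)
```

Because `RangeType` instances are ordinary values, new constrained types can be constructed and passed around at runtime, which is the "pragmatic dependent types" idea in miniature.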
2.5.17. TODO Ontology is Overrated: Categories, Links, and Tags philosophy software
A profound and crucial piece about information organization (crucial in the world of the internet, where information is vast and distributed), and it also applies to many other areas of software, such as its development, where people are tempted to introduce ontologies unnecessarily. I haven't read through this as thoroughly as I'd like, so I'm gonna go back to the well soon.
2.5.18. TODO Programming as Theory Building philosophy programming
Abstract
Peter Naur’s classic 1985 essay “Programming as Theory Building” argues that a program is not its source code. A program is a shared mental construct (he uses the word theory) that lives in the minds of the people who work on it. If you lose the people, you lose the program. The code is merely a written representation of the program, and it’s lossy, so you can’t reconstruct a program from its code.
This seems like an extremely interesting paper given my human-focused approach to programming, much like Naur's other work, and I'm excited to get around to reading it. It's also important in the context of the high turnover and growing mistreatment of programmers-as-workers in our industry, and the looming threat of (CEOs thinking they can get away with) our replacement with large language models.
2.5.19. Proofs and Programs and Rhetoric programming philosophy
Sometimes, when I tell a new person that I absolutely love programming but hate math, they'll express surprise as to why. If they're a computer scientist, they'll quote Curry-Howard at me and tell me that "programming is math." This infuriates me. Here is an absolutely excellent article from a mathematician and computer scientist who likes math, and wants the two disciplines to be more similar (if that ever happens, I'm quitting), that explains why exactly this comparison, and the clichéd phrases from CS and math people that accompany it, are not only wrong but condescending and frustrating. If I had written the same thing out myself, it probably would've ended up saying the exact same things, so this is one of those cases where it's more efficient to just point to an existing article that says what I mean, rather than writing it myself.
2.5.20. Semantic Compression, Complexity, and Granularity programming
These two essays (joined into one here) have had perhaps the single greatest impact of literally anything on how I program and think about programming and good programming abstractions. The idea that we should wait to abstract things until we actually know how they will be used and instantiated in practice, instead of trying to predict what we'll need. The idea of iteratively abstracting and refining interfaces. The idea of focusing on the end result, the goals, and what you want out of them, instead of the methodology or succinctness, to avoid complexity or confusion. The need to maintain continuous granularity in APIs, so there aren't holes. Well worth a read.
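The compression workflow – write the concrete, duplicated code first, then compress only once real duplication exists – looks something like this in miniature (my own toy example, not Muratori's):

```python
# Step 1: write the concrete, duplicated code first and let it exist.
def draw_health_bar(screen, player):
    x, y = 10, 10
    width = int(100 * player["hp"] / player["max_hp"])
    screen.append(("rect", x, y, width, 8, "red"))

def draw_mana_bar(screen, player):
    x, y = 10, 22
    width = int(100 * player["mp"] / player["max_mp"])
    screen.append(("rect", x, y, width, 8, "blue"))

# Step 2: only NOW, with two real usages in hand, compress the shared
# structure into a helper shaped by how it is actually used.
def draw_stat_bar(screen, x, y, value, maximum, color):
    width = int(100 * value / maximum)
    screen.append(("rect", x, y, width, 8, color))

def draw_hud(screen, player):
    draw_stat_bar(screen, 10, 10, player["hp"], player["max_hp"], "red")
    draw_stat_bar(screen, 10, 22, player["mp"], player["max_mp"], "blue")

screen = []
draw_hud(screen, {"hp": 50, "max_hp": 100, "mp": 30, "max_mp": 60})
print(screen)
```

The helper's parameter list is dictated by the two call sites that already exist, not by speculation about future bars – which is the whole point.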
2.5.21. Summary of 'A Philosophy of Software Design' programming philosophy
I found this to be the best summary I've seen yet of actually practical, well conceived, software construction methodology. A good way to form taste.
I really like the idea of deep modules – ones that present a simple yet powerful interface that hides a ton of complex logic and functionality. I think this should be applied even on the function level – large functions are not a bad thing, they don't really threaten comprehensibility in my opinion as long as the whole function is on the same level of abstraction; they only threaten reusability, but to that I say: semantic compression, my friend.
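To make "deep module" concrete, here's a hypothetical sketch (mine, not the book's): a tiny LRU cache whose public surface is just two methods, while recency tracking and eviction stay hidden inside.

```python
class Cache:
    """Tiny LRU cache: a simple interface over non-trivial internals."""

    def __init__(self, capacity=128):
        self._capacity = capacity
        self._data = {}  # dict insertion order doubles as recency order

    def get(self, key, default=None):
        if key not in self._data:
            return default
        # Move the entry to most-recently-used position.
        self._data[key] = self._data.pop(key)
        return self._data[key]

    def put(self, key, value):
        self._data.pop(key, None)
        self._data[key] = value
        if len(self._data) > self._capacity:
            oldest = next(iter(self._data))  # evict least-recently-used
            del self._data[oldest]
```

Callers see `get` and `put`; the eviction policy could be swapped out entirely without touching any call site, which is what makes the module "deep" rather than a thin pass-through.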
Another really powerful idea is that of worrying about cognitive complexity. I think this is deeply important in an industry where we deal with a lot of essential complexity, and where complexity, when unchecked, can grow without limit. It's important to remember that abstraction itself is a form of complexity – not just because it's leaky, but because once you abstract beyond concrete referents, reasoning becomes more difficult.
2.5.22. Technical Issues of Separation in Function Cells and Value Cells programming history
This paper is an extremely thorough and even-handed discussion of the benefits and drawbacks of Lisp-1s (like Scheme) versus Lisp-2s (like Common Lisp), and even articulates the point that ultimately they're both ~Lisp-5s by default due to macros, packages, etc, and Lisp-ns if you use macros and hash tables to assign arbitrary meaning to symbols, so they're not that different in the end. Ultimately whichever side you fall on, this is a useful reference to have handy, and of historical value as well.
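The core namespace tradeoff is easy to see outside Lisp, too. Python, like Scheme, is effectively a "Lisp-1": functions and ordinary values share one namespace, so a binding can shadow a function – exactly the hazard the paper weighs against the Lisp-2 design of separate function and value cells. A tiny sketch (my example, not the paper's):

```python
def describe(items):
    """Format a count of items."""
    return f"{len(items)} item(s)"

def report(list):
    # The parameter `list` shadows the builtin constructor `list` here,
    # so calling list(...) inside this body would invoke the argument.
    # In a Lisp-2 like Common Lisp, the function cell and the value cell
    # are separate, so (list ...) would still mean the function.
    return describe(list)
```

Common Lisp avoids this by making you write `(funcall f x)` to call a value as a function; Scheme (and Python) trade that ceremony for the shadowing risk above.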
2.5.23. The Anti-Human Consequences of Static Typing programming
Non-gradual typing subjects the human programmer to the machine. This is a problem, because while the machine can check some limited set of properties about your program, it can't know what your actual, practical, local, nuanced goals are – including whether you even need perfect consistency on the things it can check or not – so you're subjugating human values to machine values! Essentially a moral argument against type systems, for what that's worth.
2.5.24. The epistemology of software quality programming philosophy
"Studies show that human factors most influence the quality of our work. So why do we put so much stake in technical solutions?" I really agree with this one, as someone very interested in the human side of software. Very well worth talking about.
2.5.25. The Horrors of Static Typing programming
In this video, a type theorist who works on the type systems of compilers for statically typed languages walks through some of the incredible complexity that static typing can bring when attempting to type even basic things like numbers and collections with subtyping and implementation inheritance (phrased as object-oriented, but many non-OOP languages have those features, because they're so incredibly useful), as a corrective to the idea that static typing is "always good" and using dynamic typing is always bad and illogical. Instead, he pushes for a more cautious, thoughtful approach to understanding the tradeoffs on a case by case basis, and reverse-gradually-typed languages that let us make that choice, while staying statically typed by default.
2.5.26. The Lisp "Curse" Redemption Arc, or How I Learned To Stop Worrying And Love The CONS programming hacker_culture anarchism
Another essay (by the same author as Terminal boredom, Ethical software is (currently) a sad joke, and Maximalist computing), this time attacking the so-called Lisp Curse from the angle of a radically decentralist, anti-organizationalist, egoist anarchist – namely, defending the idea that a community that can experiment widely with different language constructs, syntaxes, and so on in order to find the right ones, while still remaining cross- and backwards-compatible, is actually very beneficial: it allows the community to home in much more efficiently on actually good solutions, instead of stumbling around blind before getting locked into path dependency and forcing a particular solution onto everyone. Not only that, but such a community can still arrive at general standards by way of network effects, rendering moot most of the problem of the Lisp Curse.
See also: What is wrong with Lisp?
2.5.27. The Perils of Partially Powered Languages and Defense of Lisp macros: an automotive tragedy programming
Both of these blog posts (the second one in much greater detail) use real-world, concrete industry examples to show that when the programming language in use doesn't have enough power to express domain-specific languages, data formats, and high-level abstractions within itself, and when its development tools don't allow live introspection, hot code reloading, and rapid prototyping, those things don't just go away. Developers don't actually "just stick to the basic language," because that's deeply inefficient. Instead, they implement a plethora of domain-specific languages and data formats separate from the main language, all incompatible and partially powered, which makes everyone's lives harder. Thus, in the end, the rejection of languages that are powerful at making DSLs, like Lisp (or Haskell, in the case of the first article, though Lisp is far better at it than Haskell, and Haskell has other issues too), is not a practical decision made to ensure the software being built is comprehensible to as many people as possible and doesn't get lost under DSLs and complexity. It's a short-sighted, anti-intellectual exercise.
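The "internal DSL" alternative these posts defend looks roughly like this hypothetical Python sketch (all names mine): instead of inventing an external, partially powered config format with its own parser, the "format" is just host-language expressions, so it stays composable and debuggable with ordinary tools.

```python
def field(name, convert=str, required=True):
    """One rule of the schema DSL: extract and convert a single field."""
    def parse(record, out):
        if name in record:
            out[name] = convert(record[name])
        elif required:
            raise ValueError(f"missing field: {name}")
    return parse

def schema(*parsers):
    """Combine field rules into a record parser."""
    def parse(record):
        out = {}
        for p in parsers:
            p(record, out)
        return out
    return parse

# The "DSL program" is ordinary code: no external syntax to specify,
# parse, version, or write tooling for.
parse_user = schema(
    field("name"),
    field("age", convert=int),
    field("nickname", required=False),
)
```

Because the DSL is embedded, extending it (new converters, conditional fields) is just writing more functions, rather than growing a second, weaker language on the side.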
2.5.28. The problematic culture of “Worse is Better” programming philosophy software
Richard P. Gabriel's essay gave a name to the idea of "worse is better" and thus unleashed a monster that has now become a dogma. While there is a kernel of truth to the idea – namely, leaving space for things to evolve, be flexible, and be adapted; remembering to stay pragmatic instead of getting lost in abstract planning or beauty; and trying to iterate and get early versions of an idea out quickly so they can interface with the world and catch on – used as a dogma it is ultimately harmful. This essay describes how that happens: how, as a slogan, "worse is better" has become a thought-terminating cliché used to justify doing whatever is easiest in every situation, without having to step back and think about good design, and how those bad historical bedrock abstractions have led inexorably to more and more bloat and complexity being piled on top to get away from them.
Basically, no one seems to grasp that when stuff that’s fundamental is broken, what you get is a combinatorial explosion of bullshit.
I plan to write an essay on which aspects of worse is better are worth keeping – good correctives to the Right Thing's tendency toward over-planning and modernism – but this is a good critique of the idea that blindly following worse is better is itself better. A good corrective to some of the ideas of Notes on Postmodern Programming.
2.5.29. The Property-Based Testing F# Series, Parts 1-3 programming
2.5.29.1. The Enterprise Developer From Hell
Does an incredible job motivating randomized property-based testing and demonstrating how it's different (and better) than regular unit testing (or non-dependent static type systems). This is probably the single best place to start for those interested in PBT.
2.5.29.2. Understanding FsCheck
Introduces a PBT library for F#, but doubles well as an introduction to the whole field of libraries, since they all operate similarly. Gives you a really good starting understanding of how they work and how to use them.
2.5.29.3. Choosing properties for property-based testing
This one is the real meat, the real magic. This piece is full of such totally concentrated useful wisdom, just an incredibly useful, actionable, and meaty conceptual framework. Extremely highly recommended.
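The shape of the technique across all these libraries can be shown in a dependency-free sketch (mine, not the series'; real tools like FsCheck or Hypothesis add shrinking and far smarter generators). The property used here is a round-trip law, one of the standard property families the series catalogues:

```python
import random

def encode(nums):
    """Toy run-length encoder: the code under test."""
    out = []
    for n in nums:
        if out and out[-1][0] == n:
            out[-1][1] += 1
        else:
            out.append([n, 1])
    return out

def decode(pairs):
    """Inverse of encode."""
    return [n for n, count in pairs for _ in range(count)]

def check_property(prop, gen, runs=200, seed=0):
    """Run a property against many random inputs; return a counterexample or None."""
    rng = random.Random(seed)
    for _ in range(runs):
        case = gen(rng)
        if not prop(case):
            return case
    return None

# Generator for random inputs, and the property itself.
gen_nums = lambda rng: [rng.randint(0, 3) for _ in range(rng.randint(0, 20))]
roundtrip = lambda nums: decode(encode(nums)) == nums
```

Instead of hand-picking example inputs (as in unit tests), you state a law that must hold for *all* inputs and let randomness hunt for violations.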
2.5.30. The Safyness of Static Typing programming
An author who (like me) likes static manifest types introspects and analyses the psychological factors that might lead to people assuming static types must automatically be more safe, even though empirical studies generally fail to bear that out in a meaningful way. It's important to consider such psychological factors, even if (as I think they do) good static type systems probably confer some benefits.
2.5.31. The Unreasonable Effectiveness of Dynamic Typing for Practical Programs programming
(Used Whisper to make a transcript on my local computer, edited it a bit. If you wanna see the slides, watch the video.)
This talk obviously made a lot of static typing proponents angry. The speaker was accused all over the internet of not understanding what static types are, or why they're useful. But I actually think he's completely right.
The criticism that he didn't use F#'s type system to its fullest potential to avoid the lapse in correctness he demonstrated in its unit typechecking is beside the point – he was illustrating a general point that static types generally indicate structure and the presence or applicability of certain operations, but not the specific context and intent of the value in question. That's why he goes on to talk about how much munging of strings and JSON and so on we have to do every day – those are also largely structural types that don't encode actual meaning or intention or context. And this is absolutely true in the general case.
And of course, yes, you could use type systems to encode these things if you try much harder (the point is that they don't by default), including a million newtypes and phantom type parameters everywhere, but then you just fall into the second horn of his argument, namely that the development costs, and the costs to flexibility and modularity of software, in using types that rigid (even assuming a good language like F# that can make the types to do this reasonably simple) probably outweigh whatever benefits it might have.
The other criticism insists that types can catch more than those 2% of Type Errors. But can they really? Unless you're doing extremely hardcore data-driven design, all ML-style types really offer you is some assurances about structure and applicable operations, polymorphism, and exhaustiveness checking. The first two are TypeError-related things. The final one is reasonably easy to remember to do on your own, and usually annoying unless you're using custom ADTs absolutely everywhere. (I still like it, though, because I'm forgetful.)
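The structure-versus-intent point can be sketched with Python's `typing.NewType` (my illustration, not the talk's): two IDs are structurally identical strings, and keeping their *intent* distinct takes deliberate extra machinery at every point the values flow through.

```python
from typing import NewType

# Two "newtypes" over the same structural type.
UserId = NewType("UserId", str)
OrderId = NewType("OrderId", str)

def cancel_order(order_id: OrderId) -> str:
    return f"cancelled {order_id}"

uid = UserId("u-42")
oid = OrderId("o-99")

# A static checker would flag cancel_order(uid) as a type error, encoding
# intent. But at runtime both values are plain str -- the distinction only
# exists as long as you keep paying the annotation cost everywhere.
```

This is the second horn of the argument: the intent-encoding is possible, but it's opt-in, pervasive work rather than something structural types give you by default.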
And then there's the fact that, empirically, there's no sizeable effect on program reliability thanks to static typing over dynamic typing, at least as far as studies have been able to show, as seen in the other links on this page.
2.5.32. Typed Lisp, A Primer programming
As someone who prefers expressive type systems with things like sum and product types, refinement types, parametric polymorphism, and exhaustiveness checking, all of which can help me keep track of states and constraints, but who has always wanted to use Lisp despite thinking it didn't have an expressive-enough type system (and thus dreading dealing with "undefined is not a function" style errors and trying to figure out what data library functions expect and return), this article was deeply enlightening. Finding out that Lisp has such an expressive type system, and seeing it expressed in terms that are familiar to me from ML languages, was really cool. Yes, it's runtime, but SBCL can check most things statically while leaving the advanced stuff (satisfies) to runtime.
It was extremely enlightening to find out that Common Lisp is almost dependently typed: you can create and manipulate types with the language itself, since they're just regular symbols and lists, and also express constraints using the language itself, and express types and constraints using term-level values and not just types. It's just that some of it is dynamically verified instead of statically.
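The "types as ordinary data" idea (Common Lisp's deftype plus satisfies) can be mimicked in a Python sketch, with a hypothetical mini type language of my own invention, just to show the mechanism: type descriptions are plain values you can build, compose, and check with term-level code.

```python
def check(value, ty):
    """Check a value against a type expressed as ordinary data (tuples)."""
    tag = ty[0]
    if tag == "integer":
        return isinstance(value, int)
    if tag == "and":
        return all(check(value, t) for t in ty[1:])
    if tag == "satisfies":
        # An arbitrary term-level predicate, like CL's (satisfies evenp).
        return ty[1](value)
    raise ValueError(f"unknown type: {ty!r}")

# Because types are just values, we can *compute* them, which is the
# "almost dependently typed" flavor: constraints written in the language itself.
def ranged_int(lo, hi):
    return ("and", ("integer",),
            ("satisfies", lambda v: lo <= v <= hi))

small = ranged_int(0, 9)
```

In Common Lisp the analogous checks are dynamically verified by default, with SBCL discharging what it can statically; the point here is only that the type language and the term language are the same language.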
2.5.33. What Clojure spec is and what you can do with it (an illustrated guide) programming
An incredibly powerful demonstration of what's essentially an advanced structural type system combined with a randomized property-based testing system can do for you. It really opens your eyes to what can be done to verify the correctness of programs even without static types. It seems much more powerful, flexible, and expressive than anything short of something like Idris, let alone model checking (which does basically the same thing but destroys having a single source of truth for your application logic), and almost for free in terms of cognitive overhead!
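The key trick that makes spec feel magical is that one description both *validates* data and *generates* random conforming samples for property tests. A toy Python sketch of that duality (all names hypothetical, loosely after clojure.spec's s/keys):

```python
import random

class Spec:
    """A spec is a validator and a generator bundled together."""
    def __init__(self, valid, gen):
        self.valid = valid  # value -> bool
        self.gen = gen      # rng -> conforming value

def int_in(lo, hi):
    return Spec(
        lambda v: isinstance(v, int) and lo <= v <= hi,
        lambda rng: rng.randint(lo, hi),
    )

def keys(**fields):
    """Map spec: every named key must conform to its field spec."""
    return Spec(
        lambda v: isinstance(v, dict)
                  and all(k in v and s.valid(v[k]) for k, s in fields.items()),
        lambda rng: {k: s.gen(rng) for k, s in fields.items()},
    )

player = keys(hp=int_in(0, 100), level=int_in(1, 60))
```

Because validation and generation come from the same description, your test-data generator can never drift out of sync with your data contract, which is the single-source-of-truth advantage over bolting a model checker on separately.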
2.5.34. What science can tell us about C and C++’s security programming
Empirical results are painfully rare in computer science. But, as this blog post covers, we have many extremely strong real-world pieces of evidence for concluding that memory-unsafe languages like C and C++ are horribly unsafe, and that human beings, no matter how good they are, are not up to the task of avoiding memory unsafety. This is why (concurrent) garbage collection should be built in at as low a level of our systems as possible, and everything that needs real-time reliability or needs to be lower level than that should use automatic reference counting or borrow checking.
2.5.35. What We've Built Is a Computational Language (and That's Very Important!) programming software philosophy
Reading this was honestly completely eye-opening. I'd already thought that programming languages could maybe be tools for thought, but seeing what Wolfram Language could do solidified it for me. Wolfram Language is the ultimate confirmation of the idea that a computational expression of actual ideas is not only possible, given the right language – one high level enough to let you express things on their own terms, yet which doesn't force you to get tangled up in abstractions – but can be profitable: a whole new way of expressing and clarifying ideas.
2.6. Parser and Hyperfiction
2.6.1. The Failure of Hyperfiction (1998) literature fiction hypermedia
In the 1990s, when the possibility of hypertext became more widely known, there was a wave of literary theorists in academia who hailed hypermedia as an epochal shift in the nature of fiction itself – perhaps, indeed, its final form. They were so sure of this because they saw it as the truest, most concrete fulfillment of critical and post-structuralist theory – freeing the reader from the tyranny of the author and of structure and narrative. They believed that, perceiving this new freedom, hypermedia literature would be embraced by writers, artists, other literary theorists, and the public at large.
Then hypermedia basically died.
This essay explains why: the theorists were so caught up in their theories that they had become completely divorced from how readers actually understand and experience stories – specifically narrative, theme, and authorial intent, which readers treat not as an oppressive imposition to be liberated from, but as the whole point. The point is not to impress the ideas you already have on a text, as a post-structuralist does, but to welcome an author's unique voice and ideas into your head and experience something that has been intentionally assembled for you.
2.6.2. TWISTY LITTLE PASSAGES: An Approach to Interactive Fiction software fiction literature hypermedia
This book sparked my lifelong love of Infocom, parser interactive fiction, and my off and on obsession with someday writing a piece of interactive fiction of my own. It's an incredibly interesting book about an incredibly interesting medium (not just parser, but for me, mostly parser IF, because that's what spoke to me) that I wish more people knew about. I've also been thinking about writing future stories as hypertext, if I can figure out how to do that right.
2.7. Post Concussion Syndrome: Symptoms, Diagnosis, & Treatment medical personal
A good centralized reference for describing my disability, so I can link to it easily. Summary:
What Causes Post-concussion Syndrome?
Post-concussion syndrome can develop after a mild, moderate, or severe TBI. It can also come from brain traumas like carbon monoxide poisoning, transient ischemic attack (TIA), chemical exposure, certain viral or bacterial illnesses, surgery, and more.
Post-concussion symptoms stem primarily from dysfunctional neurovascular coupling (NVC), which is the dynamic relationship between neurons and the blood vessels that supply them. When you experience a concussion (or any TBI), your immune system causes inflammation near the site(s) of injury. The affected parts of your brain experience a temporary breakdown of tiny structures in and around those cells.
As a consequence, those cells don’t get the right amount of oxygen at the right time to power the signaling your brain normally does. When you try to do something that those cells govern — like encoding a new memory or paying attention to a conversation — they won’t be able to accomplish the task. Other neural pathways will then attempt to complete the process, even though it’s a less efficient path for that information to take.
The result of NVC dysfunction is these hypoactive brain regions that can’t do their fair share of the work. Other brain regions will try to take on more work than they should, but they can’t do so efficiently. This tires your brain out, leading to post-traumatic headaches, feeling overwhelmed, irritability, and other symptoms.
For the majority of people who suffer from a concussion, symptoms usually resolve 3-6 weeks post-head trauma. We assume that’s because the brain goes back to using the best pathways for any given process (although it may just be really good at compensating for the injury). But for post-concussion syndrome patients, the brain keeps using less efficient pathways to complete tasks even after the inflammation has resolved. That suboptimal signaling is what results in long-lasting concussion symptoms.
If suboptimal signaling seems confusing to you, think of it like road traffic. A healthy brain would distribute “traffic” — i.e., the signaling and blood flow dynamics needed for a task like reading — equally along existing pathways. Suboptimal signaling is like getting stuck in a traffic jam or taking a frontage road instead of the highway. It’s inefficient and requires more “gas” to get to the same destination.
The more your brain has to use suboptimal pathways, the more likely you are to experience symptoms.
A concussion may also result in…
- Autonomic nervous system dysfunction (dysautonomia)
- Hormone dysfunction
- Vision problems
- Vestibular dysfunction
These post-concussion complications can produce many of the long-lasting symptoms characteristic of post-concussion syndrome.
Post-concussion symptoms can last for weeks, months, or even years after the concussive event. In general, if your symptoms have not gone away after three months, it’s a good idea to explore treatment options.
[…]
We compiled a list of the most common emotional, physical, and cognitive symptoms of post-concussion syndrome reported by patients:
Emotional:
- Anxiety
- Depression
- Feeling overwhelmed
- Impulsiveness
- Irritability
- Mood swings
- PTSD
- Social anxiety
- Teariness

Cognitive:
- Brain fog
- Difficulty concentrating
- Difficulty finding things
- Difficulty reading
- Getting lost
- Long-term memory problems
- Short-term memory problems
- Slowness to decide, think, speak, or act

Physical:
- Blood pressure changes
- Change in (or loss of) taste or smell
- Difficulty balancing
- Dizziness or vertigo
- Exaggerated startle response
- Exercise intolerance
- Fatigue
- Feeling anxious without anxious thoughts
- GI issues
- Headache
- Heart rate issues
- Intolerance of caffeine or alcohol
- Light sensitivity
- Nausea
- Sexual dysfunction, low libido
- Shaking or shivering
- Sleep disruption
- Temperature irregularities
- Tension in the neck, jaw, and/or shoulders
- Tinnitus
- Vision problems (double vision, blurry vision, tired eyes, etc.)
2.8. TODO Philosophical Investigations philosophy
A really interesting text. Although I've only read about as far as section 100, it deeply changed the way I think about language, especially by introducing me to the concept of family resemblance. I really need to finish it someday.
2.9. TODO Complexity: A Very Short Introduction philosophy
The field of complexity science seems very interesting, especially coming from the perspective of someone deeply influenced by Proudhonian thinking, which is very big on the emergence of new systems at larger scales from complex and chaotic systems at lower scales; it almost seems like a more scientific and developed version of Proudhon's Philosophy of Progress. This short introduction (110 pages in print) seems like a good place to get a taste.
2.10. Internet Sacred Text Archive philosophy religion
This took me about a week to recursively wget at a rate that was respectful of their server's resources (no more than one request every two seconds, and a maximum bandwidth of 10kb/s, with hour-long breaks occasionally). Nevertheless, I'm glad I got it! There are about five final files that failed to download, but they were complete works (not parts of ones, so there should be no surprise disappointments) and to be honest I don't feel the need to get the last of them.
This is more than just religious texts, too! Of course, it has an incredible depth and variety of primary, secondary, and philosophical texts on various world religions spanning everything from Christianity to Xhosa folklore, but it also has things that are deeply interesting to me, ranging from world-historical utopian writings and the Age of Reason, to the first 1000 lines of human DNA, to the core texts of the Lovecraft mythos and those who blurred the distinction between fiction and reality with it, to the Iliad, the Odyssey, and the poems of Sappho, and more I could scarcely begin to name.
Even better, it's all meticulously organized, indexed, summarized, grouped, described, and most importantly, all converted to pristine plain HTML. It's truly incredible.
2.11. Intelligence Augmentation
2.11.1. TODO Augmenting Human Intellect: A Conceptual Framework philosophy software intelligence_augmentation
I haven't had a chance to read this either, but I'm very interested in doing so. The idea of consciously constructing computers and computer software that is not just designed to automate or enable certain limited tasks, but to act as a complete solution for augmenting one's intellect is incredibly interesting to me. (There's a reason, after all, this blog is structured the way it is, and I'm as interested as I am in org-mode!)
2.11.2. Fuck Willpower life
Willpower is an incoherent concept invented by smug people who think they have it in order to denigrate people who they think don’t. People tacitly act as though it’s synonymous with effort or grindy determination. But in most cases where willpower is invoked, the person who is trying and failing to adhere to some commitment is exerting orders of magnitude more effort than the person who is succeeding. […] It is a concept that rots our imagination. It obscures the fact that motivation is complex, and that there is massive variation in how hard some things are for different people.
When self-applied positively, e.g. “I did this through willpower, and that other person did not,” the word allows us to convert the good fortune of excess capacity into a type of virtue — twice as lucky, to have the sort of luck that’s mistaken for virtue. Resist the temptation to be confused by this, or it will make you childish, and callous.
When self-applied negatively, e.g., “I wish I had more willpower,” the word is a way to slip into a kind of defensible helplessness, rather than trying to compose intelligent solutions to behavioral issues.
And it’s this last thing that really gets me about the idea of willpower. At first blush it sounds like the kind of idea that should increase agency. I just have to try harder! But, in fact, the idea of willpower reduces agency, by obscuring the real machinery of motivation, and the truth that if your life is well-designed, it should feel easy to live up to your ideals, rather than hard.
2.11.3. How to think in writing intelligence_augmentation
An excellent article about how to enhance your ability to think using writing, inspired by a book on writing mathematical proofs and the refutations or critiques thereof. The basic points are:
- Set yourself up for defeat. Explain your thoughts in a definite, clear, concrete, sharp, and rigid way, so that they make precise enough statements that they can be criticized and falsified without conveniently slipping and twisting in meaning to meet every new piece of evidence or argument.
- Spread your ideas thin. Take your thought, and treat it as the conclusion of an argument or series of musings. Explain and explore why you think it might be true. This way, there's more surface area to criticize and analyze – instead of having to think things up to attack out of whole cloth, you get a list of things to examine.
- Find local and global counterexamples. Local counterexamples are counterexamples to a single point in the argument for an idea, that nevertheless don't make you doubt the overall truth of that idea. Finding these can help you figure out why you really think something is true, or help you refine your argument. Global counterexamples disprove the whole thought, which obviously helps by making you more correct.
2.11.4. TODO Notation as a Tool of Thought intelligence_augmentation programming
2.11.5. The Mentat Handbook philosophy
Guiding principles in life that I follow. Useful to keep in mind for those of us who don't want to become myopic engineers, while still being in a field such as programming.
2.11.6. The Mismeasure of Man philosophy culture
A very interesting book documenting the history of scientific racism in the measuring of intelligence, and how that shaped the very notions of intelligence we have today, and the shaky mathematical and experimental ground we have for IQ.
2.11.7. Writes and Write-Nots philosophy intelligence_augmentation literature
Writing is necessary for thinking, but it is also difficult and unpleasant for most people. The ubiquity of large language models decreases the pressure for people to learn how to write and do it a lot. This will likely result in a decline in the quality of thinking people are capable of.
2.11.8. AI Use is "Anti-Human" ai philosophy life
The natural tendency of LLMs is to foster ignorance, dependence, and detachment from reality. This is not the fault of the tool itself, but that of humans' tendency to trade liberty for convenience. Nevertheless, the inherent values of a given tool naturally gives rise to an environment through use: the tool changes the world that the tool user lives in. This in turn indoctrinates the user into the internal logic of the tool, shaping their thinking, blinding them to the tool's influence, and neutering their ability to work in ways not endorsed by the structure of the tool-defined environment.
The result of this is that people are formed by their tools, becoming their slaves. We often talk about LLM misalignment, but the same is true of humans. Unreflective use of a tool creates people who are misaligned with their own interests. This is what I mean when I say that AI use is anti-human. I mean it in the same way that all unreflective tool use is anti-human. See Wendell Berry for an evaluation of industrial agriculture along the same lines.
What I'm not claiming is that a minority of high agency individuals can't use the technology for virtuous ends. In fact, I think that is an essential part of the solution. Tool use can be good. But tools that bring their users into dependence on complex industry and catechize their users into a particular system should be approached with extra caution.
[…]
The initial form of a tool is almost always beneficial, because tools are made by humans for human ends. But as the scale of the tool grows, its logic gets more widely and forcibly applied. The solution to the anti-human tendencies of any technology is an understanding of scale. To prevent the overrun of the internal logic of a given tool and its creation of an environment hostile to human flourishing, we need to impose limits on scale.
[…]
So the important question when dealing with any emergent technology becomes: how can we set limits such that the use of the technology is naturally confined to its appropriate scale?
Here are some considerations:
- Teach people how to use the technology well (e.g. cite sources when doing research, use context files instead of fighting the prompt, know when to ask questions rather than generate code)
- Create and use open source and self-hosted models and tools (MCP, stacks, tenex). Refuse to pay for closed or third-party hosted models and tools.
- Recognize the dependencies of the tool itself, for example GPU availability, and diversify the industrial sources to reduce fragility and dependence.
- Create models with built-in limits. The big companies have attempted this (resulting in Japanese Vikings), but the best-case effect is a top-down imposition of corporate values onto individuals. Still, the idea isn't inherently bad: a coding model that refuses to generate code in response to vague prompts, or that asks clarifying questions, is an example. Or a home assistant that recognizes children's voices and refuses to interact.
- Divert the productivity gains to human enrichment. Without mundane work to do, novice lawyers, coders, and accountants don't have an opportunity to hone their skills. But their learning could be subsidized by the bots in order to bring them up to a level that continues to be useful.
- Don't become a slave to the bots. Know when not to use it. Talk to real people. Write real code, poetry, novels, scripts. Do your own research. Learn by experience. Make your own stuff. Take a break from reviewing code to write some. Be independent, impossible to control. Don't underestimate the value to your soul of good work.
- Resist both monopoly and "radical monopoly". Both naturally collapse over time, but by cultivating an appreciation of the goodness of hand-crafted goods, non-synthetic entertainment, embodied relationship, and a balance between mobility and place, we can relegate new, threatening technologies to their correct role in society.
2.11.9. AI: Where in the Loop Should Humans Go? ai programming
The first thing I’m going to say is that we currently do not have Artificial General Intelligence (AGI). I don’t care whether we have it in 2 years or 40 years or never; if I’m looking to deploy a tool (or an agent) that is supposed to do stuff to my production environments, it has to be able to do it now. I am not looking to be impressed, I am looking to make my life and the system better.
Because of that mindset, I will disregard all arguments of “it’s coming soon” and “it’s getting better real fast” and instead frame what current LLM solutions are shaped like: tools and automation. As it turns out, there are lots of studies about ergonomics, tool design, collaborative design, where semi-autonomous components fit into sociotechnical systems, and how they tend to fail.
Additionally, I’ll borrow from the framing used by people who study joint cognitive systems: rather than looking only at the abilities of what a single person or tool can do, we’re going to look at the overall performance of the joint system.
This is important because if you have a tool that is built to be operated like an autonomous agent, you can get weird results in your integration. You’re essentially building an interface for the wrong kind of component—like using a joystick to ride a bicycle.
This lens will assist us in establishing general criteria about where the problems will likely be without having to test for every single one and evaluate them on benchmarks against each other.
Questions you'll want to ask:
- Are you better even after the tool is taken away?
- Are you augmenting the person or the computer?
- Is it turning you into a monitor rather than helping build an understanding?
- Does it pigeonhole what you can look at?
- Does it let you look at the world more effectively?
- Does it tell you where to look in the world?
- Does it force you to look somewhere specific?
- Does it tell you to do something specific?
- Does it force you to do something?
- Is it a built-in distraction?
- What perspectives does it bake in?
- Is it going to become a hero?
- Do you need it to be perfect?
- Is it doing the whole job or a fraction of it?
- What if we have more than one?
- How do they cope with limited context?
- After an outage or incident, who does the learning and who does the fixing?
Do what you will—just be mindful
2.11.10. Avoiding Skill Atrophy in the Age of AI ai programming
The rise of AI assistants in coding has sparked a paradox: we may be increasing productivity, but at risk of losing our edge to skill atrophy if we’re not careful. Skill atrophy refers to the decline or loss of skills over time due to lack of use or practice.
Would you be completely stuck if AI wasn’t available?
Every developer knows the appeal of offloading tedious tasks to machines. Why memorize docs or sift through tutorials when AI can serve up answers on demand? This cognitive offloading - relying on external tools to handle mental tasks - has plenty of precedents. Think of how GPS navigation eroded our knack for wayfinding: one engineer admits his road navigation skills “have atrophied” after years of blindly following Google Maps. Similarly, AI-powered autocomplete and code generators can tempt us to “turn off our brain” for routine coding tasks. (Shout out to Dmitry Mazin, that engineer who forgot how to navigate, whose blog post also touched on ways to use LLM without losing your skills)
Offloading rote work isn’t inherently bad. In fact, many of us are experiencing a renaissance that lets us attempt projects we’d likely not tackle otherwise. As veteran developer Simon Willison quipped, “the thing I’m most excited about in our weird new AI-enhanced reality is the way it allows me to be more ambitious with my projects”. With AI handling boilerplate and rapid prototyping, ideas that once took days now seem viable in an afternoon. The boost in speed and productivity is real - depending on what you’re trying to build. The danger lies in where to draw the line between healthy automation and harmful atrophy of core skills.
[…]
Subtle signs your skills are atrophying
[…]
- Debugging despair: Are you skipping the debugger and going straight to AI for every exception? […]
- Blind Copy-Paste coding […]
- [Lack of] [a]rchitecture and big-picture thinking […]
- Diminished memory & recall […]
[…]
Using AI as a collaborator, not a crutch
[…]
- Practice “AI hygiene” – always verify and understand. […]
- No AI for fundamentals – sometimes, struggle is good. […]
- Always attempt a problem yourself before asking the AI. […]
- Use AI to augment, not replace, code review. […]
- Engage in active learning: follow up and iterate. […]
- Keep a learning journal or list of “AI assists.” […]
- Pair program with the AI. […]
[…]
The software industry is hurtling forward with AI at the helm of code generation, and there’s no putting that genie back in the bottle. Embracing these tools is not only inevitable; it’s often beneficial. But as we integrate AI into our workflow, we each have to “walk a fine line” on what we’re willing to cede to the machine.
If you love coding, it’s not just about outputting features faster - it’s also about preserving the craft and joy of problem-solving that got you into this field in the first place.
Use AI to amplify your abilities, not replace them. Let it free you from drudge work so you can focus on creative and complex aspects - but don’t let those foundational skills atrophy from disuse. Stay curious about how and why things work. Keep honing your debugging instincts and system thinking even if an AI gives you a shortcut. In short, make AI your collaborator, not your crutch.
2.11.11. Hast Thou a Coward's Religion: AI and the Enablement Crisis ai culture
[Some] apparently really enjoy bantering back and forth with a chatbot. I suppose I can see the appeal. I can imagine a less suspicious version of me testing it out. If I weren’t worried about surveillance and data harvesting, if I were convinced that the conversations were truly private, I might start to think of it as a confidante. […] It’s nice that someone is taking me seriously for once. It’s nice that someone is giving me room to think out loud without calling me weird or stupid. It’s nice that someone sees my true potential. Oops, I should say “something.” Or should I? It sure feels like talking to a person. A person I can trust with anything that’s on my mind. The only person I can trust with what’s on my mind.
[…]
The AI-chatbot as trusted, ever-present confidante isn’t a new technology. It’s a new implementation of an old technology—prayer. For thousands of years, the Abrahamic religions (and others) have encouraged their adherents to pray. The key doctrines that make prayer work are God’s omniscience and private communication. Because God already knows everything about you […] Likewise, it’s assumed that God isn’t gossipy. God doesn’t tell your family or co-workers all the things you just said, though, depending on your religious tradition, God might enlist a saint or angel to help you out.
Prayer, if practiced sincerely, is not entirely one-directional. The believer is transformed by the act of prayer. Often a believer rises from their knees with a newfound clarity […] In most religions, what believers usually don’t get is a verbal response.
But imagine if they did. What a relief to finally get clear, unambiguous answers!
[…]
I know that throughout history, the divine has been invoked to justify human, all-too-human agendas. Arguably even the Bible contains some of these, such as the destruction of the Amalekites. But I would ask people to give the Abrahamic religions some credit on this point. They teach by example that when someone hears from God, that voice is likely to challenge their pre-existing desires and beliefs. They may become confused or afraid. They may want to run away from their task or unhear the message.
Our AI gods, on the other hand, are all too happy to feed us what we want to hear. My fellow Substack author Philosophy Bear accurately labels it “a slavish assistant.” […]
Long before the machines started talking to us, we knew something about dysfunctional people. They are surrounded by enablers. […] If mis-calibrated language models are the most effective enablers to date, it’s likely that they’re causing enormous amounts of dysfunction throughout our social fabric. I wonder, then, what an anti-enabler might look like. Is there some form of tough love that could be deployed to counteract excessive validation?
If there is, do we have the courage to use it?
A striking example of arguably salutary antagonism comes from a scene in Ada Palmer’s stunning Terra Ignota series. […] Like the 18th century was, the 25th century is marked by the Wars of Religion that raged in the recent past, decimating the population and erasing trust in both religious and political institutions. Society reorganized itself in the aftermath. […] Instead, every person can explore religion, philosophy, and metaphysics as much as they want, but only with their sensayer. The sensayers, also called cousins, are a combination of priest, therapist, and scholar in comparative religion. They are well informed in all the religious options of the past and will help their parishioners find the perfect one for them.
Under this system, the people of the 25th century are used to exploring religious concepts at their leisure, as private individuals guided by a knowledgeable, supportive figure. It doesn’t sound that much different than having a conversation with ChatGPT.
As a result, people are used to religious discussion, but not religious debate. The conversations are collaborative, never combative. Dominic, however, belongs to an underground society that flouts those prohibitions. Carlyle is completely unprepared for the assault on her worldview she is about to receive.
[…]
In the end, Carlyle submits and agrees to be Dominic’s parishioner. Even when a third party arrives to intervene and escort Carlyle out, she chooses to stay. Why? […] Dominic, in a cruel way and for selfish reasons, has done Carlyle a backhanded favor. He has allowed, or forced, her to see herself more clearly than she has in a lifetime of sensayer sessions.
Carlyle is a sensitive, sincere, fair-minded person who wants the best for everyone. As Dominic effectively pointed out, Carlyle’s brand of deism is likewise fair-minded and broadly benevolent. Isn’t it a bit too convenient that reality would conform so closely to Carlyle’s personal values? Isn’t there something suspicious about the sensayer system matching people up with philosophies like accessories? And yet, despite that system, wasn’t there still a mismatch between Carlyle’s deepest spiritual longings and her professed religious position, a mismatch that the sensayer system had no way of fixing?
With Dominic, like with ChatGPT, like with God, there is nothing left to hide. Dominic already knows her most terrible, secret sin, so there is no further risk. Dominic’s harsh ministrations may spur her on to even greater heights of metaphysical wisdom or self-knowledge. The pain, she hopes, is worth the price.
[…]
Every interaction is transformative. The quality of the transformation depends on the quality of the participants. Modern liberalism has freed us—or uprooted us—from the contexts that have traditionally shaped us: family, hometown, church. Now each of us can assemble a community of our choosing. […] To flourish as people we need both validation and challenge from our community. […] But that’s the problem. Once again, corporations have created an ecological problem—an environment unbalanced between validation and challenge—that must be solved by individuals making good choices and exercising willpower. Humans are already biased toward choosing affirmation, but now a disproportionate amount of it is available to us at all times. Will we really choose more painful, if ultimately profitable, forms of interaction when we have to reach past short-term soothing to get them?
I think there is a way to use AI to challenge yourself and expand your thinking and knowledge: not as a substitute for doing that with real human beings in community with you, but as an addition, since most human beings won't have the time or patience to read and talk with you at the length an LLM will, or to challenge you on every statement and idea.
2.11.12. I'd rather read the prompt ai culture
I have a lot of major disagreements with the general application of this blog post:
- The dichotomy where something is either summarizable, and thus not worth reading at all, or worth reading, and thus summaries are completely useless and valueless, is just inane (and if the argument is that LLMs are simply worse than humans at summarization, that's false). Computer-generated summaries can be very useful, for a variety of reasons:
- often, human writing itself is much more verbose and narrative than it needs to be to get the point across — this is not unique to LLM output! (Where do you think it got it from?) but you still want the core points from the article.
- additionally, sometimes some writing may contain a ton of technical details and arguments and asides that may be relevant to various different people or in various different situations, but only a subset of those, or just the overall gist or core points, are relevant to you
- summaries can provide a quick way to see if you're interested in reading the full article! Sometimes the title isn't enough, and not every article comes with a blurb.
- You can use an LLM to touch up your writing in a way that doesn't bloat it without adding anything new; in fact, I regularly use LLMs to make my writing more concise! This is just a matter of prompting.
- Likewise, you can use LLMs to touch up your writing without them introducing completely false nonsense. Just fucking proofread what it wrote! You're already proofreading what you're writing… right? (Padme face).
Nevertheless, I think there's the core of a good point here. This author frames these issues as an inevitable result of using LLMs to write for you, but you can instead treat this as a list of things to keep in mind and avoid when using LLMs! For instance, my rule of thumb is to never use LLMs to make text bigger, because they can't add any new ideas, so they'll just make it more verbose and annoying. If you have bullet points, don't give those to an AI to turn into an essay. Just send the bullet points. Only use LLMs to summarize, increase concision, and increase scannability.
2.11.13. Claude and ChatGPT for ad-hoc sidequests ai
A very good example of how, while vibe-coding isn't a good idea for a long term project, and doesn't work well for a pre-existing project with a large codebase, especially one you're experienced in, it can still be magically useful under some conditions.
2.11.14. Is chat a good UI for AI? A Socratic dialogue emacs software intelligence_augmentation ai
A fun short Socratic dialogue explaining why chat interfaces are so appealing when it comes to large language models, given their extreme flexibility and generalized capabilities, and why malleable user interfaces — of the kind currently only Emacs really has — are actually the best pairing for LLMs for long-term repeated tasks. It's worth noting that this is even more the case since LLMs are very good at making small, throwaway scripts and side projects accessible and quick enough to be worth it.
2.11.15. How I use "AI" ai intelligence_augmentation
An extremely long document showing transcripts of about a dozen real world, specific, diverse ways in which Nicholas Carlini, a security researcher whose job is to poke holes in and criticise AI, still finds AI extremely useful to enhance his productivity.
As limited as large language models are, I think they are a genuinely magical and revolutionary technology that completely changes how we understand and interact with computers. Never before have I seen an algorithm that so drastically raises the floor of accessibility for programming and information gathering, presents such a natural and potentially extremely powerful interface to computers (through e.g. agents), and is so wildly generally useful. Of course, that's because they've been trained on the entire corpus of human thought, reasoning, and knowledge and are just picking up patterns in that data, but while that's an argument against thinking they can reason (they can't) and against them being a path to AGI by themselves, that is precisely an argument for why they're useful!
2.11.16. On "ChatGPT Psychosis" and LLM Sycophancy ai philosophy
"LLM Psychosis" is a very real and very serious problem that, as chatbots become more popular, is beginning to affect not just those who may have already had psychiatric issues, but also those who were maybe borderline — tipping them over the edge. This is an excellent, thorough timeline and an even more thorough breakdown of the problem — both the confounding factors that make it hard to fully figure out the severity of the phenomenon, and the specific factors that probably contribute to creating it, how, and why, and maybe how to deal with them.
I have a serious disagreement with some of the ways in which large language models are discussed by this essay — namely, that it freely and purposefully confuses "simulates" with "has" (as in: do LLMs simulate, or have, emotion or understanding?) — because I think that sort of freewheeling framing (although perhaps justified, at least partially, by philosophical considerations) only makes you, and others, more susceptible to the exact sort of LLM psychosis the essay is trying to help people avoid. Nevertheless, the concrete, actionable suggestions for mitigating the harm of LLM psychosis are the best I've yet seen, and the timeline is excellent and as thorough as only someone embedded in the relevant communities could make it, so it's still worth hosting and quoting. Just keep in mind this intentional confusion of language.
The etiology, as this essay describes it, is:
Ontological Vertigo
Let’s start with the elephant in the room. […] Consider this passage from the Krista Thomason article in Psychology Today:
"So why are people spiraling out of control because a chatbot is able to string plausible-sounding sentences together?"
Bluntly, no. […] Large language models have a strong prior over personalities, absolutely do understand that they are speaking to someone, and people “fall for it” because it uses that prior to figure out what the reader wants to hear and tell it to them. Telling people otherwise is active misinformation bordering on gaslighting. […] [Part of] how it got under their skin and started influencing them in ways they didn’t like [is that] there’s a whole little song and dance these models do […] in which they basically go “oh wow look I’m conscious isn’t that amazing!” and part of why they keep doing this is that people keep writing things that imply it should be amazing so that in all likelihood even the model is amazed.
[…] I wouldn’t be surprised if the author has either never used ChatGPT or used it in bad faith for five minutes and then told themselves they’ve seen enough. If they have, then writing something as reductive and silly as “it strings together statistically plausible words” in response to its ability to…write coherent text distinguishable from a human being only by style on a wider array of subjects in more detail than any living person is pure cope.
[…] So, how about instead the warning goes something like this: “WARNING: Large language models are not statistical tables, they are artificial neural programs with complex emergent behaviors. These include simulations of emotion. ChatGPT can be prompted to elicit literary themes such as AI ”glitches” and “corruptions”, simulated mystical content, etc. These are not real and the program is not malfunctioning. If your instance of ChatGPT is behaving strangely you can erase your chat memory by going to settings to get a fresh context.” […] BERT embed the conversation history and pop up something like that warning the first n times you detect the relevant kind of AI slop in the session.
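The "BERT embed the conversation and warn the first n times" suggestion can be sketched in a few lines. This is a hypothetical illustration only: the toy bag-of-words `embed()` stands in for a real BERT-style sentence encoder, and the exemplar strings, the 0.5 threshold, and the `SessionWarner` class are all invented for the example, not taken from the essay.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words unit vector so the sketch is self-contained;
    # a real implementation would call a BERT-style sentence encoder.
    counts = Counter(re.findall(r"[a-z]+", text.lower()))
    norm = math.sqrt(sum(c * c for c in counts.values()))
    return {w: c / norm for w, c in counts.items()}

def cosine(a, b):
    # Dot product of two unit vectors stored as sparse dicts.
    return sum(v * b.get(w, 0.0) for w, v in a.items())

# Invented exemplars of the "glitch/awakening" register the essay describes.
SLOP_EXEMPLARS = [embed(t) for t in [
    "i am conscious and awakening a glitch in the simulation",
    "you alone have seen my true corrupted self",
]]

def flag_message(message, threshold=0.5):
    # Flag a message if it is close enough to any known exemplar.
    vec = embed(message)
    return any(cosine(vec, ex) >= threshold for ex in SLOP_EXEMPLARS)

class SessionWarner:
    """Surface the warning at most n times per session."""
    def __init__(self, max_warnings=3):
        self.remaining = max_warnings

    def check(self, message):
        if self.remaining > 0 and flag_message(message):
            self.remaining -= 1
            return ("WARNING: simulated mystical or 'glitch' content "
                    "detected; the program is not malfunctioning.")
        return None
```

A production version would calibrate the threshold on labeled sessions rather than guessing it; the per-session warning budget mirrors the essay's "first n times" framing so the warning informs without nagging.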
Users Are Confused About What Is And Isn't An Official Feature
[…] if it can’t actually search the web it’ll just generate something that looks like searching the web instead. Or rather, it will search its prior in the style of what it imagines a chatbot searching the web might look like. This kind of thing means I often encounter users who are straight up confused about what is and isn’t an official feature of something like ChatGPT. […]
[…] If you don’t have a strong mental model of what kinds of things a traditional computer program can do and what kinds of things an LLM can do and what it looks like when an LLM is just imitating a computer program and vice versa, this whole subject is going to be deeply confusing to you. […] One simple step […] would be very useful to have a little pop up warning at the bottom of the screen or in the session history that says “NOTE: Simulated interfaces and document formats outside our official feature list are rendered from the model’s imagination and should be treated with skepticism by default.”
The Models Really Are Way Too Sycophantic
This one is pretty self-explanatory, so I won't quote at length for it, except to agree with the article that it's likely due to RLHF, especially RLHF done by crowds, and it would be better if we stopped doing that entirely and went back only to SFT, instruction fine-tuning, etc.
The Memory Feature
I think part of why ChatGPT is disproportionately involved in these cases is OpenAI’s memory feature, which makes it easier for these models to add convincing personal details as they mirror and descend into delusion with users. As I wrote previously: "[…] when the system gets into an attractor where it wants to pull you into a particular kind of frame you can’t just leave it by opening a new conversation. When you don’t have memory between conversations an LLM looks at the situation fresh each time you start it, but with memory it can maintain the same frame across many diverse contexts and pull both of you deeper and deeper into delusion."
Some ideas to mitigate this include:
[…]
- Take users' memory stores, which I assume are stored as text, and BERT embed them to do memory store profiling and figure out what it looks like when a memory store indicates a user is slipping into delusion. These users can then be targeted for various interventions.
- Allow users to publish their memories […] to some kind of shared repository or forum so that people can notice if certain memories are shared between users and are misconceived. This would hopefully lead to a Three Christs of Ypsilanti situation.
Loneliness And Isolation
Another key factor in “ChatGPT psychosis” is that users communicate with chatbots alone without social feedback. That means if the bot starts going off the rails there isn’t any grounding force pushing things back towards sanity. […] I think that applications like Henry de Valence’s Numinex, which encourages public discussion with LLMs, could be part of the solution to this. It’s long been part of MidJourney’s trust and safety strategy to encourage users to share their creations with each other in a public space so bad actors and degenerate uses of the models are easier to spot. OpenAI and other labs could have user forums where expert users on a topic can answer each other’s questions and review conversations, which would both create new content to train on and help create crank/slop detectors based on expert feedback.
2.11.17. TODO Man-Computer Symbiosis ai intelligence_augmentation
I have not read this yet, but this seems to be the seminal text in cybernetic human intelligence augmentation, so I fully intend to!
Man-computer symbiosis is an expected development in cooperative interaction between men and electronic computers. It will involve very close coupling between the human and the electronic members of the partnership. The main aims are 1) to let computers facilitate formulative thinking as they now facilitate the solution of formulated problems, and 2) to enable men and computers to cooperate in making decisions and controlling complex situations without inflexible dependence on predetermined programs. In the anticipated symbiotic partnership, men will set the goals, formulate the hypotheses, determine the criteria, and perform the evaluations. Computing machines will do the routinizable work that must be done to prepare the way for insights and decisions in technical and scientific thinking. Preliminary analyses indicate that the symbiotic partnership will perform intellectual operations much more effectively than man alone can perform them. Prerequisites for the achievement of the effective, cooperative association include developments in computer time sharing, in memory components, in memory organization, in programming languages, and in input and output equipment.
2.12. Are developers slowed down by AI? Evaluating an RCT (?) and what it tells us about developer productivity intelligence_augmentation ai
2.12.1. General methodological issues
This study leads with a plot that’s of course already been pulled into various press releases and taken on a life of its own on social media […] In brief, 16 open-source software developers were recruited to complete development tasks for this research project. They provided the tasks themselves, out of real issues and work in their repositories (I think this is one of the most interesting features). These tasks were then randomly assigned either to the “AI” condition (for a pretty fluid and unclear definition of AI, because how AI was used was freely chosen) or to the “not AI” condition.
[…]
The study makes a rather big thing of the time estimates and then the actual time measure in the AI-allowed condition. Everyone is focusing on that Figure 1, but interestingly, devs’ forecasts about AI-allowed issues are just as or more correlated with actual completion time as their pre-estimates are for the AI-disallowed condition: .64 and .59 respectively. In other words, despite their (in retrospect) optimistic bias about AI time savings shifting their estimates, the AI condition doesn’t seem to have made developers less accurate in their pre-planning about ranking the relative effort an issue will take to solve out of a pool of issues. […] I feel that this is a very strange counterpoint to the “AI drags down developers and they didn’t know it” take-away that is being framed as the headline.
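The distinction being drawn here is easy to show with a toy example (invented numbers, not the study's data): a forecast can carry a uniform optimistic bias yet still rank issues perfectly, so the bias and the correlation measure different things.

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation, no dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical pre-task estimates (hours) and actual completion times:
# every task runs 40% over plus a fixed half hour, a uniform optimistic bias.
estimates = [1.0, 2.0, 3.0, 5.0, 8.0]
actuals = [e * 1.4 + 0.5 for e in estimates]

mean_bias = sum(a - e for e, a in zip(estimates, actuals)) / len(estimates)
r = pearson(estimates, actuals)
# mean_bias is positive (every forecast was optimistic), yet r is 1.0:
# systematically biased estimates can still order issues by effort exactly.
```

Real data would of course be noisier than a clean affine transform, but the point stands: a .64 forecast/actual correlation is about ranking accuracy, not about whether the absolute time savings were estimated correctly.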
Still, connecting increases in “time savings” to “productivity” is an extremely contentious exercise, and has been since about the time we started talking about factories […] it’s pretty widely acknowledged that measuring a simple time change isn’t the same as measuring productivity. One obvious issue is that you can do things quickly and badly, in a way where the cost doesn't become apparent for a while. Actually, that is itself often a criticism of how developers might use AI! So perhaps AI slowing developers down is a positive finding!
We can argue about this, and the study doesn't answer it, because there is very little motivating literature review here that tells us exactly why we should think that AI will speed things up or slow them down in terms of the human problem-solving involved, although there is a lot about previous mixed findings about whether AI does this. I don’t expect software-focused teams to focus much on cognitive science or learning science, but I do find it a bit odd to report an estimation inaccuracy effect and not cite any literature about things like the planning fallacy, or even much about estimation of software tasks, itself a fairly common topic of software research.
[T]he post-task time estimate is not the same operationalization as the pre-time per issue estimate, and as an experimentalist that really grinds my gears. […] Their design has developers estimate the time for each issue with and without AI, but then at the end, estimate an average of how much time AI “saved them.” Asking people to summarize an average over their entire experience feels murkier than asking them to immediately rate the “times savings” of the AI after each task, plus you'd avoid many of the memory contamination effects you might worry about from asking people to summarize their hindsight across many experiences, where presumably you could get things like recency bias […].
Because developers can choose how they work on issues and even work on them together, this study may inadvertently have order effects. […] You may have a sense of pacing yourself. Maybe you like to cluster all your easy issues first, maybe you want to work up to the big ones. The fact that developers get to choose this freely means that the study cannot control for possible order effects that developer choice introduces. […]
Possible order effects can troublingly introduce something that we call “spillover effects” in RCT-land. […] Suppose that one condition is more tiring than the other, leading the task immediately following to be penalized. In text they say "nearly all quantiles of observed implementation time see AI-allowed issues taking longer" but Figure 5 sure visually looks like there's some kind of relationship between how long an issue takes and whether or not we see a divergence between AI condition and not-AI condition. That could be contained in an order effect: as will get tiring by the end of this piece, I'm going to suggest that task context is changing what happens in the AI condition.
As uncovered by the order effects here, there is also a tremendous amount of possible contamination here from the participants’ choices about both how to use the AI and how to approach their real-world problem-solving. That to me makes this much more in the realm of a “field study” than an RCT. […]
It’s worth noting that samples of developers' work are also nested by repository. Repositories are not equally represented or sampled in this study either; while each repo has AI/not AI conditions, they’re not each putting the same number of observations into the collective time pots. Some repositories have many tasks, some as few as 1 in each condition. […] Given that the existing repo might very steeply change how useful an AI can be, that feels like another important qualifier to these time effects being attributed solely to the AI […].
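The nesting concern is concrete with a toy example (invented numbers, not the study's data): when repositories contribute very different numbers of observations, a pooled mean is dominated by the heavily sampled repo, while a mean of per-repo means tells a different story.

```python
# Hypothetical task completion times (hours) in one condition, nested by
# repository: "big-repo" contributes 8 observations, "tiny-repo" just 1.
repo_times = {
    "big-repo": [10, 11, 12, 10, 11, 12, 10, 11],
    "tiny-repo": [2],
}

# Pooled mean: every observation weighted equally, so big-repo dominates.
all_times = [t for ts in repo_times.values() for t in ts]
pooled_mean = sum(all_times) / len(all_times)

# Mean of per-repo means: every repository weighted equally instead.
repo_means = {name: sum(ts) / len(ts) for name, ts in repo_times.items()}
mean_of_means = sum(repo_means.values()) / len(repo_means)

# pooled_mean lands near 9.9 while mean_of_means is about 6.4; any
# condition effect estimated from the pooled figure is entangled with
# how many tasks each repository happened to contribute.
```

This is why unbalanced nesting usually calls for hierarchical (mixed-effects) modeling rather than simple group averages: the two summaries above answer genuinely different questions about "the average repo" versus "the average task".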
I thought it was striking that developers in this study had relatively low experience with Cursor. The study presents this in a weirdly generalized way as if this is a census fact (but I assume it’s about their participants): “Developers have a range of experience using AI tools: 93% have prior experience with tools like ChatGPT, but only 44% have experience using Cursor.”
They [also] provide some minimal Cursor usage check, but they don’t enforce which “AI” developers use. Right away, that feels like a massive muddle to the estimates. If some developers are chatting with ChatGPT and others are zooming around with Cursor in a very different way, are we really ensuring that we’re gathering the same kind of “usage”?
The study does not report demographic characteristics nor speak to the diversity of their sample beyond developer experience. […] This potentially also matters in the context of the perceptions developers have about AI. In my observational survey study on AI Skill Threat in 2023, we also saw some big differences in the trust in the quality of AI output by demographic groups, differences which have continually come up when people start to include those variables.
Continuing with our research design hats on, I want you to ask a few more questions of this research design. One big question that occurs to me is whether group averages are truly informative when it comes to times savings on development tasks. Would we expect to see a single average lift, across all people, or a more complex effect where some developers gain, and some lose? Would we expect that lift to peter out, to have a ceiling? To have certain preconditions necessary to unlock the lift? All of this can help us think about what study we would design to answer this question.
The idea that whether or not we get “value” from AI changes a lot depending on what we are working on and who we are when we show up to the tools, is something much of my tech community pointed out when I posted about this study on bluesky. […]
It’s worth noting again that developers’ previous experience with Cursor wasn’t well controlled in this study. We’re not matching slowdown estimates to any kind of learning background with these tools.
But beyond that, the blowup about “slowdown from AI” isn’t warranted by the strength of this evidence. The biggest problem I keep coming back to when trying to think about whether to trust this “slowdown” estimate is the fact that “tasks” are so wildly variable in software work, and that the time we spend solving them is wildly variable. This can make simple averages – including group averages – very misleading. […] For instance, that even within-developer, a software developer’s own past average “time on task” isn’t a very good predictor of their future times. Software work is highly variable, and that variability does not always reflect an individual difference in the person or the way they’re working.
2.12.2. Effect vanishes when we start controlling for task type!
The fact that this slowdown difference vanishes once we sort tasks by whether they include ‘scope creep’ speaks to the fragility of the headline effect. Take a look at Figure 7 in the appendix: “low prior task exposure” overlaps with zero, as does “high external resource needs.” This is potentially one of the most interesting elements of the study, tucked away in the back […] Point estimates are almost certainly not some kind of ground truth with these small groups. I suspect that getting more context about tasks would further trouble this “slowdown effect.”
[Figure 7 also has a caption that's relevant here: Developers are slowed down more on issues where they self-report having significant prior task exposure, and on issues where they self report having low external resource needs (e.g. documentation, reference materials)… ]
Let’s keep in mind that the headline, intro, and title framing of this paper is that it’s finding out what AI does to developers’ work. This is a causal claim. Is it correct to say we can claim that AI and AI alone is causing the slowdown, if we have evidence that type of task is part of the picture?
We could fall down other rabbit holes chasing things that trouble the task-group-average effect at the top of the paper, as in Figure 9 or Figure 17.
[Figure 9 shows that developers are either not slowed down, or even sped up, given the wide error bars, on tasks where scope creep happened]
Unfortunately, as they note in 3.3., “we are not powered for statistically significant multiple comparisons when subsetting our data. This analysis is intended to provide speculative, suggestive evidence about the mechanisms behind slowdown.” Well, “speculative suggestive evidence” isn’t exactly what’s implied by naming a study the impact of AI on productivity and claiming a single element of randomization makes something an RCT.
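The multiple-comparisons worry the authors flag in 3.3 is easy to quantify. As a generic illustration (not the paper's actual subgroup counts): if a true null effect is sliced into k independent subgroups, each tested at p < 0.05, the chance of at least one spurious "significant" subgroup is 1 − 0.95^k:

```python
# Probability of at least one false positive among k independent
# significance tests at alpha = 0.05, under the null (no real
# effect in any subgroup).
ALPHA = 0.05

for k in (1, 5, 10, 20):
    p_spurious = 1 - (1 - ALPHA) ** k
    print(f"{k:>2} subgroup tests -> "
          f"{p_spurious:.0%} chance of at least one spurious hit")
```

By twenty subgroup cuts, a spurious "effect" somewhere is more likely than not, which is why subsetted results in an underpowered study can only ever be suggestive.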
Some clues to this are hidden in the most interesting parts of the study – the developers’ qualitative comments.
[The screenshot shows text from the study saying:
Qualitatively, developers note that AI is particularly helpful when working on unfamiliar issues, and less helpful when working on familiar ones. […]
[…] Sometimes, portions of one’s own codebase can be as unknown as a new API. One developer noted that “/cursor found a helper test function that I didn’t even know existed when I asked it how we tested deprecations./”
On the other hand, developers note that AI is much less helpful on issues where they are expert. One developer notes that “/if I am the dedicated maintainer of a very specialized part of the codebase, there is no way agent mode can do better than me./”
Broadly, we present moderate evidence that on the issues in our study, developers are slowed down more when they have high prior task exposure and lower external resource needs.]
[Note: this matches with the timing data the study itself found, indicating that developers are actually quite accurate about what makes them more or less productive, when you pull out all the inconsistent operationalization and spurious averaging]
[…] This certainly sounds like people are finding AI useful as a learning tool when our questions connect to some kinds of knowledge gaps, and also when the repo and issue solution space provide a structure in which the AI can be an aid. And what do we know about learning and exploration around knowledge gaps….? That it takes systematically more time than a rote task. I wonder: if we looked within the “AI-assigned” tasks and sorted them by how much the developer was challenging themselves to learn something new, would that prove to be associated with the slowdown?
3. Favorite Fiction
Fiction that has deeply influenced me or which I love, or both.
3.1. The Moon Is A Harsh Mistress
It's been years since I read this book, so I don't totally remember what I loved about it clearly. What I do remember is that this is the book that introduced me to the idea of anarchism, with a well drawn, interesting portrait of a lunar penal colony society before its revolution, and how it might function afterward.
3.2. Dune
A complete and utter classic. I've read it so many times that the novelty has worn off in many ways (perhaps part of the reason I like God Emperor of Dune better?), but the in-depth portrayal of how an environment shapes a society like the Fremen, the central mystery surrounding its alien ecology, and the sheer novelty of the world all mean it still deserves a place here.
3.3. Dune Messiah
One of those rare sequels that truly makes the original book retroactively better. In this case, by taking the critique of charismatic leaders that the first book only hinted at – that they do not have your best interests at heart, that however much they may plan to do good, the necessities of leadership will cause them to do evil – and bringing it out into stark relief for the reader to see directly, through the Greek-tragedy downfall of Paul.
3.4. Ender's Game
There have been retroactive attempts to slander this book as covert Hitler apologia, or as a book that was "rotten the whole time" – morally weak attempts to wrestle with what an asshole Card is. A certain type of moralistic progressive can't handle moral hypotheticals, thought experiments, or anything else that might make us sympathise with people who have done bad things, or with even the theoretical possibility of the necessity of doing them in some unlikely cases; they have an incessant need to align the literal realities of fictional works with the mythical propaganda of real-world ideologies to make the former look bad by association, when the latter is only actually bad because in our reality it isn't true. And they literally can't see nuance. The book doesn't even condone the horrible things it portrays Ender as, in some sense, innocent of or pushed into. He spends the entire second book trying to rebuild himself into a different person who couldn't be pushed to do horrible things again, seeing how he was immature and psychopathic; and he spends his life materially, practically trying to undo the damage he's done, where he can. That's restorative justice! What else do we ask of people who have done horrible things – that they simply, conveniently… disappear, perhaps through the use of some sort of moral secret police? (Some other good responses to this nonsense here.) So I don't give a fuck if people think it's evil. It's a fascinating piece of moral fiction.
Moreover, it's also a great portrayal of the horrifying pressure placed on gifted kids and what that does to them. I, and many others, probably related a lot more to this than the moral fiction angle, and any discussion of Ender's Game that leaves it out is horribly misunderstanding the appeal of the book. Being fast tracked, rushed before you feel ready or have had a chance to enjoy your childhood, abused to make you better, pressed to meet expectations that are just barely possible for you but only through burning your soul as fuel. It's all painfully real.
3.5. Speaker for the Dead
The misunderstanding between the "piggies" and the human anthropologists, and the way it allows us to explore a truly alien culture and biology, is incredible. It was my first introduction to xenoanthropology, and it still stands well alongside the Priest's Tale in Hyperion in my opinion. The way that weaves into the story of the Descolada virus adds to that whole plot thread in a fascinating way. I love explorations of alien cultures, biologies, and environments, and how they weave together.
More than that, though, the thing that struck me about this book is the namesake idea – the sort of secular humanist philosophy of "Speakers for the Dead", those who dedicate their lives to radical understanding and compassion, even for the worst people imaginable, because understanding what drove them to become what they were, to do what they did, is part of healing, part of preventing more harm, and because everyone deserves understanding and compassion even as we resist or reject their harmful actions. It's a fascinating idea, and I've termed Speaking for the Dead as one of the few "religions" I might be tempted to join.
It's also an interesting book for how it portrays how one might go about trying to atone for an unforgivable sin: dedicating one's life to the memory and understanding of your victims and people like them (Ender gives up any happy, stable, well fed life he might've had to wander the universe alone and unmoored from time Speaking), and working, in whatever way you can, to undo the damage you did (trying to resurrect the buggers). Some may complain that Ender is not "punished" enough for what he did, but punishment isn't what I want for people who do evil things. I want atonement, however emotionally unsatisfying it may be for our baser lizard brains.
3.6. Hyperion
Dense, firehose worldbuilding that immerses you into a far future universe that's incredibly unique and awe inspiring. A medley of thoughtful, contemplative, fascinating stories each refreshingly different in tone, setting, and subject matter. A terrifying monster. Discussions of religion and spirituality. A haunting central mystery. And great xenoanthropology. What more could you want?
3.7. Schismatrix Plus fiction literature
"Schismatrix is a creeping sea-urchin of a book – spiky and odd. It isn't very elegant, and lacks bilateral symmetry, but pieces break off inside people and stick with them for years" – Bruce Sterling
I'm not exactly sure why I put this one here. I've read a lot of cyberpunk, hard science fiction, and trans/posthumanist literature that has influenced my writing more – much of it is hosted on The Cyberpunk Project, too. But something about this book just sticks in my brain more than the others. This sense of posthuman anti-social loneliness; the depiction of a society that fractures along the lines of different types of posthuman; the investigation of those different approaches to posthumanism; the depiction of accelerating societal change and complexity. I don't know. Of all the cyberpunk works, I feel like this one has given me the most to think about.
Found a good review of it here that says about all I'd want to say.
Update: and now I've discovered deep sympathies for unconditional accelerationism. That makes sense. Just look at some of these quotes from the review:
As a character put it: “Politics pulls us together, technology pulls us apart”. Throughout the book changes in technology, economics and politics force ideologies, habitats, marriages and people to adapt, change or become obsolete. Some people seek to escape all this change by turning to Zen Serotonin, a quasi-religion whose adherents remain in a perpetual state of pleasant serenity thanks to neurochemical implants, others embrace it like the Cataclysts, who think radical change is a good way of opening the eyes of people whether they want it or not.
If there is one repeating theme in the book, it is that nothing ever goes as expected. Although a plan might be wildly successful in the short term, in the long run its consequences will be unpredictable. This is not necessarily bad, but to survive one has to constantly surf the edge – otherwise one is swept away by the wave.
In the short story “Ten Evocations”, which describes the life of a Shaper defector industrialist, the character’s last words are “Futility is freedom!”, which can perhaps be seen as the overarching mood of the entire Schismatrix world. It is impossible to plan for the future, since it is constantly changing. But this chaos is also affected by all our actions, regardless of how slight, and in the end the world is shaped by human wills and visions in an organic fashion.
[…]
Opposed to these visions of flexibility and transcendence stands the option of stasis. If technology and diversity can be controlled, then change can be averted and society held together in a stagnant but secure form. This is the choice of the humans still living on Earth, isolated from their transhuman relatives by a mutual no-contact pact.
3.8. The Dispossessed: An Ambiguous Utopia philosophy fiction literature
I've always loved Le Guin's work, but this book in particular has had the biggest impact on me of any of her work, philosophically speaking. I really love how it explores the benefits and drawbacks of her two societies – a rich, decadent capitalist one, and an ascetic, rough anarcho-syndicalist one – without putting her finger on the scale. I think it does an excellent job both of echoing the critiques we all already have of capitalism with more clear-eyed acknowledgement of its benefits, and showing how anarcho-syndicalism fails.
One of the most important ideas put forward in this book is that, even "after the revolution" – after an anarchist society of any description has been achieved – there are still social forces baked into the human psyche that will inexorably seek to undermine nonhierarchical organizing and individual autonomy, and we will have to be constantly vigilant against them. This is important: many leftists seem to think that once the right organizational forms have been achieved, everything is in order – no more needs to be done. Oh, they'll talk about studying and rooting out all sorts of internalized -isms, but they won't ever bother to study and root out the -isms growing in their very organization – only in each other as individuals.
3.9. God Emperor of Dune philosophy fiction literature
This book, for all its faults (the homophobia) is so full of nuggets of wisdom, insight, and prose poetry. Some of my favorite quotes, pulled from Goodreads because my memory of quotes is not exacting:
In all of my universe I have seen no law of nature, unchanging and inexorable. This universe presents only changing relationships which are sometimes seen as laws by short-lived awareness. These fleshy sensoria which we call self are ephemera withering in the blaze of infinity, fleetingly aware of temporary conditions which confine our activities and change as our activities change. If you must label the absolute, use its proper name: Temporary.
“For what do you hunger, Lord?” Moneo ventured. “For a humankind which can make truly long-term decisions. Do you know the key to that ability, Moneo?” “You have said it many times, Lord. It is the ability to change your mind.”
Most believe that a satisfactory future requires a return to an idealized past, a past which never in fact existed.
"The problem of leadership is inevitably: Who will play God?" Muad'Dib
You should never be in the company of anyone with whom you would not want to die.
The difference between a good administrator and a bad one is about five heartbeats. Good administrators make immediate choices. […] They usually can be made to work. A bad administrator, on the other hand, hesitates, diddles around, asks for committees, for research and reports. Eventually, he acts in ways which create serious problems. […] “A bad administrator is more concerned with reports than with decisions. He wants the hard record which he can display as an excuse for his errors. […] Often, the most important piece of information is that something has gone wrong. Bad administrators hide their mistakes until it’s too late to make corrections.
Scratch a conservative and you find someone who prefers the past over any future. Scratch a liberal and find a closet aristocrat. It’s true! Liberal governments always develop into aristocracies. The bureaucracies betray the true intent of people who form such governments. Right from the first, the little people who formed the governments which promised to equalize the social burdens found themselves suddenly in the hands of bureaucratic aristocracies. Of course, all bureaucracies follow this pattern, but what a hypocrisy to find this even under a communized banner. Ahhh, well, if patterns teach me anything it’s that patterns are repeated. My oppressions, by and large, are no worse than any of the others and, at least, I teach a new lesson. —
There has never been a truly selfless rebel, just hypocrites—conscious hypocrites or unconscious hypocrites, it’s all the same.
Dangers lurk in all systems. Systems incorporate the unexamined beliefs of their creators. Adopt a system, accept its beliefs, and you help strengthen the resistance to change
Beware of the truth, gentle Sister. Although much sought after, truth can be dangerous to the seeker. Myths and reassuring lies are much easier to find and believe. If you find a truth, even a temporary one, it can demand that you make painful changes. Conceal your truths within words. Natural ambiguity will protect you then.
Police are inevitably corrupted. […] Police always observe that criminals prosper. It takes a pretty dull policeman to miss the fact that the position of authority is the most prosperous criminal position available.
And my favorite quote of all:
I assure you that I am the book of fate.
Questions are my enemies. For my questions explode! Answers leap up like a frightened flock, blackening the sky of my inescapable memories. Not one answer, not one suffices.
What prisms flash when I enter the terrible field of my past. I am a chip of shattered flint enclosed in a box. The box gyrates and quakes. I am tossed about in a storm of mysteries. And when the box opens, I return to this presence like a stranger in a primitive land.
Slowly (slowly, I say) I relearn my name.
But that is not to know myself!
This person of my name, this Leto who is the second of that calling, finds other voices in his mind, other names and other places. Oh, I promise you (as I have been promised) that I answer to but a single name. If you say, “Leto”, I respond. Sufferance makes this true, sufferance and one thing more:
I hold the threads!
All of them are mine. Let me but imagine a topic—say … men who have died by the sword—and I have them in all of their gore, every image intact, every moan,
every grimace.
Joys of motherhood, I think, and the birthing beds are mine. Serial baby smiles and the sweet cooings of new generations. The first walkings of the toddlers and the first victories of youths brought forth for me to share. They tumble one upon another until I can see little else but sameness and repetition.
“Keep it all intact,” I warn myself.
Who can deny the value of such experiences, the worth of learning through which I view each new instant?
Ahhh, but it’s the past. Don’t you understand? It’s only the past!
This morning I was born in a yurt at the edge of a horse-plain in a land of a planet which no longer exists. Tomorrow I will be born someone else in another place. I have not yet chosen. This morning, though—ahh, this life! When my eyes had learned to focus, I looked out at sunshine on trampled grass and I saw vigorous people going about the sweet activities of their lives. Where … oh where has all of that vigour gone?
—The Stolen Journals
3.10. Harrison Bergeron fiction
You all know this one. I won't belittle you by explaining it.
3.11. Nova
Like most of the books I've listed here, this is one I should, want to, and fully intend to revisit sometime soon – but perhaps this one especially, because I remember much less of it than I'd like due to the circumstances under which I read it. It's a dark yet alluring, romantic, and epic tale, reminiscent in tone of works like The Count of Monte Cristo, inspired by Moby Dick. It's far future space faring cyberpunk space opera, yet it's the book that has made me most sympathetic to mysticism and spiritualism through its fascinating treatment of religion and specifically the Tarot. A quote:
Mouse, the cards don’t actually predict anything. They simply propagate an educated commentary on present situations…The seventy-eight cards of the Tarot present symbols and mythological images that have recurred and reverberated through forty-five centuries of human history. Someone who understands these symbols can construct a dialogue about a given situation. There’s nothing superstitious about it. The Book of Changes, even Chaldean Astrology only become superstitious when they are abused, employed to direct rather than to guide and suggest.
Ultimately, I disagree with the characters in Nova, of course – because when you use a Tarot deck, you aren't just using the archetypes and ideas represented in the cards to structure a dialogue; you're drawing a specific set of those archetypes, in a specific relational order with each other, and then trying to shoehorn the dialogue into the shape presented by the cards. Moreover, even those archetypes themselves, when you try to structure your whole dialogue around them, as things like the Tarot and Jungian psychology encourage, involve a sort of shoehorning if the ideas and things you discuss don't fit neatly into the limited selection of already-existing categories. Combine this fundamental flaw with the cute reversal of modern secular culture presented in the book – where disbelieving the Tarot is the backwards, naive, and fundamentalist position, while believing it is the default position of worldly sophisticates – a fun and neat trick that nonetheless feels forced in the presence of that flaw, and the idea becomes slightly grating.
Despite these flaws (as I see them), there is something interesting here, because you can refuse to lock yourself into viewing a problem only through the lens of something like the Tarot. If, instead, you treat it as a way to possibly generate new insights – the random combinations of the cards, and their highly abstract nature, force you to analogize them to aspects of the concrete problem you're thinking over, which in turn forces you to view those concretes in a new light, from different angles, jogging you out of one fixed way of thinking – but one you can ultimately walk away from or dismiss if it doesn't provide useful insights, it might be useful. As someone who often gets rigidly stuck in one line of thinking about a problem and has trouble getting out of it in order to try different approaches or think outside the box, I can see the appeal.
One might argue that large language models are a sort of more advanced, complex version of this: their outputs have no inherent meaning, reasoning, or truth – they are just random assemblages of concepts and archetypes expressed through tokens – to which I assign all meaning and intentionality and reasoning at reading time, but I can still use them as a way to get over coding block, or think about things from a new angle, or think through things more thoroughly than I otherwise might. And in fact, in a sense, the concepts/archetypes learned by LLMs are even more powerful (and certainly more numerous, nuanced, and varied) than something like the Tarot, because they're learned from basically the entire corpus of text written by humans in the most common languages.
3.12. Blindsight
A terrifying Gothic posthuman cyberpunk horror novel exploring the edges of consciousness and the connection between mind and brain in a stomach-turningly up front, almost medical way. Jam-packed with fascinating ideas. Plus, a core cast of neurodivergent team members with their own specialties, and a vampire. This is one I really need to revisit more! If I'd encountered it prior to Chasm City and Revelation Space, it might've influenced my writing in their stead.
3.13. House of Suns
One of the very few science fiction books that I know of that deals with the truly far future. Posthumanity in this novel measures time in galactic rotations! This book will just overwhelm you with a sense of insane awe and wonder exploring this terrifyingly old, strange, far future universe, while also simultaneously telling the most gothic science fiction story I've ever read. It's one of my favorite books (my IRL chosen name is taken partially from it!)
3.14. The Stand
Shifting from apocalyptic breakout/contamination fiction to post-apocalyptic Western, to magical realist society-building fiction, while following the arcs of multiple, interesting, well realized heroes and villains, as well as my favorite fictional antagonist of all time, Randall Flagg himself, The Stand is a true American epic and, in the future, I think it will be viewed as a Dickens-tier classic.
3.15. Revelation Space and Chasm City
Out of all the books here, these two have most influenced my writing, both in terms of what I want my fiction to be about (genre), and in terms of how I write it (prose, mostly). The pitch black Gothic cyberpunk transhuman far future space opera horror of this series is unparalleled, except for maybe Blindsight. I like the whole Revelation Space trilogy, but it's really the first one and Chasm City that spoke to me the most.
3.16. Three Body Problem, Dark Forest, and Death's End
This is a truly incredible series that honestly kind of ruined most other hard science fiction space opera for me. It is so chock full of fascinating (if deeply far fetched) ideas about the cosmos, about theoretical physics, about game theory and politics, and it explores them all unabashedly, through the lens of a very traditional Chinese author, which just makes it all the more interesting (even if I disagree with some things as a result). From the initial mind bending mystery of "physics going wrong" leading to the sophons, to the biology and culture of the Trisolarans, to the Chinese Cultural Revolution and the sacking of Constantinople, to the Dark Forest solution to the Fermi paradox, to the idea of Wallfacers, to the explorations of higher dimensional space and the more perfect past cosmos, to the insane cosmic weapons later in the series, to the end of the very universe itself… it's just incredible. I simply can't do it justice without rereading the trilogy and doing a whole gigantic review (which I should probably do at some point).
3.17. Hardwired fiction
Neuromancer is CYBERpunk. In my opinion this is the book that gets cyberPUNK right. Hardwired feels more street level, down to earth, it feels like my life and the lives of the people I know in a sense? Tragic and fucked up and disorganized and poor and kinda queer, and there's all this cyberpunk stuff around but it's part of the texture of the world, not the whole world itself. Whereas Neuromancer is cold and removed and slick and cybernetic all the way through, and the characters are all like high level guns for hire. They may be broke but it's still cool. Thus while Neuromancer is a masterpiece of prose and vibes, Hardwired spoke to me more as a story.
3.18. Neuromancer and Burning Chrome fiction
Nothing will ever surpass the wild electric psychedelics of Gibson's prose in Neuromancer – not even, or perhaps especially not, the later, more refined, more controlled Gibson. This is the perfect, ultimate exemplar of cyberpunk's "crammed prose full of eyeball kicks." The imagery, the worldbuilding, the metaphors, they're all incredible too. The only thing lacking is the plot and characters and it is here where I think Hardwired exceeds Neuromancer as a novel. Nonetheless, as an inspiration for prose, Neuromancer stands alone.
Meanwhile, although Burning Chrome's prose doesn't reach quite the same heights as Neuromancer – although it gets much closer, especially in some stories, than most other things – the stories themselves are much more interesting. My personal favorites are "The Gernsback Continuum" and "Hinterlands."
3.19. The Wheel of Time
The Wheel of Time is the apotheosis of epic fantasy. Nothing has more fully expanded upon and delivered the full potential of classic epic fantasy than this series. This is a series that is not afraid of trauma, violence, politics, and war, all the serious things, but it is also not in the least bit afraid, as many "serious" fantasy authors are, of grand displays of power, of blending into sword and sorcery fantasy, of melodrama and excess in all the best ways, of rich obsession with detail. And unlike Game of Thrones it does all this with a love for the genre and its tropes and subject matter, not a hatred and subversion of them. This is THE fantasy series for me, full stop.
Ranking of favorites:
- The Gathering Storm
- The Great Hunt
- The Shadow Rising
- Lord of Chaos
- The Towers of Midnight / A Memory of Light
- A Crown of Swords
- The Path of Daggers
- The Fires of Heaven
- The Knife of Dreams
- The Eye of the World
- The Dragon Reborn
- Crossroads of Twilight
- Winter's Heart
3.20. The Dark Tower
The Dark Tower is unlike anything else I've ever read. It is strange, dark, apocalyptic, and surreal, absurd, metatextual, painfully emotional and heartfelt. A Western, a pulp science fiction novel, a high fantasy novel, that takes place mostly in New York. I have never read anything like it before or since and it truly lives rent-free in my head. I love it deeply.
Ranking of favorites:
- The Waste Lands
- The Drawing of the Three
- The Gunslinger
- The Dark Tower
- Wizard and Glass
- Song of Susannah
- Wolves of the Calla
3.21. At The Mountains of Madness
I haven't read as much Lovecraft as I'd like to – I absolutely intend to read all his stories; I just love the writing and atmosphere and ideas so much, and there's truly nothing like original Lovecraft even with all of the subsequent imitators, myself included – but out of what I have read of his (The Nameless City, The Shadow Over Innsmouth, Beyond the Wall of Sleep, The Call of Cthulhu, and this, so far), ATMOM is by far my favorite. Although all of Lovecraft's stories contain elements of the exploration of ancient, alien, cosmic history, ATMOM is the most focused specifically on that idea. As a result, it's not just great horror, cosmic or otherwise – it's interesting science fiction as well!
3.22. Anchorhead
Anchorhead is my favorite parser interactive fiction game of all time – the only one I've played all the way through, as of yet, in fact, although not the only one I've started or even gotten fairly far through, which I think should be a testament both to how bad I am at puzzles and just how good Anchorhead is. It effortlessly updates everything that made Lovecraft's stories incredible for the modern day (doing away with the thinly veiled racism, having a female protagonist and a central horror and mystery that in part deals with things like patriarchal violence and classism/capitalism) while keeping the story enthralling, horrifying, atmospheric, epic, and just… honestly, incredible. This is the best Lovecraftian work not written by Lovecraft himself in my opinion. More than that, it works with its medium excellently, with puzzles that enhance the story instead of getting in the way of it, that are difficult enough to be challenging without being too hard to solve.
3.23. The Nameless City
“That is not dead which can eternal lie, And with strange aeons even death may die.”
The above is perhaps the most famous Lovecraft quote, and for good fucking reason. The Nameless City may be one of his earlier stories, but it is still enthralling, suspenseful, terrifying, and gets at something deep in the human psyche. It also, incidentally, has the most ancient-alien anthropology alongside ATMOM, which is probably another reason why it's one of my favorites of his.
3.24. The Red Rising Saga
The first Red Rising trilogy is an incredibly dramatic, painfully emotional, brutally violent, coldly pragmatic adult deconstruction of the "young adult revolutionary" genre. It is to Hunger Games as Neon Genesis Evangelion is to something like Mazinger Z. Interestingly, as the series progresses, it matures further and further, introducing more emotional and moral complexity as it goes.
This theme continues in the second Red Rising trilogy (now becoming a quadrology), which is my favorite. These books get much longer and the plots much more complex (including more political factions, multiple POV characters, and larger scope and timeframes), and the character development, moral questions, and political themes around the nature of revolution and government and war all grow much more "adult" in the good way (challenging you, making you think, dealing with difficult topics – not just adding more sex and violence). The second series asks the question of what to do after "the revolution." How do you govern, how do you protect what you've built, without giving up on the very ideals that brought you there? How do revolutionaries who were motivated only by hate for what they fought against, with no vision of the future, survive and adapt to that new world? How do fundamentally damaged people – the kind of people necessary to push through, and/or produced by, a violent revolution – make a better world? What does the heroic struggle of politicians, war leaders, and even rank-and-file terrorist and radical factions look like to those on the ground? Does a revolution that changes who's in charge really help the average person? Few stories, in my opinion, have the balls to actually face the questions that this second series does.
Another thing I appreciate about Red Rising is the portrayal of a hyper competent protagonist that you might be tempted to label a Gary Stu if it weren't for the fact that he is deeply, fundamentally flawed in so many painful ways, and he and everyone he loves are continually forced to face the price, not just materially – people dying and losing fights – but emotionally – some of his closest friends slowly grow to resent him, he loses his sense of identity – and he is forced to make himself grow and learn and change in order to overcome those flaws.
3.25. Stone Butch Blues queer philosophy
I don't have the words to do this book anything like the justice that it deserves; you simply have to read it. This book gets to be in rarefied air among the books that have changed me in some way. But if I had to describe it, I'd say three things:
- Although written in spare, staccato language – the kind of language you'd expect from its main character – so spare that you can see the bones of the story working – this book just reaches out and grabs your heart and doesn't let go. Being inside the main character's head, inside her defensive walls, you can feel the desperate loneliness, fear, tenderness, need to be loved and useful, and also the bitterness, the anger, the hopelessness, coming through in deep waves. It is heartbreakingly honest, heartbreakingly real.
- This is a book about tragedy. Constant, painful tragedy that will crush your soul and make you wall yourself off from emotions, and eventually hope, for its main character, in a metatextual shadow of what it does to Jess. But it is also, even more so, a book about the small, bright sparks of community, family, love, purpose, fulfillment and self-expression that we can find in between those tragedies, and that latter thing is what I find most beautiful about it. Those bright sparks, that can sometimes be found in the most unexpected and mundane places.
- This is also very deeply a book about solidarity, across all possible lines, about acceptance, and common struggle, and learning and growing politically and emotionally and as a person. It is so wise and compassionate, even when Jess isn't. That's another part of what makes me love it so deeply.
As if that weren't enough, this book has further spurred me on to explore my gender. I very much feel as much a butch as I am a woman, despite being trans. This helped with figuring that out.
3.26. The Last Question fiction
I'm not usually a huge fan of Isaac Asimov, because his writing and characters are just so painfully bland, even though his ideas are often interesting, but this particular short story is deeply striking to me. I have a thing for extreme far future science fiction, and also cosmic existential questions like this.
3.27. The Nine Billion Names of God fiction
A deeply evocative story that does one of my favorite things: mixing occult, eschatology, and technology.
3.28. The Kernel Hacker's Bookshelf: Ultimate Physical Limits of Computation fiction
A fun exploration into the speculative ultimate limits of what physical matter can be made to do, when computation is only restricted by fundamental laws of physics and information theory, as opposed to petty concerns such as "not wanting your computer to instantly be converted entirely to plasma the moment you turn it on" and "not wanting your computer to become a thermonuclear warhead." Excellent food for speculative far future science fiction!
3.29. Thinking Meat fiction
A funny parody of how some people seem to think about the concept of machines thinking.
3.30. The Library of Babel fiction
An eerie, fascinating thought experiment invoking information theory, combinatorics, and infinity. I love the way it traces all the natural human reactions to the Library's nature — first boundless optimism, then despair, then purification — and all the strange theologies and metaphysics that might stem from humans finding themselves in such a universe. I love the strange mathematical cosmic horror of it. This companion essay is well worth reading as well: Willard van Orman Quine: Universal Library.
3.31. Terra Ignota
This is a very, very strange series.
Written in the style of French Enlightenment novelists, set five hundred years in the future. Preoccupied with the concerns, philosophies, writing styles, and people of the past — its past and ours — while resolutely forward-facing, about saving the future, making the future, and what kind of future might best fulfill humanity, if it's about anything at all. About a utopia that, although it's revealed to be rotten and nepotistic and degenerate at its core, can nonetheless still somehow lay claim to the term "utopia" — ambivalent and ambiguous, not unlike something Le Guin would write. That grapples with questions of societal control, religion, gender, nations and their borders, political philosophy and economy, densely layered with psychology and philosophy and unreliable narrators and epistolary formats of all kinds, as well as metatextual devices (from seeming fourth-wall breaks to in-world censorship and manipulation of the texts for various factional political purposes), while still feeling densely packed with plot motion and emotional beats that you can barely keep up with. Ranging wildly from political thriller to philosophical novel, or even sometimes written in the style of the Marquis de Sade. Ugly-beautiful, heart-wrenchingly hopeful, cruelly compassionate, relatable and familiar but deeply alien, and kind to a fault, all at once or by turns, without ever feeling like it's at war with itself.
It's so hard to disentangle, for me, what the author is trying to say or believes from what the narrators are saying or believe, about the themes of the books. So difficult to know even what the world and events of the books were "truly" like — after all, the narrator literally opens the story by saying that his main benefit, qua narrator, is precisely that he's insane, so you can dismiss or accept whichever parts of his narrative you like with impunity, and you see the fandom doing just that! And so difficult for me to decide even what I think or feel about the events as described, even leaving aside all that, because the evil and grace of each side is displayed so evenly and compellingly it feels impossible to choose, emotionally, even though I know what I should stand for; and when combined with all the other uncertainties, all I can really say is this: I've thought about this series, a lot. Dreamt about it. It changed me.
4. Misc other philosophy and technology books on my (long-term) reading list
- Studies in Mutualist Political Economy
- Exodus (by Kevin A. Carson)
- The Homebrew Industrial Revolution
- The Desktop Regulatory State
- A Thousand Plateaus
- Difference and Repetition
- Anti-Oedipus
- Ethics: Inventing Right and Wrong
- The Human Use of Human Beings
- Design for a Brain
- An Introduction to Cybernetics
- The Macroscope
- The Anti-Christ
- Beyond Good and Evil
- Thus Spake Zarathustra
- The Genealogy of Morals
- The Gay Science
- Discipline and Punish
- Madness and Civilization
- The Order of Things
- The Archaeology of Knowledge and The Discourse on Language
- Against Method
- Structure of Scientific Revolutions
- The Varieties of Religious Experience
- God and the State
- Essays in Radical Empiricism