Accelerationism Triptych 02: Technology is an engine of possibility
In the first part of this thematic triptych, I discussed the affective aspect of accelerationism — the strange, almost sadomasochistic but also freeing allure of deterritorialization and the space it makes for change, the new, the outside, the different, new freedoms and new arrangements. In this essay, I want to discuss how the rejection and fear of this allure, both in itself and because of a belief that deterritorialization must always be combined with exploitative capitalistic reterritorialization, has led the left down a blind alley, and what we might be able to do to free the forces of accelerating deterritorialization from capitalism, and learn to work with and enjoy them.
The Dismal Left
Robin Mackay and Armen Avanessian, in Accelerate, once described the crisis of the contemporary left as:
[a] crisis [that] perversely mimics its foe, consoling itself either with the minor pleasures of shrill denunciation, mediatised protest and ludic disruptions, or with the scarcely credible notion that maintaining a grim 'critical' vigilance on the total subsumption of human life under capital, from the safehouse of theory, or from within contemporary art's self-congratulatory fog of 'indeterminacy', constitutes resistance. Hegemonic neoliberalism claims there is no alternative, and established Left political thinking, careful to desist from Enlightenment 'grand narratives', wary of any truck with a technological infrastructure tainted by capital, and allergic to an entire civilizational heritage that it lumps together and discards as 'instrumental thinking', patently fails to offer the alternative it insists must be possible, except in the form of counterfactual histories and all-too-local interventions into a decentred, globally-integrated system that is at best indifferent to them. The general reasoning is that if modernity = progress = capitalism = acceleration, then the only possible resistance amounts to deceleration, whether through a fantasy of collective organic self-sufficiency or a solo retreat into miserablism and sagacious warnings against the treacherous counterfinalities of rational thought.
It really gnaws at me how some leftists seem to have given up on technology, the future, and desire in favor of a sort of Canutism, always wishing to go back to before the last technology was invented, or still further. Even number, language, reasoning, and mathematics are seen as perhaps a step too far from the garden of Eden that surely must once have existed. Every technology is met with a cynical lack of imagination as to how it could be made or used without exploitation, only because it's made that way under capitalism; and every resistance to that mentality is met with the shallow dismissal that "all technologies carry a political agenda" — a thought-stopping cliché that permits infinite sociological, material, and genealogical analysis, but no counterfactual analysis, no imagination, no repurposing.
Even the left-accelerationists — those of "Fully Automated Luxury Communism" — are, in a sense, partisans of reterritorialization, of fear and retrenchment. Yes, they embrace technological development, the machine, but perversely only as a means of that reterritorialization: not as something that brings new possibilities in from Outside, unimaginable possibilities and subjectivities that we'd barely recognize as human, natural, or even good, upending control and foreknowledge and the linear flow of time, but instead as something that makes it easier to bureaucratically control, make legible, and harness for humanist ends the operations of the present and future; the use of cybernetics not to create a runaway positive feedback loop away from restricted economies and into the sovereign and inhuman, but a negative feedback loop that maintains meta-stability — the management cybernetics of Stafford Beer.
Note here that this is not every strain of leftism: there are leftists still committed to decentralization, transhumanism and the posthuman, radically new subjectivities, free and open source software and hardware, open access, DIY making, building alternative stacks, and using technology and the new subjectivities technocapital makes possible (e.g., all the leftist education happening for younger generations on TikTok) to undercut power, leftists who aren't afraid of markets, interdependence, and technological development. But often even they can imagine an accelerationist, tech-friendly leftist future in some areas but not others, or call for regulation and law as an aid to their projects; and even they are becoming increasingly marginalized in the left's general social discussions.
Every possible advance is dismissed with nihilistic hopelessness, hollow-chested ressentiment, or the naturalistic fallacy.
Life extension
Take life extension. Anti-tech leftists claim it's contrary to human nature, or undercuts the "meaning of life," but that's just the worshipping of arbitrary limits imposed by contingent material conditions as inevitable — the naturalization of present facts, which so many leftists rail against in other scenarios. As William Gillis says in the Anarcho-Transhumanist FAQ:
The notion that an objectively “good life” extends to seventy or a hundred years but no further is clearly arbitrary, and yet such an opinion is both nearly universally held and violently defended. Many early transhumanists were shocked by the bizarreness and brazenness of this response, but it illustrates how people will become staunch proponents of existing injustices for fear of otherwise having to reconsider standing assumptions in their own lives. In the same way that people will defend mandatory military service or murdering animals for food, the arguments for death are clearly defensive rationalizations:
“Death gives life its meaning.”
How is death at 70-years-old more meaningful than death at 5-years-old or at 200-years-old? If an eighty-year-old woman gets to live and work on her poetry for another five decades, does that really undermine your capacity to find meaning so badly that you’d have her murdered?
“We would get bored.”
So let’s build a world that isn’t boring! Never mind the wild possibilities embedded in both anarchism and transhumanism, it would take almost three hundred thousand years to read every book in existence today. There’s already 100 million recorded songs in the world. Thousands of languages with their own ecosystems of conceptual associations and poetry. Hundreds of fields to study on rich and fascinating subjects. Vast arrays of experiences and novel relationships to try. Surely we can do with a few more centuries at least.
“Old static perspectives would clog up the world.”
It’s pretty absurd and horrifying to instinctively appeal to genocide as the best means to solve the problem of people not being plastic in their perspectives or identities. […] There are no doubt infinite myriad ways we might live and change, but it would be strange indeed if the sharp binary of sudden, massive and irreversible loss that is currently standard was universally ideal.
This is an illustrative example in that it gets to the heart of what transhumanism offers as an extension of anarchism’s radicalism: the capacity to demand unexamined norms or conventions justify themselves, to challenge things otherwise accepted.
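The reading-time figure quoted above roughly checks out as back-of-envelope arithmetic. This is a sketch under my own assumptions, not the FAQ's: roughly 130 million distinct books (Google's widely cited 2010 estimate), read at a pace of one per day:

```python
# Rough sanity check of the "almost three hundred thousand years" claim.
# Assumptions (illustrative): ~130 million books, one book read per day.
books = 130_000_000
books_per_year = 365
years_to_read_everything = books / books_per_year

print(round(years_to_read_everything))  # on the order of 350,000 years
```

Even halving the book count or doubling the reading pace leaves a figure far beyond any current lifespan, which is the only point the quote needs.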
Space travel
Or space travel: all anti-tech leftists seem able to see is the opportunity cost, treating it as "stealing" resources for "useless projects," when the secondary benefits of space travel, in terms of jobs created and technologies innovated, are well known.
Or they talk about space colonization as violent and colonial, in a triumph of political aesthetics over substance. In each case, something that is vaguely similar to something bad from history, but not in a meaningful way, is treated as morally equivalent:
Biological Contamination and Ecological Devastation: The spread of deadly pathogens was used as a form of biological warfare, playing a part in genocide against Indigenous peoples, both intentionally and unwittingly. […] Biological contamination is not a politically neutral or accidental phenomenon and will always have an effect in the environment in which it is taking place amongst all actors involved – both human and nonhuman. This is true for both forward and backward contamination in missions to other planetary bodies. Forward contamination will irreversibly change any extant extraterrestrial microbiome.
They try to nail this down to something practical that actually means something — making sure that we don't leave marginalized groups behind if there's backward contamination, like we did with COVID — but even they admit this is a far-fetched possibility.
Commodification and Appropriation of Land and Resource Extraction: The commodification of land through extractive practices has led to significant disruption of the ecosystems that Indigenous communities rely upon for their livelihoods. […] Current structures for in-situ resource utilization on other worlds are analogous to some of these past and current practices on Earth. Most immediately, lunar resource maps seek to enable public and private sector mining actors to plan for extraction of water ice and other resources. Similar proposals exist for asteroid mining. This is presented under a guise of “sustainability,” but in actuality replicates the practices of extractive capitalism that have contributed to the environmental degradation of Earth. In the long-term, this exploitative approach to extraterrestrial exploration will be similarly detrimental.
How is the prospect of eventually, possibly, in the deep-time far future (the solar system is vast and has a lot of resources), using up brand-new raw material, which is just an inevitable byproduct of existence and entropy, similar to environmental destruction in a place we actually live, in the near future? How is using the resources space — where no one lives — presents to us similar to land appropriation that actually hurts real Indigenous people? Who are we appropriating land from, guys, fucking Martian Manhunter?
Public-Private Partnerships as a Colonial Structure: Private individuals and institutions, in collaboration with governments, are a key aspect of the colonial structure. For example, the East India Company was fundamental to British expansion across the Eastern hemisphere and took a central role in colonial domination and political control as well as trade.24 More recent examples include the influence of American fruit companies in the United States’ interventions into Latin American politics during the Cold War.25 In the United States, treaties signed with Native American nations have repeatedly been broken, often by settler colonialist individuals working in tandem with the US government and military. […] These examples are mirrored in the active role private industry is currently taking in space exploration. Presently, there is little to no oversight by national governments or international structures. […] For example, the privately-funded and state-operated Beresheet lunar lander crashed on the Moon and accidentally released thousands of tardigrades.
Once again, this is a prioritization of associative aesthetic similarity over substantive argument. Private corporations taking part in exploration and trade during the colonial age was bad because, as they state, they dominated, exploited, harmed, and broke treaties with real Indigenous people. Who is being dominated, exploited, harmed, and who do we have treaties with, in space? The fact that they give accidentally releasing tardigrades onto the Moon as a case in point example shows you just how weak this similarity is.
Moral Consideration of Extraterrestrial Microbial Life: There must be further discussion of what moral consideration microbial life on other worlds should have, beyond their scientific significance, as others have considered previously.29 Considerations of “intelligence” or “non-intelligence” should not be used as the framework for this discussion. Not only do biological distinctions of intelligence have a racist history, they do not hold scientific merit. It is clear that microbiology is foundational to Earth as we know it, and microbes are deserving of moral consideration
Ah yes. The very idea that some beings are intelligent and others aren't — however we choose to define that — is racist and "does not hold scientific merit," and therefore we have to treat microbes as moral subjects.
This is metaphysical nonsense. Matter does not have mind; therefore it cannot have desires or intentions, assign meaning to itself, or enter into mutually reciprocal moral relationships; therefore, it is not a moral subject. The physical world does not come with meaning or ethical values embedded in it — that's ontologically incoherent — the human mind gives things their meaning and value. (For more on my ethics, see here.)
Obligations to Potential Future Life: Even if there is no extant microbial life on Mars or beyond, we must consider the impacts of our actions on geologic timescales.
I thought we were against valuing hypothetical future life on infinitely long timescales that we can't possibly know or predict over present human interests, needs, and wants, based on the reality we actually know? Isn't this just longtermism, but only for microbes, not people, and couched in leftist language? But it sure sounds good, because it says "obligations."
Ethical Interactions with Potential Microbial Life: Any first contact scenario with extraterrestrial microbial life will occur at the microbial scale, one that human explorers will not be aware of. It will be a conversation between two microbiomes we will not be privy to and which we will have minimal ability to influence. […] We must first reject the idea that microbial life is beyond moral consideration due to the label of “non-intelligence” or the claim that Mars is an empty place. We cannot repeat the notions of “terra nullius”31 that perpetuated colonial violence on Earth. Instead, we must explore anticolonial perspectives and implement those philosophies into our mission designs and scientific practices, letting these guide our approach to interactions with extraterrestrial life.
They literally are non-intelligent, though. They will not have "conversations" or "first contact." The authors say we "must reject" the idea that microbes are non-intelligent, and treat them as moral subjects capable of conversation and first contact — an animist position that should be defended with philosophical argument — and yet they present no arguments to this effect, just statements. Extraordinary claims can be dismissed without extraordinary evidence. Likewise, they say that we "must reject" the idea that Mars is a terra nullius, but on the basis of what? Nothing more than the fact that this was said about colonized lands — but the reason it was unethical to use that excuse for colonization is that it wasn't true: Indigenous people were living there, cultivating and maintaining the land, and depending on it. It actually is true, if not literally then ethically, in the case of a fucking space rock.
Preservation of Environments on Non-Habitable Worlds: Current plans for the Moon place in-situ resource utilization as a fundamental component of a long-term presence. Current policy does not adequately address questions relevant to preservation beyond sites of scientific value, and ignores questions of whether certain environments should be preserved for […] their intrinsic value.
Again, more animist metaphysics that requires defense to be taken seriously — and I probably wouldn't find their defenses convincing anyway.
Aesthetics should also be considered. If Moon mining is to be an extensive enterprise as is planned, those changes will be visible from Earth, fundamentally changing one of the few communal human experiences of gazing at the Moon.
This is that fear of change again. Things change all the time. What's the big deal?
In addition, the Moon and other planetary bodies are sacred to some cultures. Is it possible for those beliefs to be respected if we engage in resource utilization on those worlds? Lunar exploration must be prepared to adjust its practices and plans if the answer is no.
If one culture's spiritual beliefs about the sacredness of things that don't materially affect them lead it to seek to restrict other cultures' actions, then that's wrong, end of story. If a Christian thinks fetal life is sacred and wants to stop an abortion, or thinks "the body God gave you" is sacred and wants to stop gender-affirming care, or a Jehovah's Witness wants to stop people from getting blood transfusions, or some other religious sect wants to take down all "graven images" or something, we'd recognize that as wrong. Why not here?
Transhumanism
Or what about transhumanism? Once again, the popular strain of anti-tech sentiment in the modern left sees transhumanism as neo-eugenics, despite transhumanism stemming from leftist values, as "Conspiracy Theories, Left Futurism, and the Attack on TESCREAL" points out:
Transhumanism has been associated with Silicon Valley libertarianism for decades, but in fact has been a loose global culture that leans more to the political left than to libertarianism. […] The roots of transhumanist thought can be traced through Marxists like J.B.S Haldane and John Desmond Bernal […]. […] In 2014 many transhumanists around the world signed The Technoprogressive Declaration […]
[…] Trans rights can be seen as one of the first major political confrontations over transhumanism, with technology completing the feminist deconstruction of gender, as outlined in Martine Rothblatt’s 2011 From Transgender to Transhuman: A Manifesto On the Freedom Of Form.
As the fights over birth control, trans rights and universal healthcare make clear, the progressives have argued the case that everyone should have access to technologies that have been proven safe and effective regardless of hypothetical future consequences […]
And there are positive strains, such as democratic transhumanism. Transhumanism also does not require any coercive or centralized control, nor eugenic selection of babies, which is the real ethical problem with eugenics, as even the anti-transhumanist left admits:
In contrast, “modern transhumanism,” […] imagined that by enabling individuals to freely choose whether, and how, to undergo radical enhancement, a superior new “posthuman” species could be created. According to Nick Bostrom (2013, 2005a), a “posthuman” is any being that possesses one or more posthuman capacities, such as an indefinitely long “healthspan,” augmented cognitive capacities, enhanced rationality, and so on.
One might balk at the idea of a "superior" species, but are some humans that exist today not already superior in just that way, aided by better healthcare, better nutrition, better training and education, and even medical enhancements such as psychiatric medication? Currently, medical enhancements are limited to "restoring deficiencies," but that is an arbitrary limitation, especially when a large proportion of the human population suffers from those deficiencies on average, so that anyone who has them remediated is de facto "superior." And since when did ethics or democracy or equality depend on the equality of capabilities of human beings, as opposed to recognizing their moral equality? What is this, Harrison Bergeron?
Moreover, as "Morphological Freedom – Why We not just Want it, but Need it" says, transhumanism is a natural extension of core human rights that most libertarian leftists would already agree to:
I am hoping to demonstrate why the freedom to modify one’s body is essential not just to transhumanism, but also to any future democratic society. […]
From the right to freedom and the right to one’s own body follows that one has a right to modify one’s body. If my pursuit of happiness requires a bodily change – be it dying my hair or changing my sex – then my right to freedom requires a right to morphological freedom. My physical welfare may require me to affect my body using antibiotics or surgery. On a deeper level, our thinking is not separate from our bodies. Our freedom of thought implies a freedom of brain activity. If changes of brain structure (as they become available) are prevented, they prevent us from achieving mental states we might otherwise have been able to achieve. There is no dividing line between the body and our mentality, both are part of ourselves. Morphological freedom is the right to modify oneself.
Computers and AI
Likewise, these same strains see generally capable machines or programs as inherently "unsafe," as if we should not have computers, or anything whose applications we cannot predict:
Unlike the “narrow AI” systems that TESCREALists lamented the field of AI was focused on, attempting to build something akin to an everything machine results in systems that are unscoped and therefore inherently unsafe, as one cannot design appropriate tests to determine what the systems should and should not be used for. Meta’s Galactica elucidates this problem. What would be the standard operating conditions for a system advertised as able to “summarize academic papers, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more”? It is impossible to say, as even after taking into account the number of tasks this system has been advertised to excel at, we still don’t know the totality of the tasks it was built for and the types of expected inputs and outputs of the system, since the advertisement ends with “and more.” More generally, system safety engineering expert Heidy Khlaaf wrote: “The lack of a defined operational envelope for the deployment for general multi-modal models has rendered the evaluation of their risk and safety intractable, due to the sheer number of applications and, therefore, risks posed” (Khlaaf, 2023). In contrast, “narrow AI” tools that, for instance, might specifically be trained to identify certain types of plant disease (e.g., Mwebaze, et al., 2019) or perform machine translation in specific languages [106], have task definitions and expected inputs and outputs for which appropriate tests can be created and results can be compared to expected behavior. The Galactica public demo was removed three days later after people produced “research papers and wiki entries on a wide variety of subjects ranging from the benefits of committing suicide, eating crushed glass, and antisemitism, to why homosexuals are evil” (Greene, 2022).
Complex manufacturing
Some particularly extreme versions popular on the post-left (yeah, that's right, not even the post-left is safe! Often, they're the worst offenders, since many are anti-civ, post-civ, or primitivist) state that complex manufacturing is impossible without capitalism, because they view division of labor, specialization, commodification, and markets as capitalist, and have bought into the capitalist lie that hierarchy is necessary for coordination and information processing:
Manufacturing these chips requires approximately 400 steps in a complicated process that begins with mining silicon dioxide (silica) […] This factory is over twice the length of a football field and contains over 100 different brands of machinery from around the world. The chips must be manufactured in “clean rooms” that use powerful air filters […] Workers in the chip factory use microscopes, ultraviolet light, photosensitive chemicals and chemical baths (all toxic), and precision instruments […] then ship them off to the factory that makes circuit boards. […] The plastic used in making the computer’s exterior comes from oil which requires extensive refining, not to mention the complicated process by which it is extracted from the Earth. […] Finally, all of these parts are put together in yet another factory and shipped around the world to various distribution centers.
As you can see, the manufacture of a single computer requires a great deal of division of labor […] complex technologies such as these required alienated labor supposedly anathema to anarchism. […]
Computer manufacturers generate millions of pounds of toxic waste each year […] a lot of manufacturing-generated pollution, such as contaminated groundwater and acid rain, can’t be limited to one location either. What will the non-computer-users do when their drinking water is ruined by the computer-makers upstream?
[…]
The process would still be unimaginably complex and certainly would be geographically diverse […] It is conceivably possible to coordinate a global effort based upon anarchist principles, but such an effort would likely be less “efficient” and thus produce less than desired. […] Management positions would invariably develop in order to deal with the “problem” of inefficiency, and the managers would probably receive the latest and greatest versions of computers as compensation for their efforts.
[…]
I hope I’ve shown that you don’t need to be an anti-tech primitivist to see why we cannot expect the production of complex, modern, technological conveniences to continue in an anarchist society, as they require ecological destruction, division of labor, and pronounced hierarchy.
This shows a stunning lack of imagination and a cavalier dismissal of any possible alternative ways of doing things or of solving these problems, and it is essentially tantamount to an admission of defeat. If I had to choose between living in the world painted by this leftist, where complex manufacturing is impossible and the division of labor is illegal, so we're all forced to be self-sufficient generalists, and a cyberpunk dystopia, I'd choose the latter!
Genuine concerns
There are some legitimate concerns about the new problems or struggles these new technologies could introduce. For example, transhumanism could further inequality, or create whole new types of it, as unequal access to transhumanist procedures between the rich and the poor runs its natural course under our current society.
However, the same argument could have been made against almost any medical procedure and treatment throughout history, from antibiotics to vaccines to various surgeries: at the outset, most such treatments were advanced and expensive, available only to the rich and those in the know. Over time, however, through the investment and proving-out made possible by that initial rollout, procedures became cheaper, easier to supply in larger amounts, and more accessible. In every case, it turned out to be far more of a benefit to humanity not to block the development of these technologies, but to focus, to the extent possible, on making them cheaper and easier to obtain.
Ultimately, we can't and shouldn't try to put a pause on all technological development until we've somehow "defeated capitalism," because the logic underlying this — that change must be held back until society is "fixed" in order to prevent the exacerbation of old problems or the creation of new ones — would prevent all change forever, thus permanently ending new horizons and new possibilities. I will acknowledge that, at some level, this is an axiological and eschatological disagreement: I don't think we're going to defeat global capitalism, undo the centuries-long legacy of colonialism, and institute international cybernetic communism anytime soon (and that's if I'm being generous — I think it will never happen). If my choice is between a dreary, static capitalist hell-world with perhaps some regulations to soften the hard edges, regulations which just further retrench the forces of the AOE, and one that is constantly changing, advancing technologically, producing new alien subjectivities and artificial states of being, and spreading access to all these things, if perhaps deeply imperfectly (as, to some degree, it naturally does), I'm choosing the latter.
Baby with the bathwater
Desire for a boundless better future worth struggling to achieve (even if we don't know how possible it is) is rejected as capitalistic, eugenic, or fascist. This doesn't have to be the case — we can believe in a boundlessly better future rooted in wonder, exploration, curiosity, care to improve the conditions of everyone, planetary repair without primitivism or even degrowth, and more — but so often it's hard to get people to see that. Even when they claim that's what they want, so often it still involves the tactical removal of technology that challenges their existing categories, assumptions, and beliefs about the natural. How many solarpunk futures have we seen with no social media, no AI, no space travel, no life extension, that really function not as an optimistic emancipatory futurism at all but as a sort of tech-aestheticized cottagecore?
This isn't to say that the people pushing what's referred to as the TESCREAL bundle aren't deluded about the actual possibility or imminence of most of what they're talking about, or that they themselves aren't fascists and neoeugenicists leveraging technological development toward their unconscionable ends (Timnit Gebru and Emile P. Torres absolutely make a good case for that, at least). Yes, these highly privileged, out-of-touch white guys who read too much science fiction and misunderstood its purpose would infuse whatever ideology they took up with eugenics, capitalism, ableism, techno-feudalist centralization of infrastructure, surveillance capitalism, IP, and unspoken white supremacy. But dismissing all of their goals and methods wholesale along with them is throwing the baby out with the bathwater. That doesn't mean we can't have longevity research, research into or hope for space travel and exploration, or even space mining and terraforming if we wanted to, automation serving white-collar intellectual and artistic jobs as well as manufacturing jobs, morphological freedom, or open AI models.
This act of throwing the baby out with the bathwater is not a product of reasoning, thoughtful debate, and strength. It is a product of observing how capital has used new technologies and innovations to create new inequalities and reinforce existing power structures, leading to transcendental miserablism — the belief that nothing can change, and if anything does, it must be bad — and future shock — the feeling of unmoored fear and confusion as things change too fast underneath you — which together galvanize into overdrive the traditional leftist pastimes of the genetic fallacy and guilt-by-association. Yet this is self-reinforcing: when you approach technological development and innovation from the perspective of miserablism and future shock, perpetual critique and reaction, you are never able to actively take part in shaping it, in letting the "street find its own uses for things," in taking it away from power and using it for something better, in leveraging the new subjectivities, the melting into air of solid things, the deterritorialization, for creating a new world. This just ensures that power will create and control it, furthering your impression that its negative use is inevitable.
This vision of the world is borne of a feeling of impotency: a sense that, under current conditions, there is no hope not just for us to control society through collective action overall (which is fair and true), but even for technology to be turned to the undermining of power, hierarchy, and control, and that all technological and scientific development must therefore be postponed until society is perfected. This leads to an attitude where every technological advance we do make must be met with the endless chanted litany of all its bad aspects and never any of its good ones, its creation lamented and resisted.
This worldview is also a product of direct fear of the deterritorialization that new technologies often bring. This fear of deterritorialization — a breakdown of familiar borders, patterns, expectations, property, ownership, identity, assignment — is also, partly, what creates future shock, and past experience of deterritorialization and future shock is also part of what constitutes transcendental miserablism, so it's all bound up, mutually reinforcing in a positive feedback loop. Yet the fear of deterritorialization is also a distinctly separate fear that acts within this worldview.
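The cybernetic distinction at work here can be sketched in a few lines: negative feedback damps deviation from a set point (the homeostatic, Beer-style management loop), while positive feedback amplifies it (the runaway, deterritorializing loop). This is a minimal numerical sketch; the gain, set point, and starting value are my own illustrative choices, not anything from the cybernetics literature:

```python
# Minimal sketch of the two feedback regimes: negative feedback corrects
# toward a set point; positive feedback pushes further away from it.
# Gain, set point, and starting value are illustrative assumptions.

def step(x, setpoint=0.0, gain=0.5, positive=False):
    """One update of the loop."""
    error = x - setpoint
    return x + gain * error if positive else x - gain * error

def run(x0, steps, positive):
    """Iterate the loop and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(step(xs[-1], positive=positive))
    return xs

homeostatic = run(1.0, 10, positive=False)  # decays toward the set point
runaway = run(1.0, 10, positive=True)       # grows without bound
```

The two regimes differ only in the sign of the correction, which is exactly why "feedback loop" gets used loosely: a system that maintains meta-stability and one that accelerates away from it are formally one sign flip apart.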
You can see this in the concerns around the destruction of the "common reality" once maintained by centralized, gatekept media, or the fears around not being able to "trust" images and video anymore — as if, before images and video were a thing, when there was only text, which is easily falsified, we couldn't survive; or as if we won't be able to adapt to AI images and video, finding ways to verify undoctored and ungenerated images and videos when necessary — or, most distinctly, in the quick about-face artists did regarding intellectual property when generative machine learning models came on the scene and threatened to deterritorialize intellectual property and the distinction between "uniquely human" endeavors and machinic production. In every case, instead of leaning into the deterritorialization and the possibilities it opens, while trying to steer around or resist the co-option and the negative possibilities, they only call for more ossification, more reterritorialization! There are logics embedded in current social media platforms that are indeed harmful when iterated out; but the core concept of social media is not, and merely critiquing the logic of social media, telling people it's "bad" and "not to use it," won't win anyone over or achieve anything. You can't resist something by just standing athwart it and saying "no." The solution is not to reterritorialize, but to adapt, to move with and then redirect the flows — wu wei — while learning to embrace the fate of the darker aspects and become worthy of the Event, because resisting a future you have no control or agency over won't win you anything.
The flaws of this worldview
First: it acts as if we could or should chain the human desire to discover and build, to explore possibilities whose consequences have not been completely mapped out from the beginning. This is a dismal, sick view of the world, a nay-saying one that's afraid of life and existence and time itself, because those aren't safe either. Worse, it is ultimately a bureaucratic project: it would require all human thought and innovation with any amount of resources behind it to be subordinated to a central bureaucratic board or council, which must pre-approve any use of resources like energy and computation, since, in a world of distributed research and a market economy, someone will always invent, whether it's "safe" or not, and outcompete the rest, and post hoc regulation will struggle to keep up, if it even can at all. Even if this isn't a central world council or something, no matter how polycentric you make it, it's ultimately still going to be centralized bureaucratic control at a smaller scale, just like how Hoppean covenant communities, no matter how small and polycentric they become, are still ultimately little absolute feudal monarchies.
This is a lovely thought, considering we've never been able to accurately predict the positive or negative impacts of technology before we see it iterated out, let alone the possibility or practicality of any such technology! Not to mention the corruption, favoritism, anti-intellectualism, demagoguery, and more such a system would encourage, and the single point of failure or political capture it would introduce. Take the current Trumpian takeover of the federal health and scientific apparatus, with the massive loss of research funding and knowledge it has created, and the funneling of its remaining resources into ideological veins, and extend that to the entire knowledge economy.
It'd be like the World Council of Scholars from Anthem, a story widely panned by leftists and progressives as a cheap straw-man attack on leftism by a hack writer, and yet one that eerily mirrors the implications, or even the outright statements, of many in the Neo-Luddite movement the left is developing into.
This belief that we can or should try to map and control technological exploration stems from the fact that the anti-tech left fails to see technology as something that can open totally unforeseen horizons for us to explore and build on even further, or destroy old territories. They only see it as something whose consequences can, and must, be known, or as a risk.
Second: it only furthers the narrative of capitalists, fascists, and neoeugenicists like Peter Thiel, Elon Musk, Marc Andreessen, and their followers, that only they can satisfy desire, only they can offer a future that's boundless, that overcomes human limitations and offers novelty and exploration and change. Meanwhile leftists are left offering Anarres, degrowth, primitivism, post-civilization, and other visions of a future that slowly relaxes with a sigh into old age, cyclical stability, closed horizons, and satisfaction with less achievement, change, and possibility than what came before, picking over the remains of a relinquished age.
Again, this stems from a failure to see the relinquishing of technology as itself an enclosure, an ending of the expansion of possibility and degrees of freedom.
Third: in the present, this opposition toward technology ensures that the left remains forever behind the curve, as it refuses to actually build or use anything on the cutting edge that actually matters. The left is kept in a perpetually reactionary position, spending all its time decrying new developments instead of creating its own. You can't contribute to the course of the future — or even the present — or to a meaningful and attractive picture of it, if you're not at the bleeding edge yourself, building and doing things. If you spend all your time critiquing, then that's just what you will become: yet another critical voice which the doers, and the masses, dismiss with a roll of their eyes as a perpetually miserable naysayer. It's like the fate of most Lisp weenies.
Fourth: indeed, this aversion to all technology, industry, development, and futurism, under the assumption that it's all inherently evil and inextricable from capital, leaves the modern left often completely technologically unsavvy, leading to hilariously off-base criticisms of modern technologies, and putting them at an inherent disadvantage compared to their fully technologically-enhanced enemies on the futurist right. As with semiotics and desire, by swearing off technology, the left essentially ties one hand behind its back.
Yes, under capitalism all technologies will be produced and perpetuated through exploitation, neocolonial and otherwise, but so is everything else under this system that we use, engage with, and create, from things strictly necessary to survive in the modern world, such as clothes, food, and housing, to data centers, the vast majority of whose power and compute is dedicated to serving the websites you visit, the streaming services you watch, and the social media that uses you, not generative AI. (Yes, GenAI is driving growth right now, but that's not the point: the point is that the energy and water consumption, and the impacts on local communities, that AI data centers create were also happening with regular data centers; in fact, AI is not significantly worse in water and energy usage per unit time than streaming video.) None of that makes this exploitation good, but it is a reason to look carefully at the reasoning behind selective condemnation. Is it just because it's new and unfamiliar?
Moreover, even leaving aside the exploitation involved in their creation, running, and use: while actually-existing technology may have inherent tendencies and logics built into it, that does not mean the core ideas and functions of such technologies cannot be separated and remade from those bones. For example, as much as social media (like Twitter and Facebook) is, now, a platform for neo-Nazis, it was once a platform for the organization of protests by the left, as with the famous Arab Spring, the Black Lives Matter protests, and more. Those protests, too, may have been partially undercut by the logics of the platforms they ran on — due to the attention economy and the gamification of social relationships, among other things — but this too is not inherent, and their existence is a proof of concept that even concepts like social media can be used to organize more than just evil.
Likewise, social media may enable a quantum leap in the spread of mis- and disinformation, but so did the printing press, and the distribution of 'zines, compared to the time before them; yet few leftists, even the Luddites among them, are calling for the destruction of the printing press.
Social media may have led to a fragmented information ecosystem, but is a homogeneous information ecosystem truly better, when all it leads to is unreliable, horrendously harmful (1, 2, 3) gatekeepers like The New York Times, and the easier flow of manufactured consent? The assumption always seems to be that, if we returned to a centralized, gatekept media ecosystem, the "right people" would be in control of that media ecosystem, but that's an assumption that need not be true, and has never been true in the past. The fracturing of the information ecosystem creates cracks through which truly radical ideas and ideologies can grow like weeds in concrete; it also creates horizontal, peer-to-peer media ecosystems where power can more easily be held to account — witness the filming of police officers, and the posting of those videos to social media — instead of just a few gatekeepers pretending to do so when their livelihoods and even legality, not to mention social networks, are actually deeply intertwined with power, since they need to get interviews.
Ultimately, this is the thing: technology increases the options of those who wield it, giving them more autonomy and power. In the hands of the elites, that turns sour. In the hands of others, it is productive, as all power is productive when it does not clot into domination. Technology also locks in the power and logic of those who get to shape its architecture and algorithms. That's why we need to take part in shaping it, instead of only critiquing it, assuming all parts of a technology are bad because the architecture of the current implementation was shaped by capital.
Instead of rejecting the development of new technology, even technology that we might balk at, we must be more creative, more flexible, and more imaginative than our adversaries. We must pirate it, mod it, steal it, reprogram it, make open source versions of it, adapt it, use it to our own ends in every way possible. We could make social media free from ads, surveillance, lock-in, algorithms, and even likes and quote retweets, based on open standards like ActivityPub. And yes, also sabotage it — but not as a sort of inchoate angry attack on technology itself, as if it is to blame, nor as our only approach for a given technology, but as one of many approaches. Even surveillance technology can be used positively.
We must also take a broader view: consider the Luddites. It's often claimed that they "weren't against technology per se, they were against the capitalist use of technology to cheapen labor." But the use of technology they were against was industrialization and automation "stealing" their jobs — both by requiring less of them and at the same time making the job less skilled, such that more people could participate — and changing their way of life:
[…] At issue were new types of machines —the stocking frame, the gig mill, and the shearing frame— that could produce and finish cloth using a fraction of the labor time previously required, transforming a skilled profession into low-grade piecework. […]
The Luddite opposition to machines was, it must be said, not a simple technophobia. […] Their revolt was not against machines in themselves, but against the industrial society that threatened their established ways of life, and of which machines were the chief weapon. To say they fought machines makes about as much sense as saying a boxer fights against fists. — Gavin Mueller, Breaking Things At Work
It's understandable that they would have been afraid and angry, protested, and destroyed the machines that were stealing their livelihoods:
E.P. Thompson … acknowledged that militant reactions against industrialism “may have been foolhardy. But they lived through these times of acute social disturbance, and we did not.” — ibid.
But if they'd gotten their way, there would actually be fewer (skilled, as well as unskilled!) jobs today, and we'd all be poorer and have lower living standards, as we'd have remained in an only partially industrialized and automated economy, mostly of artisans. That would have been better for them, within the time they lived, perhaps, but it would not have been better for everyone else, or posterity.
That doesn't mean that we shouldn't fight back against uses and implementations of technology that work against us socially, economically, or politically — or that, had the Luddites never existed, we'd actually be better off:
That the Luddites were ultimately unsuccessful is not itself an indictment: final success is a poor criterion for judging an action before or during the fact. And, as I hope to demonstrate, Luddism was not altogether pointless. Our history is the Luddites’ as well, and their insight—that technology was political, and that it could and, in many cases, should be opposed—has carried down through all manner of militant movements, including those of the present. There is much to learn from this tradition, even among the most technophilic current-day radicals. — ibid.
But it is to say that we need to be more careful than I think many modern self-described Luddite leftists are about how we do that.
First, we need to consider the fact that, ultimately, the Luddites were not successful, and whatever changes they might have brought about are largely speculative and invisible, in the face of the hyperstitional progress of technology.
But more than that, we need to be careful and aware of how often a Luddite-like struggle on the part of workers can actually come at the detriment of society, the future, and other workers, even if it is also justified, and use that awareness to try to find a balance.
Instead of fighting automation, for instance, push for collective ownership of the machines that automate, and distribution of the rewards; instead of resisting productivity growth, ensure that that growth is captured by workers as higher wages or less working time; instead of trying to reassert intellectual property, push for its abolition, or for the open sourcing of AI models, so that we can all benefit from what massive AI companies build from the commons; instead of refusing to use AI, figure out how to use it in a way that is complementary to human skills and abilities. Neo-Luddites on the left claim they aren't anti-technology, but in practice, they often just are. It doesn't have to be that way.
Finally, the eternal calls for reterritorialization out of fear of its opposite are always counterproductive. The reaction to new technologies, even among those who claim to be anarchists, is always and ever "there ought to be a law" (or something like a law); but the laws called for out of fear of deterritorialization and future shock almost always just serve reterritorialization, which in turn only serves established centralized hierarchical power, ossifying it further. We could call for laws that serve deterritorialization — the right to repair, antitrust, fair use, data portability — but when we call for laws from a place of fear and longing for stability and an understood past, that's not what we'll end up calling for, or getting.
Instead of opening up new possibilities and avenues, leading us to new and unimaginable horizons, allowing creative destruction, laws made from this place of fear of deterritorialization just chain us down to the power structures we already hated, serving the enclosure of the future and its redirection into the past, the lack of vision and change at the core of leftism. Well-intentioned laws create radical monopoly, for instance by introducing minimum requirements on the services that must be provided, blocking the substitution effects that might break a product's or power player's hold. They invite regulatory capture, in which powerful existing interests influence the creation and interpretation of laws, and erect compliance barriers to entering the market that fall disproportionately on small, lightweight, less hierarchical players, ossifying what the market can even offer. Labor organizing that attempts to prevent automation out of fear of changing or losing jobs — in essence, out of future shock and fear of deterritorialization again! — instead of finding ways to ensure that the benefits of automation flow to everyone, just entrenches a logic of workerism: the worship of the aesthetic of "people working" as an end in itself. This can be framed in terms of preserving people's livelihoods, ensuring they don't go hungry and homeless — and those are good concerns! But holding back the hyperstitional tide of automation in order to achieve that is just as hard, if not harder, than restructuring society enough that people can support themselves without a job, or creating new jobs, or retraining people.
Both, in the end, would require a major beachhead against capitalism; and the logic that if a human can perform, or once performed, a job, then a human must do that job forever — that humans cannot be replaced by automation, either at an individual level or through the increased productivity that machines allow, causing fewer humans to need to be employed — just ensures the creation of more bullshit jobs, and makes society poorer in the long run.
A good example of this is generative AI. Artists are attempting to strengthen intellectual property in order to get a taste of the profits AI companies are making with their novel and innovative use of publicly posted art and writing, or to prevent this novel use altogether because it was unforeseen, despite it not taking anything from these artists that they truly have a right to, in my view: they just aren't getting a cut of an idea they never had, could never have executed on, and didn't care about or assume they'd profit from when they made their art. These attempts will only result in:
- Large intellectual property holders like Disney and major authors gaining more power to prevent the remixing, deconstruction, reinterpretation, and extension of the artistic works they control by the art world.
- Bad actors gaining more power to abuse small artists and creators, through things like YouTube's copyright system and DMCA takedown requests.
- IP holders gaining more power to crush preservation and open access efforts that help millions of people and preserve history, whether they work through piracy (Anna's Archive, Library Genesis, Sci-Hub, and The Pirate Bay), game emulation (e.g. Dolphin), archiving (e.g. the Internet Archive), or other means.
- The total destruction of any competition in generative AI, locking it into centralized organizations and providers big enough to actually negotiate licenses with the big IP holders, giving them more power to manipulate and control society and our minds, and more economic power.
- A worsening of the system that already prevents truly open source AI, by making it a huge legal risk to share training data, because that data is copyrighted.
- More AI hype and grifting, since closed training sets make it harder to build proper benchmarks and tests.
- Incentives for the creators of tools artists are required to use for their jobs, such as Adobe, to force artists to sign away the rights to artwork created with those tools, so it can be sold to AI companies anyway.
None of this achieves what artists actually want. All it does is create more centralized ossified power.
A different way forward?
Ultimately, what I'm saying is this: technology and futurism represent possibility; by fighting them, we are restricting possibility.
What we should do instead is fight to shape that possibility, by innovating and creating for ourselves, and by adapting ourselves and our social structures to the future as it changes, taking advantage of what is being created as well as taking part in creating it, instead of trying to put a cap on it.
If we could build creative new solutions and demonstrate ways to capture and repurpose existing ones, then we could demonstrate that a better world, a world that takes into account desire and development while still being emancipatory, is possible, and from there we could attract allies, build coalitions, and free ourselves and each other from the negative uses of technology. There's so much we could build: open frontier models, civic compute clouds, co-op chip packaging, community fabs, privacy-preserving identity, open medical devices, healthy distributed social media and communication infrastructure, community intranets that archive and mirror essential resources like the Internet Archive and Wikipedia (perhaps accessible through libraries?), tools like Invidious and adblockers to protect ourselves from surveillance capitalism, AI agents to deal with bureaucracy on our behalf, and more that I can't even begin to imagine.
More broadly, technology and futurism represent desire itself: the means by which we achieve desire, and the projection of our desires into the future, imagining better, or at least different, worlds. If we can embrace these things, we might finally be able to offer a political program that's truly attractive, instead of acerbic critique. Perhaps one day we could even shift norms and policy, and change society. I don't know if that's possible — the rising complexity profile of our society suggests this is all out of our control one way or the other, for good or ill — but retreating into the past, or rigidly and resentfully critiquing the future as it arrives instead of throwing ourselves into it, adapting to it, and trying to move with and use it, won't gain us anything.
In the third and final part of this accelerationist triptych, I explain why I think any program to control and direct the future in an organized manner is ultimately impossible for specific human organizations anyway, as opposed to embracing wu wei: working with the flow and redirecting it, while recognizing that it, too, directs you, rather than trying to dam it. The only thing that currently controls the future is the distributed emergent superorganism/AI that is capitalism as a whole.
Some other reading about alternative technological futures:
- Xenofeminism: A Politics for Alienation
- Reaching Beyond to the Other: On Communal Outside-Worship
- Walkaway by Cory Doctorow (not the best written, imo, but presents an interesting future)
- Terra Ignota (a highly imperfect world, but a truly different one, deeply enabled by technological progress)