Neon Vagabond

Category: Philosophy

Table of Contents

1. How to use a large language model ethically

  1. Use local models
    1. So you're not supporting the centralization, surveillance, and landlordization of the economy any further
    2. So you're not taking part in necessitating the creation of more data centers, which – while they don't use a total amount of water and energy out of line with many other things in our technological society – concentrate water and energy demands in communities in ways they can't prepare for, and which hurt them
    3. So you're harder to subtly manipulate, influence, propagandize, and censor through the invisible manipulation of the models you're using
  2. Don't shit where you eat – don't spread unedited AI slop everywhere into the very information ecosystem it, and you, depend on. Unedited, unreviewed AI output is only for your own consumption, when you find it useful
  3. Only compress – going from more text to less text is usually a good idea; going from less to more is just decreasing information density for no reason
  4. Don't offload responsibility – i.e., don't say "ChatGPT says…" or whatever.
    1. That's an escape hatch for pasting bad/wrong stuff you didn't double-check into everyone's face. Take ownership of what it outputs; that way you're incentivized to make sure it's more or less correct.
    2. Although if you're not going to check either way, not saying anything just makes things harder on people, so if you're going to be an asshole, announce it!

2. Large language models will never be able to reason reliably… but that doesn't make them not useful

The fundamental structure of large language models is not just different from the structure of the human brain, but different in a way that fundamentally leans them away from truth and reasoning capabilities. The differences include:

  1. Trained on a reward function that only optimizes for generation of superficially human-seeming (by probability) token sequences, not token sequences which correspond to true or useful ideas or accurate reasoning or anything of the sort. Human beings, on the other hand, are trained on a mixture of both – and human beings which have been mostly trained to say things that sound correct, rather than things that are really truthful, are usually called idiots or narcissists.
  2. Trained without any form of embodied computing, which means that the tokens they manipulate can have no meaningful referent for them, which means that it isn't even really theoretically possible for them to reason with them in a way that is not just valid, but actually sound, since they have no knowledge of what the facts actually are or what any of these words mean. Moreover, this makes it unlikely for the detailed reasoning LLMs might be desired to perform to be valid either, since a concrete knowledge of what a word or symbol is referring to helps infer the principles and rules by which it might operate and how it might relate to other words in a way you can't get from the simple statistical likelihood or unlikelihood of appearing next to those other words, which is how the inferred "meanings" of embedding space work.
  3. No long term memory in which might be stored relatively concrete notions of rules, principles, definitions of concepts, and so on, against which current thought processes might be checked in a meta-cognitive process, which is typically how humans ensure that their reasoning is actually rational. Everything goes into an undifferentiated probability soup, or it's in working memory and thus acts as input for the existing trained weighting of token probabilities (the closest thing the LLM has to trained reasoning methods) to work on, not changing those fundamental probability responses to input tokens.
  4. A fundamentally nonsensical meta-cognition system through chain of thought. The idea behind chain of thought is to resolve the problem that LLMs are not capable of the meta-cognition necessary for strong reasoning skills and factuality – as mentioned in the previous point – by having them actually output their "thoughts" as part of the token stream, so that those, too, can become part of the token window, and influence further reasoning. The problem, of course, is that these thoughts are not at all what the LLM is actually "thinking," insofar as it can be said to be thinking – they are not a picture of the internal processes at all. They are just what the model thinks a plausible chain of thought from a human might look like, with no referent to its own internal states, because it has none, or if it does, it couldn't have access to them either. This means that LLMs are not actually capable of analyzing their own logic to see if they got something right or wrong.
  5. A "yes, and…" structure: they are only attempting to find a likely completion to whatever was input, which means they aren't going to actually be able to engage in any kind of rational dialogue or debate without extremely leading questions.
  6. An inability to have novel thoughts, due to their reward function. This means that if actual rational thought and inquiry necessarily leads to an unusual conclusion, they would still be unable to reach it, because they can only say what it would be likely for a human from their corpus of training data to say, whether it's true or accurate or not. And if we were to remove that constraint, and make them often choose very unlikely next tokens, the illusion of their coherence would evaporate immediately, whereas humans can remain coherent while saying very unlikely things. (See the sketch after this list.)
  7. An inability to actually look things up.
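
To make the likelihood-driven nature of generation concrete, here is a minimal, purely illustrative sketch of the sampling step at the core of an LLM's decoding loop. The toy vocabulary and the scores are made up, not taken from any real model; the point is only that this procedure can never do anything but pick among tokens the training distribution already makes probable, and that turning up the randomness (the temperature) to force unlikely choices is exactly what dissolves the apparent coherence described in point 6 above.

    import numpy as np

    def sample_next_token(logits, temperature=1.0, rng=np.random.default_rng(0)):
        """Turn raw scores into probabilities (softmax) and sample one token index."""
        scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
        probs = np.exp(scaled - scaled.max())   # numerically stable softmax
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    # Hypothetical vocabulary and scores for the prompt "The sky is" – not real model output.
    vocab = ["blue", "clear", "falling", "a lie", "seventeen"]
    logits = [5.0, 3.5, 0.5, -1.0, -3.0]

    for t in (0.2, 1.0, 2.5):
        picks = [vocab[sample_next_token(logits, temperature=t)] for _ in range(8)]
        print(f"temperature {t}: {picks}")

At low temperature you get the most statistically "human-sounding" token nearly every time; at high temperature you get novelty, but of the "the sky is seventeen" variety – which is exactly the trade-off the list above is pointing at.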

See also.

3. My opinion on large language models

There has been a lot of controversy around large language models lately.

In my opinion, they have fundamental flaws that mean you can't use them for many of the things people claim you can use them for, such as obtaining factual information, programming, or writing things for you. This becomes clear if you look at how large language models actually work, as laid out in the previous section.

Furthermore, I think the hype around large language models is extremely harmful – not only because it's creating an entire sphere of grifting that's taking advantage of people, and inflating a market bubble that will inevitably pop, costing many people their livelihoods in the process, but also because, while we shouldn't be afraid that large language models can actually take the jobs of writers, journalists, and programmers, the fact that they are being marketed as capable of doing so will be equally damaging. Capitalists have a strong inherent incentive to decrease labor costs and to deskill and disempower workers, and the marketing hype that convinces them that we can be replaced with LLMs will enable them to do that whether or not LLMs can actually replace us.

On top of all of that, there's the way that the use of large language models for things like programming – and even writing – affects learning. Instead of going through the trouble of actually understanding how things work and why at your chosen level of abstraction in the software stack, or even delving below it so that you can use your abstractions more effectively (since all abstractions are leaky) – the principles, rules, and relations between things – and, more importantly, building up the meta-skills of problem-solving, critical thinking, and autodidacticism, you instead rely on the AI to do your thinking for you.

This would theoretically be fine if the AI were deterministic, so you could actually rely on it to behave in a reliable and understandable way, and if it didn't make mistakes, or only made mistakes in consistent and comprehensible areas, almost like a compiler. But AIs are the leakiest of all possible abstractions over real code, which means that when something goes wrong, or you want to make a change the AI can't seem to do, you very much will still have to interface with the code it's written and thus flex all the muscles you atrophied. Not to mention that in the case where you want to use less popular technologies – which can often be far better than popular ones – suddenly you'll be stuck without your LLM safety blanket.

Even someone deeply invested in the AI hype – literally building an AI developer tool – has come to realize this, and actually states the consequences of it quite eloquently… although he can't bring himself to give up his crutch and actually use his brain again for more than a single day a week. Avoid using LLMs to do your writing or coding for you like you avoid gambling or anything else that's easy, psychologically addictive, and makes you worse off in the long run. It's easy to think "oh, I'll just use it responsibly" at first, but you won't – human nature is to fall into things that are really easy, and once you do, it's difficult to dig your way out.

On the other hand, I don't think LLMs existing, or even using them, is inherently evil.

I don't think they make the problem of SEO-optimized slop flooding the internet significantly worse than it already was (see also, which also makes the excellent point that modern search engines effectively hallucinate just as badly as, if not worse than, LLMs), and the solution to that problem remains the same as it ever was, because the problem isn't solely with LLM-generated content slop, but with content slop in general, irrespective of who or what generated it. In a sense, the slop having been generated by an LLM is just a distraction from the real problem. So the solutions will be something like:

None of these solutions are panaceas, of course – they will all have their own perverse incentives and distortions of human nature, but my point is that whatever solution we were going to come up with to existing problems that we already had, will also apply to solving the problem of LLM-generated content slop, and moreover, we really need to try something new and different, because we know what's going on now is particularly horrible and damaging to the noosphere, and maybe the distortions and perverse incentives of a different system will at least be more manageable, or smaller.

Likewise, I fundamentally don't think that large language models' use of "copyrighted" material is particularly unethical, because I'm fundamentally opposed to the idea of intellectual property and I think it's completely absurd and contrary to how art and knowledge are built. A funny comment I've seen on this subject:

One thing I find interesting is how as soon as things like ChatGPT and StableDiffusion became popular, lots of people did a complete U-turn on their copyright stance. People who used to bang on about IP trolls screwing over creators suddenly went for full ‘RIAA in 2005’ IP maximalism.

My predominant problem with commercial large language models is simply that, typically, the collected training data, the weights, and the code to run them are not made part of the commons once more, and that distilled forms of these models are not made widely available so that the average person whose data was collected to construct these models can at least have a hope of running them themselves – rendering proprietary LLM companies hypocritical in their treatment of IP and constituting an enclosure of the commons. This is why I refuse to use anything other than open-weights local large language models like Llama 3.2, and even then those models aren't good enough in my eyes, because their licenses restrict commercial use and any use deemed illegal or immoral by the powers that be.

Similarly, I find the argument that large language models are somehow fundamentally hurting the environment unconvincing. Even the largest, most resource-intensive LLM – in no way comparable to the tiny 3-billion-parameter model I run locally on my laptop – can only be painted as harmful to the environment by considering its impacts totally in isolation, without context or comparison, for proportion, to other common things like turning on a light bulb. See here.

I think the correct approach to large language models is to realize that they are very cool, very interesting, and surprisingly generally useful, but ultimately they're just another natural language processing tool, with specific capabilities and limitations determined by their architecture and training method, and that's it. They're not some kind of magical route to artificial general intelligence, although they may be a stepping stone there as part of a much larger system, and they're not going to replace human reasoning and creativity, nor the necessity for computational languages, accurate information retrieval based on search, or anything else like that. They are probably going to be okay at simple natural language processing and transformation tasks like summarization (preferably summarization of text you've written for the benefit of others, so you can more easily double check) and text transformation tasks such as converting natural language into structured data or doing basic copy editing or high level text modifications (such as "split this paragraph into multiple smaller ones"), as well as possibly semantic search using their embedding space, but they should never be exclusively trusted for these tasks, always double-checked, and that's about it. Useful as part of a well-rounded human intellect augmentation system, especially combined with hypertext knowledge management like org mode, but only when used carefully and considerately.
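
For what it's worth, here is roughly what that kind of careful, double-checked use looks like in practice: a minimal sketch of natural-language-to-structured-data conversion using a local model. It assumes Ollama is installed and a small model has been pulled (e.g. `ollama pull llama3.2`); the model name, prompt, and schema are just illustrative examples, and the parse-or-reject step at the end is the "never exclusively trust it" part.

    import json
    import subprocess

    def extract_event(note, model="llama3.2"):
        """Ask a locally running model to turn a free-form note into JSON, then verify it."""
        prompt = (
            "Convert the following note into JSON with the keys "
            "'what', 'where', and 'when'. Reply with JSON only.\n\n" + note
        )
        result = subprocess.run(
            ["ollama", "run", model, prompt],   # everything stays on your own machine
            capture_output=True, text=True, check=True,
        )
        try:
            return json.loads(result.stdout.strip())
        except json.JSONDecodeError:
            # The model returned something that isn't valid JSON: reject it and
            # handle the note by hand rather than passing slop downstream.
            return None

    print(extract_event("Band practice moved to Dana's garage, Thursday at 7pm."))

Because the model only ever reformats text you already wrote, and its output is machine-checked before anything depends on it, this is the narrow, bounded kind of task where an LLM is a convenience rather than a liability.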

As a result, I think we need to stop pouring resources into endless scaling and instead focus on making the models we already have smaller, faster, and more efficient. Even very small and relatively old LLMs like Llama 3.2 3B are already good enough at the only tasks LLMs will ever actually be reliably good at, and scaling will only ever make LLMs marginally better at tasks they'll never be good enough at to be anything other than a hindrance, while sucking up resources and attention that should be put to other things.

4. Capitalism innovates?

Capitalism does not innovate, because innovation is risky, whereas rent-seeking and financialization are profitable and mostly guaranteed-safe. Even when it doesn't choose rent-seeking and financialization, capitalism will choose to pander to the obvious gaps in the market that are easy to satisfy, or take existing desires and use advertisement to give them concrete referents in the world of products. And in all these cases, it will aim for the common denominator desires to satisfy, the ones with the widest appeal, because that is what best guarantees profits. I.e. it regresses to the mean.

Who does innovate, then? Only individuals or very small groups of individuals, who are motivated for intrinsic reasons around a common set of goals and values. Only people like that innovate, and that's usually orthogonal to capitalism at best – what those people most often want is a stable income to pay their bills and feed their families while they work toward their passion; they're not interested in "striking it rich" except insofar as it will help that goal. There are a few greedy exceptions, like Steve Jobs, but always behind them is another innovator who does it for intrinsic reasons, like Alan Kay.

Sometimes capitalism can provide the context for this kind of innovation, as with Xerox PARC and Bell Labs. Other times it's the government, as with SRI, SAIL, the MIT AI Lab, and CERN. What's important is a stable means of supporting yourself and your loved ones, an environment of free intellectual play and experimentation, and a visionary set of common goals or interests. These can be created anywhere.

5. Freeing the noosphere

Author's note: the historical references found herein are meant to be general and impressionistic. I am intentionally simplifying and linearizing this narrative to make a point about how the representational media for ideas affect the nature of the noosphere-economy, not to make any historical point. I have linked to relevant respectable sources for each historical thing so that you can go learn the real history in all its proper complexity if you are interested.

The noosphere is the world created by human cognition: where ideas are born, grow, develop, are shared, split, merge, multiply, and sometimes die. It is emergent from and dependent on the physical world, deeply shaped by it, and it deeply affects the physical world in turn, but it is also conceptually its own thing, having some of its own properties and laws.

A key feature of the noosphere is that while it is not free to create the things that exist there (ideas) because it takes time and effort to do so, once they are created, they are not scarce, nor rivalrous: they can be shared indefinitely and will not run out, and someone getting an idea from you does not take it away from you. When you communicate an idea to someone, you do not lose that idea and have to go back to the "idea factory" to make a new one or a copy of the old one – you and the person you shared it with now both have that idea. And if that person goes on to share that idea with other people, that is no burden on you; infinite copies of your idea can spread with near-zero cost to you.

Now, it may be argued that if someone "steals" an idea from you, you do actually lose something. Not the idea itself, but some credit, or opportunities like sales, that you might otherwise have gotten. However, I think conceptualizing these things as properly your possessions is actually an error in reasoning. Someone stealing an idea from you can't take away past credit you've received – awards, accolades, the knowledge in the heads of all the people that already knew you came up with the idea – and it also can't take away past sales or opportunities that you got as a result of the idea, because ostensibly you've already taken advantage of those. Instead, what they're "taking" from you when they copy an idea of yours is possible future credit – on the part of people freshly introduced to the idea – and possible future opportunities – such as future sales from people looking to buy something adhering to your idea.

The problem is that I don't think one can be coherently said to "possess" future possibilities.

First of all, they inhere in other people's actions and thoughts, not in anything you concretely have (by "have" I mean, as usual in my work, the usufruct definition: regular use, occupancy, or literal physical possession). I think it's wrong to give any person any sort of enforceable rights over the actions and thoughts of others that don't materially, concretely affect them in some way – which, since they don't affect your own possessions, they don't. By saying that you have some sort of right over future credit or opportunities, you're saying that you have a claim on other people's future thoughts and resources – a right to control them!

This line of thinking is also confused, secondly but perhaps more importantly, because those future possibilities were only that: possibilities. Things you think you might have gotten. But any number of other things could have gotten in the way: maybe the idea isn't as good as you thought it was; maybe a competitor with a different idea would've arisen; maybe you would've gotten sick and not been able to carry it out to completion. Even the person who copied your idea being successful isn't an indication that you would've been successful with that idea: maybe your execution wouldn't have been right in just the way needed to catch people's imaginations and interests. Maybe your competitor's were actually the right hands for the idea. So attempting to enforce your claim on such future "possessions" is attempting to enforce a claim on an ephemeral future thing which you might not have gotten anyway.

As a result, I don't think there's any coherent way in which it can be said that an idea is meaningfully "stolen." It's certainly terrible to see an original creator languishing in obscurity while an idiotic copycat with none of their original genius strikes it rich, and we should use all the social mechanisms available to us – including ridicule, perhaps especially ridicule, because those who can't even come up with their own ideas are very worthy of it – to rectify such situations. We should make giving credit to original creators a strong social norm. But in the end, ideas are non-rivalrous. They can't be stolen, they can only be spread.

Already, I believe this to be a radically liberatory thing: the ability to share knowledge, ideas, discoveries, with anyone, infinitely – to spread them around, so that everyone has access to them, is a wonderful thing. Knowledge is power, as the old saying goes, and the freedom of ideas radically lowers the bar for accessing that power. The fact that a sufficiently-motivated person can get a college level education in anything through the internet, the fact that radical literature and ideas and history can spread through it, the fact that anyone can share their ideas and beliefs through it, these are incredible things.

I'm no idealist – material power is needed too – but at least we can have one world where there need be no push and pull, no worry about allocating resources, no necessity to divvy things up and decide who gets what and who doesn't. Everyone can have their fill of ideas, of knowledge, and there will be infinitely more to spare.

The noosphere has the best potential of any human realm to reach post-scarcity anarchy. Trying to bind this up, to turn ideas into property that only some can have and share, and then to use that monopoly on ideas to limit access to them, is to reproduce the hierarchies of the material world in the world of the mind – perhaps inevitable as long as we have hierarchies here in the physical world from whence the noosphere arises, but it is something that should be fought, rejected as a sad degradation of what the noosphere could be. Yes, a creator not getting the future benefits we would like them to get is horrible, and we should do something to rectify it, but destroying the radical power of a liberated noosphere is not the answer to that problem.

There is a catch to this, though. In order to share ideas, you have to transmit them somehow. That's nearly free in one-on-one conversations, but those are slow and exclusive – costly and scarce in their own way. Before the invention of writing, that meant standing on the street corner or in the town hall spending actual hours of labor, far in excess of a simple one-on-one conversation, reproducing the idea for people to hear, or teaching in schools, or preaching in places of worship, or being a wandering teacher of various kinds. All of these require at least labor, and often physical material as well, that must be paid for with each marginal increase in the transmission of the idea. Moreover, actually turning the noosphere from a few shallow tide pools, disconnected by geography at the edge of a vast sandy beach, into an interconnected network was vastly expensive, involving costly and time-consuming physical travel. Some would do this for free, realizing the potential of the noosphere in the truest form they could, but people have to eat, so often a price was asked for this dissemination of knowledge. Plus, the time, labor, and material costs involved kept the spread of the noosphere slow and difficult. Thus, for most of history, while the noosphere had the potential to be post-scarcity, in its practical application it was not.

Then, in 3,400 B.C., came writing. Writing allowed someone to express, or even teach, an idea once, and then all that needed to be done after that was to pass around that book. It radically reduced the costs of disseminating ideas, bringing the noosphere even closer to its ideal. It still wasn't there yet, though: books could degrade over time through use, and if you've given a book to one person, that means another person can't have it. As a result, the dissemination of ideas was still limited, expensive, and rare, and thus ideas were de facto scarce. So more was needed.

The monastic institution of copying certain books en masse that arose around A.D. 517 was another improvement. Whereas books had previously been copied ad hoc by those who had access to them and happened to want another copy, now books were intentionally copied many times through a factory-like division of labor and rote performance of tasks. As a result the marginal cost of transmitting ideas became much lower, because the cost of creating a written representation that could transmit the same idea indefinitely, without further work by the author, was much lower, and such representations were more plentiful. Scriptoriums created many copies for little work, and then each copy transmitted an idea many times with no extra work, and at the same time as the others. (We will see this recursive dissemination structure later.) Nevertheless, not enough copies could be created by this method to bring down the price in labor and scarcity by much, so focus was placed on making the copies beautiful through illumination, and they were preserved for a lucky few. Ideas were still scarce, even at this stage.

The natural extension of the scriptorium was the printing press, invented in 1455: now, the copying of books could be done by machine, instead of by hand. Infinitely faster and cheaper, suddenly knowledge could be spread far and wide for a relatively cheap sum. First books, then newspapers, then pamphlets and zines. As printing technology got more advanced and mass production printing was figured out, things got cheaper and cheaper. Now ideas could be disseminated for a few cents at most, and then the representation of those ideas was durable enough to be disseminated from there too. However, the longer and more complex the idea was, the more it cost, and if it was really long and complex and extensive, it could still be prohibitively expensive for other people. Additionally, it was impossible for the average person who got a representation of an idea to reproduce it further for others in a meaningful way – you can't perform mitosis on books. And getting ideas to widely different locations was still time consuming, expensive, and difficult. Ideas were still not yet free.

Then came 1989 and the World Wide Web, and with it, a total paradigm shift. Whereas before, each individual transmission (in the case of teaching) or representation that can perform transmissions (in the case of books) of an idea cost labor, time, and/or material, now individual transmissions and representations of ideas, in the form of bits, were just as reproducible, just as non-rivalrous, as the ideas themselves. Instead, the cost was in infrastructure, as well as in bandwidth: a mostly up-front, or fixed and recurring, cost for the capability to transmit, not for each transmission or reproduction itself, and one which scaled incredibly slowly with the number of representations of ideas disseminated, making individual ideas essentially free. The fundamental asymmetry between ideas and the representations needed to spread them was beginning to break down.

Even more game-changingly, even the bandwidth problem could be solved through the post-scarcity and non-rivalrous nature of the digital noosphere. Every download of information from one location creates a copy of it essentially for free (post-scarcity), and that can be done infinitely without losing the information (non-rivalrous), and furthermore each person who downloads information can themselves disseminate the information infinitely, and those people can in turn do so, recursively (unlike books). No one person needs to bear much of the cost at all for the total dissemination of an idea!

Another fundamental structural difference the advent of the World Wide Web brought to the noosphere was that geography suddenly mattered far less: once infrastructure was established between two locations, communication was nearly as cheap, and nearly as instantaneous – in comparison to the cost and time lag it had had before – with someone across the globe as it was with someone next door. The noosphere was no longer tide pools that a few brave crabs managed to scrabble out of and move between, but a vast information ocean.

Not only that, but the very ideas that could be disseminated changed: once enough bandwidth was built, audio and video could be disseminated, meaning better reproductions of ideas, and reproductions of ideas that would have been difficult to disseminate before. Still later, interactive information became possible, with things like Java applets, Flash, and eventually JavaScript, making possible the better dissemination of knowledge and ideas through teaching programs and interactive notebooks, and the dissemination of still more novel ideas. Once, film, music, interactive teaching, and performance art were not ideas but concrete products, or performances – the World Wide Web made them part of the noosphere. Once, you could only get transcripts of a lecture, not see a great teacher performing it.

All this information could suddenly be created and shared much, much faster than before – almost instantly – allowing the dissemination of ideas in real-time, to individual audiences or broadcast to many, as cheaply and easily as the dissemination of any other idea. Actual discussions, with all the back and forth, the detail, the texture, and the interactivity of conversations could happen across the entire world, and be preserved for future generations to read.

Ideas could also be spread, and received, anonymously or pseudonymously, depending on your preferences. Social inequality, prejudice, bigotry, ostracism, mob bullying, and exclusion didn't disappear, but suddenly they depended on a person intentionally choosing to make aspects of themselves known and to maintain a consistent identity. They were still a problem, but one that was less baked into the system.

I cannot overstate the revolutionary potential of the noosphere so liberated. It had the potential to be a world where the barrier to entry for obtaining and disseminating knowledge, ideas, and experiences was radically lowered, and the marginal cost nearly zero. Where people could freely communicate, think, learn, and share, become educated and empowered.

There were dark sides, of course. With that radically lowered barrier to entry, fascinating new ideas, creative works, remixes of existing ideas, and radical texts, that would not have been widely available, or available at all, became instantly and globally available for sharing; but so did an endless sea of garbage. With access to all that information, some could educate themselves, and some could find alternative facts and "do their own research."

Is trying to dictate who can share ideas, and what ideas can be shared, through centralized, bureaucratic, highly status-oriented, elite institutions really the right solution to those problems, though? Those who would find alternative facts and "do their own research" today would likely have been equally dismissive of what came out of centralized, "legitimate" institutions, equally likely to substitute their own beliefs for reality, to pick things up from hearsay and what their Uncle Bob told them while he was drunk at Thanksgiving and ranting about the out-of-towners. The things they pick up in the digitized noosphere are just cover for their existing predilections.

What's more, there's no reason to think that whatever institutions happen to have power and legitimacy in our society will always necessarily be systemically more factual, less propagandistic, less blinkered, and less manipulative – they will just be so in service of the status quo, so their problems will be less evident, and the status quo can change for the worse. In this historically contingent situation our institutions are better than much of what is shared in most of the noosphere, but relying on that always being the case is dangerous – and they're only better as far as we know. When will the next revolution in thinking happen? Where will it start?

Instead of trying to relieve people of the burden of thinking by filtering their information for them, like a mother penguin chewing a baby's food for it before vomiting the sludge into its mouth, we need, systemically and societally, to get to people first, before their introduction into the wider noosphere, so we can provide them better tools and support networks to shoulder the responsibility of thinking for themselves. This should be the responsibility of parental figures and communities.

Finally, the radical, liberatory, empowering potential of the noosphere made free by the World Wide Web is, in my opinion, well worth having to figure out how to mitigate the risks.

The problem, however, is that the system is afraid of the noosphere. Thus it introduced the framework of intellectual property to pin it down, so that some could be given "exclusive rights" – monopolies – to certain ideas or pieces of knowledge. The system has always justified this in terms of protecting and benefiting the producers of ideas by giving them first-mover advantage, but the system always ultimately serves the interests of the rich and powerful. So as much as small artists may cling to the notion of copyright, for instance, should they ever have their work stolen by anyone, they won't have the money to go to court and actually do anything about it; meanwhile, the mega-corporations and mega-wealthy who run our society can steal with impunity and there's nothing anyone can do about it, while cracking down harshly on any imitation and iteration on their own work. Even though imitation of and iteration on earlier work has been the lifeblood of art and science throughout history, the absurd logic of copyright has been expanded inexorably throughout the modern West.

And this has been extended to the noosphere itself, smashing many of the radical, liberatory possibilities it held within it, leaving us with the worst of both worlds: much of the revolutionary potential of a digitized noosphere crushed under the weight of intellectual property while the mirror image dark consequences of the noosphere run totally unchecked, because it is not profitable to check them. In fact, the hate engagement is very lucrative.

It's worse than that, though: information wants to be free – because digital representations of ideas can be infinitely copied and disseminated by default extremely easily, because copying is the very nature of how computers work and how networks transmit things, it isn't enough to lock the only copy of the representation of an idea in some lock box somewhere and only give a copy to those who pay for it, confident that they couldn't produce more representations to give away for free to all their neighbors on their own, and even if they did it would be easy to notice and shut down. Instead, an entire perpetually metastasizing surveillance and control system must be created to make sure copyright isn't violated – things like Denuvo and DRM – stealing trust and control from people to rub salt in the wound of the destroyed potential of a digital noosphere.

(Moreover, with the increasing migration of people away from decentralized services – because the cost of individual autonomy is personal responsibility and choice, and that is too high an asking-price for many part-time vacationers in the noosphere – centralized gatekeeping institutions for the dissemination of facts and information are being formed ad hoc and for profit, but that's out of scope for this essay.)

If we want to bring the noosphere to its full potential, we must put a stop to this, and that can only be done by recognizing some principles:

  1. Once you put an idea out in public – or its representation in digital form, since it is economically identical – you do not have the right to control what other people do with it, because what they do with it does not harm you or take anything away from you or require any marginal labor or time from you, and controlling what they do with it is domination of what they do with their own minds and bodies.
  2. Copying, imitation, and iteration on existing ideas is a fundamental part of knowledge production. Without the ability to pull from a commons of free flowing, up to date, interesting ideas, art and knowledge production will wither.
  3. Since the digital noosphere is a non-scarce economy where once one puts out an economic unit (in this case, an idea) it can be infinitely and freely shared with anyone, one cannot put a price on an idea, or the digital representation of an idea, itself. One can put a price on the performance of an idea, or a material representation, or on the production of further ideas that you might otherwise withhold, though.
  4. Copyright law has never, and will never, actually serve artists. It is a tool to be used against them, and for censorship.
  5. Anonymity is important and should be preserved as much as possible.
  6. Mirror things you like, to make bandwidth less of a cost in disseminating ideas.
  7. The digital noosphere must be seen as:
    1. a gift economy in the sharing and creation of new ideas: this means that ideas are shared freely in the expectation that improvements on them, iterations of them, or new ideas will be shared in return, and also in return for credit – which, while not a right, should be strongly encouraged by social norm – which can be transformed into reputation, and from there into material compensation if needed, through things like Patreon and Ko-fi;
    2. and an economy centered around a huge shared commons of existing resources: this means that all shared ideas go into the commons, and, to protect this communal wealth from extraction and exploitation, where the communal labor and wealth is enjoyed but not contributed to, iterations and modifications of ideas from the commons must also be part of the commons.

These principles are why I license all of my work as Creative Commons Attribution-ShareAlike 4.0: whether or not such licenses are legally enforceable (and in my view they should not need to be), they represent an informal contract between me and my readers as to what they can expect from me and what I would like to see from them: attribution, and contribution of any derived work to the commons in the same manner that I contributed mine, are what I expect, and in return I allow them to copy, redistribute, modify, and make derived works based on my work as much as they like. I know this won't make a change systemically – I don't know how we can, in the face of "those weary giants of flesh and steel" – but that's my small part to play.

I also don't think the right to restrict the use of your work once you've publicly released it should exist, so using a license that uses the copyright system against itself, to disable it by forcing any derived works to go into the commons – where they belong – seems ethical to me: I'm only restricting people's ability to dominate others through introducing IP, not to exercise autonomy. Don't confuse domination for an exercise of autonomy.

6. The intellectual property views of traditional artists in the age of NFTs and Generative "AI"

I recently came to a really interesting realization.

So, okay. We all remember the huge cultural phenomenon that was NFTs, that appeared for like a couple months and then immediately disappeared again, right?

What were NFTs exactly?

I'll tell you: they were a way of building a ledger that links specific "creative works" (JPEGs, in the original case, but theoretically others as well – and yes, most NFTs weren't exactly creative) to specific owners, in a way that was difficult to manipulate and easy to verify. Yes, it was implemented using blockchain technology, so that ledger was distributed and trustless and cryptographically verified and blah blah blah, but the core of it was establishing hard, verifiable ownership of a given person over a given piece of content, and preventing copying and tampering. It was an attempt to introduce the concepts and mechanics of physical asset ownership into the digital noosphere, to make it possible to own "digital assets."

The backlash against NFTs that I saw from indie artistic and progressive communities was centered on three fundamental observations:

  1. The concept of "theft" of a digital "asset" that you "own" is fundamentally absurd, because someone else creating a duplicate of some digital information that you "own" but publicly shared doesn't harm you in any way. It doesn't take away money or assets or access that you previously actually had, it doesn't involve breaking into something of yours, or damaging anything of yours, or threatening you.
  2. Physical-asset-like "ownership" of digital assets is not only also absurd, but completely impossible, because as soon as you publicly broadcast any digital asset, as many copies are made as people view your work. That's how broadcasting digital information works: it's copied to the viewers' computers – and from there all they need to do is "Right click, save as…" and then make as many copies as they want and distribute them themselves; and furthermore, any attempt to prevent this will always violate the freedom and privacy of everyone (see also: DRM).
  3. Treating infinitely copiable digital echoes, patterns of information stored as bits in a computer, as ownable assets introduces distorted, insane dynamics into the noosphere, because now you have market dynamics that aren't actually grounded in any kind of actual value or labor or rivalrous, scarce asset. And that's what we saw.

And what was the praxis, based on these critiques, against NFTs? Nothing less than widespread digital piracy. Not against corporations, but against individual artists. Now, you might dismiss this characterization, because that piracy wasn't technically illegal – as the right to own NFTs had not yet been codified into law – or because those artists were often grifters – incompetent, unoriginal, soulless techbros looking to make a quick cash grab – but the quality of a piece of art doesn't dictate whether it's a creative expression of some kind (we've all seen tons of incredibly lazy fanfic in our day, I'm sure), and the technical legality of what was done doesn't actually change the character of the action (if all IP were abolished tomorrow, I'm sure most indie artists would still insist on it, in the current cultural climate, but we're coming to that)!

So the response to NFTs was fundamentally just the idea that you can't own an image or other artistic work that is purely represented as digital information, because it's infinitely copyable and piracy is a thing – and because owning pieces of the digital noosphere is illegitimate and introduces all kinds of bad mechanics into the economy.

And I'm sure you all can see where I'm going with this now.

Because, now that GenAI is on the scene, what has become the constant refrain, the shrill rallying cry, of the indie artists (as well as the big megacorporations, funnily enough)? Nothing less than the precise inverse of what it was in the face of NFTs:

  1. Copying information – a digital "asset" of some creative work – is now theft, and causes real damage to those who've had it copied; they somehow lose something deeply important in the copying.
  2. We must rush to introduce centralized registries, or embedded metadata, about who owns what digital "asset," and rigorously enforce this ownership with controls on copying and viewing and usage, at whatever cost, through means like DRM.
  3. Treating infinitely copiable digital echoes as if they're ownable physical assets is not bad, but in fact important and necessary to save the economy, freedom, democracy, and artistic livelihoods!

Not only that, but suddenly piracy, especially piracy of an individual artist's work, is the highest crime imaginable. Look at how people are talking about Meta using libgen – a tool all of us use to pirate the works of individual artists every day, from what I can tell looking at online discussion in artistic and progressive circles – to get books to train Llama!

Suddenly, it feels as if every independent artist that hated NFTs when they came out would actually be a fan of them, if they'd been introduced by a different cultural subsection of the population (artistic advocates instead of cryptobros), if they'd been explained in different terms (in terms of "preventing exploitation of labor" and "worker ownership of the products of their labor" instead of in terms of capitalist property and financial assets), and if they'd arrived after the advent of generative AI.

What the fuck is going on here?

I think it's two things.

One, as much as we valorize independent artists and progressive activists as vanguards of morality and clear-sightedness and human values, they're just humans like the rest of us, and ninety-nine percent of the time their reactions to things are dictated by tribalism – if something is introduced to them by a side of the culture wars they don't like, it's bad; if it's introduced by a side they do like, it's good, and it's as simple as that. So since NFTs were introduced by cryptobros, they found whatever reasons they needed to say NFTs were bad, and when techbros (often former cryptobros) introduced GenAI, progressives and artists found whatever justification they needed to say GenAI was bad.

The other aspect, I think, is material interests. When NFTs originally came around, they were solving an economic problem no one had yet – needing to own digital assets to protect economic interests – so they were mostly peddled by grifters and scam artists, and they offered no material benefit to artists, while coming from a side of the culture war artists are rightly opposed to – so it was easy (if also, perhaps only incidentally, right) for artists to dismiss and make fun of them. But now that GenAI exists, the underlying goals of the NFT technology and movement, its underlying philosophy, actually do serve the economic interests of artists, so now they're embracing them, mostly without even realizing it. Basically, it's as simple as that: the economic interests of artists weren't in play before, so they were free to make fun of grifters and scam artists and play culture war with an easy target, but now that their economic interests are at stake, they've been forced to switch sides.

So it's not as if this shift is exactly irrational or nonsensical. It makes sense, and is even sympathetic, at a certain level. The point I'm trying to make here is that no matter how morally justified and principled the popular backlash against these things may seem, it fundamentally isn't. It's just base, selfish economic interest and culture-war tribalism all the way down. Artists are not the noble outsiders we make them out to be; they're just as much an economic class with a tendency toward broad, amoral backlashes to protect their interests as Rust Belt workers are. That doesn't mean individual views on the matter can't be nuanced and principled, or that you can't even find some way – although I don't see a convincing one – to thread the needle and condemn both NFTs and GenAI, but on a societal level, the public outcry is no more principled than an amoeba's reaction to negative stimuli.

7. Why am I being so mean to indie artists? Am I a tech bro?

To be perfectly clear, the purpose of this post, and all my other posts on this page expressing frustration at popular views concerning information ownership and "intellectual property," is not to punch down at independent artists and progressive activists. I care a lot about them, because I'm one, and I know many others; I'm deeply sympathetic to their values and goals and their need for a livelihood.

The reason I write so much about this topic, directed as often if not more often at independent artists as at corporations trying to enclose the commons, is that while I expect big corporations – whether Disney or OpenAI – to be unprincipled, to push for convenient laws and regulations that expand their property rights, introduce scarcity, and lock down free sharing, expression, and creation for their own material gain, I expect so much better from artists and activists. So it's deeply frustrating to see them fail, to see them fall back on justifications and calls to action that only help companies like Disney and Universal, which have been the enemies of a free culture and artistic expression since time immemorial – ideas which will only lend power to forces that have been making the creative ecosystem worse and giving capital more control over ideas, and which, with their legitimation, will continue to do so. It's not because I want to defend the big GenAI companies – the world would be better if they all died off tomorrow – but because I think there is something deeply valuable at stake if we have a public backlash against free information and open access, especially if that backlash also aligns with, and thus will be backed by, powerful lobbyists and corporations and politicians.

Not to mention the fact that none of this will achieve what they hope: if we force GenAI companies to train only on licensed data and creations, they won't just stop training on people's data and creations, nor will they pay individual artists. They'll just offer huge, lucrative contracts to big content houses like Disney that already take ownership of all the work their artists do, and to every possible platform under the sun that artists use to distribute or even make their work. Those content houses will take the contracts, and the monetary incentive will push every platform and tool to require artists to sign over ownership of their work so that those platforms and tools, too, can take the contracts. In the end, GenAI will end up with the same training data, but in a situation where we've now encoded hardline ownership of rights over information and art – and no artist actually has those rights, only capital does. Not to mention that the need for such lucrative contracts will make any truly open-source development of AI, the kind that could take away the monopoly that companies like OpenAI have, finally impossible, only solidifying their power.

8. Are LLMs inherently unethical?

In my view, large language models are just tools.

Just like all tools, they can have interesting uses –

LLM agents; summarization, even in medical settings; named entity extraction; sentiment analysis and moderation to relieve the burden from people being traumatized by moderating huge social networks; a form of therapy for those who can't access, afford, or trust traditional therapy; grammar checking, like a better Grammarly; simple first-pass writing critique as a beta reader applying provided rubrics for judgement; text transformation, such as converting natural language to a JSON schema, a prerequisite for good human language interfaces with computers; internet research; vision; filling in parts of images; concept art; generating business memos and briefs; writing formal emails; getting the basic scaffolding of a legal document out before you check it; rubber duck debugging; brainstorming

– and really bad uses –

programming; search (without something like Perplexity); filling the internet with slop; running massive bot farms to manipulate political opinion on social media sites; creating nonconsensual deepfakes; shitty customer service; making insurance or hiring decisions; creating business plans.

They can also be used by bad actors towards disastrous ends even when they're being used for conceivably-good proximate purposes –

as an excuse to cut jobs, make people work faster, decrease the quality of the work, deskill people, and control people –

or positive ends – make people more productive, so they can work less, and endure less tedium, to produce the same amount, help disabled people, etc –

…just like any other tool.

But that's not how people approach it. Instead, they approach it as some kind of inherently irredeemable and monstrous ultimate evil that is, and must be, literally destroying everything, from human minds to education to democracy to the environment to labor rights. Anyone who has the temerity to have a nuanced view – to agree that the way capitalists are implementing and using LLMs is bad, but say that maybe some of the ethical arguments against them are unjustified, or that maybe they have some uses that are worth the costs – is utterly dragged through the mud.

This behavior/rhetoric couldn't, I believe, be justified if it were just a response to the environmental impact of LLMs, or to the way data labellers are exploited: the answer to that, as for anything else in our capitalist economy that's fine in concept but produced in an environmentally or otherwise exploitative way – computers themselves, shoes, bananas, etc. – would be some combination of scaling back, internalizing externalities, and changing how it's implemented into something slower and more deliberate, all achieved through regulation or activism or collective action; not disavowing the technology altogether. (This is even assuming the environmental impact of LLMs is meaningful; I don't find it to be.)

In fact, all of the negative environmental pieces on LLMs (two representative examples: 1 and 2) fall afoul of a consistent series of distortions that, to me, indicate they aren't written in good faith – that, unconsciously, the reasoning is motivated by something else:

  • failure to provide any context in the form of the energy usage of actually comparable activities we already do and aren't having an environmental moral panic about, such as video streaming;
  • failure to take into account alternative methods of running large language models, such as local language models running on power efficient architectures like Apple Silicon;
  • the unjustified assumption that energy usage will continue to hockey-stick upward forever, ignoring the rise of efficiency techniques on both the hardware and software side such as mixture of experts, the Matryoshka Transformer architecture, quantization, prompt caching, speculative decoding, per-layer embedding, distillation to smaller model sizes, conditional parameter loading, and more (a back-of-the-envelope sketch of what quantization and model size alone do to resource needs follows this list);
  • failure to compare against the aggregate power usage of other widespread activities, like computer gaming – LLM power use may seem outsized only because of how centralized it is;
  • and more I can't think of right now.
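
To give a rough sense of why the quantization and model-size points matter, here is a back-of-the-envelope sketch – pure parameter-count arithmetic, not measurements, and real runtimes add overhead for activations and context – comparing the memory footprint of the weights of a small local model against a large hosted one at different precisions.

    def weight_memory_gb(params_billions, bits_per_param):
        """Approximate size of a model's weights: parameter count times bytes per parameter."""
        return params_billions * 1e9 * (bits_per_param / 8) / 1e9

    for params in (3, 70):            # a small laptop-class model vs. a large one
        for bits in (16, 8, 4):       # fp16, int8, 4-bit quantization
            print(f"{params}B parameters at {bits}-bit ≈ {weight_memory_gb(params, bits):.1f} GB")

A 3-billion-parameter model quantized to 4 bits fits in roughly a gigabyte and a half of memory – laptop territory – which is a very different resource profile from the frontier models these environmental pieces extrapolate from.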

It also can't be justified in response to the fact that LLMs might automate many jobs. The response to that is to fight to change who benefits from that automation, to change who controls it and how it is deployed, so it's used to let workers work less to produce the same amount (and get paid the same), or to let them produce more for the same amount of work (and thus get paid more), instead of being used to fire workers. Hell, even if that's impossible, we know how automation plays out for society in the long run: greater wealth and ease and productivity for everyone. Yes, there is an adjustment period where a lot of people lose their jobs – and you can't accuse me of being callous here, since I'm one of the people on the chopping block if this latest round of automation genuinely leads to long-term job loss in my trade – and we should do everything we can to support people materially, financially, emotionally, and educationally as that happens, and it would be better if it didn't have to happen, but again, if the concern were truly about lost interim jobs during a revolution in automation, the rhetoric wouldn't look like this, would it?

Fundamentally, I think the core of the hatred for LLMs stems from something deeper. As this smug anti-LLM screed states very clearly, the core reason the anti-LLM crowd views LLMs the way it does – as inherently evil – is that they've bought fully into a narrow-minded, almost symbolic-capitalist mentality. If and only if you genuinely believe that something can only be created through what you believe to be exploitation would it be justified to act the way these people do.

Thus, while I wish anti-LLM people's beliefs were such that discussing LLMs "on the merits" – how to scale them back, or make them more efficient, or use them wisely – was something they could come to the table on, their moral system is such that they are forced to believe LLMs are inherently evil, because they require mass copyright "theft" and "plagiarism" – i.e., they're fully bought into IP.

Because yeah, in theory you could make a copyright-violation-free LLM, but it'd inherently be a whole lot less useful – in my opinion probably not even useful enough to break even on the time and energy it'd cost – because machine learning doesn't extrapolate from what it's learned to new things the way human minds do. It just interpolates between things it's already learned – I like the term "imitative intelligence" for what it does – so if a particular reasoning pattern or type of problem or piece of common sense or whatever isn't in its dataset, it can't handle that task, or tasks like it, or tasks involving pieces of it, very well. Now, it learns extremely abstract, genuinely semantic "linguistic laws of motion" about those things – it isn't "plagiarizing" – but the need for a large amount of very diverse data is inherent to the system. That's why large language models only began to come to fruition once the internet matured: the collective noosphere was a prerequisite for creating intelligences that could imitate us.

So, if anti-LLM advocates view enjoying or using something they've already created, that they bear no cost for the further use of, that they publicly released, as "exploitation", simply because someone got something out of their labor and didn't pay rent to them for that public good (the classic "pay me for walking past my bakery and smelling the bread"), then, like… yeah. LLMs are exploitative.

Personally, it just so happens that I do not give a flying fuck about IP and never did – in fact I hate it, even when artists play the "I'm just a little guy" card. It is not necessary to make a living creating "intellectual property," and it only serves to prop up a system that furthers the invasion of our rights and privacy and control over our own property, and the encroachment of private owners – whether individual or corporate – into the noosphere, and to foster territorial, tribalistic approaches to ideas and expressions. Sell copies of your work only as physical items, or as special physical editions that someone might want to buy even if they have a digital copy, to pay for your work. Or set up a Patreon, so that people who appreciate your work and want you to continue doing it can support you in that, or do commissions, where, like Patreon, payment is for the performance of labor to create something new, instead of raking in money for something you already did.

I really don't believe that if you make information or ideas available to the world, you get to dictate what someone else does with them after that point; I think the idea of closing off your work, saying "no, scientists and engineers can't make something new/interesting/useful out of the collective stuff humanity has produced, because they're not paying me for the privilege, even though it costs me nothing and requires no new labor from me", while understandable from the standpoint of fear about joblessness and lack of income under capitalism, is a fundamentally stupid and honestly kind of gross view to hold in general.

But that's what they hold to, and from that perspective, LLMs truly can't really be neutral.

9. Rules for ethical use of large language models

10. TODO Analytic philosophy argument for philosophical egoism

11. TODO My system of nihilist ethics

12. TODO The problem with utilitarianism

13. TODO Perspectivist epistemology is not epistemic relativism

14. TODO Weberian disenchantment is a spook

15. TODO In defense of parsimony

16. How to do a revolution

And tonight, when I dream it will be
That the junkies spent all the drug money on
Community gardens and collective housing

And the punk kids who moved in the ghetto
Have started meeting their neighbors besides the angry ones
With the yards
That their friends and their dogs have been puking and shitting on

And the anarchists have started
Filling potholes, collecting garbage
To prove we don't need governments to do these things
And I'll wake up, burning Times Square as we sing
"Throw your hands in the air 'cause property is robbery!"

– Proudhon in Manhattan, Wingnut Dishwashers Union

Many leftists seem to have this idea that there will be one glorious moment, a flash in the pan, where we have a Revolution, and the old system is overturned so that we can construct a new system in its place. Some believe we can't do anything but wait, study, and "raise consciousness" until then, while others try to take useful, but limited, action of some kind in the meantime, like fighting back against fascism or various other forms of activism.

The problem with this idea is that, as flawed as our current system is, many people depend on it, often desperately and intimately and very legitimately, with no clear way to do without it. Yes, the needs served by the system could be provided for in other ways; if that weren't possible, then overturning the system would be wrong. However, the presence of the system – providing for those needs, often explicitly shutting out and excluding other means of providing for them, and propagandizing us against even thinking of still other means – has ensured that those new systems we envision are not in place, and that our muscles for constructing them are atrophied.

Thus, if the system were to be overturned overnight in some glorious Revolution, there would not be celebration in the streets, there would not be bacchanals in honor of the revolutionaries. There would be chaos and destitution, weeping and gnashing of teeth, the wearing of sackcloth and ashes, even as the glorious Marxist-Leninist-Maoists scolded the mourners for mourning an exploitative system.

What can we do, then? This system must be overturned – or, at least, we must struggle toward that end – so how do we avoid this outcome?

The key is to build our own system in the interstices of the old one. Each of us must go out and try to create some part of the system we would like to see, according to our expertise – if you're a doctor, practice medicine for your friends outside the traditional healthcare system, inasmuch as you can; if you're a programmer like me, build systems for your friends to use that exist outside the software-industrial complex; if you're an academic, steal the papers and ideas you're exposed to and make them available for others, give impromptu classes; no matter who you are, take part in making and distributing food and resources if you can, however you can; take part in skill-shares; call each other instead of the police and mediate your own disputes; protect each other – perhaps institute a rotating duty of protection for group events; in short: do what you can, according to your skills and abilities, to provide, for those immediately around you, an alternative to the system. Don't just "organize" for activism or to fight fascists. Organize to actually provide useful services. Organize to fill potholes!1

The next step is to slowly grow whatever practice or service or community event you've started so it can serve more people, and so that more people can get involved and help. Do this according to whatever ideas about organization you have – I'm not here to talk about that component of it. But the important part is to do it. Don't focus on growth at all costs; make sure to maintain the original spirit, purpose, and values of the thing; don't let legibility, acceptability, and so on corrupt what it is; and don't let it grow beyond whatever its natural size is. But let it grow. And when it reaches the point past which you don't think it should grow anymore, try to encourage the creation of similar systems, the following of similar practices, in other places far and wide, on the basis of your practical experience. Maybe, if you can afford it, travel, and plant those seeds yourself. Then network those growing trees together, so that they can aid each other in times of need.

Remember, the point is to provide things people need. Not to grow for its own sake. Not to "do leftism" – so it shouldn't even be overtly ideological, or overtly targeted at leftists, or anything like that, and it should especially not exist purely in political domains, to fight political battles – but to do something people need done.

If we do this, then if the system is ever toppled, we'll be ready: we'll have built things that actually have a shot at taking over from the old system and providing for people. There will be horrible growing pains to be sure – shortages, bad organization, unprepared networks, what have you – but at least there will be something there. What's more, we'll have practiced, grown experienced, actually learned how to be adults and do the things we wanted to take over from the system, instead of just demanding they be done but never learning how to do them. Even better, we'll have had time to experiment with all the different ideas and ideologies around organizing, and figured out which ones work and which don't, which are more successful, and which aren't.

In fact, if we do this right, there may not even be a need for us to initiate a "Revolution" against the system. In my ideal vision of a "revolution" against the system, we just continue building our alternatives, providing for more and more people, and in the process purchasing investment and buy-in from them in our ideas and in our systems and networks and organization, building good will and loyalty with them, until finally our alternative systems threaten the system as it exists enough – as the Black Panthers did – that the system descends upon us to throttle us. And maybe, hopefully, we'll be strong and numerous and self-sufficient enough to resist, and have enough love and good will and investment, from all the people we help, that we'll be able to make it a public relations disaster for the powers that be to grind us beneath their heel, and they'll be forced to withdraw and let us live our new, free lives in peace.

And hey, if the revolution doesn't work out? At least we helped some people.

17. AI enables privacy laundering

I think this video is really emblematic of a serious problem that we are going to have as a society in the future: privacy laundering by means of AI.

They say at the beginning of the video that they have a rule at Corridor that they don't record people without their knowledge and consent. However, they have a goal they want to achieve that surveillance will make significantly easier, so they have a motivation to come up with a rationalization for that surveillance, and AI offers the perfect opportunity: they convince themselves that because they have the AI look at the non-consensual surveillance footage and then answer questions about it, instead of looking at the footage directly themselves, it's somehow better.

It isn't.

The AI narrates specific details about the footage, including identifying characteristics of individuals; they're getting everything they would have gotten from the footage anyway, just with the AI as a middleman.

Maybe, being generous and assuming they only ask specific questions, instead of general ones like "what can you see?" or "what happens in this video?", the range of information they can access is slightly more limited, in that they only get responses to specific questions and so won't learn things they wouldn't think to ask about. Even so, this is still meaningfully non-consensual surveillance, and the fact that there's an AI intermediary makes no material difference to the moral and practical implications involved.

We see this same logic, more worryingly, in various government regulatory proposals for client-side scanning, including the "Online Safety Act" from the UK, which passed, and the thankfully rejected "Chat Control 2.0" EU proposal and Australian "online safety standards" (coverage of its modification here). The idea is the same: just because a human isn't directly looking at the raw data, it's supposedly private – even though the AI doing the scanning of the actual data is controlled by the same humans doing the querying, so it could be expanded to look for anything, and the humans looking at the AI's reports still get a ton of data about users, most of it not illegal at all, but falsely flagged.
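To see why the intermediary makes no architectural difference, here is a hypothetical, deliberately simplified sketch of the structure these proposals share – not any real system's code, and real proposals use perceptual hashes or machine-learning classifiers rather than exact hashes; the class and method names are mine, for illustration only. The point it shows: the list of things being searched for lives with the operator, can be silently expanded to cover anything, and every "match" ships user data off the device to human reviewers.

```python
# Hypothetical, simplified sketch of a client-side scanner -- not any real
# system's implementation. The control structure, not the matching method,
# is what matters: the operator decides what is searched for.

import hashlib
from typing import Optional

class ClientSideScanner:
    def __init__(self, target_hashes: set):
        # The target list is supplied (and silently updatable) by the operator;
        # the person whose device runs this cannot audit what is searched for.
        self.target_hashes = set(target_hashes)

    def update_targets(self, new_hashes: set) -> None:
        # Nothing in the architecture limits this to illegal material:
        # it can be expanded to look for anything at all.
        self.target_hashes |= new_hashes

    def scan_and_report(self, message: bytes) -> Optional[dict]:
        # Runs on the user's own device, on the plaintext, regardless of
        # whatever encryption protects the message in transit.
        digest = hashlib.sha256(message).hexdigest()
        if digest in self.target_hashes:
            # A match ships user data off the device to human reviewers --
            # including the false positives discussed above.
            return {"digest": digest,
                    "preview": message[:64].decode("utf-8", "replace")}
        return None
```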

Footnotes:

1

I will admit, I fall short on this. I focus on trying to educate the people in my local community on tech-related things, because that's my strong suit, but beyond that I tend to be very reclusive, mostly because my disability means that being in non-controlled, changing environments – especially if there's a lot of noise or visual complexity – and talking to people is completely overwhelming and exhausting.

This work by Novatorine is licensed under CC BY-SA 4.0