How to use a large language model ethically

  1. Prefer local models (a minimal sketch of what this looks like in practice follows this list)

    1. So you're not supporting the centralization, surveillance, and landlordization of the economy any further
    2. So you're not taking part in necessitating the creation of more data centers, which – while they don't use a total amount of water and energy that's out of line with many other things in our technological society – concentrate water and energy demands on communities in ways they can't prepare for, and which hurt them
    3. So you're harder to subtly manipulate, influence, propagandize, and censor through the invisible manipulation of the models you're using
  2. Don't shit where you eat – don't spread unedited AI slop everywhere into the very information ecosystem it, and you, depend on. Unedited, unreviewed AI output is only for your own consumption, when you find it useful
  3. Only compress – going from more text to less text is usually a good idea; going from less to more is just decreasing information density for no reason
  4. Don't offload responsibility – i.e., don't say "ChatGPT says…" or whatever.

    1. That's an escape hatch for pasting bad/wrong stuff you didn't double-check into everyone's faces. Take ownership of what it outputs; that way you're incentivized to make sure it's more or less correct.
    2. Although if you're not going to check it either way, staying quiet about the source just makes things harder on people – so if you're going to be an asshole, at least announce it!
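
For concreteness, here's what point 1 looks like in practice: a minimal sketch that talks to a model served entirely on your own machine. It assumes you're running Ollama on its default port with a small open-weights model already pulled; the endpoint and model name are illustrative, and llama.cpp's server exposes the same OpenAI-compatible API.

```python
# Minimal sketch: query a locally hosted model through Ollama's
# OpenAI-compatible endpoint. Assumes `ollama serve` is running on the
# default port and a small open-weights model has been pulled.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local endpoint: no cloud round-trip
    api_key="ollama",  # placeholder; local servers don't check the key
)

response = client.chat.completions.create(
    model="llama3.2",  # illustrative: use whatever model you've pulled locally
    messages=[{"role": "user", "content": "Summarize this in two sentences: ..."}],
)
print(response.choices[0].message.content)
```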

Note: There is a caveat to my point on local models, however: datacenter models are more energy- and CO2-efficient than running an equivalently sized model locally. They can also run larger and much more useful proprietary models, and sometimes there's a threshold of capability above which a model is worth the energy and time spent on it, and below which it's completely not worth it – in which case refusing the datacenter model will just waste more time and energy. After all, saving a human some time is more important IMO than the totally negligible and overblown energy usage of their individual use of AI.

Moreover, refusing to use datacenter models in support of the first two points is largely a symbolic gesture: your individual usage of the models is not even remotely what's driving the expansion of data centers or the centralization of anything – not just because you're a tiny part of it, but also because, as the AI hype bubble has shown, they'd do it whether anyone uses it or not. So this is less of a hard and fast rule than the others, and more about keeping yourself personally "pure," and avoiding manipulation and privacy breaches.

It's more like the usage of any other centralized Big Tech service: avoid it if you can, but sometimes it really is the best option.

If you're going to use a datacenter model, my advice is:

  1. Don't use xAI's or OpenAI's models; prefer Google's instead. Google is still an evil capitalist megacorp, but at least it seems to care about green energy and isn't actively producing sycophantic or nazi models. Its TPU (Tensor Processing Unit) hardware for model training and inference is also significantly more efficient.
  2. Prefer smaller and more efficient models. Prefer mixture-of-experts models, which only activate a fraction of their parameters per token.
  3. Use models through your own locally hosted interfaces and through provider-agnostic proxies like LiteLLM or OpenRouter when you can, to avoid lock-in (see the sketch after this list).
  4. Prefer to pay for your models, so you're not externalizing costs, and so you're kept honest and incentivized to prefer local models when you can use them.
  5. Read privacy policies carefully. Prefer local models.
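
On point 3, here's a minimal sketch of the lock-in angle, assuming LiteLLM: its completion() wrapper keeps your calling code identical whether the backend is Gemini, something routed through OpenRouter, or a local model, so switching providers is a one-string change. Model names are illustrative.

```python
# Minimal sketch: route model calls through LiteLLM so swapping providers
# (or dropping down to a local model) is a one-string change, not a rewrite.
from litellm import completion

messages = [{"role": "user", "content": "Compress this into one sentence: ..."}]

# Datacenter model via Google (expects GEMINI_API_KEY in the environment)
response = completion(model="gemini/gemini-1.5-flash", messages=messages)

# Same call, different backend: a local model served by Ollama
# response = completion(model="ollama/llama3.2", messages=messages)

print(response.choices[0].message.content)
```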