- Machines are difficult to hold accountable. Not being people, machines can't be held directly accountable, so we have to go looking for the people behind the machine. But who that is can be very difficult to ascertain, since a vast army of people stands behind the implementation and operation of any machine, and any one of them can claim, with some reasonability, not to be responsible. For instance, the corporate executives who put a machine in place don't know its full code or setup, so they can always claim they didn't know it would make a certain decision and didn't want it to. The engineers and programmers, not being remotely in control of the machine's deployment and application, can always claim that they didn't want it used like that (perhaps absent necessary checks and balances, or making decisions over that particular thing), or that they couldn't reasonably be expected to predict the infinite complexity of the world with all its edge cases and account for it all in their software, so punishing them would be punishing human fallibility and limitation that they tried as hard as they could to overcome, given the constraints imposed on them by their own executives. And the executives of the companies providing the machines can always argue that any mistakes were the engineers' and programmers' fault, and so on…
- Perfect rule-followers. An important part of making human lives bearable within a greater social/civilizational system is the flexibility of the human decision-makers operating it. Their ability to bend the rules, make exceptions, or go out of their way to figure out something to help you is what allows the system to adapt to particularities and individualities, to care for people and help them out, even when the overall structure of the system isn't aware of any of that. The key to making a better system, in my opinion, is to have more of that case-by-case, individual decision-making flexibility, and using machines as default decision-makers directly counteracts it, because machines rigidly and absolutely enforce rules from the top down, with no situational awareness.
- No other values. Machines have only the values their designers explicitly program (or RLHF) into them. That makes them perfect servants to the will of the powers that be in any hierarchy, which is good for the hierarchy but not for the rest of us. While a human decision-maker may be forced to go along with those above them in the hierarchy most of the time, they can still rebel sometimes, even in small ways, through their other values of empathy, loyalty, justice, fairness, and so on. They can bend the rules, as mentioned in point 2, or strike, complain, whistleblow, or take any of a myriad of other actions that let them push back against the weight of the hierarchy above them. Machines will not do this, so in decision-making positions they centralize power further and provide no relief.
Instead, at most, I believe machines should be used to help human decision-makers gather and understand information, in order to further human decision-making power. Some key rules for this are:
- No metrics. Such information-gathering and information-understanding machines must not produce a black-box "metric" that's just a final number or rating; they should instead provide all the components necessary for a human being to make an informed decision themselves. As soon as the machine outputs vague, highly collapsed, abstract "metrics," you open the gate to rulebooks by which humans must make decisions based on that metric, and suddenly your "human in the loop" has become simply a cog in the greater machine.
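As a sketch of the distinction (all names and fields here are hypothetical, invented for illustration), the difference is between a tool that hands a human one opaque score and a tool that hands them the components intact:

```python
from dataclasses import dataclass, field

# Hypothetical structures for illustration only; not a real system's API.

@dataclass
class EvidenceItem:
    claim: str    # a discrete, human-readable statement of fact
    source: str   # where a human can verify it independently

@dataclass
class DecisionBriefing:
    """What the machine should hand over: components, not a verdict."""
    evidence: list[EvidenceItem] = field(default_factory=list)

def collapsed_metric(evidence: list[EvidenceItem]) -> float:
    # The anti-pattern: everything folded into one opaque number that
    # invites a rulebook ("approve if score > 0.7").
    return len(evidence) / 10.0

def briefing(evidence: list[EvidenceItem]) -> DecisionBriefing:
    # The preferred pattern: pass the components through intact, so the
    # human weighs them case by case instead of following a threshold.
    return DecisionBriefing(evidence=list(evidence))
```

The point of the sketch is only the return types: a `float` can be bureaucratized into a rule; a briefing of claims and sources still requires a human judgment.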
- Draw on real data. Any machine that helps human decision-makers gather and understand information must do so from externally stored information that was entered by humans, is understandable by humans, could be consulted separately, and is known-correct, such as databases and documents, not on the basis of vague associations and ideas hidden in its weights or code, even if the machine has been specially trained or programmed for the specific domain.
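A minimal sketch of this rule, using Python's standard-library sqlite3 as the known-correct external store (the table, its fields, and the sample rows are assumptions made up for illustration):

```python
import sqlite3

# A human-auditable store: every fact lives in a table a person could
# query directly, independently of the machine that reads it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE facts (topic TEXT, statement TEXT, entered_by TEXT)")
conn.executemany(
    "INSERT INTO facts VALUES (?, ?, ?)",
    [
        ("permits", "Permit #12 expires 2025-06-01.", "clerk_a"),
        ("permits", "Permit #12 covers zone B only.", "clerk_b"),
    ],
)

def lookup(topic: str) -> list[tuple[str, str]]:
    """Answer only from the external store, never from internal weights.

    Returns (statement, entered_by) pairs so the human sees both the
    fact and its provenance, and can consult the database separately.
    """
    return conn.execute(
        "SELECT statement, entered_by FROM facts WHERE topic = ?", (topic,)
    ).fetchall()
```

The design point is that `lookup` can only return what a human put in; if the store has nothing on a topic, the machine says nothing rather than improvising.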
- Citations. Any machine that gathers, summarizes, or synthesizes data must provide citations (as links) back to the real data sources from which it drew, preferably by breaking its output down into discrete statements of fact and then using a vector database to find the pieces of original data that align with each statement, rather than having the AI generate the citations itself. The more localized the citations are to a specific part of the source data, the better. Preferably something like this.
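The statement-matching step above can be sketched in a few lines. This is a toy, self-contained version: a bag-of-words `Counter` stands in for a real embedding model, and cosine similarity over those vectors stands in for a vector-database lookup; the passage IDs and sample texts are invented for illustration.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cite(statements: list[str], sources: dict[str, str]) -> dict[str, str]:
    """Map each discrete statement in the output to the source passage it
    aligns with best, localizing the citation to that passage."""
    vecs = {sid: embed(passage) for sid, passage in sources.items()}
    citations = {}
    for stmt in statements:
        sv = embed(stmt)
        citations[stmt] = max(vecs, key=lambda sid: cosine(sv, vecs[sid]))
    return citations

# Passage-level IDs ("doc#paragraph") keep citations localized.
sources = {
    "doc1#p3": "the permit expires on the first of june",
    "doc2#p1": "the permit covers zone b and no other zones",
}
citations = cite(
    ["permit expires june first", "permit covers zone b"], sources
)
# Each statement now maps to the source passage it best aligns with.
```

Because the match is computed against the stored passages rather than generated by the model, a citation can only ever point at data that actually exists, which is the whole point of the rule.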