Do AI images and video mean "reality is broken?"

A lot of people seem to be angry, upset, or panicking about the advent of generative AI image and especially video models like Veo 3 and Sora 2. The claim is that since these systems let anyone effortlessly create photorealistic images and videos out of whole cloth, our society is finally, completely screwed: we have no way of getting information that we can know comes from actual reality. We can no longer trust anything.

Leaving aside possible technological solutions — such as cryptographic content-authenticity verification systems (developed in conjunction with organizations like the Content Authenticity Initiative and the Coalition for Content Provenance and Authenticity) — I think this fear is overblown. Before photography and video existed, the only media for storing, recording, and transmitting information were the spoken and written word, which are and always have been extremely easy to falsify, much as AI is now making photo and video evidence easy to falsify: anyone can simply make up anything. And yet we didn't all have psychosis; we weren't all incapable of knowing the truth; we didn't all lack evidence for what was real and what wasn't. Many people fell for frauds, lies, and hoaxes — as humans always will — but it wasn't inevitable or inescapable; it was a matter of education to establish priors, and of media literacy to ascertain how much a given piece of prospective evidence should cause us to adjust those priors.
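That "adjust your priors" idea has a precise form: Bayes' rule. As a toy sketch — all the numbers here are invented purely for illustration — the same claim from a source with a strong track record should move your belief far more than the identical claim from an unreliable one:

```python
# Illustrative Bayesian update: how much should a claim from a source
# with a known track record shift our belief that an event happened?
# All probabilities below are made-up numbers for illustration only.

def update(prior, p_claim_if_true, p_claim_if_false):
    """Posterior P(event | source claims it) via Bayes' rule."""
    numerator = p_claim_if_true * prior
    return numerator / (numerator + p_claim_if_false * (1 - prior))

# Start skeptical: prior that the reported event really happened = 0.10.
prior = 0.10

# A source that reports true events 95% of the time and fabricates 5%:
reliable = update(prior, 0.95, 0.05)   # posterior ~0.68

# A source that is right only 60% of the time:
shaky = update(prior, 0.60, 0.40)      # posterior ~0.14
```

The point of the sketch: the medium carrying the claim matters less than the reliability of whoever is making it, which is exactly what track-record checking estimates.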

The reason people are panicking is that they're framing the problem as one of being able to tell, from the piece of media alone — at first blush, visually, without any other context clues — whether it's fake. Framed that way, it's fundamentally a losing battle: whatever signals one might key off to spot a fake are precisely the signals tech companies are trying to change or eliminate in their generative AI systems, and in an adversarial relationship like that, against a gigantic billion-dollar tech company that always gets to make the first move, the average Joe — exhausted after work, without infinite time to keep up with developments — will lose. But I think this is a bad framing, a misdirection almost. Instead, it's worth remembering the tools we used to be rational, to stay in touch with reality, and to gather accurate evidence before hard-to-falsify visual information was invented: namely, paying careful attention to the source of the information instead of trusting anything you see. Check the source's track record, its fact-checking systems and institutional checks and balances, and its funding, associations, and possible biases. Think about it: newspapers could have (and often did) simply print complete falsities — yet we could, and did, figure out which were more or less trustworthy.

And how do you ascertain the factual track record of a given source? By comparing it against peer-reviewed scientific evidence where available; against what other sources say, including how they tell the same story and how they criticize the source you're inspecting; and against the results of actual court cases, investigations, and so on, where physical evidence is examined under a proper chain of custody, with systems of double-checking and independent observers free of conflicts of interest, precisely to prevent falsification of reality. And yeah, it's a little harder than just trusting any picture or video you come across on social media, and it does tilt informational legitimacy toward large organizations over individuals or "the people" — but it's not an insurmountable end of the world, and radical counternarrative sources will continue to exist as well.
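The chain-of-custody idea has a simple technical core worth seeing concretely. A minimal sketch, using only content hashing (real provenance systems such as C2PA additionally bind cryptographic signatures and metadata to the content; this shows only the integrity-check half):

```python
# Minimal integrity check, the kind of thing chain-of-custody procedures
# formalize: record a file's hash when evidence is collected, so anyone
# can later verify the bytes are unchanged. This is only the hashing
# half; real provenance systems also sign the hash with a private key.

import hashlib

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 digest of the content, hex-encoded."""
    return hashlib.sha256(data).hexdigest()

original = b"frame data straight from the camera"
recorded_hash = fingerprint(original)  # logged at collection time

# Later verification: the digest matches only if the bytes are intact.
assert fingerprint(original) == recorded_hash

# Any alteration, however small, changes the digest completely.
tampered = b"frame data straight from the camera!"
assert fingerprint(tampered) != recorded_hash
```

Flipping a single bit produces an entirely different digest, which is why a hash recorded at collection time makes silent tampering detectable.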

The problem falls predominantly on the generations alive today, not on society as a whole ad infinitum — and even for us it's surmountable. We're simply used to being able to trust any picture or video we come across online, so we will struggle to know what's real and what isn't until we find ways to educate ourselves and others in the necessary techniques. And I truly have hope that future generations — if we manage to fight back against post-truth, anti-intellectual society in general — will be able to use the same strategies our ancestors did for figuring out what's true.

I think it's also worth pointing out that generative AI images and videos are unlikely to meaningfully affect, for instance, court proceedings, because there, as I mentioned before, we have chain of custody, metadata, physical evidence, multiple witnesses, and evidentiary standards. Not to say, of course, that the legal system is perfect, or cannot be willfully manipulated, or doesn't make horrible mistakes with relative frequency; my point is simply that AI won't make it meaningfully worse, because we already have means of determining whether a piece of evidence was created out of whole cloth or edited.