AI enables privacy laundering
yt:https://www.youtube.com/watch?v=DPkRwUR7eoc
I think this video is really emblematic of a serious problem that we are going to have as a society in the future: privacy laundering by means of AI.
They say at the beginning of the video that they have a rule at Corridor that they don't record people without their knowledge and consent. However, they have a goal that surveillance would make significantly easier, so they're motivated to rationalize that surveillance, and AI offers the perfect opportunity: they convince themselves that because the AI looks at the non-consensual surveillance footage and answers questions about it, instead of them looking at the footage directly, it's somehow better.
It isn't.
The AI narrates specific details about the footage, including identifying characteristics of individuals; they're getting everything they would have gotten from the footage anyway, just with the AI as a middleman.
Maybe, being generous and assuming they only ask specific questions instead of general ones like "what can you see?" or "what happens in this video?", the range of information they can access is slightly narrower: they can't learn anything they wouldn't think to ask about. But even then, this is meaningfully non-consensual surveillance, and the AI intermediary makes no material difference to the moral or practical implications.
We see this same logic, more worryingly, in various government regulatory proposals for client-side scanning: the UK's "Online Safety Act", which passed; the thankfully rejected EU "Chat Control 2.0" proposal; and Australia's "online safety standards" (coverage of its modification here). The idea is the same: because a human isn't directly looking at the raw data, it's supposed to be private. But the AI doing the scanning is controlled by the human doing the querying, so it could be expanded to look for anything, and the humans reading the AI's reports still receive a ton of data about users, most of it not illegal at all, just falsely flagged.
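To make that last point concrete, here's a rough base-rate sketch. Every number in it is an illustrative assumption of mine, not a figure from any of these proposals; the point is just that when a scanner is applied to everyone's messages and illegal content is rare, false positives dominate whatever the human reviewers end up seeing.

```python
# Base-rate sketch: even a very accurate scanner, applied to everyone's
# messages, produces flags that are mostly false positives.
# All numbers below are illustrative assumptions, not from any real proposal.

messages_scanned = 1_000_000_000  # messages scanned per day (assumed)
prevalence = 1e-6                 # fraction of messages actually illegal (assumed)
true_positive_rate = 0.99         # scanner catches 99% of illegal content (assumed)
false_positive_rate = 0.001       # scanner wrongly flags 0.1% of innocent content (assumed)

illegal = messages_scanned * prevalence
innocent = messages_scanned - illegal

true_flags = illegal * true_positive_rate
false_flags = innocent * false_positive_rate
total_flags = true_flags + false_flags

print(f"flags per day:          {total_flags:,.0f}")
print(f"  correctly flagged:    {true_flags:,.0f}")
print(f"  innocent but flagged: {false_flags:,.0f}")
print(f"share of flags that are innocent content: {false_flags / total_flags:.1%}")
```

Under those assumed rates, roughly 99.9% of the flagged material that human reviewers would look at is innocent people's private messages.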