Who Should Analyse Synthetic Media?

Marc on 2021-09-08


Yesterday an article appeared over on Tech Policy Press that raised an interesting question: How should we govern access to synthetic media detection technology?

Programs for generating synthetic media have advanced by leaps and bounds over the past half-decade; what would have required vast computing power five years ago can now be produced on little more than a slightly above-average laptop. With the barrier to entry dropping ever lower, a key question emerges: How can we defend against synthetic media?

The first idea that may come to mind is simply to release these detection tools to the public. Surely then, if a user is unsure of a piece of content's authenticity, they can just run it through one of these tools? Unfortunately, such an approach may not be the best way forward, as the article points out:

"The more broadly accessible detection technology becomes, the more easily it can be circumvented."

If we make access to detection technologies easier and more open, we risk compromising their effectiveness: bad actors get unlimited attempts to probe for blind spots and evade notice. This raises a second question: How do we prevent these detection tools from falling into the wrong hands?

Instead of allowing anyone to run content through detection algorithms, the article proposes various 'Levels of Access' to combat the issue, ranging from the aforementioned completely open approach to more tightly controlled methods. By staggering access across several levels, we can potentially achieve a so-called 'Goldilocks access', where individuals and entities have just the right degree of insight into how a system works. Restricting information to a 'need to know' basis in this way may help curb bad actors' ability to thwart detection systems.

Each of these levels has its own benefits and drawbacks. If detection systems are only available as black boxes (i.e. a user submits a piece of content and receives only an 'authentic' or 'inauthentic' verdict), we risk users losing faith in the system, since they have no way of knowing how it arrived at its decision. Conversely, exposing too much information puts us back where we started, with bad actors able to seek out faults.
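To make the idea concrete, here is a minimal sketch of what a tiered-access API around a detector might look like. Everything here is hypothetical: the level names, the placeholder scoring function, and the fields returned at each tier are my own illustration, not a design from the article.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class AccessLevel(Enum):
    BLACK_BOX = auto()  # verdict only
    SCORED = auto()     # verdict plus a confidence score
    EXPLAINED = auto()  # verdict, score, and an explanation of the decision


@dataclass
class DetectionResult:
    authentic: bool
    score: Optional[float] = None
    explanation: Optional[str] = None


def _model_score(content: bytes) -> float:
    # Placeholder stand-in for a real synthetic-media classifier.
    return 0.9


def analyse(content: bytes, level: AccessLevel) -> DetectionResult:
    # The raw score is computed internally regardless of access level;
    # the level only controls how much of it the caller gets to see.
    raw_score = _model_score(content)
    verdict = raw_score >= 0.5

    if level is AccessLevel.BLACK_BOX:
        return DetectionResult(authentic=verdict)
    if level is AccessLevel.SCORED:
        return DetectionResult(authentic=verdict, score=raw_score)
    return DetectionResult(
        authentic=verdict,
        score=raw_score,
        explanation="summary of the features that drove the verdict",
    )
```

The point of the structure is that the model itself never changes between tiers; only the surface area exposed to the caller does, which is what lets a platform grant journalists or researchers richer output than anonymous users without running separate systems.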

The issue of synthetic media is only going to become more pressing in the coming years as the barrier to entry continues to lower. As such, we must manage the delicate balancing act of ensuring users can meaningfully verify the content they view without giving attackers the means to exploit detection systems. A tiered system of access may help strike that balance, determining who should be able to use these systems and how much access they should be granted.