It takes just 200 milliseconds to register and analyse what passes in front of our eyes. What damage could disinformation campaigns cause in that window?
Visual media is a key form of human communication, and processing images, videos, and photos is among our fastest senses. So fast, in fact, that we are able to analyse and respond to what we see in under 200 milliseconds. As such, it is vital that systems be in place to ensure we don't fall prey to the doctored images, videos, and memes used in disinformation attacks.
Like many issues facing us technologically, the first proposed solution is simply to apply an algorithm to figure things out. Although such an approach may help address the most blatant attacks, algorithms cannot effectively deal with the more advanced, and more subtle, campaigns we see on an increasingly regular basis. One key problem area is disinformation attacks that leverage memes and other visual formats. Such attacks pose serious challenges to AI systems owing to the ever-changing nature of the medium; a meme that was everywhere one week may have been replaced by something new the next.
As it currently stands, AI systems may be able to identify the content of an image but often lack the ability to determine its meaning and how it fits into a broader disinformation campaign. As Michael Yankoski et al. state in their article on the issues facing AI confronting visual disinformation tactics:
AI systems will need to understand history, humour, symbolic reference, inference, subtlety, and insinuation. Only through such novel technologies will researchers be able to detect large-scale campaigns designed to use multimedia disinformation to amplify or magnify how a group of people feel about their preexisting beliefs.
Evidently, identifying where falsehoods fit into a larger disinformation campaign is a challenge for AI systems that requires further research. Promising steps are being taken, however, with regard to combating deepfakes - those modified or synthesised forms of media that make individuals appear to do or say things they never did. Alongside a vast number of academic papers, Microsoft has recently partnered with various deepfake research organisations around the world to trial a video authenticator application, along with a slew of other tools to assist with ensuring the integrity of data. To further test the effectiveness of this technology, Microsoft has also partnered with a range of news organisations to help minimise the risk of an outlet accidentally reporting on deepfaked content.
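In broad strokes, detectors like the video authenticator score individual frames for signs of manipulation and then aggregate those scores into a verdict for the whole clip. The toy sketch below illustrates only that aggregation step; the per-frame scores would come from a trained model, and here they are hard-coded purely for illustration - the function names and thresholds are assumptions, not any vendor's actual API.

```python
def video_verdict(frame_scores, frame_threshold=0.8, flag_ratio=0.3):
    """Flag a video if enough of its frames look manipulated.

    frame_scores: per-frame probability (0-1) that the frame is synthetic,
                  as produced by some upstream detector (stubbed here).
    frame_threshold: score above which a single frame counts as suspect.
    flag_ratio: fraction of suspect frames needed to flag the whole video.
    """
    suspect = sum(1 for score in frame_scores if score > frame_threshold)
    return suspect / len(frame_scores) >= flag_ratio

# A mostly clean video with one noisy frame is not flagged...
print(video_verdict([0.1, 0.2, 0.9, 0.15, 0.1]))    # False
# ...while a heavily manipulated one is.
print(video_verdict([0.85, 0.9, 0.95, 0.3, 0.88]))  # True
```

Aggregating over many frames rather than trusting any single one is what gives such systems some robustness against compression noise and the occasional false positive.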
As promising as these developments look, the cat-and-mouse game between the creators and detectors of deepfakes means we may eventually reach a point at which falsified media is too convincing even for the most sophisticated AI. When that point comes, focus will likely turn to determining the provenance of a piece of media - where it originated from. This is another area where AI and machine learning are being applied, automatically tracing disinformation back across a platform to the original poster, with promising results.
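One common building block for this kind of provenance tracing is perceptual hashing: an image that survives resharing with minor edits keeps a similar fingerprint, so near-duplicates can be grouped and the earliest timestamp taken as the likely origin. The following is a minimal, stdlib-only sketch using a crude "average hash" over tiny grayscale grids standing in for real images; production systems use far more robust fingerprints, and every name and threshold here is a simplifying assumption.

```python
def average_hash(pixels):
    """One bit per pixel: 1 if the pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def likely_origin(posts, query_pixels, max_distance=2):
    """posts: list of (timestamp, pixels). Return the earliest post whose
    image is a near-duplicate of the query, or None if nothing matches."""
    target = average_hash(query_pixels)
    matches = [(ts, px) for ts, px in posts
               if hamming(average_hash(px), target) <= max_distance]
    return min(matches, key=lambda m: m[0]) if matches else None

original = [[10, 200, 10, 200],
            [200, 10, 200, 10],
            [10, 200, 10, 200],
            [200, 10, 200, 10]]
# A reshared copy with slight brightness changes from re-encoding.
reshared = [[12, 190, 15, 195],
            [198, 12, 205, 8],
            [14, 202, 9, 197],
            [201, 13, 199, 12]]
posts = [("2020-09-02T10:00", reshared), ("2020-09-01T08:00", original)]
ts, _ = likely_origin(posts, reshared)
print(ts)  # 2020-09-01T08:00 - the earlier, original post
```

The brightness perturbations in the reshared copy do not change which pixels sit above the mean, so both images hash identically and the search resolves to the earlier post.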
The proposed algorithmic solution to disinformation is being explored extensively at present, from simply identifying falsehoods to determining the origins of a piece of content. Although media literacy and limiting echo-chamber effects may be more effective at targeting disinformation at its source, early results suggest that automating the analysis and removal of such content may be possible.