Fighting Disinformation Fairly


I received a comment some time ago regarding my post on fledgling tools aimed at combating disinformation. Of the tools discussed in the post, one was a Microsoft-led application that automatically authenticates video footage, currently being trialled by various news organisations to see how it performs in the field. Deploying algorithms for this purpose, however, raises concerns for some, as the commenter writes:

Placing trust in algorithms that remove or that hide content without any human input sounds like a recipe for disaster. Ultimately, these algorithms are designed by people so they are not infallible and they are still subject to our own biases.

An excellent point. Algorithmic solutions to complex online problems have faltered in numerous ways, ranging from excessive censorship to inaction where action was needed. As such, I thought I'd spend this post looking at an approach to tackling disinformation that helps ensure a more level playing field.

Generally speaking, there are three possible approaches to tackling disinformation online: human-centric, algorithmic, and a mix of the two. Although an algorithm can comb through data far faster than a human, we cannot guarantee that it has built up a suitable degree of context.

The human-centric approach, however, also has its drawbacks. An immovable bottleneck for human reviewers is always going to be how fast they can meaningfully review the claims made by a given piece of content. This drawback is only compounded when we take into account the various biases that may be at play.

This is where the human-algorithm approach may offer the best solution to tackling disinformation. By taking the rapid data-gathering abilities of an AI and pairing them with the manual analysis of a human, we get a more robust and dependable approach that combines the best of both worlds.

But how do we implement this approach? How should the workload be split between human and AI? Do we allow reviewers to double-check every piece of content after the algorithm has assessed it? Do we give the algorithm priority over human assessment because of its faster processing speed? It can be hard to determine just where to draw the line, particularly when what we decide touches on questions of freedom of expression.

One organisation putting the human-algorithm approach to use is the Global Disinformation Index (GDI), which assesses an entire domain (such as a news outlet) for potential disinformation risk. This contrasts with the approach of fact-checking a single piece of content (such as a tweet) adopted by some other organisations. Although the GDI does not state this explicitly, I feel their approach offers a more proactive rather than reactive means of tackling disinformation: if we can determine the risk that a domain is likely to carry disinformation, we are better equipped to step in quickly and limit the damage if and when it does publish falsehoods.

In practice, this human-algorithm approach works by giving a researcher a selection of articles from the domain to assess on a range of metrics. These metrics could include the neutrality, impartiality, and sensationalism present in the articles, used to determine the risk of the domain carrying disinformation. While this manual analysis is carried out, the domain is also assessed by AI for a range of signals. This automated side checks the site's format, choice of language, and discussion topics, the AI having been trained on a data set of several thousand known disinformation domains to assess for similarities.

While this approach does not produce a definitive yes or no on whether the domain is spreading disinformation (the GDI instead opts to produce a numerical 'Risk Rating' for the domain), it does offer a template that others could look to. Many social media sites seem to hand over fact-checking and general review of content to either human moderators or opaque algorithms, eschewing the more balanced approach found by deploying the two in tandem. By combining these two approaches, a more nuanced and balanced picture emerges that may allow us to better handle the ever-changing landscape of disinformation.
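To make the idea concrete, here is a minimal sketch of how manual metric scores and an automated signal might be blended into a single numerical risk rating. To be clear, this is not the GDI's methodology; the metric names, weights, and scale below are all my own illustrative assumptions.

```python
# Hypothetical sketch of a hybrid human/AI domain risk rating.
# Metric names, weights, and the 0-100 scale are illustrative
# assumptions, not the GDI's actual methodology.

METRICS = ["neutrality", "impartiality", "sensationalism"]

def human_review_score(article_scores):
    """Average a researcher's per-article metric scores (0 = low risk, 1 = high risk)."""
    per_article = [
        sum(scores[m] for m in METRICS) / len(METRICS)
        for scores in article_scores
    ]
    return sum(per_article) / len(per_article)

def combined_risk_rating(article_scores, ai_signal_score, ai_weight=0.5):
    """Blend the manual review with an automated classifier score into a 0-100 rating."""
    human = human_review_score(article_scores)
    blended = (1 - ai_weight) * human + ai_weight * ai_signal_score
    return round(blended * 100)

# Example: two articles scored by a researcher, plus an AI similarity signal.
articles = [
    {"neutrality": 0.2, "impartiality": 0.3, "sensationalism": 0.4},
    {"neutrality": 0.1, "impartiality": 0.2, "sensationalism": 0.3},
]
rating = combined_risk_rating(articles, ai_signal_score=0.6)
```

The `ai_weight` parameter captures exactly the line-drawing question raised earlier: shift it towards 0 and the human review dominates; shift it towards 1 and the algorithm does.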