Context is important, particularly when combating misinformation. But how much context does a typical user need, and how can we clarify content at scale?
Visit the ‘trending’ or ‘explore’ tab on any given social network and you’ll likely be met with a fire hose of posts, comments, and discussions. While lively engagement on a platform is good, hidden amongst that stream of content there is going to be misinformation, deliberate or not. Although a single half-truth or misquote is unlikely to have a serious impact in the long run, when we scale that to millions of users the ramifications can become far more pronounced.
The issue of tackling misinformation has received considerable attention from social platforms in recent years as conflict and divisiveness have grown. The widely adopted content label, used to flag potentially misleading posts, can be useful for making users think twice about what they’re viewing, but labels usually either aren’t particularly detailed or don’t directly address what is disputed about the content in question. Some platforms have instead opted for a more aggressive stance by outright banning contentious posts; however, this can also be problematic. Repeatedly censoring discussion when uncalled for can hinder the social aspect that brought users to the platform in the first place. Alternatively, and just as concerningly, censorship can push users deeper into echo chambers that lack much-needed alternative viewpoints.
Social media is at its best when people can freely discuss and share in a healthy, balanced environment. That’s why it’s so important that we strike a middle ground between curbing the impact of harmful content and letting users decide what content they consume.
One such approach is to keep controversial posts available but offer additional context on just what a user is, or isn’t, seeing. While I’ve already mentioned how content labels don’t usually offer sufficient information, more detailed initiatives do exist in this space. From the likes of the Four Corners Project to the New York Times R&D News Provenance Project, multiple approaches are being taken to help clarify the content presented to us online.
The two approaches in question seek to address the issue of context in visual media. As the name suggests, the Four Corners Project allows additional information to be attached to an image and revealed by mousing over any of its four corners. For example, the top left corner might display information on the original photographer, the location the image was taken, and when it was captured. Another corner might show additional images taken around the same time or location so that users can see the same event from a different perspective.
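To make the corner-to-context mapping concrete, here is a minimal sketch of how such metadata might be structured. The field names and panel assignments are my own assumptions for illustration, not the Four Corners Project’s actual schema.

```typescript
// Hypothetical Four Corners-style metadata: each corner of an image
// reveals a different category of context on mouseover.
type Corner = "topLeft" | "topRight" | "bottomLeft" | "bottomRight";

interface ImageContext {
  topLeft: { photographer: string; location: string; capturedAt: string };
  topRight: { relatedImages: string[] }; // other perspectives of the same event
  bottomLeft: { caption: string };       // description supplied with the image
  bottomRight: { license: string };      // usage rights
}

// Example record for a single image (values are invented).
const example: ImageContext = {
  topLeft: {
    photographer: "Jane Doe",
    location: "Example City",
    capturedAt: "2020-06-01T14:30:00Z",
  },
  topRight: { relatedImages: ["same-event-angle-2.jpg"] },
  bottomLeft: { caption: "Crowd gathers at a rally." },
  bottomRight: { license: "CC BY 4.0" },
};

// On mouseover, a viewer would look up the hovered corner's panel.
function contextFor<C extends Corner>(ctx: ImageContext, corner: C): ImageContext[C] {
  return ctx[corner];
}
```

A real implementation would attach this lookup to hover events on the image’s corner regions; the point here is simply that each corner maps to one well-defined slice of context.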
The News Provenance Project shares a similar goal, but places more emphasis on maintaining context through a journalistic lens. While sharing many of the features of the Four Corners Project, NYT R&D’s approach also tracks where the image in question was used. With this ability, users can see how the headlines attached to a piece of media changed as the image traveled across various news outlets. This offers a unique means of seeing how the same event can be interpreted by different newsrooms.
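The usage-tracking idea can be sketched as a provenance record: the image plus a log of each outlet that ran it and the headline it appeared under. Again, the record shape and field names are illustrative assumptions, not the News Provenance Project’s actual data model.

```typescript
// Hypothetical provenance record: one entry per outlet that used the image.
interface Usage {
  outlet: string;
  headline: string;
  publishedAt: string;
}

interface ProvenanceRecord {
  imageId: string;
  usages: Usage[];
}

// Surface how framing shifted: every distinct headline attached to the image.
function distinctHeadlines(record: ProvenanceRecord): string[] {
  return [...new Set(record.usages.map((u) => u.headline))];
}

// Invented example: the same photo framed two different ways.
const record: ProvenanceRecord = {
  imageId: "img-001",
  usages: [
    { outlet: "Outlet A", headline: "Protest turns tense downtown", publishedAt: "2020-06-01" },
    { outlet: "Outlet B", headline: "Peaceful march draws thousands", publishedAt: "2020-06-02" },
    { outlet: "Outlet C", headline: "Protest turns tense downtown", publishedAt: "2020-06-03" },
  ],
};
```

Presenting the distinct headlines side by side is one simple way a viewer could show readers how interpretation diverged across newsrooms.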
While these approaches may help address misunderstandings that arise from seeing an event from only one perspective, it’s uncertain how they would operate at scale. As hundreds or even thousands of posts stream past in a user’s social feed, can we expect them to stop and assess each piece of content they view? And if not, does the ability for users to pick and choose what content they inspect help or hinder the misinformation situation?
View a plain text version of this post.