Misinformation and Why It's So Hard
Tech giants continue to be the scapegoats for the spread of misinformation. So why haven't they made meaningful progress?
Unless you’ve been living under a rock, you’re no stranger to fake news. Misinformation has plagued our democracy for years, a contemporary form of psychological warfare for our digital society.
The media has largely levied the blame for its existence onto technology platforms. It's easy to see why: Facebook, Twitter, Reddit, and others are cesspools of misleading and polarizing fakery that draw out some of the worst of human nature. These journalistic characterizations have undoubtedly inflicted significant damage on these prominent brands. Consumer trust in tech platforms is at an all-time low, with no respite in sight. It's been years since misinformation first took center stage, yet it feels like nothing has changed. Why?
To understand the challenges of curbing its proliferation, let's explore why misinformation exists.
The nature of misinformation can be dissected into three stages: first, how it is created; second, the mechanisms by which it spreads and wreaks havoc; and third, how it is identified and eradicated.
CREATION
The internet has demolished the barriers to content production. Blog on Medium or Substack, or create short-form content on any social media platform. Think about what it would have taken to create public disinformation in a pre-internet world. You could run around your community, vocally spewing and repeating your rhetoric to anyone who would listen; you could visit your local newspaper and convince them to write an article; or you could scribble your thoughts on post-its and throw them into the wind. The internet has eliminated these hurdles and unlocked production, for better or for worse. And while widely available, real-time information has generally been a positive for society, the negative externality is that the sheer volume of bad information is orders of magnitude greater today than at any other point in history.
VIRALITY AND DISTRIBUTION
The most damaging aspect of the misinformation lifecycle is how content earns its reach. There are both structural and psychological factors that contribute to its spread.
The web has boosted content production, but more importantly, it has democratized distribution. It's one thing to have lots of people producing bad information in silos of one. It's another to give those people an audience. Social media has given us a megaphone. Not only are our networks more accessible than ever, but anyone can pay to run sponsored posts and get in front of larger, unsuspecting audiences. Cambridge Analytica's abuse of Facebook's ad products in 2016 is a prime example of how ad-driven profit models can unwittingly facilitate the spread of damaging information. If you are a bad actor with deep pockets, you're virtually unstoppable.
Social channels have risen as a mechanism for news consumption. According to Pew Research, 62% of adults in the US get their news from social media. A follow-up study from 2019 reinforces the notion of social media as a primary news source: 63% of participants reported that if they did not obtain news from Facebook, they would be only slightly informed, and 4% reported that they would not be informed at all. Only 33% of the 1.5K participants felt they would be well informed or fully informed without social media. While the early days of social media were about connecting with close friends and family, you would be hard-pressed to see much of that in your newsfeed today. Instead, the widespread adoption of these platforms has given way to environments rife with social posturing. Signaling value comes not just from being informed, but from informing others that you are informed. CNN reported that 37% of people regularly share news articles on Facebook or Twitter. So people are not only relying on social media for news, but are increasingly becoming vessels for spreading it across their networks. This hastens the spread of misinformation and generates organic traffic for its publishers.
REMOVAL
To correctly pinpoint posts to remove, we must assess not only their degree of truthfulness but also their intent.
The chart above presents the prime challenges of identifying bad content. On the x-axis is the amount of relative truth in a statement (i.e., to what extent can this post be proven factual?). On the y-axis is the intent to mislead (i.e., to what extent was its creator looking to obfuscate the truth and deceive the audience?). In the bottom right, we see what happens when a statement has a high degree of truth and no intent to mislead. In this case, you are simply right. In the bottom left, you might say something that is blatantly untrue but still have no intent to mislead. In this case, you are simply wrong.
Moving to the top half of the chart: if you create content with little to no truth and the intent to mislead, you are creating disinformation. This includes fake news, hoaxes, and other internet garbage.
Finally, it's possible to use a high degree of truth to mislead. This includes most propaganda and other forms of cherry-picking statistics to fit a worldview. Watch any health-related documentary and you'll come face to face with this reality. These films often advocate for or against a diet (e.g., low-carb, high-carb, vegan, keto) with scientific evidence marshaled on both sides — contradictions so prevalent that they render the truth indiscernible. Another common example of cherry-picking can be found in discussions of climate change, in which global warming deniers time-box brief periods of declining global temperatures to claim that global warming does not exist.
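To make the framework concrete, here is a minimal sketch of how those four quadrants might be expressed in code. The truthfulness and intent scores, the 0.5 cutoff, and the ContentAssessment structure are all hypothetical placeholders for illustration; no platform actually has a reliable way to measure either axis, which is precisely the problem.

```python
# A minimal sketch of the truth-vs-intent framework described above.
# The scores and the 0.5 threshold are hypothetical; real systems would
# derive them from fact-checking and behavioral signals, not hand-set values.

from dataclasses import dataclass


@dataclass
class ContentAssessment:
    truthfulness: float       # 0.0 = demonstrably false, 1.0 = demonstrably true
    intent_to_mislead: float  # 0.0 = no deceptive intent, 1.0 = clearly deceptive


def classify(assessment: ContentAssessment, threshold: float = 0.5) -> str:
    """Map a post onto the four quadrants of the chart."""
    truthful = assessment.truthfulness >= threshold
    misleading = assessment.intent_to_mislead >= threshold

    if misleading and not truthful:
        return "disinformation"   # fake news, hoaxes
    if misleading and truthful:
        return "propaganda"       # cherry-picked truths
    if truthful:
        return "simply right"
    return "simply wrong"         # an honest mistake


# Example: a factually shaky post with clear deceptive intent
print(classify(ContentAssessment(truthfulness=0.1, intent_to_mislead=0.9)))
# -> disinformation
```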
In this environment, we see two issues in evaluating content:
- Platforms refuse to be the arbiter of truth because most of what's written online cannot be graded on a black-and-white scale of true vs. untrue. The categorization of information shown above is a dramatic simplification. In reality, most information falls somewhere along the spectrum, neither unequivocally true nor untrue. (e.g., "Anna loves peaches!" Does Anna really love peaches? Maybe she did when she was younger, but grew to feel lukewarm about them. Now what?) The interpretation of reality is bound to subjectivity. Any company claiming to show only the "truth" takes on significant liability when competing interpretations of reality collide. In an era in which tech giants are no stranger to lawsuits and courtroom hearings, it's easy to understand why these platforms have been reluctant to act decisively.
- The evaluation of intent is also subjective and cannot be enforced consistently. How should companies act on content whose intent is difficult to evaluate? If a comment is factually inaccurate, is that being wrong or purposeful disinformation? If something is true but omits key opposing facts that would dilute its argument, is that propaganda or carelessness? The problem is amorphous, and because there is no framework detailing the conditions that distinguish fake news from simply being wrong, no policy can be enforced consistently.
So, misinformation is hard. Where, then, does the social responsibility lie for these tech platforms, and how should they be held accountable?
- Flagging potentially pernicious content
Just because companies don't have the basis to take every suspicious comment down doesn't mean they can't have a voice of their own. As stewards of the community, platforms should express their own evaluation of content by actively flagging and notifying users of content on the margin — posts that seem suspicious but are not yet policy-violating enough to remove. Inevitably, this would produce a large number of false positives by inadvertently flagging benign content, but that is far better than the opposite: not flagging enough and letting bad content seep through the cracks. So long as platforms have transparent, rational, and systematic reporting criteria, it's always better to be safe than sorry.
- Identifying bad actors
Scammers tend to have a track record and share common characteristics, and there's a lot platforms can learn by studying their behavior. For instance, spammers tend to leverage bots to post excessively and share scripts that hack distribution. Newer accounts are also more likely to be bad actors than older ones, since most accounts used for malicious acts are created expressly for that purpose. Cybersecurity experts often use what's referred to as a "honeypot" to study bad actors: an account that has been established as bad is placed in an environment where everything looks and acts normally but is in reality just a sandbox with no real users. This way, experts can study how the individual behaves when "interacting" with others and detect common habits or tricks used to further his or her agenda. Tech companies can use these tactics to quarantine suspect accounts and understand the mechanics of bad behavior (a rough sketch of how such signals might combine appears after this list).
- Penalizing bad actors
It's true that eradicating misinformation is a bit like a game of whack-a-mole: where you get rid of bad actors, new players will emerge. However, enforcing harsh penalties is crucial to deterring bad behavior and preventing communities of bad actors from forming. Online governance should also disincentivize repeat offenses; if you are a repeat offender, there should be guardrails in place that severely restrict your ability to make new accounts and easily rebuild your setup.
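As a rough illustration of how these three ideas could fit together, here is a sketch that combines flagging thresholds, bad-actor signals like account age and posting cadence, and penalties that escalate with repeat offenses. Every signal, weight, and threshold below is a hypothetical placeholder, not a description of any real platform's policy.

```python
# A rough sketch of how the three recommendations above might combine.
# All signal names, weights, and thresholds are hypothetical illustrations.

from dataclasses import dataclass


@dataclass
class Account:
    age_days: int
    posts_per_hour: float
    prior_violations: int


def suspicion_score(account: Account) -> float:
    """Heuristic score in [0, 1]: newer, noisier, repeat-offending accounts score higher."""
    score = 0.0
    if account.age_days < 30:        # new accounts are more often created to abuse
        score += 0.4
    if account.posts_per_hour > 20:  # bot-like posting cadence
        score += 0.3
    score += min(account.prior_violations * 0.15, 0.3)  # escalate with repeat offenses
    return min(score, 1.0)


def moderate(account: Account, content_risk: float) -> str:
    """Combine account suspicion with content risk into a transparent, tiered action."""
    risk = 0.5 * content_risk + 0.5 * suspicion_score(account)
    if risk >= 0.8:
        return "remove and restrict account"  # harsh penalty for clear, repeated abuse
    if risk >= 0.5:
        return "flag and notify users"        # suspicious, but not removable under policy
    return "no action"


# Example: a week-old account posting at bot-like rates, sharing borderline content
print(moderate(Account(age_days=7, posts_per_hour=40, prior_violations=1), content_risk=0.6))
# -> flag and notify users
```

The point is not the specific numbers but the shape of the system: transparent criteria, tiered actions, and penalties that compound for repeat offenders.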
Ultimately, it's clear we have a long way to go. The battle against misinformation will plague us for as long as the internet continues to freely enable creation and distribution, and tech platforms will continue to cower until they have surefire ways of evaluating truthfulness and intent. We, as consumers, must continue to raise our voices rationally, push for action over indecision, and hold these companies accountable to a higher standard. In psychology, there is a phenomenon known as the Rosenthal effect, whereby high expectations from those around you lead to improved performance. Let's push big tech to step up to the plate.