Tuesday, December 31, 2019

The Perils of Social Media Fact-Checking

I don't normally do "guest posts" on VFTP. After all, it's weird letting someone else write in your diary. But my friend Jon Hauptman, HMFIC at PHLster holsters, half of the podcasting team at The Guns Guide to Liberals, and one of the smartest dudes I know, wrote a post on Facebook the other day that I asked to copy here.

It was a thoughtful piece on how, in an attempt to mitigate the spread of inflammatory and divisive "fake news" stories, the social media giant's fact-checking process was going to have the opposite effect from the one intended, widening the rifts it was meant to mend.

He went one better than letting me copypasta the FB post: he polished it up and sent it to me via email. Take it away, Jon:

      "I’ve been noticing a new invasive, interrupting, and patronizing feature implemented by the do-gooders over at Facebook which attempts to screen posts and alert users to the potential that they’re viewing content which could be classified as misleading or “Fake News.” The clumsy implementation, at this time of launch, gives me the impression that, at best, this is a superficial attempt to “do something,” while utterly misunderstanding the how’s and why’s of people’s motivation to share and consume content (something which seems like it should be entirely within their comprehension, given their position as a nexus of information sharing and personal data harvesting). Combined with their choice of fact-checking affiliate (Snopes, about which I have no personal opinion, other than being aware that they’re not universally trusted), I’m left with the impression that Facebook’s implementation of this suffers from some amount of monoculture and wishful thinking. 
This rant is inspired by my first encounter with the new system, in which it identified an entirely innocuous meme related to the Die Hard movie as an attempt to spread false information.
“Fake news,” at the root, is an information literacy problem, not an information delivery problem. If it’s treated as the latter rather than the former, this kind of ridiculous false positive will be inevitable and rampant. And it’s already causing me to have zero faith in the process behind their information verification (and neither they nor Snopes has that kind of credibility to burn in the circles which will inevitably be fact-checked the most). Although, to be entirely fair to Facebook, if they’re under some kind of pressure (self-imposed or otherwise) to curb the spread of fake news, the primary tools at their disposal are ones of information delivery, so it’s not that hard for them to kludge this shoddy fix together. However, it’s still myopic and I expect it to be dramatically counterproductive.
Fact checking, unfortunately, isn’t what we think it is. Despite the superficial appearance, fact checking isn’t a helpful tool for determining the truth and forming an accurate opinion. Instead, it’s actually an in/out-group filter which segregates people by beliefs and values, while allowing each group to believe they hold the Factual High-Ground, and to claim any subsequent moral position which proceeds from being “factually correct.”
Facts don’t change people’s minds. If you don’t believe me, just Google “facts don’t change people’s minds” and review the plethora of articles and studies about this. I don’t necessarily expect you to change your mind (of course), but at least be prepared to argue against an enormous body of evidence, while performatively proving my point.

“Facts don’t change people’s minds,” is a substantially different assertion than, “Facts don’t matter.” Facts matter to our BELIEFS. Our beliefs are largely constructed around our feelings and values. And then, we seek out the facts which support those beliefs. In the event that a mind is changed by facts, that’s because that mind VALUES a new set of facts over the ones it already has.
If a set of facts has led someone to negative consequences, they might feel anxious, angry, frustrated, or confused. Those negative feelings result in a re-evaluation of values, beliefs, and supporting facts. Subsequently, a new set of facts is sought and collected, resulting in positive feelings. People get a dopamine hit when they learn something or recognize a pattern. Most facts gathered by people are taken on board in order to fit the patterns they already have in place. Or, often more accurately, we all have cognitive biases which look for and enjoy facts which fit our patterns. It’s usually a decent coarse reality check which helps keep us motivated on the various paths of progress surrounding our values and beliefs. At worst, it’s a dangerous set of cognitive blinders (but that’s why information literacy and discipline are important).
It feels good to be right and it feels bad to be wrong, and being right and wrong (in most of our lives) has to do with recognizing patterns and behaving in ways which generate desired results, more than it has to do with objective correctness. People are literally happy to do things “wrong” for long periods of time, until a time comes when their beliefs and values severely clash with a more “right” way of doing things. People feel, first. Then, they believe. Then, they gather facts to support the beliefs. If you question or deny their facts, that’s a proxy attack on The Self.
This is why fact checking is counterproductive. 
Confronting people, in a moment of their enthusiasm and dopamine reception, to tell them that this thing they already believe and are excited to share is actually, in fact, false, according to an authority they don’t already recognize, is a Backfire Effect generator. (Google “Backfire Effect.”) Frankly, I’m concerned that this multibillion-dollar company with access to all the data and experts they could possibly want, which sits as a primary communication medium between us and the world, fails to understand and recognize this fundamental human behavior. And that obliviousness makes me very concerned that they simply lack the ability to execute this properly without making the problem of fake news, disintegrating consensus reality, and polarization IMMEDIATELY worse.
They’re making the mistake of thinking that facts have anything at all to do with what people believe. And that people have any tolerance at all for being told what is true (especially by a source they don’t value as an authority). This revelation of the 2016 election is embarrassingly old news, and the inability to see the connection between that and efforts to mechanically mitigate the spread of fake news betrays an irresponsible obliviousness, regardless of how well intentioned it may be.
Here’s the War Game I think they should have played over at Facebook before implementing this, and what I would have brought up in a meeting, were I unlucky enough to be employed in that kind of monoculture: 
My main concern is doing this wrong, based on how people evaluate information in terms of their pre-existing beliefs and biases. There’s no way for this screening strategy to be perfect and unbiased enough to garner the necessary credibility among those who (according to Facebook) need it the most. And every single mistake (or anything which could be interpreted as error or bias) will be viewed as representative of the system and rife throughout the implementation. This system has to walk an incredible tightrope of being entirely correct and completely unbiased, while appearing as such to the people who are most inclined to view it as the opposite. But it’s like Facebook somehow has no idea that some significant number of Facebook users aren’t going to extend them the benefit of the doubt and evaluate this imposed fact checking in good faith. To them, both Facebook and any fact-checking authority to which it appeals are ALREADY biased, and this imposition is additional proof of their ham-fisted or malevolent social engineering.
Even if there were a 100% perfectly neutral and unbiased fact-checking organization, and even if Facebook's system had a perfectly neutral and unbiased filter for selecting content to be fact checked, people would still perceive it as biased. Given the degree to which we’re politically compartmentalized, liberals and conservatives will have far more exposure to their preferred content being subjected to fact-checking filters. If you're in an X bubble, you'll see a lot of X content being fact checked and filtered. And even if there were some neutral third-party audit which clearly showed equivalent fact checking was taking place, it would do basically nothing to change the perceptions of people who see their content of choice being "censored" constantly. And that's assuming a perfectly unbiased system.
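To put some hypothetical numbers on that exposure asymmetry, here's a minimal simulation sketch in Python. The feed composition, flag rate, and function name below are made-up illustrative values and labels, not anything Facebook has published; the checker in the sketch is deliberately perfectly even-handed:

```python
import random

def own_side_flag_share(feed_size=10_000, bubble_share=0.9,
                        flag_rate=0.05, seed=42):
    """Fraction of the fact-check labels a user sees that land on
    content from their own side, given a feed that is mostly their
    own side and a checker that flags every post at the same uniform
    rate -- i.e. the checker itself is completely unbiased."""
    rng = random.Random(seed)
    flags_own = flags_other = 0
    for _ in range(feed_size):
        own_side = rng.random() < bubble_share   # post matches the user's bubble
        if rng.random() < flag_rate:             # identical flag rate for both sides
            if own_side:
                flags_own += 1
            else:
                flags_other += 1
    total = flags_own + flags_other
    return flags_own / total if total else 0.0

# A user whose feed is 90% their own side sees roughly 90% of all
# fact-check labels attached to content they agree with, even though
# the checker treats both sides identically.
print(f"flags seen on own-side content: {own_side_flag_share():.0%}")
```

The lopsided result is driven entirely by the composition of the feed: the "bias" the user experiences is baked into the bubble before the checker ever runs, which is why even a clean third-party audit wouldn't change the perception.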
I’d be on the side of fact checking if facts actually had anything to do with what people believe. Given the relationship between values, beliefs, and facts, “fact checking” is values enforcement, even if it’s accidental. This is going to reveal itself to be a way to “check” people who believe unapproved facts, more than it’s a tool for improving the information diet."