
What happens if automated fact checking succeeds?

Virtually everyone who is working on the Fake News / misinformation problem takes the same fundamental approach. They all want to separate Fake News from True News. They want to be able to definitively say “This here is untrue!” or “This article/paragraph/sentence is backed up by the available evidence.”

The most ambitious efforts are trying to automate this with AI, but most simply rely on crowdsourced fact checking or, as in the case of Facebook, agreement between highly regarded fact checking services.

While I fully support these efforts, I do have a problem with their approach. Several, actually, but this article is about one.

Firstly, I support them because I also want misinformation gone. I don’t like people being tricked into believing things which aren’t true. I don’t like people around me voting and acting on misinformation which is likely to hurt me and other people.

However, I have one particular concern which I haven’t seen expressed anywhere else yet: What happens if they actually, fully, succeed?

The True-or-False Bot

Imagine a not-too-distant future where we have an AI that can correctly identify misinformation 100% of the time. We already have Watson beating Jeopardy champions and matching doctors at parts of their job; maybe the next iteration of the Watson code surprises us, and suddenly we find ourselves able to fact check reliably at high speed. Maybe it will take 20+ years. Either way, what happens to the world?

Do we stop encountering false information? That certainly seems to be the implication to me.

What does that do to our minds? Do we start to take for granted that we can simply trust everything we read?

More importantly, what happens to the next generation who knows only this world? How do they learn to critique what they read? How do they learn to question ideas, when all ideas they encounter in their news feeds are reliably true?

OK, there will always be some level of spin, bias or interpretation that won’t be subject to the true-or-false bot, but if children grow up in a world where all ‘facts’ are guaranteed to be true, how will they ever learn to investigate new facts and ideas for themselves?

How vulnerable will this next generation be to propaganda by malevolent outsiders? Or by misguided fundamentalists who want to convert them to their belief system?

The fear that I have here is that complete success in removing all misinformation from our newsfeeds will lead to the end of critical thinking. And with critical thinking gone, what basis do we have for holding any beliefs? What do we have to protect us from a particularly persuasive false belief?

We will have cultivated a memetic monoculture with no immune system.

A single virulent idea could potentially wipe out all competing beliefs with little resistance, destroying decades, if not centuries of philosophical, socio-political, and scientific progress. And I haven’t even mentioned the potential for abuse of the AI system itself, the problem with ostracising people who believe differently and several other problems I have with this approach.

[Image: Carl Sagan / John Stuart Mill quote on silencing an opinion]

This is why I continue to believe that the only solution to misinformation is a system which automatically drags the best argument against a claim out into the spotlight and says “Give us your best argument! …And then let us reply.” If you want to beat fake news, you need to organise the web so that the best critique of any webpage is immediately available from anywhere that webpage is found. It is simple. Reliable. Robust. Effective.
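The core of this idea can be sketched as a simple index: a mapping from a page’s URL to the best-ranked critiques of that page, so any reader of the page can be shown them. This is a hypothetical illustration only, not rbutr’s actual implementation; the class and function names here are my own inventions.

```python
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url: str) -> str:
    """Normalize a URL so the same page always maps to the same index key."""
    parts = urlsplit(url.lower())
    return urlunsplit((parts.scheme, parts.netloc, parts.path.rstrip("/"), "", ""))

class RebuttalIndex:
    """Hypothetical rebuttal index: page URL -> ranked list of critique URLs."""

    def __init__(self):
        # canonical page URL -> list of (score, rebuttal URL) pairs
        self._index = {}

    def add_rebuttal(self, page_url: str, rebuttal_url: str, score: float) -> None:
        """Register a critique of page_url, with a quality score for ranking."""
        key = canonicalize(page_url)
        self._index.setdefault(key, []).append((score, rebuttal_url))

    def best_critiques(self, page_url: str, limit: int = 3) -> list:
        """Return the top-scoring critiques of page_url, best first."""
        entries = self._index.get(canonicalize(page_url), [])
        return [url for _, url in sorted(entries, reverse=True)[:limit]]
```

The point of the sketch is the shape of the system, not the details: rebuttals are attached to the page itself, so no matter where a link to that page appears, the strongest counter-argument can be surfaced alongside it.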

No other approach comes close to solving the problem nearly as well as this one does.


The post What happens if automated fact checking succeeds? appeared first on rbutr.

