Can or even should Google sort fact from fiction?

Posted by Owen Powis on 22 Dec, 2016
What is true, what is correct, what counts as a ‘fact’ is difficult to pin down. On both sides of the Atlantic this has had dramatic effects, first in the Brexit referendum and then in the US presidential election. In both campaigns there were accusations of lying, dubious statistics played a key role, and vocal groups attempted to drown each other out. So what role does Google have to play in any of this?

Social media has largely been blamed for the preponderance of fake news stories and falsehoods spread during both the Brexit and presidential campaigns. Facebook has been seen as especially guilty on the fake news front, but Google has been quietly playing its part as well. We have gotten used to search results of an ever higher standard, and perhaps that means we're still making the same old mistake of placing a little too much trust in what we read.

The filter bubble

The ‘echo chamber’ on Facebook is a concept we’re fast becoming familiar with. We’re more likely to be friends with people who share our opinions. The types of stories they share with us are less likely to challenge our viewpoints. Facebook’s own targeting then further reinforces this by showing us the content it thinks we will like. We get wrapped up in a little bubble where everyone agrees with us and everything reinforces the opinion we already have.

The thing is, it’s not just Facebook that does this. Google has been personalising your results for years, since 2005 in fact. Based on the websites you have visited previously, Google will change the order of your results.

This means that, just like with Facebook, you can start seeing more content that reinforces your opinion and less that disagrees with it. The same search for someone on either side of a political divide will bring up different results.

This effect is known as ‘the filter bubble’, a term coined by Eli Pariser. It’s another version of the echo chamber in action: opinions that agree with ours are promoted to us, dissenting ones buried. Personalisation is spreading, and has been for a while. More personalised content gets more clicks and so drives more revenue.
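As a thought experiment, here’s a minimal sketch of how history-based personalisation could work in principle. To be clear, the function, weights and scoring below are invented purely for illustration; Google’s actual ranking system is vastly more complex and not public.

```python
# Illustrative sketch only: a toy re-ranker that boosts results from
# domains the user has visited before. The weights and scoring are
# invented for this example and bear no relation to Google's real system.

def personalise(results, visited_domains, boost=0.25):
    """Re-order search results, nudging familiar domains upward.

    results: list of (domain, base_relevance) pairs, relevance in [0, 1]
    visited_domains: set of domains this user has clicked before
    """
    def score(result):
        domain, relevance = result
        # Familiar sources get a flat bonus, so two users with the same
        # query but different histories see different orderings.
        return relevance + (boost if domain in visited_domains else 0.0)

    return sorted(results, key=score, reverse=True)


results = [("left-leaning.example", 0.80), ("right-leaning.example", 0.78)]
print(personalise(results, visited_domains={"right-leaning.example"}))
# The slightly less relevant but familiar source now ranks first.
```

Even a crude bonus like this is enough to put the familiar source on top, which is the filter bubble in miniature.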

Fake news

Facebook may have gotten the majority of the attention over this issue, but it’s not the only platform guilty of promoting content that simply isn’t true.

Google has taken steps to remedy the situation by removing sites promoting fake news stories from its ad network. But advertising isn’t the only way for these sites to generate revenue, and removing them from the search results entirely is really the only way to stop them getting views. However, that creates its own problem.

Guardians of ‘truth’

Who decides what’s true and what’s not? Do we allow Google, where more than 80% of our searches for information begin, to be the gatekeeper of facts? In effect that’s what we’re saying if we want Google to remove fake stories from our search results: Google gets to decide what is true and what’s not. In fact, just researching this article I came across an example of an accidentally misleading result.

At first glance you’d be forgiven for thinking that Google’s market share is around 60%. But it’s not. Not only is that figure out of date, it comes from a single source that doesn’t correlate with other sources, and it’s actually taken from an article which questions the figure. As we’ve seen before, Google has trouble understanding that kind of nuance, but is happy to rip information out of context and display it as fact.

There are some much bigger and more complex examples: take, for instance, the flat earth theory. Google it and you’ll find over 10 million results, many of them proposing the theory to be correct. So what should Google do? Remove every result which says the earth is flat?

OK, so they’re not all entirely serious. But how can Google distinguish between satirical and serious articles, something we humans find difficult enough as it is? Beyond this you have the moral problem: who are we to say that someone shouldn’t be allowed to believe the earth is flat if they want to? And who decides which falsehoods are OK to believe and which are not?

AMP is just making things worse

We have all learned to judge the validity of what we read on the internet based on the source it’s from. Just as in real life we tend not to believe the person standing on the street with a sign saying ‘The end is nigh’, we dismiss content based on the context in which it reaches us.

Then again, what chance have we got if the legitimate news outlets can’t tell the made-up stuff from the real? Like the time the NYT ran a (now corrected) article featuring a hilarious fake magazine cover from The Onion, about how Obama likes to sing in the shower. I mean, maybe he does; we know he likes to sing.

It seems a harmless enough mistake. But this was from possibly the best-known satirical news site on the internet. Once it’s taken out of context and posted by the NYT, it suddenly goes from complete nonsense to an (almost) plausible story.

And this is the trouble with AMP. It takes news articles out of the context in which they originally appeared, pumps them to the top of your results and strips them of many of the design cues we use, without even thinking about it, to judge the legitimacy of content, leaving legitimate stories looking much the same as the illegitimate ones.

A poorly written, badly sourced story from a relatively unknown site is boiled down to an image and a headline with much the same formatting as any other publication, meaning we’re much more likely to think it’s a legitimate story.
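To make that concrete, here’s a deliberately over-simplified sketch of this kind of normalisation. The ‘card’ renderer and the field names are my own illustration, not AMP’s actual pipeline; the point is simply that whatever the source, every story comes out looking the same.

```python
# Illustrative sketch only: not AMP's real pipeline. It shows how
# normalising every article into the same "card" erases the design cues
# (layout, bylines, surrounding site) a reader might use to judge a source.

def to_card(article):
    """Reduce any article, from any publisher, to headline + image."""
    return {
        "headline": article["headline"],
        "image": article.get("lead_image", "placeholder.png"),
    }

reputable = {
    "headline": "Economy grows 0.5% in Q3",
    "lead_image": "chart.png",
    "bylines": ["Staff reporter"],
    "citations": 12,        # credibility cues a reader might weigh...
}
dubious = {
    "headline": "Aliens endorse candidate",
    "lead_image": "ufo.jpg",
    "citations": 0,         # ...are all stripped away by to_card()
}

print(to_card(reputable))
print(to_card(dubious))
# Both render identically: the context that signalled credibility is gone.
```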

It’s worth remembering that once a story, true or not, is reported by a major news outlet that is viewed as trustworthy, that’s it. It becomes ‘fact’ (even if only temporarily) to that outlet’s readership. The NYT has now published a case study into just how a fake story gets reported as real.

So what should Google actually do?

Removing the revenue stream from these sites is a great first step for Google. But one of the main places where these theories flourish is another Google-controlled entity that has managed to completely sidestep the debate.

Many conspiracy theories, fake stories and good old-fashioned completely wrong content owe their origins and popularisation to YouTube. Google has promised to remove sites espousing fake news from its ad network, yet YouTube videos are still happily generating income, and these videos rack up millions of views. Then again, there are likely only a select few advertisers who would want their ads running against this sort of content.

But once again, do we want to rely on Google to decide what sort of content we can watch on YouTube? Perhaps I want to watch conspiracy videos and prep for the imminent solar flare which is 'definitely' going to cause global chaos on January 12th 2017 according to the Mayan Calendar. If it's not harming anyone (more than any other ridiculous video) then why should it be restricted?

‘Post-Truth’ is meaningless

We’ve always been surrounded by falsehoods, from myths and legends to dubious media fact-checking and political campaigns which selectively use or bend the truth. There’s also talk of ‘emotional voting’ as if that were something new.

Saying we live in a post-truth era suggests we were somehow previously living in a ‘truth era’. We’ve never lived in a time when we could blindly believe everything we read. People have always chosen to absorb the media that correlates with their opinions. Pick up a selection of papers and you’ll see the same stories reported in a variety of different ways, or different stories given prominence.

We need to be as careful as we’ve ever been about believing what we read. The solution shouldn’t be to expect corporate entities, profit-making companies, to decide for us what’s true and what’s not. Freedom of information exploded with the growth of the internet and is now under increasing scrutiny and censorship. Let’s not ask for that censorship to be increased.

Google undoubtedly has a role to play in removing harmful content, but beyond that it really should be up to us to decide what we choose to believe, rather than expecting it to be done for us. I know I’d rather decide for myself. But in order to do so I need to see the context of the information. Google is bringing more information into the search results and stripping it of that context, creating an environment where the validity of content is harder to judge as everything gets the same sanitised look and feel. Instead we are trusting Google with the verification of that information; we believe it simply because we read it in the Knowledge Graph results.

We need to ensure we continue to have the tools to judge content for ourselves, rather than relying on that being done for us. Stripping away those tools is dangerous. We don’t need Google to decide for us what’s real and what’s not. What we need is for Google to leave us the ability to decide for ourselves.
