The internet knows me well.
When I open YouTube I am greeted with suggestions to watch videos entitled The Missing Identity of The Lady of The Dunes, Why Did I Serve 16 Years for Murder When I Didn’t Kill Anyone? and Most Iconic TV moments #5. Twitter notifies me when my favourite people tweet. When I ask Google where I can buy Jerusalem artichokes, it proffers suggestions close by.
Three weeks ago my brother sent me a YouTube video called PEDOGATE 2020 Pt.II – Tom Hanks (New Info). In it the creator, who goes by Mouthy Buddha, presents information alleging that Tom Hanks is a paedophile. The accusations mainly come from actor Isaac Kappy who, according to the video, is privy to the information because of years spent working with Hollywood’s elite. A tweet from a woman named Sarah is also included in the video. In it, she states that Tom Hanks purchased her from her father for sex. During the “investigative” portion of the video, Mouthy Buddha uses a Russian search engine to look up a combination of letters spotted in a photo on Hanks’ Instagram. The search eventually leads to images of children, then a forum with some disturbing comments, then a picture of Bill Clinton and honestly the whole thing just spirals. It is obvious conjecture, and yet the video has amassed more than 500,000 views since its release.
MORE MORE MORE
The problem we now face is not just the creation and spread of fake news, but the echo chamber that forms around those who consume it. As soon as this video ends, I am presented with another and another and another. They are different videos from different creators, all on the same theme: an elite Hollywood paedophile ring, the deep state and, of course, Hillary Clinton’s involvement in a pizza shop that traffics children (#PizzaGate). The more I watch, the more I am presented with. My internet universe suddenly looks very different.
Eli Pariser, the author of The Filter Bubble: What the Internet Is Hiding From You, says personalisation is becoming so precise that we may not even know what we’re missing: the views and voices that challenge our own thinking.
“The danger of these filters is that you think you are getting a representative view of the world and you are really, really not, and you don’t know it,” Pariser told The Guardian.
Much of the problem with #Pedogate is that the video is produced to a professional standard, lending it an air of authority.
“For better and for worse, authority and the ability to publish or broadcast went hand in hand. Now we are moving into this world where in a way every Facebook link looks like every other Facebook link and every Twitter link looks like every other Twitter link and the new platforms have not figured out what their theory of authority is,” says Pariser.
“As a result we live in this information environment that is, on the one hand, more filter-bubbly, but also the bounds of what is considered acceptable to talk about, acceptable to think, and the norms, seem to be shifting. It is changing the bounds of what the conversation can be in a way that I think is pretty corrosive.”
CAN I MAKE IT MORE OBVIOUS?
Countering fake news and reducing the spread of misinformation are high on the priority lists of most networks. When the Cambridge Analytica scandal unfolded, Facebook introduced a flag feature to highlight fake stories. The social media giant also reconfigured its algorithm (the robot brain that determines what content you do and don’t see) to prioritise content from friends, family and trusted traditional publishers.
Similarly, this year YouTube recognised that the spread of unfounded Covid conspiracies would threaten our collective response, so it introduced a homepage banner promoting trusted sources of information. When an independent creator mentioned Coronavirus, pop-ups suggested that you might be better off seeking your information from your country’s health board.
These measures help somewhat. They are little cracks of light in our online echo chambers, reminding us not to believe everything we see – even when it’s produced to look like a Netflix docuseries.
Problem solved, right? Not quite.
David Rand is a Professor of Management Science and Brain and Cognitive Sciences at MIT Sloan. For the past several years he has been studying the spread of fake news and those who share it. David’s research combines mathematical models with human behavioural experiments and field studies to understand human behaviour. In his latest work, David has found that the majority of subjects can recognise fake news and misleading headlines but choose to share them anyway.
“One explanation is based on people’s preferences, and stipulates that people care much less about veracity than other factors when making sharing decisions – and thus share misleading content even when they are aware that it is likely to be untrue,” writes David.
“We find that participants can discern between true and false news content, yet their social media sharing decisions are barely affected by veracity. The fact that participants are willing to share ideologically consistent but false headlines could thus be reasonably construed as revealing their preference for weighing non-accuracy dimensions (such as ideology) over accuracy.”
Simply put, people will willingly share misleading information that advances their personal beliefs, gains social attention or sparks a reaction. Wonderful.
TURN OFF THE INTERNET
Listeners to The Creep Dive will know that I have long campaigned for the internet to be switched off entirely so that we can all resume analogue, balanced lives in Wicklow. Of course, I’m being facetious. The internet may seem like an untameable beast but the solution is simple. Consume your news from trusted sources, don’t rely on single-sourced stories and listen to both sides of a debate.
And of course, use YouTube for what it was intended for: cat videos and Most Iconic TV moments #1 – #20.