Liar, Liar, the Algorithm’s On Fire!

Image Courtesy: Change

Reader, meet ‘the algorithm’. ‘The algorithm’, meet the reader. 

To evaluate whether social media is fundamentally designed to lie to us, we must first understand the language it uses to speak. That language is most often referred to as ‘the algorithm’.

While seemingly enigmatic, ‘the algorithm’ is often simply defined as a recipe. The first of its kind was ‘The Newsfeed’, introduced by Facebook in 2006. What had initially been an inanimate book of faces transformed into an individualised, reverse-chronological summary of user activity. ‘The Newsfeed’ was an immense game changer for Facebook, flooding feeds with interaction-based content and growing user time by 600-700%.
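To make the ‘recipe’ framing concrete, here is a minimal sketch of what a 2006-style reverse-chronological feed amounts to. It is an illustration only; the Post structure and field names are hypothetical, not Facebook’s actual code.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical post record; the fields are illustrative, not Facebook's.
@dataclass
class Post:
    author: str
    text: str
    created_at: datetime

def build_feed(posts: list[Post]) -> list[Post]:
    # A 2006-style feed: newest activity first, no ranking beyond time.
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

feed = build_feed([
    Post("alice", "hello", datetime(2006, 9, 5, 9, 0)),
    Post("bob", "news!", datetime(2006, 9, 5, 12, 0)),
])
print([p.text for p in feed])  # ['news!', 'hello'] -- newest first
```

Note there is no notion of ‘engagement’ anywhere in that recipe; that ingredient came later.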

Harmless, right?

Shortly after Facebook, we welcomed YouTube and Twitter, and thus commenced the social media arms race. With growing memberships and networks, user time became a currency, and companies hired the biggest names in AI to compete for it. What was initially designed to keep us ‘entranced’ through the promise of making new friends began to foster a culture in which anything and everything would be shown to generate attention. Eventually, platforms would show us content so far from the truth that we wouldn’t know where to draw the line.

In 2012, YouTube set out to grow user time by a factor of 10 – from 100 million hours per day to 1 billion – with the ‘Watch Next’ feature as its main generator of views. This function became the perfect breeding ground for misinformation and conspiracy theory videos to thrive. Watch one conspiracy theory? Move straight on to the next, and the next, and the next, until the rabbit hole grows deeper and deeper.

One example is the thriving genre of ‘flat Earth’ conspiracy videos. With tens of millions of views, yet no legitimate basis for their claims, the ‘flat Earth’ videos became increasingly popular. ‘The algorithm’ read this ‘Watch Next’ rabbit hole as a success, automatically recommending these videos to users while failing to recognise the concept of misinformation. By favouring the wrong kinds of interactions, ‘the algorithm’ became a promoter of alternative facts, incapable of distinguishing informed content.
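The feedback loop described above is easy to state in code. The sketch below is hypothetical and heavily simplified – the function, field names, and numbers are invented for illustration, not YouTube’s actual system – but it shows how ranking candidates purely on predicted watch time, with no notion of truthfulness, naturally digs the rabbit hole.

```python
# A deliberately naive 'Watch Next' sketch: rank candidate videos purely
# by predicted watch time. Nothing here models truthfulness -- which is
# the point: a conspiracy video that holds attention longer always wins.
# All names and numbers are hypothetical.

def watch_next(current_topic: str, candidates: list[dict]) -> dict:
    # Keep the viewer on-topic, then pick whatever holds them longest.
    same_topic = [v for v in candidates if v["topic"] == current_topic]
    pool = same_topic or candidates
    return max(pool, key=lambda v: v["predicted_watch_minutes"])

videos = [
    {"title": "NASA footage explained", "topic": "space",
     "predicted_watch_minutes": 4.0},
    {"title": "What THEY don't want you to know", "topic": "space",
     "predicted_watch_minutes": 11.5},
]

# The sensational video wins on watch time alone.
print(watch_next("space", videos)["title"])
```

Run it once and the conspiracy video is recommended; run it after every view and you have the ‘next, and the next, and the next’ loop.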

However, ‘the algorithm’ doesn’t just exist on its own. We can also find fault with how programmers have attempted to boost engagement, with misinformation as the result. Initially, Facebook’s algorithm favoured posts with the most interactions and comments. These were often posts that users disagreed with or were angered by, hence boosting misinformation and clickbait.

The following is just one example of Facebook using anger to boost engagement. After the creation of ‘The Newsfeed’ in 2006, several ‘against newsfeed’ groups flooded the platform. Developers quickly realised that allowing this ‘rage’ to fester was the perfect way to attract new users. The idea continued when Facebook introduced emoji reactions: each reaction was weighted at five times the quintessential thumbs up. In 2019, it was revealed that posts with a disproportionate number of ‘angry-emoji’ reactions were likely to contain misinformation or harmful content. Since the reaction was heavily weighted, ‘the algorithm’ pushed such posts to the forefront of ‘The Newsfeed’. After much back-and-forth, the weight of the ‘angry-emoji’ was finally reduced to zero.
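As a back-of-the-envelope illustration of why that weighting mattered, here is a hedged sketch of reaction-weighted ranking. The weight values and field names are assumptions drawn only from the description above, not Facebook’s internal code.

```python
# Sketch of reaction-weighted ranking as described above: every emoji
# reaction initially counted five times a plain like, and the 'angry'
# weight was later cut to zero. Weights and names are illustrative,
# not Facebook's internal values.

WEIGHTS_INITIAL = {"like": 1, "love": 5, "haha": 5, "wow": 5, "sad": 5, "angry": 5}
WEIGHTS_REVISED = {**WEIGHTS_INITIAL, "angry": 0}  # after the back-and-forth

def engagement_score(reactions: dict[str, int], weights: dict[str, int]) -> int:
    # Sum each reaction count multiplied by its configured weight.
    return sum(count * weights.get(kind, 0) for kind, count in reactions.items())

rage_bait = {"like": 20, "angry": 300}
print(engagement_score(rage_bait, WEIGHTS_INITIAL))  # 1520 -> pushed to the top
print(engagement_score(rage_bait, WEIGHTS_REVISED))  # 20   -> demoted
```

Under the original weights, 300 angry reactions outscore thousands of plain likes; zeroing that single weight collapses the same post’s score from 1520 to 20.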

From this fleeting recap of the history of ‘the algorithm’, we can deduce that its motives might not always be to benefit or inform us, yet we fixate so heavily on what it shows us. Since social media is free, it has always been incentivised to draw our attention to adverts and, as a by-product, to vie against other platforms for our time.

It might be time to ask whether ‘the algorithm’ has gone so far that, even if it wanted to, it is incapable of telling us the truth. Former Instagram engineer Thomas Dimson describes ‘the algorithm’ as a ‘model of human psychology’. Perhaps the desire to misinform is itself innately human?