The Algorithms behind Social Media

In our Computer Science classes, we are always made acutely aware of how code and the information age have had a huge impact on society, especially on how we all interact. It is often said that Computer Science interviews at top universities can focus on computer ethics as much as on technical ability; coding skills carry real power, and we should want that power in the hands of the most conscientious members of our society. Even so, it is still all too possible for algorithms to create harmful and toxic environments online. After studying computer ethics in the A Level Computer Science course and attending some fascinating talks put on by the school about the algorithms governing popular social media platforms, one thing became alarmingly clear to me: computer algorithms can very subtly change one's perceptions and expose users to distressing content.

When Nina Schick came to the school a few weeks ago, she outlined how companies, and even countries such as Russia, monitored Americans' Facebook activity during the 2016 US Presidential Election, connected users with others who shared their views, and spread misinformation, dividing Americans into distinct, insulated groups. Communication between the two sides broke down, leading to riots and an inexplicable hatred of those with opposing views. Many argue this is a result of personalised social media feeds that only show you what you want to see; a balanced range of opinions is essential for making an informed decision, and many Americans didn't have that in 2016, or even today. Society now anticipates this too: look at the way shops in LA and New York were boarded up before the results of the 2020 election were announced; people knew that either way, there would be unrest. The Netflix docudrama The Social Dilemma explains this in far more detail and is, in my opinion, one of the most valuable watches out there at the moment.

It was later outlined in my Computer Science classes how the Trump campaign hired Cambridge Analytica not just to find out what people wanted and create attractive policies accordingly, but to very subtly shift Americans' opinions into line with Trump's policies, or as the firm called it, 'motivating potential voters'. Over the course of the campaign, Cambridge Analytica collected up to 5,000 data points on each of 220 million Americans, showing the incredible reach of its system. Once this data harvesting was uncovered, many Americans felt violated and gained a newfound appreciation of how every movement online can be monitored and 'remembered' by algorithms. No company or country had to 'hack' into anything; it was all there on social media for anyone to see. It is a startling thought that someone, somewhere, can be tracking every digital move you make, which is one reason we are so often encouraged to keep our apps updated: updates patch the security holes that expose our data. If you want to learn more about this, I'd recommend the documentary The Great Hack.

Another recent example of this is TikTok's 'For You' page. When you first start using the app, it monitors your engagement with a wide range of video content and begins to feed you more of whatever you engage with the most. This seems like an ingenious learning algorithm: it slowly gets to know you and your preferences so it can show you more of what you like. However, it only takes a couple of accidental likes or shares before the algorithm can begin to feed you content featuring violence, racism, homophobia and worse. There is evidence of this in the BBC's Panorama documentary, Is TikTok Safe?
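To make this feedback loop concrete, here is a minimal sketch in Python. The content categories, scores and weightings are all invented for illustration; this is not TikTok's actual code, just the general shape of an engagement-weighted feed, where a couple of likes can quickly skew what you are shown.

```python
import random

# Hypothetical content categories; real platforms label videos with
# thousands of signals, not four tags.
CATEGORIES = ["comedy", "dance", "news", "extreme"]

def pick_video(scores):
    """Sample the next video's category, weighted by engagement scores."""
    return random.choices(CATEGORIES, weights=[scores[c] for c in CATEGORIES])[0]

def record_engagement(scores, category, liked=False, shared=False):
    """Watching nudges a category's score up a little; likes and shares a lot."""
    scores[category] += 1          # watched the video
    if liked:
        scores[category] += 5
    if shared:
        scores[category] += 10

scores = {c: 1.0 for c in CATEGORIES}   # a brand-new user gets a flat feed

# Two accidental likes on 'extreme' content...
record_engagement(scores, "extreme", liked=True)
record_engagement(scores, "extreme", liked=True)

# ...and most of the next twenty videos are already in that category.
feed = [pick_video(scores) for _ in range(20)]
print(feed.count("extreme"), "of 20 videos are now 'extreme'")
```

With a score of 13 for 'extreme' against 1 for each other category, roughly four out of five videos sampled will now come from it. The same mechanism that surfaces more dance videos surfaces more harmful ones; the algorithm has no notion of which is which.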

The vulnerabilities of the 'For You' page come down primarily to how TikTok and similar apps are moderated, as well as how easily content can be shared over the Internet. TikTok is only a few years old, and within months of its release it had become one of the most downloaded apps in history. Consequently, its safety and child protection policies have constantly been playing catch-up, leaving younger users exposed to harmful content that should have been taken down. TikTok's moderation systems have not caught up with the vast number of videos posted each minute, and its founders claim it is an impossible task to sift through all of that content and constantly weed out everything that could harm or misinform users.

This issue is particularly prevalent for TikTok because of its rapid growth. Other social media platforms such as Twitter and Instagram have had years to devise algorithms that fact-check posts, as well as effective ways of reporting offensive or dangerous content. By comparison, TikTok had only a couple of months before it was being used globally, which explains why reporting a video is still an arduous process and why videos are often taken down only after a few hours, having already been seen by millions.

However, it isn't all doom and gloom. As well as studying a wide variety of examples where computers and information have been misused, in our lessons we also investigate many of the possible solutions. The most obvious one is changing the laws around social media and data harvesting, which has been attempted many times and is often successful. The problem, however, is the pace of technology: whatever laws are passed, there will soon be a new problem or a new loophole being exploited. Bills can take months to become law, whereas an app can send a trend viral in a matter of hours, so despite politicians' best efforts, the law can never truly keep up with social media itself.

More often than not, the responsibility lies with the app creators and developers; every app states that keeping users safe is a top priority, yet so much content still 'falls through the cracks' and harms users. Laws should therefore be passed requiring every app with user-generated content to run an algorithm that screens uploads, so that nothing harmful or dangerous can be posted, and to provide a quick and easy way to report content that is misleading or promotes hate (a simple sketch of both ideas follows below). Enforcing stricter rules on app development would mean the next app that 'blows up' already has the safety features its predecessors lacked, making it a safer community.
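As a rough illustration of what such a requirement might look like, here is a sketch in the same spirit as the one above. The banned-topics list, report threshold and data model are all invented; a real platform would rely on trained classifiers and human moderators rather than a simple tag check.

```python
from dataclasses import dataclass

# Invented moderation rules, for illustration only.
BANNED_TOPICS = {"violence", "hate_speech", "dangerous_challenge"}
REPORT_THRESHOLD = 3  # reports needed before a video is pulled for review

@dataclass
class Video:
    uploader: str
    topics: set
    reports: int = 0
    published: bool = False

def screen_upload(video: Video) -> bool:
    """Screen a video at upload time; block it if it matches a banned topic."""
    if video.topics & BANNED_TOPICS:
        return False             # never published, so no one is harmed by it
    video.published = True
    return True

def report(video: Video) -> None:
    """One-tap reporting: enough reports hide the video pending human review."""
    video.reports += 1
    if video.reports >= REPORT_THRESHOLD:
        video.published = False

print(screen_upload(Video("alice", {"comedy"})))             # True: published
print(screen_upload(Video("bob", {"dangerous_challenge"})))  # False: blocked
```

The point is not the specific rules but where the check sits: screening happens before publication, and reporting takes one tap rather than an arduous form, which is exactly where apps like TikTok have lagged behind.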

There is also a portion of user responsibility tied up in this. After all, users are the ones creating all this content, having agreed to the app's terms and conditions, and they should therefore take the community guidelines more seriously. Apps have the right to delete accounts for breaking those rules and should perhaps enforce this more consistently against dangerous or harmful users.

Online communities have a very similar structure to those in real life, with one big difference: the phenomenal ease of spreading ideas and information. Because of how interconnected our world is, online content can spread like wildfire (with fake news spreading six times faster than real news, according to The Social Dilemma). It is now more important than ever to think critically about what algorithms are feeding you and why; being aware of how apps are designed is invaluable when navigating them. After all, the first step to solving any problem is understanding it.

Links:

2016 Election
Russia’s involvement: https://www.bbc.com/news/technology-46590890
Cambridge Analytica’s involvement: https://www.npr.org/2018/03/20/595338116/what-did-cambridge-analytica-do-during-the-2016-election

Documentaries
The Social Dilemma: https://www.netflix.com/title/81254224
Is TikTok Safe? (BBC Panorama): https://www.bbc.co.uk/programmes/m000p3p9
The Great Hack (on Netflix): https://www.netflix.com/title/80117542

Belinda, Head of Osmund Company