Russian bots have been accused of spreading disinformation and sowing discord in the West. There is evidence they have attempted to interfere with the U.S. Presidential election and the EU Referendum, but they will target any divisive issue they can exploit. It is alleged that just one hour after news broke of the school shooting in Florida, Russian bots were tweeting about gun control. But how do we identify a ‘bot’? And is there a way to stop them?
The term ‘bot’ is short for ‘robot’, which the Oxford Dictionary defines as “a machine capable of carrying out a complex series of actions automatically.” Bots that control social media accounts are commonly referred to as ‘social bots’. Whilst Twitter is their preferred domain, they also operate across other social media platforms.
Not all bots are in the service of cyber-espionage. Bots are regularly used by businesses and marketers to generate automated messages that systematically respond to the needs of an audience. News feeds – like that of The Guardian – are technically run by bots, as are many automated customer service feeds. Then there are the more annoying – and slightly unethical – ones, which you may associate with spam. Ever been offered the chance to buy followers or likes? They would come in the form of social bots.
A social bot operates using a computer program known as a ‘chat bot’, software that simulates conversation via auditory or textual methods. Some of these algorithms are pretty darn clever. One study by the School of Systems Engineering at the University of Reading showed that a single bot fooled almost 30% of people into thinking they were talking to a human.
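To give a flavour of the underlying mechanics, here is a minimal, purely illustrative sketch of a rule-based chat bot in Python. The patterns and canned replies are invented for this example; real conversational bots, including the one in the Reading study, are far more elaborate.

```python
import re
import random

# A minimal rule-based chat bot: pairs of (pattern, canned replies).
# The pattern-matching principle dates back to ELIZA (1966); modern
# social bots layer far more sophisticated language processing on top.
RULES = [
    (re.compile(r"\b(hello|hi|hey)\b", re.I),
     ["Hi there!", "Hello! How are you?"]),
    (re.compile(r"\bgun control\b", re.I),
     ["Why is nobody talking about this?",
      "People are saying the media won't cover this."]),
    (re.compile(r"\?$"),
     ["Good question. What do you think?", "Hard to say."]),
]

def reply(message: str) -> str:
    """Return a canned reply for the first rule whose pattern matches."""
    for pattern, responses in RULES:
        if pattern.search(message):
            return random.choice(responses)
    return "Interesting. Tell me more."

if __name__ == "__main__":
    print(reply("What do you think about gun control?"))
```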
But not all bots have to be as sophisticated as this. Their ability to advocate ideas, support campaigns or undermine faith in public institutions can come from the simplest of actions. Want to get a particular tweet – say one with a controversial opinion on gun control – to look extremely popular? Then get social bots to retweet it 100,000 times.
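As a rough sketch of how little machinery such amplification requires, the example below drives a fleet of accounts through Twitter's API using the third-party tweepy library. The credentials and tweet ID are placeholders, and Twitter's rate limits and platform rules would frustrate this naïve pattern in practice.

```python
import tweepy  # third-party Twitter API client (pip install tweepy)

# Placeholder credentials for a fleet of bot-controlled accounts.
# Each account authenticates separately and retweets the same tweet,
# which is all it takes to inflate a tweet's apparent popularity.
ACCOUNTS = [
    {"consumer_key": "...", "consumer_secret": "...",
     "access_token": "...", "access_token_secret": "..."},
    # ...hundreds more accounts...
]

TARGET_TWEET_ID = 123456789  # hypothetical tweet to amplify

for creds in ACCOUNTS:
    client = tweepy.Client(**creds)
    client.retweet(TARGET_TWEET_ID)  # one API call per account
```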
Whilst many of these bots are autonomous, some are semi-autonomous. This is where trolls come into it. A ‘troll’ is a real human being who uses social media to push a particular agenda. Some are driven by their own passions; others are paid for their services. These trolls often surround their social media accounts with bots to amplify their influence.
This propaganda can be a highly effective way to distort debate. The Twitter profile @SouthLoneStar (later found to be a Russian troll account) tweeted an image of a Muslim woman walking past the scene of the Westminster Bridge terrorist attack in March 2017, accompanied by the text “Muslim woman pays no mind to the terror attack, casually walks by a dying man while checking phone #PrayForLondon #Westminster #BanIslam”. The tweet garnered such attention that it was picked up by both the Daily Mail and The Sun.
Under mounting pressure, Twitter has announced changes to its application programming interface (API) designed to inhibit programs that mass-manage multiple social media accounts. The Atlantic Council's Digital Forensic Research Lab (DFRL) has also compiled a list of tips to help users identify bots, including signals such as a high frequency of retweets or verbatim quotes.
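Signals like these lend themselves to simple heuristics. The sketch below is a hypothetical illustration, not DFRL's actual methodology: it flags accounts whose recent timelines are dominated by retweets, with the 90% threshold and the sample accounts chosen purely for the example.

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    recent_tweets: int  # tweets sampled from the account's timeline
    retweets: int       # how many of those were retweets

# Illustrative threshold: flag accounts whose timelines are almost
# entirely retweets, one of the signals DFRL highlights.
RETWEET_RATIO_THRESHOLD = 0.90

def looks_like_bot(account: Account) -> bool:
    """Flag an account if retweets dominate its recent activity."""
    if account.recent_tweets == 0:
        return False
    return account.retweets / account.recent_tweets >= RETWEET_RATIO_THRESHOLD

# Hypothetical sample data for demonstration only.
suspects = [
    Account("@newsfan1982", recent_tweets=200, retweets=196),
    Account("@jane_doe", recent_tweets=200, retweets=41),
]

for acct in suspects:
    if looks_like_bot(acct):
        print(f"{acct.handle}: high retweet ratio, possible bot")
```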
But with a study by the University of Southern California (USC) and Indiana University suggesting that up to 15% of all Twitter accounts are bots – that’s around 48 million accounts – more work needs to be done if we are to neutralise these insidious bots.