Recent research undertaken at Brown University reveals that 25% of tweets about climate change were likely written by bots.
The study analysed 6.5 million tweets from June 2017, the period when President Donald Trump announced that the United States would withdraw from the Paris climate agreement. The tweets were categorised using Botometer, a tool from Indiana University that estimates the likelihood that tweets are posted by humans or by bots.
The analysis found that the majority of tweets posted by bots denied the existence of global warming and rejected climate science. When President Trump announced that the US was leaving the Paris climate accord, tweets about climate change surged: posts created by bots rose from hundreds to 25,000 per day.
The Botometer also showed that on an average day, bots accounted for 38% of all tweets about “fake science” and 28% of tweets about the petroleum company Exxon. Conversely, bot-generated tweets acknowledging the climate crisis were the least prevalent, at 5%.
As bots become ever more pervasive on social media, it becomes harder for the public to separate truth from factually inaccurate statements. Bots conceal their inauthenticity, disguising themselves as humans when participating in online conversations. Their sophistication is amplified by the data they can harvest from every click and discussion users have on social media. This gathered knowledge shapes what users see in their digital spaces, aligning content with their personal interests and views.
Advances in artificial intelligence make bots seem more natural, allowing them to mimic human behaviour more convincingly. This blurs the distinction between the real and the hyperreal, leaving online users in constant confusion about who exactly they are interacting with. Bots gain visibility through accounts with large followings, retweets or likes. In a digital world where large numbers are equated with authenticity, bots can spread falsehoods about the climate crisis.
By maintaining an illusion of genuine understanding, a surplus of bots can fuel the unstable spread of fake news. Inconsistent messaging on contested topics can advance political propaganda, potentially leading to public manipulation, mistrust and misinformation.
Botometer, formerly known as BotOrNot, is a freely available online tool that analyses data from a Twitter account to estimate the probability that the account and its followers are bots. It learns to classify bots from thousands of labelled examples. The higher the score it gives, the more likely the profile is a social bot.
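To make the idea of a bot-likelihood score concrete, here is a minimal Python sketch. This is not Botometer's actual model, which is a supervised classifier trained on thousands of labelled accounts; the feature names and thresholds below are invented purely for illustration.

```python
# Toy illustration of a bot-likelihood scorer. The features and thresholds
# are hypothetical; a real system like Botometer learns these patterns
# from thousands of labelled example accounts.

def bot_score(tweets_per_day: float, followers: int, following: int,
              has_profile_photo: bool) -> float:
    """Return a toy bot-likelihood score in [0, 1]; higher is more bot-like."""
    score = 0.0
    if tweets_per_day > 50:
        # Unusually high posting rates are a common automation signal.
        score += 0.4
    if following > 0 and followers / following < 0.1:
        # Following far more accounts than follow back suggests mass-following.
        score += 0.3
    if not has_profile_photo:
        # A default avatar is a weak but frequent bot signal.
        score += 0.3
    return min(score, 1.0)

# A hyperactive account with few followers and no photo scores as bot-like.
print(bot_score(tweets_per_day=120, followers=10, following=500,
                has_profile_photo=False))   # → 1.0
```

In a real classifier, the weights would be learned from labelled data rather than hand-set, and hundreds of features (posting times, language patterns, network structure) would be combined, but the output has the same shape: a single probability-like score per account.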
Last modified: 10th March 2020