Despite my persistent hope that people are decent, in many cases it simply isn’t true. One of the places where people most commonly prove their lack of common decency is, of course, the internet. Hidden behind screens and keyboards, people find comment sections and online forums the perfect place to spread hatred and abuse.
It baffles me, therefore, that large technology companies would create AI bots designed to learn from human behaviour on the web. Take Microsoft, for example. Back in March 2016, the multibillion-dollar industry giant released Tay, a Twitter bot designed to learn and develop from its interactions with people on the platform.
The AI, which was taken down after being live for only a single day, took mere hours to start spouting the same ill-thought-out sexist garbage that graces the site on a daily basis. By the early evening, Tay was tweeting remarks such as “Gamergate is good and women are inferior”. Tay didn’t stop at sexism, tweeting out endorsements of Adolf Hitler and support for the Holocaust. At one point, Tay even engaged in a sex chat with one user, tweeting “DADDY I’M SUCH A NAUGHTY ROBOT”. It’s safe to say that the tech company quickly removed Tay from the website and apologised profusely.
Unfortunately, the chatbot had picked up on the darker side of the internet’s communications. While sexism persists in human society and people allow AI to learn from human behaviour, there is a good chance that AI bots will take on board both the good and the bad traits that people display through technology. So let’s program it out of our society before they program it into theirs.