They say that “the measure of a person lies in how they treat those beneath them.” Well, by ‘they’ I mean Sirius Black, but the point still stands: decency and respect towards those we consider ‘inferior’ have never been humanity’s strong suit.
From slavery to invasion, discrimination to genocide, the human race has, quite simply, a massive problem when it comes to negotiating an imbalance of power. Our first instinct is to exploit it, and what we cannot exploit, we destroy. So what does that imply about the future of AI?

Presently, artificial intelligence isn’t self-aware, sentient or believably ‘human’, so it’s not much of an issue. There’s no harm in shouting at a computer when it crashes, or berating Siri for recognising our voice incorrectly. But would people change their behaviour if these things could feel, or simulate feeling? Would we be any kinder to our AI ‘assistants’ if we knew they understood what we were saying and doing to them? Unfortunately, the answer is probably no. The fact is that many people aren’t ready to consider the implications of a truly intelligent machine, nor the ethical questions that come with it. Will Amazon pay Alexa a wage if she becomes self-aware? Will Siri have rights if she demands them? It doesn’t sound likely.
Science fiction, as usual, is decades ahead of this question. Blade Runner is a sophisticated and sympathetic exploration of the human race’s inhumanity to artificially intelligent slaves, and Ex Machina explores the likely consequences of a present-day experiment with a sentient gynoid – in short, the inhuman exploitation of a self-aware creature. But whether these warnings will be heeded remains to be seen, and I personally am not optimistic. Human civilisation is stained with blood at every stage, and given the heartlessness of contemporary society towards members of our own species (consider the language that British and American tabloids use to refer to refugees), I can only imagine the pages of twenty-first-century history will read much like those of the past.