Essentially, a computer deepfakes artists' voices to generate new music. On TikTok, there are endless 15-second clips of pop artists covering classic songs - think Ariana doing 'Love Story', or Harry Styles covering 'I Wanna Be Yours'. They range in proficiency - some scarily accurate, some straight-up scary.
These experimental snippets are only small-scale demonstrations of AI's future potential. Larger-scale projects are already surfacing, though - take the newly AI-generated Oasis reunion album. At 8 tracks long, it offers fans an otherwise unfathomable reconciliation of their beloved band. It's even got Liam Gallagher's stamp of approval - he took to Twitter to call it: "Mad as fuck I sound mega". So fans are happy, and the frontman himself has no problem - is AI here to make its own name in the music industry, then?
Perhaps, but there's an array of ethical implications to consider first. The authenticity and complexity of human emotion in song can't simply be replicated by a machine. And because AI merely derives music from an artist's previous works, we'd end up with a stream of similar-sounding songs, rather than watching a musician's lyricism and sound evolve over time.
There are also job losses and copyright issues at stake - all in all, it's a bit messy, and it leaves artists with no claim over, or consent to, the words their voice is singing. That's unfair, especially when you consider the average £0.003 they make per Spotify stream. So let's leave AI out of the music industry, for the sake of the people, emotions, and passion behind the music.