The philosopher David Hume once said “Beauty is no quality in things themselves: It exists merely in the mind which contemplates them; and each mind perceives a different beauty.” Well, now those “different beauties” we all perceive can be understood and learned by a computer model.
Electroencephalography (EEG) is the measurement of electrical activity in the brain. In this study, it was recorded from participants while they looked at photos of people spanning a range of genders, ages, and skin colours.
Michiel Spapé, an author of the study, said “It worked a bit like the dating app Tinder: the participants ‘swiped right’ when coming across an attractive face. Here, however, they did not have to do anything but look at the images. We measured their immediate brain response to the images.”
The data gained from this initial stage of the test was used to train a generative adversarial network, or GAN (a machine-learning model that learns the patterns in a set of images and can produce convincing new examples of its own); this model, in turn, made predictions about what kinds of faces each participant would find most attractive.
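The article doesn't spell out the modelling details, so here is only a rough sketch of the "learn preferences from brain responses" step. The feature sizes, the random stand-in data, and the use of a simple logistic-regression classifier are illustrative assumptions, not the study's actual method.

```python
# Hedged sketch: predict which candidate faces a participant would find
# attractive, based on EEG responses to faces they have already seen.
# All data here is random stand-in data; shapes and the classifier choice
# are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# One hypothetical EEG feature vector per viewed face, plus a label for
# whether the recorded response suggested the face was found attractive.
eeg_features = rng.normal(size=(200, 64))   # 200 faces, 64 EEG features
attractive = rng.integers(0, 2, size=200)   # stand-in 0/1 labels

# Learn which brain-response patterns go with "attractive" responses.
clf = LogisticRegression(max_iter=1000).fit(eeg_features, attractive)

# Score the responses evoked by new candidate faces and rank them by the
# model's predicted probability of an "attractive" reaction.
candidate_responses = rng.normal(size=(20, 64))
scores = clf.predict_proba(candidate_responses)[:, 1]
top_candidates = np.argsort(scores)[::-1][:5]
print("Predicted most attractive candidates:", top_candidates)
```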
After that, the computer generated lifelike artificial faces that it predicted would be most attractive to each participant, and placed them alongside other randomly generated faces.
The video footage of this process is an eye-opener for those who haven’t already seen how realistic fake images of faces have become.
Eighty percent of the participants picked out the face the computer had predicted they would find most attractive. This validated the results of the study, and is a sign of just how far these computer models have come in understanding the human mind.
This technology continues to improve; networks like the one used to generate these images advance rapidly by design, because of the way generative adversarial networks work.
Basically, you feed the system photos of people. One network studies them and comes up with its own photos of people, while a second network scans those photos, looking for any tell-tale signs that might give them away as fakes. The two algorithms work against each other, the first getting better and better at fooling the second and the second getting better at catching it, and already the faces generated are beyond the human eye's ability to detect as fakes.
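For readers who want to see the adversarial idea in code, the following is a minimal, generic sketch in PyTorch. The tiny network sizes, the random stand-in "photos", and the training settings are all illustrative assumptions; the study's actual face generator was far larger and trained on real photographs.

```python
# Minimal sketch of a generative adversarial network (GAN) training loop.
# Data and sizes are toy stand-ins, chosen only to keep the example short.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # assumed toy sizes

# The generator turns random noise into a fake "image" (a flat vector here).
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# The discriminator tries to tell real images from generated ones.
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # raw score; BCEWithLogitsLoss applies the sigmoid
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.rand(32, image_dim) * 2 - 1   # stand-in for real photos
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Discriminator update: label real images 1, generated images 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make the discriminator call fakes real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Each side's improvement gives the other a harder target, which is why these models can sharpen so quickly once training gets going.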
There are wider implications to this study than simply science fiction stories about people falling in love with their digital mate; the authors believe the same technology could provide us with insights into “other cognitive functions such as perception and decision-making.” Potentially, says Spapé, “we might gear the device towards identifying stereotypes or implicit bias and better understand individual differences.”