AI is being applied to all manner of tasks that humans have always done, and in many of them it is proving to be better than us. One of the first applications was in radiology, where machine learning was used to classify medical images and determine whether they were cancerous. The method has become so good that it is now faster, cheaper and more accurate than even highly trained radiologists.
Since then, it has been used to classify all kinds of images. The latest, according to The Guardian, is its appearance in the online dating world, where it is being put to the test to decide which of someone’s mugshots makes them look the most attractive. The question this raises is: how does the AI know? It is quite different from determining whether a dark patch on an X-ray image is a lesion.
Come to think of it, how do humans know which photo of themselves to select when it comes to showing themselves off in the best light? It seems such an art form. Recently, I had about 100 photos taken of me by a professional photographer, who made me pose in various ways, from leaning against a glass wall to standing in a corridor. I force-smiled my way through all the clickety clicks and cajoling, feeling mightily awkward. Of the ones the photographer showed me, none looked particularly flattering. But being highly trained in this skill, she was able to pick the four she thought I looked good in and asked me to choose one of those for the website it was intended for. I found it really hard and would have appreciated some AI to help me.
It got me thinking: what makes one image look better than another? Is it the lighting, how open the eyes are, the size of the smile, the tilt of the head, and so on? I am sure the AI would be able to work it out, but it would not know how it did it.
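For the technically curious, here is a minimal sketch of what such a photo-ranking system might look like. It is not how any dating site actually does it: it assumes a pretrained image model (ResNet-18 here) and a hypothetical, untrained “preference head” that in practice would have to be fitted to human ratings, which I do not have. The point is that the features the model works from are just hundreds of anonymous numbers, none of them labelled ‘lighting’ or ‘smile size’, which is exactly why the AI can pick a winner without being able to explain its choice.

```python
# A minimal sketch, not the method any dating site actually uses.
# Assumes torch, torchvision and Pillow are installed, and that the folder
# "photos/" (a made-up path) contains the candidate head-shots as JPEGs.

from pathlib import Path

import torch
from PIL import Image
from torchvision import models, transforms

# Pretrained backbone: its features encode lighting, pose, expression and so on,
# but only as opaque numbers, not as human-readable reasons.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # keep the 512-dimensional feature vector
backbone.eval()

# Hypothetical preference head: in reality it would be trained on photos people
# rated as more or less flattering; here it is left randomly initialised.
score_head = torch.nn.Linear(512, 1)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def score(path: Path) -> float:
    """Return a single (uninterpretable) score for one photo."""
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        features = backbone(batch)  # 512 numbers, none of them "smile size"
        return score_head(features).item()

# Rank the photos and show the top four, as my photographer did by eye.
ranked = sorted(Path("photos").glob("*.jpg"), key=score, reverse=True)
for photo in ranked[:4]:
    print(photo.name)
```

Even with a properly trained head, asking the model why photo A beat photo B would only get you back those 512 numbers.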
Meanwhile, generative AI is being developed to help people write about themselves in a more enticing and personalized way. For example, Match Group, the company behind several dating sites, has been using it to “eliminate awkwardness” when dating online. Let the AI tell a few white lies rather than the human, who, as we well know, will often ‘lie’ about their age, interests, and what they are looking for in a partner to improve their chances of getting a catch.
Touching up photos to make someone look better has been around for ages – especially if they are to appear on the front cover of a magazine. Nothing new or wrong with that. But the photo doing the social media rounds today is intended to put someone in a bad light rather than a good one. The photo in question is of a girl with pink hair looking askance at the British prime minister Rishi Sunak, who has just poured a pint at a beer festival – showing he is a man of the people. The original image had been ever so slightly manipulated to change the girl’s facial expression from uninterested (the one on the right) to dismissive (the one on the left) – in the blink of a few pixels being switched around. It did not need AI to do that – anyone could have done it using Photoshop – but the furore it has unleashed about AI-enhanced images is quite astonishing. I thought the photo was quite funny, just like the AI-generated image of the pope wearing a puffer coat, which, clearly, he has never worn. We all like a bit of harmless mischief, seeing the bizarre, the funny, and the hilarious – especially if it is of a famous person. However, the ‘beef’ in all the comments and concern about the pint-pulling image is that it is politically motivated and could be the start of a bigger misinformation campaign. Some even think it is a threat to the democratic process. The thin edge of the wedge.
Distorting images for political gain is a century-old trick. The infamous photo used by the Tories over 40 years ago, entitled “Labour isn’t working”, is a case in point. The image depicts an unemployment queue snaking outside an unemployment office. It was blown up as a full-scale poster and sent a powerful message that if the Labour party got elected, the dole queue would get much worse than it already was. Powerful stuff. The image itself was manipulated using an old-fashioned technique. Only 20 volunteers showed up for the photoshoot, so the photographer took lots of photos of the same people and then stitched them all together. If you look hard enough, you can see the same people appearing throughout the queue. From a distance, you would never have guessed. The poster was placed on billboards and buses all over the UK, and this pervasive presence became etched in people’s minds, no doubt nudging them to vote one way rather than the other.
It wasn’t the photo technique that was the problem but the way the image was used to distort the message and spread misinformation.
This kind of trick, making something appear much larger than it actually is, can now be done in a matter of minutes using a contemporary generative AI tool. To my mind, the focus of concern should not be the wizardry of what can be achieved with AI tools now but the advertisers and politicians who connive to use them to deceive and mislead the masses.