An interview with Geoff Hinton in the New York Times this week made me stop and think about where AI is heading next. The last few months have been a roller coaster since the public was given access to generative text and image tools such as ChatGPT, Bard, DALL·E 2 and Midjourney, with millions of people trying them out. Quite remarkably, they have unleashed unprecedented levels of joy, amazement and incredulity from all walks of life at what they can do when prompted by us humans. That has been matched by many people worrying about how AI tools will replace millions of jobs while turning our children into cheats who use them to do their homework. Not since Google search and Facebook came to the fore has there been so much excited talk across the globe about what a new technology can do for society.
I have always been a glass-half-full person, and I start by looking at the benefits and opportunities of a new technology. Instead of seeing these tools making children lazy, I see them helping children learn how to write and code. I see them empowering many others in their work – for example, enabling clinicians to see more patients by summarising their consultations and automatically composing reports from them. Across the board, they are taking the drudgery out of many mundane knowledge-based tasks. They can also spark our imaginations to do more, and differently: writing new kinds of plays, books, scripts and so on.
But then something Geoff said in the middle of his interview made me see the future of AI from the dark side, for a change. The first worry that got me thinking was his point that ever more powerful tools can be used by evil, malevolent and greedy people for their own ends – the so-called bad actors of the world (although personally I don’t like that term: it used to refer to people who are not very good at acting on stage, but it has now been generalised to mean malicious rather than incompetent people). Scam porn, fake news, fake videos, fake voices, fake claims and the like could become mainstream, used in all sorts of unpleasant ways to extort us. Moreover, it is already starting to get messy; it is becoming more difficult to know what is real and what is fabricated. While we have all become used to email scams and, more recently, deepfakes, we may find it more annoying, confusing and scary when they become commonplace – when videos, images and stories are made about us that are untrue but increasingly difficult to disprove. Our sense of reality will be turned upside down and inside out if we are not careful.
The second, and perhaps more worrying, concern is not knowing what the new AI tools might be used for or, God forbid, do of their own volition as they become ever more ‘intelligent’. What happens when, say, chatbots start to learn from each other? Geoff poignantly pointed out:
“… it’s as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”
Yikes! It could unleash a new generation of killer robots that could easily fall into the wrong human hands, causing unprecedented destruction. And other unthinkable things.
I reminded myself that this has always been the case with manmade (sic) weapons – whether chemical warfare, machine guns, nuclear arms or anything else. We still see terrible acts and atrocities being committed all the time. But will the new generative AI escalate man’s propensity to destroy life with the latest technological means available, making it easier and more shocking to do so?
The sensible course of action is for societies across the globe to put stops and deterrents in place. Politicians and officials are indeed starting to discuss what to do. But is it possible or realistic for governments to ensure that safeguards, regulations and policies are developed and implemented to prevent, or at least reduce, the misuse of AI? Will the exponential advances in AI mean that the goalposts keep moving, making it difficult to keep up?
Returning to my more comfortable glass-half-full lens, I think about how the unprecedented speed and uptake of the latest AI advances could enable us to make more rapid and more profound scientific and medical discoveries – finding cures and ways of preventing awful illnesses such as dementia, Parkinson’s and cancer. Could we also use them to help society detect and even prevent all manner of crimes before they happen, including terrorism, people trafficking and money laundering? And wouldn’t it be wonderful if it were possible for AI to solve the problems it creates?
The bottom line is one of balance. Now that the genie is out of the bottle, can the new AI kids on the block be put to more good use than bad?
The two inserted images were created by DALL·E