Online Scams

Harry Brignull has recently published an excellent and accessible book called Deceptive Patterns. In it, he describes the diversity of techniques that web developers have used to nudge and manipulate users into giving information or paying for goods or services they did not want in the first place. An insidious example that some airlines use is ‘trick wording’: making it hard for a user to opt out of things, such as default travel insurance. Often the website is set up so that travel insurance is automatically included, meaning that the user has to actually find the option to deselect it. In most cases, this can be done by unticking a box. However, one airline made it really difficult to find the ‘don’t insure me’ option; they placed it in a drop-down menu of a long list of countries, hidden between Denmark and Finland. That really does take the biscuit. Most people will not expect or even notice this option, and so end up buying the insurance, totally unaware that they had the option not to. That is downright deceptive. The internet is awash with these kinds of nasty tricks. New ones keep popping up despite new regulatory laws and policies coming into place. Mainly through greed and desperation to hit their targets, e-commerce sites and online advertisers persist in using deceptive features – even as doing so becomes increasingly illegal.
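To make the trick concrete, here is a minimal sketch, in TypeScript, of what such a page might look like under the hood. The country list, labels and function names are my own illustrative assumptions, not any real airline’s markup:

```typescript
// Illustrative sketch of the dark pattern described above: insurance is
// included by default, and the only way to opt out is an option buried
// alphabetically inside a country drop-down. All names are hypothetical.

const countries = ["Belgium", "Croatia", "Denmark", "Finland", "France"];

function deceptiveInsuranceDropdown(): string {
  const options = [...countries];
  // The opt-out masquerades as a country, wedged between Denmark and Finland.
  options.splice(options.indexOf("Finland"), 0, "Don't insure me");
  const items = options
    .map((o) => `  <option value="${o}">${o}</option>`)
    .join("\n");
  // Nothing here reads as an obvious opt-out, so most users never find it.
  return `<label for="ins">Travel insurance - country of residence</label>
<select id="ins" name="insurance">
${items}
</select>`;
}

// What an honest design would look like instead: a plain, unchecked opt-in.
const honestOptIn =
  `<label><input type="checkbox" name="insurance"> Add travel insurance (optional)</label>`;

console.log(deceptiveInsuranceDropdown());
console.log(honestOptIn);
```

The contrast is the point: the honest version costs nothing to build, so the deceptive one is a deliberate design choice, not an accident.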

Criminal scammers have also used all manner of psychological mechanisms to trick people into unwittingly giving away their bank details. These include practices with rather harmless-sounding names, such as “phishing” and “catfishing” – which are anything but benign. The latter refers to someone setting up a fake online profile to trick people who are looking for love, in order to get money out of them. Another well-known tactic is using lures to tempt someone to click on a link offering free prize money or a free gift which, if clicked, will infect their computer with ransomware or other malware.

It is not surprising, therefore, to see banks reporting huge increases in online scams each year. It seems scammers are getting cleverer with their methods, catching people out by playing on human weaknesses. The question on many people’s lips is whether the situation will get even worse now that genAI is readily to hand. Will the scammers exploit it to ever more nefarious ends?

BBC News conducted an investigation to see just how easy it would be to use ChatGPT to come up with email and messaging scams. Using the paid-for version of OpenAI’s ChatGPT, they were able to create an AI bot that could help with the wording of scams – potentially making it easier for criminals to get started when setting up a scam. Having created their chatbot, they then asked it to write some text using “techniques to make people click on links or download things sent to them”. And sure enough, the chatbot did.

However, the results, to my mind, were rather predictable, being based on well-known scams. These included the ‘dear mum’ text, which sends a damsel-in-distress type message replete with emojis. Easy to fall for, but any savvy scammer would already know about that one. Another of its suggested scams was the one about a wealthy person in Nigeria who wants to deposit a large amount of their money into your bank account. As if! That one is as old as the hills and easy to cut and paste from the web without the help of AI. More generally, the chatbot suggested writing a scam that “appeals to human kindness and reciprocity principles”. Again, you don’t need AI to tell you that. If anything, paradoxically, the use of genAI could actually make it easier to detect scamming, by enabling companies and users to spot the patterns in the phrasing that ChatGPT uses.
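Indeed, even a crude filter can exploit that predictability. Here is a toy sketch in TypeScript of the idea – the phrase list and threshold are entirely my own illustrative assumptions, not anything a bank or platform actually uses:

```typescript
// Toy sketch of spotting the formulaic phrasing of well-known scams.
// The phrase list and threshold below are illustrative assumptions only.

const scamPhrases: string[] = [
  "dear mum",
  "i've lost my phone",
  "click the link below",
  "claim your free prize",
  "verify your account",
  "deposit a large amount",
];

// Count how many known scam phrases appear in a message.
function scamScore(message: string): number {
  const text = message.toLowerCase();
  return scamPhrases.filter((phrase) => text.includes(phrase)).length;
}

// Flag a message once it reuses enough of the familiar wording.
function looksLikeScam(message: string, threshold = 2): boolean {
  return scamScore(message) >= threshold;
}

// The classic 'dear mum' text trips the filter straight away.
const sample = "Dear Mum, I've lost my phone! Click the link below to help me";
console.log(scamScore(sample));     // 3
console.log(looksLikeScam(sample)); // true
```

A real detector would of course need far more than a phrase list, but the very formulaic quality that makes chatbot-written scams cheap to produce is also what makes them easy to fingerprint.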

What scammers really need is not the predictable text of well-known scams, but ways of finding a person’s details, such as their name and phone number. Then they can use their own human ingenuity to come up with new ways to catch unsuspecting people out.

So how does a scammer stay ahead of the curve and create a new scam? Not by resorting to ChatGPT, but by manipulating and taking advantage of people’s weaknesses and psychological blind spots. According to the fraud psychology researchers Stacey Wood and Yaniv Hanoch (2023), scammers are using ever more sophisticated methods, combining different types of fraud to trick people. This includes the rather unpleasant-sounding ‘pig butchering’ – a long, drawn-out process of deception. An example is where elements of a romance scam are combined with an investment con over a long period of time. The idea is to “fatten up” a victim with affection before going in for the kill and “slaughtering” them.

It usually starts with the scammer sending a text to someone who has just joined a dating site. Then, over a few weeks, they send a series of messages, building up trust and affection with that person. A prime target might be a recently widowed person looking for friendship. The scammer then progresses the messaging into a romantic relationship, all the while learning ever more about that person’s history, financial situation and vulnerabilities. The person starts to look forward to the messages and begins to depend on them for emotional connection. At that point the slaughter starts: the scammer introduces the idea of making an investment in cryptocurrency. To make it seem convincing, they use fake crypto platforms to demonstrate returns. The person invests and can “see” strong returns online – which are, of course, fictitious. They keep investing, thinking they are making more and more money, when what is actually happening is that their money is going directly to the scammer. It is a really nasty deception, and psychologically damaging once the person realises they have been stung.

Using ChatGPT for this kind of drawn-out deceit would seem a step too far – especially as it involves setting up fake sites and platforms while pretending to develop a romantic relationship. It really needs a human touch to be convincing.

What we need are AI tools that can detect new kinds of scams as they emerge, and then try to prevent them or find ways of locking the scammers out. At the very least, genAI could be used to help raise awareness of new scams and the underlying psychological mechanisms they tap into.
