Social engineering is entering a new age, and we need to be ready
What is social engineering and how can we protect ourselves?
By Jack Durrant – Associate Director, Howden, BA (Hons) ACII
A recent BBC article investigated safety concerns arising from artificial intelligence language models. Tools similar to ChatGPT may give would-be criminals easier access to the means to exploit unsuspecting businesses through social engineering techniques – some of which you may have seen or heard about before.
What makes this so potent is that perpetrators often compose their messages in a language that is not their own. For native English speakers, this inherently makes many social engineering attempts easier to identify: spelling mistakes and grammatical errors are easily spotted and can rouse the reader’s suspicions.
Couple those errors with some commonly used scamming tactics, and a savvy reader will see enough red flags for even a competent attempt at social engineering to come crumbling down. But there are now more underhand and sophisticated methods at play.
Why does AI change the landscape?
First, AI models can combine tried-and-tested techniques with cutting-edge sociological research to compose compelling scam scenarios that victims may never have encountered. I’ve had instances where, reading through emails, I quickly identified ill intent or sensed that something wasn’t as it should be. However, victims of these attacks frequently report that the scam arrived when they were under deadline pressure, in a rush, or tired and distracted. Combine those circumstances with other modern social engineering techniques, such as mail monitoring or interception, and a social engineer can deploy exactly the right scenario to exploit what they already know about their victim – making the attacks far more potent.
Secondly, AI models can leverage the most advanced social engineering techniques almost instantly: there is virtually no lag between a new technique being developed and an AI model being able to implement it. When people are warned about a new technique, most can prepare themselves and manage the risk to avoid losses. I think immediately of the cruel but clever ‘Hi Mum’ scam, which sent a mass message to many different people to bait parents into sending money to a ‘child’ who claimed to have lost their phone and debit cards – only for the funds to land in a scammer’s account. It is entirely plausible to many parents out there, and I have no doubt that when this technique was first used, many who had no pre-warning fell prey to it.
Why should we be more cautious now?
These techniques are constantly being refined and reviewed, and AI can supercharge the process. Users need only feed further instructions into AI software to “up” the odds of a successful scam. Fortunately, common AI platforms like ChatGPT have safeguards against this: their ethical custodians have made good decisions to prevent threat actors from mounting mass attacks like the ‘Hi Mum’ scam. But for every moral AI platform, there is potentially another that doesn’t hold the same high ethical standards.
These platforms also have access to a wealth of legitimate marketing knowledge, which could equally be used against would-be victims. Just think of the volumes of academic studies an AI model might draw on: how to create effective calls to action, how time pressure can be leveraged to increase conversion, or how to A/B test different wordings to find which has the higher success rate.
The world of social engineering is taking huge leaps forward, so it’s critical that the public are equipped to protect themselves and handle even the most professional and organised requests for funds, data, or personal details. It wouldn’t surprise me if, over the next few years, we see increasingly sophisticated examples of social engineering – not necessarily higher in technology, but certainly more cognisant of the psychological weaknesses and sociological responses that can be exploited.
Allow me to leave you with something to think about. The Nigerian Prince scam – or the 419 scam, as it’s also known – is now recognised by many people, and its success rate has faltered dramatically as more would-be victims have become aware of its malicious intent and learned to defend themselves. I’d bet most readers know exactly what I’m talking about purely from the words “Nigerian Prince scam”. But there are plenty of other convincing pretenders, especially in the world of online dating. There’s even an entire digital industry of celebrity “fakes”, known for slipping into someone’s DMs on social media and, thanks to ever more advanced technology, being utterly convincing that they are someone else. Read on for my advice on spotting and stopping the mimics. People need to be particularly cautious of approaches that exploit them at their most vulnerable – especially as new detection-avoidance techniques make it ever harder to validate who someone really is when they are setting out to mislead.
How could AI manifest itself in future social engineering scams?
AI now makes it possible to replicate real voices from scripted text, giving perpetrators the means to convincingly mimic trusted individuals – whether that’s someone you know or someone posing as a well-known figure. This allows fraudsters to build a level of trust before defrauding their victims. On dating apps, fraudsters can use filters and other functions to conceal their true identity during voice or video calls, fostering a false sense of trust that ultimately enables them to orchestrate an attack.
What should I do to protect myself?
As the landscape evolves in response to these challenges, it's crucial to exercise heightened vigilance against unsolicited texts or emails and stick to using known numbers and bank accounts. Take every precaution necessary to verify requests that may jeopardise your time, money, or data. Implement robust risk management systems and consider seeking guidance from your broker and insurer on enhancing your business's cybersecurity. Finally, explore the option of transferring some risk through cyber insurance products by discussing cyber coverage with your broker. Stay cyber safe, stay vigilant, and stay on top of the latest schemes and scams. The digital criminals won’t stand a chance.