Over recent months, a series of high-profile cyberattacks has highlighted how widespread the risk has become. Marks & Spencer, Jaguar Land Rover and even nursery schools have found themselves targeted by bad actors – the latter exposing sensitive details of children and families.
While these breaches relied on ransomware tactics, ransomware is far from the only type of cybercrime a business can fall foul of. Phishing, fraud and other forms of malware are on the rise, underscoring the essential role of robust data protection in every organisation. It’s a board-level issue, and the rise of artificial intelligence is making it even more challenging to defend against cybercrime, especially social engineering scams.
In this article, a team of data protection specialists take a closer look at AI-powered social engineering scams – how they work, why they’re so effective, and what you can do to protect your organisation against them.
The Psychology of the Scam
At its core, social engineering isn’t about exploiting patchy code or software loopholes – it targets the people operating those systems. Criminals manipulate trust: sending emails that appear to come from your finance director, calling while posing as your IT team, or even creating convincing video messages from senior executives.
What makes these attacks so dangerous is how legitimate they seem. Unlike brute-force hacks, they slip past technical defences because the true “entry point” is human behaviour. All it takes is one distracted click or hurried response for the attack – and the damage – to take hold.
The Artificial Element
The concept of social engineering is far from new, and most businesses will recognise the classic signs – clumsy phishing emails riddled with spelling mistakes. However, AI has given these scams a fresh lick of paint, and today they can be polished to near-perfection – tailored to your industry, written in your company’s tone of voice, and delivered at scale.
We’re now seeing phishing emails so well crafted that they could have been written by your own comms team, free from awkward phrasing or obvious red flags. Deepfake technology has also allowed bad actors to convincingly mimic voices and faces, prompting staff to act on “urgent” requests from senior leaders.
Scraping tools – once limited in scope – can now act on a much broader scale, compiling detailed profiles of employees from LinkedIn posts, press releases, and even unrelated social media chatter. AI-powered chatbots can adjust tone in real time when someone hesitates, making the exchange feel like a natural conversation rather than a script.
These are just a few areas in which AI adoption is powering a new level of cybercriminal sophistication. With greater speed, scale, and precision, attacks are now outpacing many traditional safeguards.
What’s at Stake?
For Marks & Spencer, the attack cost customer trust. For Jaguar Land Rover, it disrupted operational continuity. In the case of the nursery breach, it compromised the safeguarding of children’s data. Different sectors and different attack vectors – but the risks are much the same: financial loss, regulatory penalties, and reputational damage that lingers long after the headlines fade.
So, what can businesses do to protect themselves from these types of scams? The answer doesn’t lie in any one piece of technology, but rather in a multi-layered defence strategy.
Building Resilience
As with many risks in business, awareness comes first. Staff need to recognise the tricks criminals use – urgency, authority, fear – and feel empowered to question them. A culture of “pause and verify” is worth more than any firewall.
Another important ethos to adopt is “Don’t just train, test”. Phishing simulations might feel uncomfortable to deploy, but they reveal how employees would react in the moment, and you can provide real, actionable feedback to specific individuals who might be acting as a “weak point”.
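By way of illustration, the short sketch below shows how simulation results might be turned into that targeted feedback. It assumes a hypothetical CSV export (results.csv, with employee, campaign and clicked columns) rather than any real platform’s format, and the 50% threshold is purely illustrative.

```python
# Minimal sketch: flag staff who repeatedly click simulated phishing links.
# Assumes a hypothetical CSV export ("results.csv") with columns
# employee,campaign,clicked (clicked = "yes"/"no") - illustrative only.
import csv
from collections import Counter

clicks = Counter()    # simulated links clicked, per employee
received = Counter()  # simulation emails received, per employee

with open("results.csv", newline="") as f:
    for row in csv.DictReader(f):
        received[row["employee"]] += 1
        if row["clicked"].strip().lower() == "yes":
            clicks[row["employee"]] += 1

# Flag anyone who clicked in half or more of the simulations they received,
# so training can be aimed at the people who actually need it.
for employee, total in received.items():
    if clicks[employee] / total >= 0.5:
        print(f"{employee}: clicked {clicks[employee]} of {total} - offer targeted training")
```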
It’s also key to upgrade your defences, especially if they haven’t been reviewed in a while. Intelligent email filters and anomaly detection tools can take the pressure off staff by blocking the most obvious threats before they hit inboxes. At the same time, don’t overlook the fundamentals: multi-factor authentication remains one of the simplest ways to stop compromised credentials from turning into a full-scale breach.
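To make the MFA point concrete, here is a minimal sketch of the mechanism behind most authenticator apps: time-based one-time passwords (TOTP, RFC 6238). It uses the open-source pyotp library purely for illustration – in practice, you would enable MFA through your identity provider rather than build it yourself.

```python
# Minimal TOTP sketch (RFC 6238) - the mechanism behind most authenticator
# apps. Requires the pyotp library: pip install pyotp
import pyotp

# Each user is enrolled once with a random shared secret (often via QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()  # the six-digit code the user's authenticator app shows
print("Current code:", code)

# At login, the server checks the submitted code against the same secret.
# valid_window=1 tolerates slight clock drift between device and server.
print("Accepted:", totp.verify(code, valid_window=1))
```

Even where a stolen password is in play, a check like this stops the attacker at the door unless they also hold the user’s device.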
Finally, regular, comprehensive reviews of your policies and processes are essential to keep pace with the changing realities of cybersecurity. Incident response plans should be living documents, actively tested and refined, not left to gather dust on a server. It is also worth reviewing how much staff and company information is publicly available online; limiting that footprint where possible reduces the raw material available to social engineers.
Conclusion
In truth, cybercriminals don’t need to target your systems if they can target your people, and with AI added to the mix, the line between genuine business communication and criminal manipulation is becoming ever harder to spot.
That’s why social engineering, alongside other cyber threats, should be high on every organisation’s priority list. True resilience doesn’t come from a single product or quick fix, but from embedding data protection awareness, robust policies and procedures, and smart technology into the DNA of the organisation. Small steps taken now can prevent far greater disruption later.