Blog

Key Findings – How Cybercriminals Exploit Trusted Tools and Malicious GPTs

Deepak Rana
May 14, 2025
Home

Let's Get STARTED

Abnormal Security’s latest report explores a disturbing new reality: the same AI tools we trust are now being weaponised against us. Inside the AI Arms Race: How Cybercriminals Exploit Trusted Tools and Malicious GPTs offers insights into how everyday AI tools are being twisted for harmful purposes, the rise of malicious GPTs, and crucially, how you can protect your organisation against these sophisticated threats.

The Growing Threat of Malicious AI

AI is a fully embedded part of daily life, making things more efficient, reducing manual effort, and driving smarter decision-making. This growing dependency is driven by two key technologies: large language models (LLMs) and generative pre-trained transformers (GPTs). LLMs have helped redefine what AI can achieve, enabling systems to understand, interpret, and generate human language in ways that were previously unimaginable. But it's GPTs that have really created a step change, representing one of the most powerful advancements in AI. GPTs enable machines to create human-like text and images with remarkable fluency and accuracy. While this makes the technology incredibly useful for tasks like drafting content and summarising reports, these very capabilities also make it highly attractive to our adversaries.

Cyber threat actors increasingly harness AI to augment their attacks, using GPTs for everything from social engineering scams to automated malware generation. However, it doesn't stop at misusing existing tools like ChatGPT. Attackers are now going a step further, creating purpose-built models designed with the sole intent of deceiving and defrauding. These tools lower the barrier to entry for would-be criminals while enabling more targeted, complex, and damaging attacks.

What are Malicious GPTs?

GPT models are language processors trained on vast datasets and built to adapt based on responses and learned patterns. However, they're inherently vulnerable to manipulation because they don't truly understand intent or context. Malicious AI and GPTs refer to using these technologies in ways that push them beyond their ethical boundaries.

Threat actors exploit GPTs through flaws in training data, weaknesses in model control systems, and their general susceptibility to the inputs they receive. Several techniques allow attackers to turn these weaknesses into opportunities:

  1. Data Poisoning: This technique involves tampering with the data used to train a GPT model and injecting biased, misleading, or harmful content. Once it's in, the model may unknowingly help spread misinformation or support automated attacks.
  2. Jailbreak Techniques: Attackers use cleverly worded prompts to convince a GPT model to say or do things it's usually programmed to avoid. Once fooled, the system can be used to generate things like fake news, dangerous code, and more.
  3. Prompt Injection: This technique inserts malicious instructions into the model's inputs after deployment. Attackers use deceptive prompts that confuse the system and cause it to ignore safeguards or execute unauthorised actions.
  4. Model Reprogramming: This advanced technique embeds persistent hidden instructions influencing future responses. It can quietly shift how the model behaves over time—guiding it to give misleading answers or automate social engineering tactics.

Impacts of Malicious AI Exploits

The risks of malicious AI are no longer hypothetical. It's already in use and poses a serious threat to individuals and businesses alike. One of the biggest dangers is how it can be used to execute data security breaches. Instead of relying on time-consuming, manual tactics, attackers can now use LLMs to craft convincing social engineering scams in minutes.

Malicious AI is being used to generate phishing emails, fraudulent communications, and even deep fake impersonations, creating an increasingly complex fraud landscape.

The consequences can be severe, from data breaches and financial loss to reputational fallout. A single attack can erode customer trust, trigger regulatory scrutiny, and leave long-term damage that's far harder to fix than the initial breach. What's more, the risks are rising fast. When AI is embedded across systems and workflows, a single malicious prompt can be enough to compromise entire supply chains.

Real-world examples highlight just how sophisticated and dangerous malicious AI exploits have become:

  • AI-Driven Deepfake CFO Scam: A multinational company employee in Hong Kong was tricked into transferring $25 million after participating in a video call populated entirely by deep fake versions of the company's CFO and other staff.
  • AI-Generated Polymorphic Malware: Researchers from Palo Alto's Unit 42 demonstrated how LLMs can be used to create and continuously modify malicious code. In controlled testing, they fed the AI instructions to generate variations of known malware, bypassing guardrails and resulting in new, undetectable strains with the same harmful functionality.

Defending Businesses Against Malicious GPTs

Our increasing reliance on AI, combined with the rise of malicious GPTs, is transforming the cyber threat landscape at an alarming pace. These tools aren't just amplifying existing risks; they're eroding our ability to trust what we see, hear, and read. In this new environment, businesses can't afford to be reactive. A new kind of defence strategy is urgently needed.

To stay ahead of attackers, organisations must prioritise proactive cybersecurity measures. That means educating employees on the evolving risks, embedding AI literacy into training, and significantly strengthening security protocols. Fortunately, AI can also form part of the solution to detect anomalies, identify AI-generated threats, and automate responses. This kind of intelligent automation dramatically reduces the time it takes to spot and contain threats, helping organisations stay resilient even in the face of highly adaptive attacks.

If you'd like to read the full whitepaper and learn more about how cybercriminals are exploiting trusted tools like GPTs and what you can do to protect your business, you can download it here: https://abnormal.ai/resources/ai-arms-race-how-cybercriminals-exploit-trusted-tools-malicious-gpts