
Key Findings – The 2025 Varonis State of Data Security Report

Cyber Defence Team
August 13, 2025

AI has become embedded in our workflows and integrated into the tools we use every day. However, with the ability to scan and analyse any data in its path, it can also create new pathways for data breaches. Add to this the fact that AI tools operate at the speed of thought, and the security implications are far-reaching.

The 2025 Varonis State of Data Security Report examines this shifting landscape and how the rise of generative AI is transforming data risk. The report aims to quantify and clarify the new risks introduced by AI-powered tools and to provide organisations with the strategies needed to manage those risks effectively.

Shadow AI and Risks of Unsanctioned Tools

Shadow AI refers to generative AI applications that are used without the knowledge or authorisation of an organisation’s security team. These tools can bypass corporate governance and lead to potential data leaks. In addition, they may not comply with GDPR, HIPAA, or other regulations, which could result in significant fines. With a reported 98% of companies unwittingly using unsanctioned apps, including AI tools, this is a widespread and urgent issue.

A clear example of the risk is the widespread use of DeepSeek. In 2025, millions of employees downloaded the application, and a misconfigured database later exposed over one million log entries, including secret keys and backend credentials. Another example involves stale OAuth apps, which can still access sensitive data long after their last use. According to the report, 52% of employees use high-risk OAuth apps, and 1 in 4 unverified OAuth apps in the average organisation is considered high-risk, creating a large and often invisible attack surface.
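
The report doesn’t prescribe specific tooling, but a periodic audit of OAuth grants is one practical way to shrink this attack surface. The sketch below is a minimal, hypothetical Python example: it assumes grant records (app name, scopes, publisher verification, last-used timestamp) have already been exported from an identity provider’s admin API, and flags grants that are both stale and high-risk for review. The field names, scope names, and thresholds are illustrative, not taken from the report.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical OAuth grant inventory exported from an identity provider's
# admin API; field names and scope names are illustrative only.
grants = [
    {"app": "legacy-mail-sync", "scopes": ["Mail.ReadWrite"],
     "verified_publisher": False,
     "last_used": datetime(2023, 1, 4, tzinfo=timezone.utc)},
    {"app": "calendar-helper", "scopes": ["Calendars.Read"],
     "verified_publisher": True,
     "last_used": datetime(2025, 7, 30, tzinfo=timezone.utc)},
]

STALE_AFTER = timedelta(days=90)  # assumed staleness threshold
HIGH_RISK_SCOPES = {"Mail.ReadWrite", "Files.ReadWrite.All"}

def flag_stale_high_risk(grants, now=None):
    """Return app names whose grants are both stale and high-risk."""
    now = now or datetime.now(timezone.utc)
    flagged = []
    for g in grants:
        stale = now - g["last_used"] > STALE_AFTER
        risky = (not g["verified_publisher"]
                 or bool(HIGH_RISK_SCOPES & set(g["scopes"])))
        if stale and risky:
            flagged.append(g["app"])
    return flagged

print(flag_stale_high_risk(grants))  # ['legacy-mail-sync']
```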

Model Poisoning Risks to AI Training Data

Model poisoning occurs when adversaries manipulate training data to deliberately corrupt an AI model’s behaviour. It typically happens when a malicious actor gains access to the model’s underlying cloud infrastructure, such as storage accounts or databases, and is able to modify or write to that data without detection. Once poisoned, the model may behave in unpredictable or dangerous ways, often without any noticeable signs.

Model poisoning can also happen by accident. If an AI model is trained on poor-quality or inaccurate data, its outputs can become flawed. This is particularly risky in high-stakes sectors, such as healthcare, where a model trained on incorrect data could lead to harmful clinical decisions or misdiagnoses.

The danger with model poisoning is that it’s hard to detect. For example, an attacker could alter vendor bank details in a training dataset. If an employee later asks the AI to retrieve payment information, it could unknowingly return the attacker’s injected details, potentially resulting in funds being sent to a fraudulent account.
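
One simple compensating control, not specific to any vendor, is to checksum training data at ingestion and verify it before every training run, so that any write to the dataset, malicious or accidental, is surfaced before it can poison a model. Below is a minimal sketch, assuming file-based training data and a manifest stored somewhere the training infrastructure cannot write to:

```python
import hashlib
import json
from pathlib import Path

def build_manifest(data_dir: str) -> dict:
    """Record a SHA-256 digest for every file under data_dir."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(data_dir).rglob("*"))
        if p.is_file()
    }

def verify_manifest(data_dir: str, manifest_path: str) -> list:
    """Return paths whose contents changed, disappeared, or were added."""
    recorded = json.loads(Path(manifest_path).read_text())
    current = build_manifest(data_dir)
    drifted = [p for p, digest in recorded.items() if current.get(p) != digest]
    added = [p for p in current if p not in recorded]
    return drifted + added

# Usage: build the manifest at ingestion time, store it out of reach of the
# training pipeline's write permissions, and verify before each run.
# Path("manifest.json").write_text(json.dumps(build_manifest("training_data")))
# assert not verify_manifest("training_data", "manifest.json"), "data drifted"
```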

Internal Security Gaps

Internal security gaps are also a significant cause for concern when it comes to AI’s impact on data risk. Even the most security-conscious organisations can be vulnerable, as these issues often go unnoticed.

1. Ghost Users

Ghost users are active accounts belonging to former employees. These accounts are often left enabled, retaining access to sensitive applications and data long after the user has left the organisation. Because ghost accounts aren’t actively monitored, they offer a perfect opportunity for adversaries to conduct reconnaissance or exfiltrate data without triggering alerts.

According to the report, a colossal 88% of organisations have stale but still-enabled ghost users, and most have around 10 stale accounts with admin privileges, posing a serious risk of undetected access to critical systems.
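
Detecting ghost users largely comes down to joining HR leaver data with directory account state. The following is a hypothetical sketch of that join; the record fields and the 90-day staleness threshold are assumptions for illustration, not figures from the report:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical records joined from an HR system and a directory service;
# field names are illustrative, not from the report.
accounts = [
    {"user": "j.smith", "enabled": True, "is_admin": True, "employed": False,
     "last_logon": datetime(2024, 2, 1, tzinfo=timezone.utc)},
    {"user": "a.jones", "enabled": True, "is_admin": False, "employed": True,
     "last_logon": datetime(2025, 8, 1, tzinfo=timezone.utc)},
]

STALE_AFTER = timedelta(days=90)  # assumed threshold

def find_ghost_users(accounts, now=None):
    """Enabled accounts whose owner has left, or that have sat unused."""
    now = now or datetime.now(timezone.utc)
    return [a for a in accounts
            if a["enabled"]
            and (not a["employed"] or now - a["last_logon"] > STALE_AFTER)]

# Stale admin accounts deserve the most urgent review.
for ghost in find_ghost_users(accounts):
    priority = "URGENT" if ghost["is_admin"] else "review"
    print(f"{priority}: disable {ghost['user']}")
```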

2. Over-Permissive Access

As responsibilities and roles evolve, a single user can end up with dozens of permissions, memberships, and access to a wide range of systems, many of which are no longer relevant. IT and security teams struggle to keep up, and access rarely gets revoked when people change roles or leave. And non-human identities, such as APIs, often hold powerful permissions, too.

To illustrate the scale of the problem, AWS alone offers more than 18,000 identity and access management permissions, making access incredibly complex to control. The report found that the average AWS account contained over 3,000 over-permissive policies, each one representing a potential doorway for attackers.
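
As a rough illustration, the sketch below uses boto3 (the AWS SDK for Python) to scan customer-managed IAM policies for wildcard actions or resources, one common marker of over-permissive access. Treat the wildcard heuristic as an assumption rather than the report’s definition: real policy analysis also has to consider NotAction, conditions, and resource-level granularity.

```python
import boto3

# Requires credentials with iam:ListPolicies and iam:GetPolicyVersion.
iam = boto3.client("iam")

def wildcard_statements(document):
    """Yield Allow statements that grant '*' actions or '*' resources."""
    statements = document.get("Statement", [])
    if isinstance(statements, dict):  # a lone statement may not be a list
        statements = [statements]
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions or "*" in resources:
            yield stmt

# Scope="Local" restricts the scan to customer-managed policies.
for page in iam.get_paginator("list_policies").paginate(Scope="Local"):
    for policy in page["Policies"]:
        version = iam.get_policy_version(
            PolicyArn=policy["Arn"], VersionId=policy["DefaultVersionId"]
        )
        doc = version["PolicyVersion"]["Document"]  # boto3 returns a dict
        if any(True for _ in wildcard_statements(doc)):
            print(f"over-permissive: {policy['PolicyName']}")
```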

3. Missing Multi-Factor Authentication

When multi-factor authentication (MFA) isn’t enforced, accounts become far more vulnerable to attacks such as credential stuffing and phishing. The report found that 1 in 7 organisations don’t use or enforce MFA across their SaaS and multi-cloud environments. Without appropriate authentication controls in place, attackers can easily log in using stolen credentials and exploit AI tools to quickly locate and extract the most valuable data within an organisation.
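
A sensible first step is simply enumerating accounts that can sign in without a second factor. The sketch below covers AWS IAM as one example surface (each SaaS platform needs its own equivalent check); it uses only standard boto3 calls and assumes credentials with read access to IAM:

```python
import boto3

# Needs iam:ListUsers, iam:GetLoginProfile and iam:ListMFADevices.
iam = boto3.client("iam")

def users_without_mfa():
    """List IAM users that have a console password but no MFA device."""
    missing = []
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            name = user["UserName"]
            try:
                iam.get_login_profile(UserName=name)  # raises if no password
            except iam.exceptions.NoSuchEntityException:
                continue  # no console sign-in, so sign-in MFA doesn't apply
            devices = iam.list_mfa_devices(UserName=name)["MFADevices"]
            if not devices:
                missing.append(name)
    return missing

for name in users_without_mfa():
    print(f"no MFA: {name}")
```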

A stark example came in 2024 when the absence of MFA contributed to a breach that compromised 190 million patient records. The fallout was so severe that MFA became a non-negotiable requirement under HIPAA regulations soon after.

Mitigating Rising Challenges

The findings in this report paint a clear picture: AI is rapidly accelerating data risk. From unsanctioned apps and model poisoning to internal security gaps, organisations face a growing and often invisible attack surface. Even well-managed environments are vulnerable, and legacy security systems are struggling to keep up.

To stay ahead, organisations must modernise their data protection strategies. This means reducing the blast radius by locking down access, continuously monitoring data, and remediating risks in real time. AI should also be part of the defence, used to identify sensitive data, flag abnormal behaviour, and detect threats before they escalate.

Ultimately, data security is AI security. More than just a compliance requirement, protecting sensitive data is a crucial aspect of using AI responsibly and effectively.

To find out more and explore the in-depth findings on quantifying AI’s impact on data risk and management, access the complete Varonis report here.