
AI adoption has risen rapidly, with 1.3 million Australian businesses, or around 50%, now regularly using the technology. But while generative AI tools such as ChatGPT, Gemini and Copilot once felt like the answer to every organisation's problems, they were just the tip of the iceberg. The next step, agentic AI, goes further: AI agents don't just think, they perceive, reason and act on their own.
For Australian businesses, AI agents are already reshaping how work gets done: orchestrating data, automating workflows and executing tasks. It sounds almost too good to be true – and our adversaries are clearly thinking the same. This is where risk comes into play, and why AI and data security posture management have never been more important.
AI agents work incredibly quickly and often have broad default permissions. That ability to access what they need, when they need it, at speed, is also a significant danger. They don’t follow the same safeguards as human users and are routinely over-permissioned, inheriting the excessive privileges already present in many SaaS systems. For attackers, it’s too good an opportunity to miss.
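The over-permissioning problem can be made concrete. The sketch below (with hypothetical agent names and permission scopes, not a real SaaS API) compares what each AI agent has been granted against what its declared tasks actually require, and flags the excess:

```python
# Hypothetical example: granted vs. required permission scopes per agent.
# In practice these would come from your identity provider and an
# agent's task definitions; the names here are illustrative only.

GRANTED = {
    "invoice-agent": {"files.read", "files.write", "mail.send", "admin.users"},
    "summary-agent": {"files.read"},
}

REQUIRED = {
    "invoice-agent": {"files.read", "files.write"},
    "summary-agent": {"files.read"},
}

def excess_permissions(agent: str) -> set[str]:
    """Return permissions granted beyond what the agent's tasks need."""
    return GRANTED.get(agent, set()) - REQUIRED.get(agent, set())

for agent in GRANTED:
    extra = excess_permissions(agent)
    if extra:
        print(f"{agent} is over-permissioned: {sorted(extra)}")
```

Even a simple diff like this surfaces the pattern described above: the invoice agent has inherited mail and admin privileges it never needs, which is exactly the kind of excess an attacker looks for.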
SaaS was already a significant attack vector for cybercriminals because it breaks traditional security boundaries: it's accessible from everywhere, full of integrations, ripe for lateral movement, and many organisations are still slow to secure it. AI agents have added new attack types that specifically target AI systems, including model poisoning, data manipulation and inference attacks.
One example of the risk was demonstrated by the 2023 Microsoft AI model exposure incident. Researchers discovered that an overly permissive SAS token had accidentally exposed 38TB of private AI training data on GitHub. While it wasn’t an attack, it showed just how quickly AI systems can become vulnerable when permissions aren’t tightly controlled. And because Azure and Microsoft 365 are so widely used in Australia, the exposure raised real concerns about how easily sensitive data could leak into AI workflows without proper oversight. Ultimately, attackers only need one weak point to cause widespread damage.
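A basic hygiene check can catch tokens like the one in that incident. The sketch below inspects the query string of an Azure-style SAS URL; the field names (`sp` for permissions, `se` for expiry) follow the documented SAS query parameters, but the risk thresholds are illustrative assumptions, not an official policy:

```python
# Hedged sketch: flag SAS tokens that grant broad permissions or live
# too long. Thresholds and the example URL are assumptions for
# illustration; real posture tooling would scan discovered URLs at scale.
from datetime import datetime, timedelta, timezone
from urllib.parse import urlparse, parse_qs

RISKY_PERMS = set("wdlc")  # write, delete, list, create

def audit_sas_url(url: str, max_lifetime_days: int = 7) -> list[str]:
    """Return a list of findings for an Azure-style SAS URL."""
    params = parse_qs(urlparse(url).query)
    findings = []
    perms = set(params.get("sp", [""])[0])
    if perms & RISKY_PERMS:
        findings.append(f"broad permissions: sp={''.join(sorted(perms))}")
    expiry_raw = params.get("se", [""])[0]
    if expiry_raw:
        expiry = datetime.fromisoformat(expiry_raw.replace("Z", "+00:00"))
        if expiry - datetime.now(timezone.utc) > timedelta(days=max_lifetime_days):
            findings.append(f"long-lived token: expires {expiry_raw}")
    return findings

url = ("https://example.blob.core.windows.net/data?"
       "sp=racwdl&se=2099-01-01T00:00:00Z&sr=c&sig=REDACTED")
for finding in audit_sas_url(url):
    print(finding)
```

A token carrying full read/write/delete/list permissions with a decades-long expiry, as in the example URL, would trip both checks, which is precisely the combination that turned a shared research link into a 38TB exposure.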
As AI adoption accelerates and data becomes increasingly decentralised, organisations need a structured way to keep control of both, without slowing innovation. This is where AI Security Posture Management (AI-SPM) and Data Security Posture Management (DSPM) come into play, each offering a slightly different approach to identifying, monitoring and reducing risk.
AI-SPM and DSPM together give organisations a way to see exactly where data lives, how AI systems are behaving, which permissions don’t look right and where the biggest risks sit. With this visibility, organisations can move quickly without losing control.
Once you understand the scale of the risks, the next step is turning posture management into something practical. This means going beyond simple compliance checklists and moving towards a more proactive, unified approach to protecting your data.
This is where Infotrust can help organisations implement a comprehensive enterprise Data Program, which defines and implements the technology and processes needed to reduce the risk of data exposure and leakage through AI and other risky channels. Infotrust provides subject matter expertise through a range of specialised services.
By combining governance, visibility and control, organisations can build an AI and data posture that actually supports growth rather than slowing it down.
AI adoption will continue to rise, and with AI agents now operating across core business systems, the stakes have never been higher. Without the right posture management in place, the risks can quickly outweigh the benefits.
AI-SPM and DSPM give organisations the clarity and structure they need to manage these risks, providing visibility into how AI systems behave, where sensitive data lives and which access points pose the greatest threat. Ultimately, by bringing these approaches together, you can create a stronger, more resilient security posture for your organisation that supports innovation rather than holding it back.
If you’d like to explore how AI and Data Security Posture Management can strengthen governance, visibility and trust across your digital ecosystem, get in touch with the team at Infotrust.