
Key Findings – Netskope: Cloud and Threat Report 2026

Sumit Singh
March 11, 2026

Netskope’s Cloud and Threat Report 2026 analyses the most significant cyber security risk trends impacting organisations worldwide in 2025, providing vital insights into the challenges and risks that lie ahead.

The report highlights how the rapid adoption of generative AI is driving further disruption and risk. Organisations continue to struggle with the complexities of cloud data security, as well as the constant threat of phishing campaigns and malware. However, it is the introduction of widespread AI usage that creates a new layer of complex data exposure, resulting in new and evolving risks. The most immediate risk is the substantial rise in data exposure, leading to policy violations and the leakage of highly sensitive material. Add to this the ability of agentic AI systems to execute complex autonomous actions across business systems, and it’s plain to see why organisations need to reevaluate their security perimeters and trust models.

The report concludes with key recommendations for how organisations can improve their security posture. As new tools and user behaviour evolve faster than traditional safeguards, strengthening oversight and data loss prevention controls is key.

Critical Insights from the 2026 Report

The report drills down into several critical security shifts shaping organisational risk in 2025, examining not only the scale of these trends but how they operate in practice.

  • Rapid Increase in SaaS GenAI Use: Over the past year, SaaS genAI usage has accelerated rapidly, as employees adopt tools like ChatGPT, Google Gemini and Copilot for everyday work, often outside organisational visibility, policy and control (shadow AI). While personal account usage has fallen from 78% to 47% and organisation-managed access has risen from 25% to 62%, overall usage has tripled, and data shared has grown sixfold (3,000 to 18,000 prompts/month). As the demand for these tools outpaces governance, it’s proving challenging for organisations to keep up.
  • Rise in GenAI Data Policy Incidents: As employees routinely upload internal data to external AI services for summarising, coding or analysis, policy violation incidents are increasing. Over the past year, the number of users committing data policy violations doubled, as did the total number of incidents, with 3% of genAI users responsible for an average of 223 violations per month. The top 25% of organisations experience 2,100 incidents per month across 13% of users, highlighting significant governance gaps.
  • Agentic AI Expands the Attack Surface: Agentic AI is rapidly moving from experimentation to enterprise deployment, executing autonomous actions across internal and external systems. Adoption of platform-based services is accelerating, with 33% of organisations using OpenAI via Azure, 27% Amazon Bedrock and 10% Google Vertex AI. As 70% of organisations now connect to api.openai.com, API-driven automation is becoming the default, amplifying data exposure and insider risk at machine speed.
  • Personal Cloud Apps Drive Insider Risk: Personal cloud apps remain a primary driver of insider data exposure, as employees use personal accounts for convenience, collaboration or AI access. 60% of insider threat incidents involve personal cloud app instances, with regulated data, intellectual property, source code and credentials commonly exposed.
  • Phishing Remains a Persistent Threat: While user susceptibility has declined, with fewer users clicking on links, phishing continues to account for a significant share of initial access attempts and is becoming increasingly sophisticated. Brand impersonation remains central, with Microsoft accounting for 52% of clicks, followed by Hotmail (11%) and DocuSign (10%).
  • Malware Continues to Exploit Trusted Cloud Channels: External adversaries are increasingly distributing malware through trusted cloud services and familiar workflows, exploiting user confidence in widely used platforms. GitHub is the most abused service, with 12% of organisations detecting employee exposure each month, followed by Microsoft OneDrive (10%) and Google Drive (5.8%).

Together, these findings reveal a clear pattern: as digital workflows become more decentralised and AI-enabled, traditional visibility and control mechanisms are being tested at unprecedented scale.

Key Recommendations for Risk Mitigation

It’s clear that the cyber security landscape has become more complex than ever, driven by the rapid and often ungoverned adoption of generative AI. While existing threats such as malware and phishing remain prevalent, the evolution of AI has added a new level of risk. Most concerning is the amount of unwanted or unseen data exposure. With employees adopting AI tools, often without security oversight, and agentic AI executing complex autonomous actions across internal resources, the attack surface has grown rapidly.

In light of these changes, organisations need to re-evaluate their security perimeters and reassess how trust is managed across cloud environments. Security teams now face an additive threat model, in which AI-driven risks compound rather than replace existing threats, making strengthened oversight, DLP controls and AI-aware security vital. To improve security posture in this evolving environment, the report recommends organisations take the following actions:

  • Implement a comprehensive inspection of all HTTP and HTTPS traffic across both web and cloud environments to detect and block malware before it reaches users. Security controls should apply consistently across all file types and download sources to minimise gaps in protection.
  • Restrict access to applications that lack a clear business justification or introduce unnecessary risk. Organisations should establish an allow-list approach, permitting trusted and approved applications while preventing access to unvetted or high-risk services.
  • Deploy robust Data Loss Prevention (DLP) controls to identify and prevent the transfer of sensitive information, including source code, regulated data, credentials, encryption keys, and intellectual property, to personal accounts, generative AI platforms, or other unauthorised destinations.
  • Leverage Remote Browser Isolation (RBI) capabilities when users need to access higher-risk websites, such as newly registered or previously unseen domains, to reduce the likelihood of malware execution or credential compromise.
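
To make the DLP recommendation above concrete, the sketch below shows the basic idea of pattern-based outbound scanning: checking text bound for an external destination (such as a genAI prompt) against known sensitive-data signatures before it leaves the organisation. This is a minimal illustration only; the pattern names and regexes are illustrative assumptions, and real DLP engines rely on far richer detection (exact-match fingerprinting, document classification, ML models).

```python
import re

# Illustrative sensitive-data signatures; production DLP uses many more,
# plus fingerprinting and classification beyond simple regexes.
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_api_token": re.compile(
        r"\b(?:api|secret)[_-]?key\s*[:=]\s*\S{16,}", re.IGNORECASE
    ),
}

def scan_outbound_text(text: str) -> list[str]:
    """Return the names of sensitive patterns found in text that is
    about to leave the organisation (e.g. a prompt to a genAI tool)."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

# A hypothetical prompt an employee might paste into an AI assistant:
prompt = "Please review this config: secret_key = 9f8a7b6c5d4e3f2a1b0c"
findings = scan_outbound_text(prompt)
if findings:
    print(f"Blocked upload; matched: {findings}")
```

In practice a control like this sits inline (in a secure web gateway or CASB) so that matching uploads can be blocked or coached in real time rather than merely logged.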

Navigating the Threat Landscape in 2026

As organisations move further into AI-driven, cloud-first operating models in 2026, the threat landscape will continue to expand in both scale and complexity. Generative and agentic AI are accelerating innovation, but they are also amplifying data exposure, identity risk and the speed at which adversaries can operate. At the same time, longstanding threats such as phishing and malware continue to evolve, exploiting trusted platforms and familiar workflows.

As the threat landscape continues to evolve, security teams need to treat identity as the new perimeter, strengthening visibility across cloud and SaaS environments, and aligning security policies with how employees actually work. Proactive inspection, tighter access controls, robust DLP enforcement and protection of high-risk web interactions must work together as part of a layered defence strategy.

To strengthen your organisation’s data security posture in the face of these evolving risks, contact us to discuss how these insights apply to your environment. You can also explore the full findings in Netskope’s Cloud and Threat Report 2026.