
Uncomfortable truth: autonomy has outpaced control

Julian Challingsworth, Managing Director & Group CEO
May 15, 2026


Part 1 of a 3-part series on AI assurance in mining 

Australia’s mining sector knows how to innovate. What it lacks is productivity headroom. Cost inflation, labour shortages and operational complexity are eroding margins, and AI is widely held up as the answer: autonomous haulage, predictive maintenance, digital twins, all promising step-change gains. 

Yet across the operators I speak with, the returns remain uneven. The reason is not that AI hasn’t been adopted. It has. The reason is that in most large mining operations, autonomy has quietly outpaced control. 

AI doesn’t fail loudly. It underperforms. 

When an AI-driven haul-truck dispatcher drifts, or a mill optimiser starts pushing beyond a calibrated envelope, or a maintenance model misses the early signal it was trained to catch, no alarm fires. Throughput slips a few percentage points. A planned shutdown extends by a shift. A safety threshold gets tested. The losses are real, but they don’t show up as a single incident; they accumulate as the slow erosion of the very productivity the AI was deployed to deliver.

This is the quiet-degradation problem, and it is now the core operational risk of AI in mining. 

The threats are concrete, not hypothetical. 

Consider sensor data poisoning, where the vibration or temperature feeds into a predictive-maintenance model are subtly skewed (by a degraded sensor, a vendor firmware change, or in a worst case, deliberate tampering), and the model quietly learns to ignore the early warning it was built to surface. Or model drift in a fleet dispatcher trained on pre-pandemic haul cycles, now operating against fundamentally different shift patterns, fuel costs and ore grades, with no one watching whether its decisions still match the operating envelope. These are not theoretical AI risks; they are AI versions of failure modes mining engineers already manage in physical systems, just with no equivalent assurance regime around them.
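To make the drift problem concrete, here is a minimal monitoring sketch in Python. It is illustrative only: the sensor values, window sizes and thresholds are assumptions, not a production design. It compares the live distribution of a sensor feed against the distribution the model was calibrated on, using the population stability index (PSI), and escalates when the two no longer match.

```python
# Illustrative sketch: catching distribution drift in a sensor feed
# before a predictive-maintenance model quietly learns to ignore it.
# Bin counts, window sizes and PSI cut-offs are assumptions.
import numpy as np

def population_stability_index(baseline: np.ndarray, live: np.ndarray,
                               bins: int = 10) -> float:
    """Compare the live sensor distribution against the calibration baseline.

    PSI < 0.1 is commonly read as stable, 0.1-0.25 as drifting,
    and > 0.25 as a material shift warranting human review.
    """
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf        # outliers land in end bins
    expected = np.histogram(baseline, edges)[0] / len(baseline)
    actual = np.histogram(live, edges)[0] / len(live)
    expected = np.clip(expected, 1e-6, None)     # avoid log(0) / divide-by-zero
    actual = np.clip(actual, 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

# Baseline: vibration readings captured when the model was calibrated.
rng = np.random.default_rng(42)
baseline = rng.normal(loc=4.0, scale=0.5, size=5_000)
# Live feed: the same sensor after a firmware change subtly re-scales it.
live = rng.normal(loc=4.0, scale=0.5, size=1_000) * 0.92

psi = population_stability_index(baseline, live)
if psi > 0.25:
    print(f"PSI={psi:.3f}: escalate; feed no longer matches calibration")
elif psi > 0.10:
    print(f"PSI={psi:.3f}: drifting; schedule sensor and model review")
else:
    print(f"PSI={psi:.3f}: stable")
```

The statistic itself is not the point. The point is that someone owns the threshold, and that a defined escalation path exists for when it trips.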

Governance has not kept pace with autonomy. 

In most operations, AI systems now influence fleet movements, processing throughput, maintenance schedules and even safety thresholds. But ownership is fragmented across IT, operations and vendors. Oversight is implicit rather than explicit. Few executives can answer three simple questions with confidence: who owns a given AI-driven decision, what controls sit around it, and how the organisation can intervene when outcomes deviate.

Global regulators are moving to close that gap. The NIST AI Risk Management Framework and the EU AI Act both reflect a clear consensus: organisations must be able to explain, govern and intervene in automated decision-making. Australia’s regime is still maturing, but the direction of travel on executive accountability is upward.

Here is the uncomfortable part for mining: those frameworks are written through an IT lens.

They assume the AI sits inside an enterprise tech stack, governed by a CISO, with logs flowing into a SIEM and risks owned by a CIO. They do not yet account for AI that lives inside an OT environment, beside a process-control system, on a haul truck or in a mill, where the failure mode is not a data breach but a stopped shovel, a missed grade target, or a worker in the wrong place. The implication is uncomfortable but unavoidable: mining operators cannot outsource AI assurance to a generic framework. The frameworks help; they do not absorb the accountability. That sits with the operator. 

The capability gap is the real constraint. 

Plenty of mining organisations have people who can secure systems. Very few have people who can challenge an automated decision in real time. The constraint isn’t policy or tooling; it’s the control-room and reliability bench. Without trained staff, defined escalation paths and the authority to pause or revert an automated action, AI becomes a silent risk multiplier rather than a productivity lever. As more decisions are delegated to AI agents that act without constant human oversight, the gap between machine decision authority and human ability to intervene widens.
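What the authority to pause or revert can look like in practice, as a hedged sketch (the setpoint names and envelope values here are hypothetical, not drawn from any real system): every AI-issued action passes through a gate that checks it against the calibrated operating envelope, and anything outside that envelope is held, escalated and reverted to the last known-good value rather than applied.

```python
# Illustrative sketch of a pause-and-revert gate around an automated action.
# Decision, OPERATING_ENVELOPE and the setpoint values are hypothetical;
# the point is that every AI-issued action passes a check a human can override.
from dataclasses import dataclass

OPERATING_ENVELOPE = (60.0, 110.0)   # calibrated min/max for this setpoint

@dataclass
class Decision:
    setpoint: float
    source: str                      # "ai" or "human"

def apply_with_gate(decision: Decision, last_good: float) -> float:
    lo, hi = OPERATING_ENVELOPE
    if lo <= decision.setpoint <= hi:
        return decision.setpoint     # within envelope: act on it
    # Outside envelope: pause the action, escalate, revert to last good value.
    print(f"ESCALATE: {decision.source} proposed {decision.setpoint}, "
          f"outside [{lo}, {hi}]; reverting to {last_good}")
    return last_good

current = apply_with_gate(Decision(setpoint=128.4, source="ai"), last_good=95.0)
```

The design choice worth noting is that the gate sits outside the model: it does not need to understand why the AI proposed the value, only whether acting on it stays inside an envelope a human has signed off.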

Treat AI as production-critical infrastructure. 

The mining organisations extracting durable value from AI are not the ones deploying the most algorithms. They are the ones treating AI with the same discipline they apply to safety systems and plant reliability: clear ownership, continuous monitoring tied to operational KPIs, defined fail-safes (human-in-the-loop, bounded autonomy, graceful degradation) and executive oversight.
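As one illustration of bounded autonomy with graceful degradation (the thresholds and the fallback rule are assumptions for illustration, not a recommended configuration): when model confidence drops or a tracked KPI drifts outside its band, control falls back to a conservative, human-approved default and the operator is notified, rather than continuing to act on a degraded model.

```python
# Illustrative sketch of graceful degradation under bounded autonomy.
# CONFIDENCE_FLOOR, KPI_TOLERANCE and FALLBACK_SETPOINT are assumed values.
def choose_action(model_setpoint: float, model_confidence: float,
                  kpi_vs_plan: float) -> tuple[float, str]:
    CONFIDENCE_FLOOR = 0.80      # below this, don't act on the model
    KPI_TOLERANCE = 0.05         # +/-5% of plan before we degrade
    FALLBACK_SETPOINT = 90.0     # conservative, human-approved default

    if model_confidence < CONFIDENCE_FLOOR or abs(kpi_vs_plan) > KPI_TOLERANCE:
        return FALLBACK_SETPOINT, "degraded: rule-based fallback, notify operator"
    return model_setpoint, "normal: bounded autonomy"

# Example: confidence has slipped and throughput is 8% under plan.
setpoint, mode = choose_action(model_setpoint=104.2,
                               model_confidence=0.71,
                               kpi_vs_plan=-0.08)
print(setpoint, "-", mode)
```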

AI is not the risk to productivity. Poor governance is. 

At a time when Australia’s mining productivity cannot afford further erosion, AI security must be reframed as a production-assurance issue, not an IT concern. The uncomfortable truth is that autonomy without governance is not productivity. It is risk wearing a productivity badge. 

Coming next in this series. 

This piece sets out the problem. Part 2 works through the three questions every mining board should be asking about its AI estate today, and the four controls that turn AI autonomy into AI assurance. Part 3 puts those controls inside an OT-grade AI assurance framework, the operating model the sector needs but has not yet built.