What an OT-grade AI assurance framework actually looks like

Julian Challingsworth, Managing Director & Group CEO
May 15, 2026

AI assurance in mining

This series began with an uncomfortable observation: AI autonomy has outpaced control in Australian mining. Part two named the three questions that diagnose the gap, and the four controls that close it. This final piece tackles the harder question: what does it look like when those controls are stitched into a single operating framework, and why has the sector not yet built one?

The IT-lens problem, restated

Global frameworks (the NIST AI Risk Management Framework, the EU AI Act, ISO/IEC 42001) are valuable. They are also written through an IT lens. They assume the AI sits inside an enterprise tech stack, governed by a CISO, with logs flowing into a SIEM and risks owned by a CIO. They do not yet account for AI that lives inside an OT environment, beside a process-control system, on a haul truck or in a mill, where the failure mode is not a data breach but a stopped shovel or a worker in the wrong place.

For mining, the implication is unavoidable. An IT-grade framework is necessary but not sufficient. The sector needs an OT-grade companion, built around the same operational disciplines mining already applies to safety integrity, plant reliability and production assurance.

What an OT-grade framework specifies

Five elements, each anchored in OT practice mining already understands.

  • Decision-class accountability: Every AI-driven decision class has a named accountable executive in the operational chain, registered in the production accountability matrix the operation already maintains for safety and reliability.
  • Envelope-based assurance: The framework defines the operational envelope for each decision class (throughput band, grade range, safety threshold) and the monitoring that confirms the AI’s decisions still fall inside it.
  • Fail-safe by design: Each decision class has its human-in-the-loop, bounded-autonomy and graceful-degradation behaviour specified during commissioning, not retrofitted after an incident.
  • Authority to intervene: Control-room and reliability-bench staff have standing authority to pause or revert AI decisions, codified in standard operating procedures and exercised in routine drills.
  • Audit and assurance cadence: The framework specifies independent review, scheduled audits and (where the regulator eventually requires it) certification, with the same rigour applied to safety case management.

None of those five elements is unfamiliar to mining. The work is applying them to AI as deliberately as the sector already applies them to fixed plant and mobile equipment.

Some operators will wait. They will wait for the regulator to define the framework, for the technology vendor to bundle it, or for the consultancy market to standardise it. That is a defensible position; it is also a costly one. The framework that gets adopted first sets the language the regulator uses, the controls the vendor builds toward, and the audit pattern the consultancy applies. Operators who build one first will not just be ahead. They will be defining what ahead means.

Here is the uncomfortable edge for boards.

Operators who treat AI assurance as a compliance exercise will inherit someone else’s framework. Operators who treat it as an operating discipline will write the one the sector ends up using. Those are not equivalent positions. One is a price taker. The other sets the price.

AI assurance is not an IT project. It is an operational discipline, and mining already knows how to do operational discipline. Apply that muscle to AI, and autonomy becomes productivity. Without it, autonomy is just risk wearing a productivity badge.

The three-part arc of this series has been one argument, told from three angles. Australia’s mining sector has the engineering culture, the safety discipline and the operational rigour to lead on this. What it needs is the deliberate decision to do so, starting with the boards that ask the three questions, the operators that build the four controls, and the executive teams that put them inside a framework worth defending.

The sector does not need to wait for a framework. It needs to write one.