
Three questions, four controls: from AI hope to AI assurance

Julian Challingsworth, Managing Director & Group CEO
May 15, 2026

Part 2 of a 3-part series on AI assurance in mining

Part one of this series argued that AI autonomy has quietly outpaced governance in Australian mining. The question that came back hardest from directors after that piece ran was the practical one: how does a board actually know whether its operation has the problem, and what does it do about it?

Three questions diagnose the gap. Four controls close it. Both belong on the board agenda, in that order. In every operation I have discussed this with, at least one of these three exposes a gap the executive team did not fully see. That is the value of asking.

  • Question 1: Who owns each AI-driven decision in our operations? Ask the board to name the executive accountable for fleet decisions, predictive-maintenance decisions and process-control decisions. If the room goes quiet, that silence is the finding. And the accountable executive is not the CIO by default; it should be the operational executive whose KPI the AI most directly influences.
  • Question 2: How do we know the AI is still operating inside its designed envelope? Model-performance dashboards (accuracy, latency, uptime) tell you the AI is running. They do not tell you the AI’s decisions still fall inside the operational reality of the site. A model can score 96% accuracy and still be failing operationally; a toy illustration follows this list.
  • Question 3: What is our intervention capability when an AI decision is wrong? Can the operation pause it, revert it or override it, in time, without breaking the broader system? Or is intervention dependent on a vendor support ticket? Intervention capability is the difference between AI as production-critical infrastructure and AI as a black box bolted to the production line.
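
To make Question 2 concrete, here is a toy illustration with invented figures. The KPI names (throughput_tph, grade_pct) and every number in it are placeholders, not data from any real site or any particular dashboard product.

```python
# Toy illustration only; all names and figures are invented placeholders.
# The model's own dashboard looks healthy:
model_dashboard = {"accuracy": 0.96, "latency_ms": 40, "uptime": 0.999}

# The same week, the site KPIs the model's decisions actually influence:
plan   = {"throughput_tph": 1200, "grade_pct": 2.1}
actual = {"throughput_tph": 1080, "grade_pct": 1.8}

# Drift of each KPI from plan, as a fraction:
drift = {kpi: (actual[kpi] - plan[kpi]) / plan[kpi] for kpi in plan}
print(drift)  # -> approximately {'throughput_tph': -0.1, 'grade_pct': -0.1428...}

# Every model metric is green, yet throughput is 10% under plan and grade
# roughly 14% under target. Envelope monitoring watches the second set of
# numbers, not the first.
```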

If your board cannot answer those three questions today, you do not have AI assurance. You have AI hope. And hope is not a control.

The four controls that close the gap

These are not new ideas to mining. They are the same controls operators already apply to safety systems, plant reliability and operational risk. The work is applying them to AI with the same discipline.

  • Control 1, clear executive ownership: Name an accountable executive for each class of AI-driven decision and put the name in writing. Default the accountability to the operational executive whose KPI the AI most directly influences (the GM of mining for fleet, the maintenance leader for asset health, the metallurgy lead for plant control), not the CIO. This single move shifts AI from a technology question to a production-accountability question.
  • Control 2, continuous operational-envelope monitoring: Tie monitoring to the KPIs the operation already tracks: throughput against plan, grade against target, asset availability against forecast, incident rate against baseline. When the AI’s behaviour drifts away from those KPIs, the alert fires before the consequence does. The cost of building this is real; the cost of not having it shows up as quiet degradation no one can explain.
  • Control 3, defined fail-safes in three layers: Human-in-the-loop for high-stakes decisions (safety thresholds, large capital commitments). Bounded autonomy for routine decisions, with explicit operational limits. Graceful degradation when the AI is suspended (a fleet dispatcher that goes offline reverts to a manual schedule, not to silence). These three layers are specified during commissioning, not bolted on after.
  • Control 4, real-time intervention capability: Give control-room and reliability-bench staff three things: the training to spot an AI decision that has drifted, the authority to pause or revert it, and the tooling to do so without breaking the broader system. Training is the easy part. Authority is harder; it has to be granted explicitly by the executive named in Control 1 and exercised in drills before it is needed in anger. A sketch after this list shows one way the four controls could fit together in code.
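
As a sketch of what "applied as a system" could look like, here is a minimal illustration of a fleet dispatcher wrapped in all four controls. It is an assumption-laden sketch, not a vendor API or a reference implementation: the AssuredDispatcher class, the Envelope fields and the KPI names are all invented for illustration.

```python
# Illustrative sketch only: every name, threshold and KPI here is an
# assumption for the example, not a vendor API or reference implementation.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable


class Mode(Enum):
    AUTONOMOUS = auto()   # bounded autonomy: the AI decides within limits
    MANUAL = auto()       # graceful degradation: manual schedule, not silence


@dataclass
class Envelope:
    """Operational limits the AI must stay inside, set at commissioning."""
    min_throughput_tph: float   # throughput against plan
    max_grade_drift: float      # fractional deviation from grade target
    high_stakes_cost: float     # decisions above this cost go to a human


class AssuredDispatcher:
    """A dispatch model wrapped in the four controls."""

    def __init__(self, model: Callable, envelope: Envelope,
                 manual_schedule: Callable, owner: str):
        self.model = model                  # Control 1: the accountable
        self.owner = owner                  # executive, named in writing
        self.envelope = envelope
        self.manual_schedule = manual_schedule
        self.mode = Mode.AUTONOMOUS

    def inside_envelope(self, kpis: dict) -> bool:
        """Control 2: alert on site-KPI drift, not model metrics."""
        ok = (kpis["throughput_tph"] >= self.envelope.min_throughput_tph
              and abs(kpis["grade_drift"]) <= self.envelope.max_grade_drift)
        if not ok:
            print(f"ALERT: envelope breach; accountable exec: {self.owner}")
        return ok

    def dispatch(self, site_state: dict, kpis: dict) -> dict:
        """Control 3: the three fail-safe layers, specified up front."""
        if self.mode is Mode.MANUAL or not self.inside_envelope(kpis):
            return self.manual_schedule(site_state)     # graceful degradation
        decision = self.model(site_state)
        if decision["estimated_cost"] > self.envelope.high_stakes_cost:
            decision["requires_human_approval"] = True  # human-in-the-loop
        return decision                                 # bounded autonomy

    def pause(self) -> None:
        """Control 4: control-room staff suspend the AI directly,
        without a vendor support ticket; resume() restores autonomy."""
        self.mode = Mode.MANUAL

    def resume(self) -> None:
        self.mode = Mode.AUTONOMOUS
```

One design note: in a shape like this, the Control 4 drill becomes literal. Calling pause() during a scheduled exercise and confirming the manual schedule takes over is an executable test, not a paper one.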

Some vendors will resist bounded autonomy because it constrains the pitch. The honest response is that an AI system the operator cannot bound is an AI system the operator cannot own. Operators have to constrain the AI before the vendor constrains the discussion. That is uncomfortable to put in a contract; it is more uncomfortable to discover after deployment.

The four controls are not items to be ticked off in a compliance review. They are operating disciplines, embedded in the way the production line runs. Apply them piecemeal and you get the appearance of assurance; apply them as a system and you get the substance. Done well, the four controls turn AI from a silent risk multiplier into a deliberate productivity lever, which is the outcome the technology was deployed to deliver.

Three questions diagnose the gap. Four controls close it. The harder question, and the one Part 3 tackles, is what those controls look like when they are stitched into a single operating framework. The sector borrows IT-grade frameworks today because no OT-grade one exists. Part 3 sets out what one should look like, and why the operators who build one first will define the standard the rest of the sector eventually follows.