Compliance | March 3, 2026 | By Patrick Songore, Founder, GangoAI

What “Explainable AI” Actually Means for Safety Decisions

Explainability has become a buzzword. Every AI provider claims their system is explainable. But when a regulator asks why a worker was flagged, or when a union representative asks why a member was pulled from a shift, "the AI detected a pattern" is not an explanation. It is a restatement of the problem.

Three Audiences, Three Standards

When an AI system makes a safety decision, that decision needs to be explainable to three different audiences, each with different requirements.

The worker

The person being flagged needs to understand what was measured and why it was different from their normal pattern. They do not need to understand the underlying algorithm. They need to hear something specific and objective - not a vague statement about risk scores. If the explanation does not feel fair to the person receiving it, the system will face resistance regardless of how accurate it is.

The supervisor

The person acting on the flag needs enough information to make a proportionate response. Is this a significant deviation or a marginal one? Is it consistent with other indicators they can observe? A supervisor who receives a red flag with no context will either overreact or ignore it - neither is acceptable in a safety context.

The auditor

Whether this is an internal safety review, an HSE investigation, a regulatory audit, or legal proceedings, the standard is evidence. What was measured? When? Against what baseline? What threshold was crossed? Was human oversight exercised? A system that cannot produce this chain of evidence fails the audit regardless of how good its detection rate is.

The Explainability Spectrum

Not all AI systems are equally opaque, and not all claims of explainability are equally meaningful. There is a spectrum, and understanding where a system sits on it matters.

At one end are fully opaque systems - deep learning models that take in raw data and produce an output with no interpretable intermediate steps. The developers themselves often cannot explain why a specific input produced a specific output. They can describe the architecture and the training process, but not the reasoning behind any individual decision.

In the middle are systems that apply post-hoc explainability - techniques like SHAP values or attention maps that attempt to explain a black box decision after the fact. These are better than nothing, but they are approximations. They tell you which inputs the model weighted most heavily, not why it weighted them that way. For a research context, this is interesting. For a safety decision that affects someone's livelihood, it may not be sufficient.
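
For illustration only, here is a minimal sketch of what post-hoc attribution looks like in practice, using the open-source shap library on a synthetic model. The data, labels, and classifier are invented for the example and stand in for any black box; the point is that the attribution is produced after the decision, not as part of it.

```python
# Minimal sketch of post-hoc attribution with the shap library.
# The model, data, and labels below are synthetic and purely illustrative.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # synthetic input features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic "flag" label

model = RandomForestClassifier(random_state=0).fit(X, y)  # the black box

# Explain one decision after the fact: which inputs carried the most weight?
explainer = shap.TreeExplainer(model)
attribution = explainer.shap_values(X[:1])

# The output approximates input importance for this single decision.
# It does not say why those inputs mattered - the gap described above.
print(attribution)
```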

At the other end are systems that are explainable by design - where the decision logic is transparent from the outset. The system measures specific things, compares them against specific thresholds, and produces specific outputs. There is nothing to reverse-engineer because the logic was never hidden.
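
As a sketch of what "explainable by design" can mean in practice, consider the decision rule below. The metric names, baselines, and thresholds are illustrative assumptions invented for the example, not any provider's actual logic; the point is that the rule itself is the explanation, so there is nothing to reverse-engineer.

```python
# Minimal sketch of a decision that is explainable by design: every flag is
# a named measurement compared against a per-person baseline and a fixed
# threshold. All metric names and numbers here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Flag:
    metric: str        # what was measured
    measured: float    # today's value
    baseline: float    # this individual's normal value
    threshold: float   # allowed deviation from baseline
    deviation: float   # how far outside normal the measurement was

def evaluate(measurements: dict[str, float],
             baseline: dict[str, float],
             thresholds: dict[str, float]) -> list[Flag]:
    """Flag any metric whose deviation from this individual's baseline
    exceeds its threshold. Deterministic, and traceable line by line."""
    flags = []
    for metric, value in measurements.items():
        deviation = abs(value - baseline[metric])
        if deviation > thresholds[metric]:
            flags.append(Flag(metric, value, baseline[metric],
                              thresholds[metric], deviation))
    return flags

# Illustrative run: two of three measurements deviate beyond their thresholds.
flags = evaluate(
    measurements={"reaction_time_ms": 410.0, "grip_variability": 0.9, "gait_symmetry": 0.95},
    baseline={"reaction_time_ms": 320.0, "grip_variability": 0.4, "gait_symmetry": 0.97},
    thresholds={"reaction_time_ms": 60.0, "grip_variability": 0.3, "gait_symmetry": 0.05},
)
for f in flags:
    print(f"{f.metric}: measured {f.measured}, baseline {f.baseline}, "
          f"deviation {f.deviation:.2f} exceeds threshold {f.threshold}")
```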

What the EU AI Act Requires

The EU AI Act does not use the word "explainable" loosely. For high-risk AI systems - a category that includes AI used as a safety component in transport and critical infrastructure - Article 13 requires that systems be designed to be sufficiently transparent to enable users to interpret the output and use it appropriately. Article 14 requires meaningful human oversight, which presupposes that the human can understand what the system is telling them.

A system that produces a risk score between 0 and 100 with no explanation of how that score was calculated does not enable interpretation. A system that tells a supervisor "three specific measurements deviated from this individual's baseline by these specific amounts" does.

The Trust Connection

Explainability is not just a compliance requirement. It is the foundation of trust. Workers who understand how a system makes decisions are more likely to accept those decisions. Supervisors who understand what they are looking at are more likely to act appropriately. Organisations that can explain their safety decisions are better protected legally and operationally.

Conversely, systems that cannot explain themselves create suspicion. Workers assume the worst about technology they do not understand. Unions resist deployment of systems that cannot be scrutinised. Regulators take a harder line on organisations using tools they cannot account for.

Five Questions That Test Explainability

  1. Show me the audit trail for a specific flag. Can the provider produce the exact measurements, thresholds, and baseline comparisons that led to a specific decision? Not a summary - the actual data. (A minimal sketch of such a record follows this list.)
  2. Explain this flag to a non-technical person. Ask the provider to explain a sample flag as they would to a shop floor supervisor. If the explanation involves technical jargon, model architectures, or confidence intervals, it is not explainable in practice.
  3. Can the worker see what was measured? Transparency is not just for management and regulators. If the person being assessed cannot understand what was measured and why it was flagged, the system will face resistance.
  4. What happens when the system is wrong? Every system will occasionally produce incorrect outputs. How does the system handle disputes? Can a flag be reviewed against the raw measurements? Is there a clear process for correction?
  5. Would this stand up in court? If a decision made using this system were challenged legally, could the provider demonstrate exactly how the decision was reached? Expert witness testimony about "how the model generally works" is not the same as evidence showing "this specific measurement triggered this specific output."
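
For illustration, here is a sketch of the kind of evidence record question 1 asks for. Every field name and value is an assumption invented for the example rather than a real provider's schema; what matters is that each of the auditor's questions maps to a concrete field.

```python
# Illustrative sketch of the evidence chain behind one specific flag: what was
# measured, when, against what baseline, what threshold was crossed, and what
# human oversight followed. Field names and values are assumptions, not a
# real provider's schema.
import json
from datetime import datetime, timezone

audit_record = {
    "flag_id": "example-0001",
    "worker_id": "anonymised-ref",
    "timestamp": datetime(2026, 3, 3, 6, 42, tzinfo=timezone.utc).isoformat(),
    "metric": "reaction_time_ms",
    "measured_value": 410.0,
    "individual_baseline": 320.0,
    "threshold_deviation": 60.0,
    "observed_deviation": 90.0,
    "decision": "flag_raised",
    "human_review": {
        "reviewed_by": "shift_supervisor",
        "reviewed_at": datetime(2026, 3, 3, 6, 55, tzinfo=timezone.utc).isoformat(),
        "outcome": "worker reassigned to non-safety-critical task",
    },
}

# A record like this answers the auditor directly and supports dispute review:
# the flag can be checked against the raw measurement and baseline it cites.
print(json.dumps(audit_record, indent=2))
```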

The Standard Is Simple

If you cannot trace a specific flag to a specific measurement at a specific time for a specific individual, the system is not explainable in any way that matters for safety. Everything else is marketing.

No Black Boxes

Every output traces to a specific measurement. Deterministic. Auditable. Built for the standards that safety-critical environments demand.
