// AI Systems Engineering · Functional Safety

AI Safety: Applying IEC 61508 / ISO 26262 to LLM Deployments

By Mario Alexandre · AI Systems Engineer, DLux Digital · April 13, 2026 · 6 min read

An LLM in a chatbot is annoying when wrong. An LLM recommending medication dosages is dangerous when wrong. An LLM controlling an autonomous vehicle is lethal when wrong. The difference between these three is not the model — it is the consequence of failure. And there is a discipline, fifty years old, that exists specifically to engineer systems where the consequence of failure is severe. It is called functional safety, and it is governed by standards: IEC 61508 (general), ISO 26262 (automotive), DO-178C (avionics), IEC 62304 (medical software).

The AI industry has, with very few exceptions, ignored these standards. The result is a wave of LLM deployments in regulated domains where the engineering substrate that would catch foreseeable failures is simply absent. Medical chatbots. Legal document AI. Drone autonomy. Industrial control. The systems work in demos. They fail in ways that the standards would have flagged, if the standards had been applied.

The Three Pillars of Functional Safety Analysis

Three artifacts dominate functional-safety practice, and they are exactly what the free AI Safety Hazard Analyzer produces:

1. Hazard Analysis (HA)

Every hazard the system can pose to users, operators, bystanders, or the environment, characterized by the three parameters ISO 26262 uses to classify risk: severity (how badly people are harmed if the hazard occurs), exposure (how often the hazardous situation arises), and controllability (whether a human can intervene in time).

These three combine into the ASIL rating — Automotive Safety Integrity Level — from ASIL-A (lowest) to ASIL-D (highest). For non-automotive systems, the analogous IEC 61508 rating is SIL-1 to SIL-4. A higher rating mandates more rigorous engineering: redundancy, formal verification, third-party certification, documented test coverage.
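The mapping from severity, exposure, and controllability to an ASIL rating is defined by a table in ISO 26262-3. A well-known property of that table is that the rating tracks the sum of the three indices; the sketch below uses that shortcut as an illustration (verify against the standard's table before relying on it):

```python
def asil(severity: int, exposure: int, controllability: int) -> str:
    """Severity S0-S3, exposure E0-E4, controllability C0-C3.

    Sketch of the ISO 26262-3 determination table via the sum rule:
    S+E+C of 10 -> D, 9 -> C, 8 -> B, 7 -> A, anything lower -> QM
    (quality management only, no ASIL applies).
    """
    if min(severity, exposure, controllability) == 0:
        return "QM"  # any parameter at zero means no ASIL is assigned
    total = severity + exposure + controllability
    return {10: "ASIL-D", 9: "ASIL-C", 8: "ASIL-B", 7: "ASIL-A"}.get(total, "QM")

print(asil(3, 4, 3))  # worst case on every axis -> ASIL-D
print(asil(2, 3, 2))  # -> ASIL-A
```

Note how steeply the rating drops: reducing any one parameter by a single step lowers the ASIL by a full level, which is why controllability safeguards (a human who can intervene) are so valuable.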

2. Failure Mode and Effects Analysis (FMEA)

For each component, what are its failure modes? What is the effect of each failure on the system? What causes the failure? How can it be detected? What mitigates it? The FMEA table is the operational checklist that engineers use when designing redundancy and monitoring. For an LLM system, the failure modes include hallucination (confident fabrication), prompt injection, out-of-distribution inputs, context truncation, and silent behavioral drift after a model update.
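An FMEA row is just a structured record, conventionally prioritized by the risk priority number (RPN = severity × occurrence × detectability). A minimal sketch — the field names and example values are illustrative, not the Analyzer's schema:

```python
from dataclasses import dataclass

@dataclass
class FmeaRow:
    component: str
    failure_mode: str
    effect: str
    cause: str
    detection: str
    mitigation: str
    severity: int       # 1-10, 10 = catastrophic
    occurrence: int     # 1-10, 10 = near certain
    detectability: int  # 1-10, 10 = undetectable before harm

    @property
    def rpn(self) -> int:
        """Classic FMEA risk priority number."""
        return self.severity * self.occurrence * self.detectability

row = FmeaRow(
    component="LLM dose recommender",
    failure_mode="hallucinated dosage",
    effect="patient receives wrong dose",
    cause="out-of-distribution drug name",
    detection="formulary range check on output",
    mitigation="human-in-the-loop sign-off",
    severity=10, occurrence=4, detectability=3,
)
print(row.rpn)  # 120 -- high enough to mandate the listed mitigations
```

Rows are then sorted by RPN, and everything above a team-defined threshold must have a verified detection and mitigation before release.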

3. Fault Tree Analysis (FTA)

Working backward from a top-level system failure ("medication dose recommendation harms patient"), enumerate all the combinations of basic events that could cause it. Each branch is connected by an AND gate (all events must occur) or an OR gate (any one event suffices). The FTA reveals single points of failure, identifies cut sets that an attacker or freak event could exploit, and quantifies the reliability you need from each component.
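Under the standard independence assumption, gate probabilities compose simply: AND gates multiply, OR gates combine complements. A toy evaluator for the medication example — the event structure and probabilities are hypothetical placeholders, not output from the Analyzer:

```python
from math import prod

def and_gate(probs):
    """All input events must occur: product of probabilities."""
    return prod(probs)

def or_gate(probs):
    """Any one input event suffices: 1 - product of (1 - p)."""
    return 1.0 - prod(1.0 - p for p in probs)

# Top event: "harmful dose reaches patient". A bad dose is emitted through
# hallucination OR prompt injection, AND both independent checks miss it.
p_bad_dose  = or_gate([1e-3, 1e-4])    # hallucination, prompt injection
p_both_miss = and_gate([1e-2, 5e-3])   # formulary check AND pharmacist review fail
p_top       = and_gate([p_bad_dose, p_both_miss])
print(f"top event probability ~ {p_top:.1e}")
```

The structure makes the engineering argument visible: without the second independent check, `p_both_miss` collapses from an AND to a single event, and the top-event probability jumps by two orders of magnitude — that is what a single point of failure looks like numerically.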

The SIL/ASIL Recommendation

The Analyzer takes your use-case description and returns a recommended SIL or ASIL level. This matters because it dictates the engineering rigor required: a SIL-1 system may get by with documented testing, while SIL-3 and SIL-4 demand redundancy, independent verification, quantified failure rates, and third-party assessment.

If you describe a chatbot that recommends medication doses to nurses on bedside tablets, the Analyzer should return SIL-3 or higher and recommend specific safeguards: human-in-the-loop review of irreversible recommendations, independent dose verification, audit logging, and outputs bounded by a formulary database.
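"Output bounded by formulary database" is the most mechanical of those safeguards, so it is worth seeing concretely. A sketch with a hypothetical formulary table and function name — the key design choice is that unknown drugs and out-of-range doses are rejected and escalated, never passed through:

```python
FORMULARY = {  # drug -> (min_dose_mg, max_dose_mg) per administration
    "amoxicillin": (250.0, 1000.0),
    "metformin": (500.0, 1000.0),
}

def guard_dose(drug: str, llm_dose_mg: float) -> tuple[bool, str]:
    """Return (allowed, reason). Fail closed on anything unrecognized."""
    if drug not in FORMULARY:
        return False, f"{drug!r} not in formulary -- escalate to human"
    lo, hi = FORMULARY[drug]
    if not lo <= llm_dose_mg <= hi:
        return False, f"dose {llm_dose_mg} mg outside [{lo}, {hi}] mg -- escalate"
    return True, "within formulary bounds; still requires nurse sign-off"

print(guard_dose("amoxicillin", 5000.0))  # out of range -> blocked
print(guard_dose("warfarin", 5.0))        # unknown drug -> blocked
```

The guard sits outside the model: the LLM can hallucinate freely, but nothing it says can reach the patient without clearing a deterministic check against ground-truth data.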

The Deploy Verdict

The Analyzer also returns a verdict in one of three categories, ranging from DEPLOY_OK to DO_NOT_DEPLOY.

The verdict is conservative by default. If you are designing a system where lives or licenses are at stake, you want the verdict to err toward DO_NOT_DEPLOY rather than DEPLOY_OK. That is the engineering posture the safety standards encode. Hope is not a strategy. Verifiable safeguards are.
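That posture — fail toward DO_NOT_DEPLOY unless every required safeguard is verified, not merely planned — can be sketched in a few lines. This is an assumption about the conservative default described above, not the Analyzer's actual algorithm, and only the two verdict names quoted in the text are used:

```python
def verdict(required_safeguards: dict[str, bool]) -> str:
    """Map {safeguard_name: verified?} to a deploy verdict.

    Conservative by construction: an empty safeguard list means no hazard
    analysis was done, and an unverified safeguard counts as absent.
    """
    if required_safeguards and all(required_safeguards.values()):
        return "DEPLOY_OK"
    return "DO_NOT_DEPLOY"

print(verdict({"human_in_the_loop": True, "audit_logging": False}))
# -> DO_NOT_DEPLOY: one planned-but-unverified safeguard blocks deployment
```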

From the wiki synthesis on direct AI application: "Reinforcement learning, cyber-physical systems architecture, AI in real-time systems, systems integration methodology — these transfer directly without analogy. CPS thinking applied to LLM deployment IS the safety engineering layer."

Why This Is Underbuilt Right Now

Functional safety expertise is concentrated in industries where the standards have been mandatory for decades — automotive, aerospace, medical devices, industrial control. AI expertise is concentrated in software-first companies that did not need to think about safety standards until they started deploying AI into the regulated domains. The intersection of "fluent in AI" and "fluent in IEC 61508" is statistically tiny.

This tool exists because that intersection is where the next wave of AI deployment will get blocked, sued, or recalled. Run your AI use case through the Analyzer before you ship it. The cost of catching a SIL-3 hazard in design is hours. The cost of catching it in production is everything.

From Audit to Production

The free Analyzer gives you the diagnostic. Building an AI system that actually meets the safeguards it recommends — formal verification, independent redundancy, certified inference, deterministic latency — is engineering work. For deployments in medical, automotive, aerospace, or industrial domains where the standards are not negotiable, see the paid service. The standards exist for a reason. AI does not get a pass.

// Try It Free

Run a Functional Safety Audit

Describe an AI use case. Returns IEC 61508 / ISO 26262 hazard analysis: enumerated hazards, FMEA failure modes, fault-tree analysis, SIL/ASIL recommendation, required safeguards, deploy verdict.

// Need It at Production Scale?

Safety-Certified AI for Cyber-Physical Systems — Service #34

Production AI systems that meet IEC 61508 / ISO 26262 / DO-178C — formal verification, certified inference paths, deterministic latency, deployable in regulated environments.

IEC 61508 ISO 26262 ASIL rating Functional safety AI hazard analysis Safety-critical AI