Health insurers use AI to make coverage decisions.
We show whether the clinical reasoning holds up.
Structured clinical reasoning audits for AI-driven insurance decisions. Built for the regulations that are already here.
Texas SB 815 is in effect. Colorado AI Act takes effect June 2026. Your AI needs to explain itself.
The black box era is over
300,000 claims denied in 2 months
Cigna's algorithm spent 1.2 seconds per decision. Class action ongoing.
81% of appealed denials overturned
The system is designed to deny first. Most denials can't withstand clinical scrutiny.
5 US states now require AI explainability
California, Texas, Maryland, Nebraska, Arizona. Laws are in effect, not proposed.
Audit every AI decision against structured clinical reasoning
CliniReason takes a coverage decision - diagnosis, proposed treatment, approval or denial - and audits it against a clinical reasoning graph covering all 429 conditions required for UK medical licensing, each encoded with structured differential-diagnosis knowledge.
The output: a human-readable explanation of whether the clinical logic holds up, traceable to specific medical evidence. The kind of explanation that Texas SB 815 now requires in writing.
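To make the audit contract concrete, here is a minimal sketch of the input and output shapes described above. All names are hypothetical illustrations, not the actual CliniReason API:

```python
from dataclasses import dataclass, field

# Hypothetical shapes for illustration only.
@dataclass
class CoverageDecision:
    diagnosis: str
    proposed_treatment: str
    outcome: str  # "approved" or "denied"

@dataclass
class AuditReport:
    logic_holds: bool
    explanation: str  # plain-language rationale, the kind SB 815 requires in writing
    evidence: list[str] = field(default_factory=list)  # pointers to specific medical evidence

decision = CoverageDecision("pneumonia", "oral antibiotics", "denied")
report = AuditReport(
    logic_holds=False,
    explanation="Denial ignored a discriminating investigation.",
    evidence=["example-guideline-ref"],
)
```

The point of the structure: every denial audit carries both a human-readable explanation and machine-traceable evidence references.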
Not another LLM. A clinical reasoning graph.
LLMs read medical records and guess. Our graph knows.
CliniReason is built on a structured knowledge graph: 429 medical conditions (all conditions required for UK medical licensing, expanding to the entire ICD-11), millions of clinical findings, investigations, and management pathways. Every condition is linked to its confusable pairs: the conditions that look almost identical but require different treatment.
When an AI system makes a clinical decision, we don't ask another AI if it looks right. We trace the reasoning against structured medical knowledge where every step is auditable.
Confusable Pair Detection
The graph encodes which conditions are commonly confused (PE vs pneumonia, appendicitis vs ectopic pregnancy) and which specific tests discriminate between them. If an AI denial ignores a discriminating investigation, we flag it.
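The check described above can be sketched in a few lines. The pair table and function below are an illustrative simplification (the real graph encodes far richer relationships); the discriminating tests shown are examples, not an exhaustive clinical reference:

```python
# Each confusable pair maps to the investigation that discriminates between them.
# Illustrative entries only; names are hypothetical.
CONFUSABLE_PAIRS = {
    ("pulmonary embolism", "pneumonia"): "CT pulmonary angiogram",
    ("appendicitis", "ectopic pregnancy"): "pelvic ultrasound + beta-hCG",
}

def missed_discriminator(condition_a, condition_b, investigations_done):
    """Return the discriminating test a denial skipped for a confusable pair, or None."""
    key = (condition_a, condition_b)
    test = CONFUSABLE_PAIRS.get(key) or CONFUSABLE_PAIRS.get(key[::-1])
    if test and test not in investigations_done:
        return test  # the investigation that was never performed
    return None
```

If the denial's record lists no discriminating investigation for a known confusable pair, the audit flags it.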
Structured Reasoning Chains
Every audit produces a step-by-step reasoning chain: finding → differential → investigation → discrimination → conclusion. Each step is traceable to published clinical evidence. No hallucination. No black box.
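The chain structure above can be sketched as a list of typed, evidence-linked steps. This is a minimal illustration with hypothetical names, not the production format:

```python
from dataclasses import dataclass

@dataclass
class ReasoningStep:
    kind: str      # "finding" | "differential" | "investigation" | "discrimination" | "conclusion"
    content: str
    evidence: str  # reference to published clinical evidence

def render_chain(steps):
    """Render a reasoning chain as plain text, one traceable step per line."""
    return "\n".join(
        f"{i + 1}. [{s.kind}] {s.content} (evidence: {s.evidence})"
        for i, s in enumerate(steps)
    )

steps = [
    ReasoningStep("finding", "pleuritic chest pain", "ref-1"),
    ReasoningStep("conclusion", "PE not excluded", "ref-2"),
]
chain_text = render_chain(steps)
```

Because each step carries its own evidence reference, the rendered chain is auditable line by line rather than justified after the fact.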
Regulatory-Ready Output
Texas SB 815 requires a "plain-language explanation of how AI influenced the decision." Our audit reports are designed to meet this requirement out of the box. Compliance by construction, not retrofit.
The compliance clock is ticking
From audit to engine
Today, CliniReason audits your existing AI's decisions.
Tomorrow, we replace the black box entirely. The same clinical reasoning graph that audits decisions can make them with explainability built in from the ground up. One system for clinical reasoning, compliance, and audit. No bolt-on explainability layer needed.
Makes decisions. Audits them. IS the clinical reasoning engine.
Built on a comprehensive clinical reasoning graph
CliniReason started as a clinical simulation engine modeling complex diagnostic cases. To power realistic clinical scenarios, we built a structured clinical reasoning graph encoding how conditions relate, which ones get confused, and what distinguishes them.
It quickly became clear: this graph isn't just useful for simulation. It's the structured clinical reasoning layer that the entire healthcare AI industry is missing.
The entire technical stack - graph database, backend API, and evaluation infrastructure - is built for enterprise scale and seamless integration.
Expanding to cover the entire ICD-11.
relationships encoded
with sub-second latency
Get early access
We're onboarding the first health plans for clinical reasoning audits. Join the waitlist to be first in line.
We'll reach out when we're ready for your team. No spam.