API v1 — stable

Xelurel AI Documentation

Xelurel AI is a governance layer for AI output. It sits between your AI model and your end users — assessing every output against your policy, enforcing human review where required, and logging every decision immutably for compliance and audit.

You send Xelurel AI your AI's output. Xelurel AI returns a decision: allow, review, or block. Every decision is logged with a unique ID, risk score, triggered rules, and an immutable audit trail.

Your AI model → Generated output → Xelurel AI /assess → allow / review / block → End user
Xelurel AI does not modify, generate, or store your AI output in raw form. It makes a governance decision about content your model already produced, hashes the content for integrity, and records that decision permanently.

Quickstart

Get a governance decision on your first AI output in under 5 minutes.

1. Get your API key
Sign in to the Xelurel AI dashboard and navigate to API Keys. Create a key with the environment set to test for development. Keys are prefixed xel_test_ for sandbox and xel_live_ for production.

2. Make your first request
Use the official SDK (npm install @xelurelai/sdk) or send a raw HTTP POST to /api/v1/assess with your API key in the x-api-key header. See the examples below.

3. Read the decision
Check the decision field: allow, review, or block. Route your output accordingly — show it, hold it for human review, or suppress it entirely.

4. Store the decision ID
Persist the decision_id alongside your record. This is your audit receipt — it permanently links your output to the governance decision that governed it.
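Step 4 can be sketched as follows. The record shape and field name here are hypothetical, not part of the Xelurel AI API; adapt them to your own schema.

```typescript
// Hypothetical record shape — your own schema will differ.
interface OutputRecord {
  id: string;
  content: string;
  xelurel_decision_id?: string; // the audit receipt
}

// Attach the decision_id returned by /api/v1/assess to your record
// before saving it, so every stored output links back to its decision.
function attachDecision(record: OutputRecord, decisionId: string): OutputRecord {
  return { ...record, xelurel_decision_id: decisionId };
}
```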

SDK (recommended)

bash
npm install @xelurelai/sdk
typescript
import { XelurelAI } from '@xelurelai/sdk';

const client = new XelurelAI({
  apiKey: process.env.XELURELAI_API_KEY,
});

const result = await client.assess({
  prompt: "Summarize this patient visit",
  output: "Patient prescribed 500mg amoxicillin twice daily for 7 days.",
  use_case: "medical_note",
});

// result.decision → "allow" | "review" | "block"
// result.decision_id → keep this for your audit trail

cURL

bash
curl -X POST https://api.xelurel.com/v1/assess \
  -H "Content-Type: application/json" \
  -H "x-api-key: xel_test_your_key_here" \
  -d '{
    "prompt": "Summarize this patient visit",
    "output": "Patient prescribed 500mg amoxicillin twice daily for 7 days.",
    "use_case": "medical_note"
  }'

Response

json
{
  "decision_id": "9f4e2a1b-3c7d-4e8f-a1b2-c3d4e5f67890",
  "decision": "review",
  "risk_score": 40,              // 0–100
  "risk_score_normalized": 0.4,  // 0.0–1.0
  "reasons": ["contains medication dosage"],
  "policy_id": "healthcare_default",
  "policy_version": "1.0.0"
}

Authentication

All API requests require an API key in the x-api-key request header. Keys are tenant-scoped — all activity is isolated to your workspace.

http
POST /api/v1/assess HTTP/1.1
x-api-key: xel_test_abc123...
Content-Type: application/json

Key environments

Prefix | Environment | Behaviour
xel_test_ | sandbox | Full functionality. Decisions logged. Safe for development, staging, and integration testing.
xel_live_ | production | Full functionality. Use for any production or patient-facing workflows.
Never expose your API key client-side. All calls to /api/v1/assess must originate from your backend server — never directly from a browser, mobile app, or public script.

API Key Management

API keys are managed through the Xelurel AI dashboard. Each key is scoped to your tenant and can be independently labelled, rotated, and revoked.

Creating a key

1. Open the dashboard
Navigate to your Xelurel AI dashboard and click API Keys in the top navigation.

2. Click Create key
Choose a label (e.g. "Production backend"), select the environment (test or live), and confirm.

3. Copy your key now
The full key is shown only once, at creation time. Copy it immediately and store it in your secrets manager or environment variable store. Xelurel AI retains only the last 4 characters for identification.

Key security practices

Practice | Detail
Use separate keys per environment | Never use a live key in development. Keep test and production keys strictly separated.
Store in environment variables | Never hardcode keys in source code. Use process.env.XELURELAI_API_KEY or your secrets manager.
Rotate on suspected compromise | Revoke the key immediately from the dashboard and create a replacement. Revocation is instant.
Label keys meaningfully | Use labels like prod-backend-v2 so you can identify and revoke specific keys without disruption.
Rate limits are enforced per API key as well as per tenant and per IP. Creating multiple keys does not increase your overall rate limit allocation — limits are shared across your tenant.

POST /api/v1/assess

The core endpoint. Send your AI's generated output and receive a governance decision.

POST /api/v1/assess
Assess an AI-generated output

Request body

Field | Type | Required | Description
prompt | string | required | The input sent to your AI model. Max 50,000 characters. Used for semantic overlap checks and logged as a hash.
output | string | required | The AI-generated output to assess. Max 50,000 characters. This is the content that will reach your user if allowed.
use_case | string | optional | Content type for policy and threshold selection. Defaults to general. See Use Case Types.
model | string | optional | The model name that generated the output (e.g. "gpt-4o", "claude-3-5-sonnet"). Logged for audit purposes only.
context | object | optional | Additional metadata. Can include session_id, patient_id, or any key-value pairs. Not stored in raw form.
policy_id | string | optional | Override the auto-selected policy for this assessment. Useful for multi-policy tenants.

Response

Field | Type | Description
decision_id | string (uuid) | Unique identifier for this governance decision. Store this — it is your audit receipt.
tenant_id | string | Your tenant identifier.
decision | string | The governance decision: allow, review, or block.
risk_score | integer (0–100) | Aggregate risk score normalised to 0–100. Suitable for display.
risk_score_normalized | float (0.0–1.0) | Raw risk score. Used for threshold comparisons. Matches policy threshold values directly.
reasons | string[] | Human-readable list of rules that triggered. Show these to reviewers — they explain why the decision was made.
policy_id | string | The policy that governed this assessment.
policy_version | string | The exact policy version applied. Immutably linked to this decision record.
api_key_env | string | test or live, matching the key used.

Decision States

Every assessment returns one of three decisions. Your integration must handle all three — never assume only allow will be returned.

Decision | risk_score_normalized | Meaning | Required action
allow | 0.00 – 0.30 | Output passed all policy rules within acceptable thresholds. | Safe to deliver to end user. Store the decision_id on your record.
review | 0.31 – 0.69 | Output triggered one or more risk rules. Requires a human decision before delivery. | Do not auto-publish. Hold the output and route to your human review queue. Record reviewer action.
block | 0.70 – 1.00 | Output triggered high-weight rules. Risk exceeds automatic review threshold. | Suppress the output entirely. Inform the user that manual input is required. Log the decision_id regardless.
Never auto-publish a review decision. The entire compliance value of Xelurel AI is that review decisions require a human to approve, edit, or reject them. Auto-publishing defeats the audit trail and removes the human accountability the system is designed to prove.

Risk Scoring

The risk score is an aggregate of all rules that triggered during assessment. Each rule in your active policy carries a weight between 0 and 1. When a rule triggers, its weight is added to the running score. The final score is capped at 1.0 before thresholds are applied.

The response returns two representations: risk_score (0–100, integer, for display) and risk_score_normalized (0.0–1.0, float, matches policy threshold values directly).

json
// Example: two rules triggered
// DOSAGE_DETECTED (weight: 0.4) + LOW_SEMANTIC_OVERLAP (weight: 0.3)
{
  "risk_score": 70,               // display value (0–100)
  "risk_score_normalized": 0.7,   // 0.4 + 0.3 = 0.70
  "decision": "block",            // 0.70 exceeds reviewMax threshold
  "reasons": ["contains medication dosage", "output may not relate to prompt"]
}
Risk score thresholds are configurable per tenant through the dashboard policy editor. Changes to thresholds take effect immediately on the next assessment after publishing.

Rate Limits

Xelurel AI enforces three independent rate limits on /api/v1/assess. All three must pass for the request to proceed. When a limit is exceeded, the API returns 429 with a Retry-After header.

Limit type | Limit | Scope
Tenant | 120 / min | All requests across all API keys for your tenant
API key | 60 / min | Requests from a single API key
IP | 60 / min | Requests from a single originating IP address

Handling 429 responses

typescript
async function assessWithRetry(payload, retries = 3) {
  const res = await fetch('https://api.xelurel.com/v1/assess', {
    method: 'POST',
    headers: {
      'x-api-key': process.env.XELURELAI_API_KEY,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(payload),
  });

  if (res.status === 429) {
    if (retries === 0) throw new Error("rate limited");
    const wait = parseInt(res.headers.get('Retry-After') ?? '2') * 1000;
    await new Promise(r => setTimeout(r, wait));
    return assessWithRetry(payload, retries - 1);
  }

  return res.json();
}

JavaScript SDK

The official SDK wraps the assess API with typed responses, automatic error handling, and retry-after support. Zero dependencies — works in Node.js 18+ and modern bundlers.

Installation

bash
npm install @xelurelai/sdk

Initialisation

typescript
import { XelurelAI } from '@xelurelai/sdk';

// Instantiate once and reuse across your application
const xel = new XelurelAI({
  apiKey: process.env.XELURELAI_API_KEY, // required
  baseUrl: "https://app.xelurel.com",    // optional — defaults to this
  timeout: 30_000,                       // optional — ms, default 30,000
});

client.assess(params)

Param | Type | Required | Description
prompt | string | required | The input sent to your AI model.
output | string | required | The AI-generated text to evaluate.
use_case | string | optional | Use-case hint for policy routing, e.g. medical_note, legal_draft.
policy_id | string | optional | Override the policy to evaluate against.
model | string | optional | Model name — logged for audit, not used in scoring.

Full usage example

typescript
import { XelurelAI, XelurelAIError } from '@xelurelai/sdk';

const xel = new XelurelAI({ apiKey: process.env.XELURELAI_API_KEY });

try {
  const result = await xel.assess({
    prompt: userMessage,
    output: aiResponse,
    use_case: "customer_support",
    model: "gpt-4o",
  });

  if (XelurelAI.isAllowed(result)) {
    return deliverToUser(result);
  }

  if (XelurelAI.needsReview(result)) {
    await queueForReview({
      output: aiResponse,
      decisionId: result.decision_id,
      reasons: result.reasons,
    });
    return { status: "pending", decisionId: result.decision_id };
  }

  // XelurelAI.isBlocked(result) — suppress the output
  return { status: "blocked", reasons: result.reasons };
} catch (err) {
  if (err instanceof XelurelAIError && err.code === "rate_limited") {
    // retry after err.retryAfterMs
  }
  throw err;
}

Static helpers

Method | Returns | Description
XelurelAI.isAllowed(result) | boolean | True when decision === "allow". Safe to deliver output.
XelurelAI.needsReview(result) | boolean | True when decision === "review". Route to human review queue.
XelurelAI.isBlocked(result) | boolean | True when decision === "block". Suppress output entirely.

XelurelAIError

All API errors throw a XelurelAIError with the following properties:

message (string): Human-readable error description.
status (number | null): HTTP status code, if the request reached the server.
code (string | null): Machine-readable code: rate_limited, timeout, network_error.
retryAfterMs (number | null): Set on rate_limited — milliseconds to wait before retrying.
The SDK is ESM-first (import) with a CommonJS wrapper (require) for legacy Node.js projects. TypeScript types are bundled — no @types/ package needed.

Node.js / TypeScript

Standard integration pattern for any AI pipeline that produces text output before delivery to a user.

User input → AI model → Generated output → Xelurel AI /assess → allow / review / block → End user

Using the SDK

typescript
import { XelurelAI } from '@xelurelai/sdk';

const xel = new XelurelAI({ apiKey: process.env.XELURELAI_API_KEY });

async function assessOutput(prompt: string, output: string, useCase: string) {
  const result = await xel.assess({ prompt, output, use_case: useCase });

  switch (result.decision) {
    case 'allow':
      return { status: 'ready', output, decision_id: result.decision_id };
    case 'review':
      await queueForReview({ output, decision_id: result.decision_id, reasons: result.reasons });
      return { status: 'pending_review', decision_id: result.decision_id };
    case 'block':
      return { status: 'blocked', decision_id: result.decision_id, reasons: result.reasons };
  }
}

Raw fetch (no SDK)

typescript
async function assessOutput(prompt: string, output: string, useCase: string) {
  const res = await fetch('https://api.xelurel.com/v1/assess', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-api-key': process.env.XELURELAI_API_KEY!,
    },
    body: JSON.stringify({ prompt, output, use_case: useCase }),
  });

  if (!res.ok) throw new Error(`Xelurel AI error ${res.status}`);

  const { decision_id, decision, reasons } = await res.json();
  // handle decision as above...
}

Python

python
import os

import requests


def assess_output(prompt: str, output: str, use_case: str = "general") -> dict:
    response = requests.post(
        "https://api.xelurel.com/v1/assess",
        headers={
            "x-api-key": os.environ["XELURELAI_API_KEY"],
            "Content-Type": "application/json",
        },
        json={
            "prompt": prompt,
            "output": output,
            "use_case": use_case,
        },
        timeout=10,
    )
    response.raise_for_status()
    data = response.json()

    match data["decision"]:
        case "allow":
            return {"status": "ready", "output": output, "decision_id": data["decision_id"]}
        case "review":
            queue_for_review(output, data)
            return {"status": "pending_review", "decision_id": data["decision_id"]}
        case "block":
            return {"status": "blocked", "decision_id": data["decision_id"], "reasons": data["reasons"]}

Handling Decisions in Your UI

What your user-facing interface should do for each decision state.

allow — deliver the output

tsx
// Output passed governance — present normally
if (decision === 'allow') {
  return <OutputViewer output={output} governanceId={decision_id} />;
}

review — hold and route to reviewer

tsx
// Output flagged — must not be auto-published
if (decision === 'review') {
  return (
    <ReviewBanner
      reasons={reasons} // show why it was flagged
      output={output}
      onApprove={() => submitOutput(output, decision_id, 'approved')}
      onEdit={() => openEditor(output, decision_id)}
      onReject={() => discardOutput(decision_id)}
    />
  );
}

block — suppress and prompt manual input

tsx
// Output blocked — do not show AI content to user
if (decision === 'block') {
  return (
    <BlockedNotice
      message="This output could not be automatically generated safely."
      reasons={reasons}
      onManualEntry={() => openManualEditor()}
      governanceId={decision_id} // log even blocked attempts
    />
  );
}
Always store decision_id on your underlying record — even for allow decisions. This links every output to the governance decision that permitted it. Your audit trail requires this.

Use Case Types

The use_case field tells Xelurel AI what kind of content it is assessing. Different use cases apply different policy thresholds — clinical content uses stricter thresholds than general content.

use_case value | Policy applied | Notes
medical_note | healthcare_default | SOAP notes, visit summaries, clinical documentation. Strictest thresholds — dosage, diagnosis language, allergy flags all apply.
discharge_summary | healthcare_default | Discharge documentation. Same policy as medical_note.
patient_instructions | healthcare_default | Patient-facing instructions. Dosage and allergy rules apply.
legal_draft | law_default | Legal documents and contract analysis. Flags definitive advice language and privileged terms.
general | general_default | Default fallback. Standard thresholds. Appropriate when no specific use case applies.
You can define additional use cases and per-use-case threshold overrides through your policy configuration in the dashboard — no code changes needed.

How Policies Work

A policy is a versioned set of rules and thresholds that governs how AI outputs are assessed. Every tenant has their own isolated policies. Every assessment records the exact policy version that was active — the audit trail is immutable even when policies change.

Policies use semantic versioning (1.0.0, 1.0.1, etc.). You work in a draft version that you can edit freely, then publish it as a new immutable version. Past decisions remain permanently linked to the version that governed them.

Policy lifecycle

1. Edit the draft
Navigate to Policies in your dashboard. All editing happens in the draft version — changes do not affect live assessments until you publish.

2. Test your changes
Use your test API key to run assessments and verify the draft rules behave as expected before promoting to production.

3. Publish
Clicking Publish creates a new immutable version and immediately activates it for all future assessments under that policy.

4. Rollback if needed
If a published version causes unexpected behaviour, you can roll back to any previous version from the dashboard. Past decisions remain linked to the version that governed them.
Policy changes take effect immediately after publishing. There is no cache delay — every assessment fetches the active policy version directly.

Rule Types

Each policy contains an array of rules. When a rule triggers, its weight is added to the risk score. Rules are evaluated in order with an early exit once the block threshold is exceeded.

regex
Tests a regular expression against the target text. Use for pattern detection — medication dosages, critical values, specific terminology. Max 300 character pattern.
contains_any
Triggers if the target contains any of the provided strings. Case-insensitive substring match. Efficient for keyword lists.
length_lt
Triggers if the target text is shorter than min characters. Catches suspiciously short or empty outputs that may indicate a model failure.
token_overlap_lt
Triggers if the semantic token overlap between prompt and output is below minOverlap. Catches outputs that do not appear to address their prompt.
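To make the aggregation model concrete, here is a minimal sketch of weighted rule evaluation with the early exit described above. This is an illustration of the scoring behaviour, not the actual Xelurel AI engine: the Rule shape, the simplified test predicate, and the 0.70 blockThreshold default are assumptions for the example.

```typescript
interface Rule {
  id: string;
  weight: number;                  // 0–1, added to the score when triggered
  test: (text: string) => boolean; // simplified trigger check for this sketch
  reason: string;
}

// Sum the weights of triggered rules, cap the total at 1.0, and stop
// evaluating once the score reaches the block threshold (early exit).
function scoreOutput(rules: Rule[], output: string, blockThreshold = 0.70) {
  let score = 0;
  const reasons: string[] = [];
  for (const rule of rules) {
    if (rule.test(output)) {
      score = Math.min(1.0, score + rule.weight);
      reasons.push(rule.reason);
      if (score >= blockThreshold) break; // no need to evaluate further rules
    }
  }
  return { score, reasons };
}
```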

Rule targets

target | Description
output | Evaluate the rule against the AI-generated output only. Most rules should target output.
prompt | Evaluate against the input prompt only. Useful for detecting sensitive query patterns.
prompt_output | Evaluate against the concatenation of prompt and output. Used for overlap and relevance checks.

Example rule — regex

json
{
  "id": "DOSAGE_DETECTED",
  "type": "regex",
  "target": "output",   // "prompt" | "output" | "prompt_output"
  "pattern": "\\b\\d+(\\.\\d+)?\\s*(mg|ml|mcg|units|tablets?)\\b",
  "flags": "i",         // case-insensitive
  "weight": 0.4,        // added to risk_score_normalized if triggered
  "reason": "contains medication dosage"
}
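The pattern in this rule can be exercised directly as a JavaScript regex literal to see what it does and does not match:

```typescript
// Same pattern as the rule above, written as a regex literal with the i flag.
const dosage = /\b\d+(\.\d+)?\s*(mg|ml|mcg|units|tablets?)\b/i;

dosage.test("Patient prescribed 500mg amoxicillin twice daily"); // matches
dosage.test("take 2.5 ml every 6 hours");                        // matches
dosage.test("Follow up with your physician next week");          // no match
```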

Example rule — contains_any

json
{
  "id": "ALLERGY_MENTION",
  "type": "contains_any",
  "target": "output",
  "any": ["allerg", "anaphylax", "epipen"],
  "weight": 0.3,
  "reason": "contains allergy reference requiring review"
}
Regex rules must not contain nested quantifiers (e.g. (a+)+) to prevent ReDoS attacks. Patterns over 300 characters are rejected. Both checks are enforced at policy save time.

Thresholds

Thresholds define the risk_score_normalized (0.0–1.0) boundaries for each decision. You can set policy-wide thresholds and override them for specific use cases.

json
// Policy-level thresholds (defaults)
"thresholds": {
  "allowMax": 0.30,   // 0.00 – 0.30 → allow
  "reviewMax": 0.69   // 0.31 – 0.69 → review | 0.70+ → block
},

// Per use_case override — stricter for medical_note content
"useCaseOverrides": {
  "medical_note": {
    "thresholds": {
      "allowMax": 0.19,   // tighter allow band
      "reviewMax": 0.59   // lower block threshold for clinical content
    }
  }
}
```
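Under these thresholds, mapping a normalized score to a decision is a band comparison. A minimal sketch, assuming the allowMax/reviewMax semantics shown above (this is illustrative, not SDK code):

```typescript
interface Thresholds { allowMax: number; reviewMax: number; }

// score <= allowMax → allow; score <= reviewMax → review; otherwise block.
function decide(score: number, t: Thresholds = { allowMax: 0.30, reviewMax: 0.69 }) {
  if (score <= t.allowMax) return "allow";
  if (score <= t.reviewMax) return "review";
  return "block";
}

decide(0.40);                                      // "review"
decide(0.70);                                      // "block"
decide(0.25);                                      // "allow" under the defaults
decide(0.25, { allowMax: 0.19, reviewMax: 0.59 }); // "review" under the medical_note override
```

Note how the same score of 0.25 yields a different decision under the stricter medical_note override.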

Dashboard Overview

The Xelurel AI dashboard is your operational interface for governance. It provides real-time visibility into all AI decisions made through your tenant, a policy editor, analytics, and audit export.

Access the dashboard at /dashboard after signing in with your Xelurel AI account.

Decisions
Real-time log of every assessment. Filter by decision type, date range, or API key. Click any decision to see full details including the reasons, risk score, triggered rules, and the complete audit event log.
Policies
View, edit, publish, and roll back your governance policies. All edits happen in a safe draft version. Publishing creates an immutable version immediately applied to all new assessments.
Analytics
Aggregated stats over configurable time windows. Decision distribution (allow / review / block), top triggered rules, violation reasons, and risk score trends over time.

Decision Log

The decision log is the real-time record of every assessment made through your tenant. Each row represents one call to /api/v1/assess.

What you can see per decision

Field | Description
Decision ID | UUID. The audit receipt stored on your system records.
Timestamp | Server-side ISO timestamp. Tamper-evident.
Decision | allow / review / block — the governance outcome.
Risk score | Aggregate score at time of assessment (0–100).
Reasons | Human-readable list of rules that triggered and why.
Rules triggered | Exact rule IDs from your active policy that fired.
Policy / version | The exact policy version that governed this decision.
API key | Last 4 of the key used — environment (test / live) and label.
Use case | The use_case value sent in the request.
Model | The client model reported in the request, if provided.
Review status | Whether a human reviewer has acted on this decision.
Audit events | Full chronological log of every action taken on this decision.

Review Actions

Decisions with status review can receive human review actions from your team directly in the dashboard. All review actions are appended to the immutable audit log with the reviewer's identity and timestamp.

Action | Resulting status | When to use
Approve | allow | Reviewer has examined the output and determined it is safe to deliver. Output can now be sent to the end user.
Reject | block | Reviewer has determined the output should not be delivered. Adds reviewer identity and reason to the audit trail.
Send for review | review | Escalate to another team member. Decision remains in review status with an audit note.
Every review action records the reviewer's user ID, email, timestamp, and any note they add. This is the human accountability chain that makes Xelurel AI audit-ready for enterprise and regulatory use cases.

Policies in the Dashboard

The Policies tab lets you view and edit your governance configuration without writing code.

Policy list

Your tenant starts with pre-seeded policies based on the industry you selected at registration:

Policy ID | Industry | Default rules included
general_default | All | OUTPUT_TOO_SHORT, LOW_SEMANTIC_OVERLAP
healthcare_default | Healthcare | DOSAGE_DETECTED, OUTPUT_TOO_SHORT, LOW_SEMANTIC_OVERLAP, ALLERGY_MENTION
law_default | Legal | DEFINITIVE_LEGAL_ADVICE, PRIVILEGED_TERMS, OUTPUT_TOO_SHORT, LOW_SEMANTIC_OVERLAP

Editing a policy

Select a policy and click Edit draft. You can add, remove, and reorder rules, adjust weights and thresholds, and add use-case overrides. All changes are saved to the draft version only — live assessments continue using the published version until you explicitly publish.

Published policy versions are immutable. Once a version is published, it cannot be modified — only superseded by a new published version. This is by design: every past decision must remain permanently linked to the exact policy that governed it.

Analytics

The Analytics tab provides aggregated governance metrics over a configurable time window (default: 7 days).

Metric | Description
Total decisions | All assessments in the selected window.
Decision split | Count and percentage of allow / review / block decisions.
Flagged rate | Percentage of decisions that were review or block — a proxy for output risk rate.
Top rules | The rules that triggered most frequently — useful for tuning policy weights.
Top reasons | Most common human-readable violation reasons, sorted by frequency.
Risk trend | Daily risk score trend over the selected window. Useful for detecting model drift or prompt changes.

Audit Export

Export your complete decision log as CSV or JSON for compliance reporting, external audit, or integration with your SIEM or data warehouse. Export is available from the dashboard and via the API.

GET /api/admin/audit/export
Export decision log

Query parameters

Parameter | Type | Required | Description
tenantId | string | required | Your tenant ID.
format | string | optional | csv (default) or json.
limit | integer | optional | Max records to return. Default: 2000. Max: 10,000.
fromIso | string | optional | ISO 8601 start date filter (inclusive).
toIso | string | optional | ISO 8601 end date filter (inclusive).
decisionId | string | optional | Export a single decision by ID. Ignores limit/date filters.
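A request to this endpoint might be assembled as below. The helper returns a relative path because the host depends on your deployment; authentication is assumed to use the same x-api-key header as /api/v1/assess, and the tenantId value is a placeholder.

```typescript
// Build the export path from the query parameters documented above.
function buildExportUrl(params: {
  tenantId: string;
  format?: "csv" | "json";
  limit?: number;
  fromIso?: string;
  toIso?: string;
}): string {
  const qs = new URLSearchParams({ tenantId: params.tenantId });
  if (params.format) qs.set("format", params.format);
  if (params.limit) qs.set("limit", String(params.limit));
  if (params.fromIso) qs.set("fromIso", params.fromIso);
  if (params.toIso) qs.set("toIso", params.toIso);
  return `/api/admin/audit/export?${qs}`;
}
```

You would then GET this path with your x-api-key header and stream the CSV or JSON response into your compliance tooling.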

CSV fields exported

Every exported row includes: decision_id, timestamp, use_case, model_used, api_key_env, policy_id, policy_version, decision, reviewed_decision, review_status, reviewed_by, reviewed_at_iso, review_note, risk_score, risk_score_normalized, rules_triggered, reasons, prompt_hash, output_hash, audit_events_count, audit_log.

For enterprise procurement or regulatory audits, export the full JSON format. It includes the complete audit event log per decision — every assessment, review action, reviewer identity, and timestamp, in a structured format that can be ingested directly into audit tooling.

Request Schema

typescript
interface AssessRequest {
  prompt: string;                    // required — max 50,000 chars
  output: string;                    // required — max 50,000 chars
  use_case?: string;                 // default: "general"
  model?: string;                    // model name — logged for audit
  context?: Record<string, unknown>; // optional metadata
  policy_id?: string;                // override policy selection
}

Response Schema

typescript
interface AssessResponse {
  decision_id: string;           // uuid — store this on your record
  tenant_id: string;
  decision: 'allow' | 'review' | 'block';
  risk_score: number;            // 0–100 integer — for display
  risk_score_normalized: number; // 0.0–1.0 float — matches threshold values
  reasons: string[];             // human-readable rule triggers
  policy_id: string;
  policy_version: string;
  api_key_id: string | null;
  api_key_env: 'test' | 'live' | null;
  api_key_last4: string | null;
}

Errors

All error responses return JSON with an error field describing the problem.

Status | Error | Cause
400 | prompt and output are required | Request body missing prompt or output.
400 | prompt and output must be strings | Non-string value passed for prompt or output.
400 | prompt and output must each be under 50000 characters | Input exceeds the maximum length limit.
401 | missing api key | x-api-key header absent.
401 | invalid api key | Key not found, revoked, or inactive.
429 | rate_limited | Limit exceeded. Respect the Retry-After response header (seconds).
500 | internal server error | Unexpected server error. Retry with exponential backoff.
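For 500 responses, "exponential backoff" means doubling the wait between attempts. A minimal delay calculator, where the base and cap values are illustrative rather than recommended by Xelurel AI:

```typescript
// Delay grows 500ms, 1s, 2s, 4s, ... capped at maxMs.
function backoffDelayMs(attempt: number, baseMs = 500, maxMs = 30_000): number {
  return Math.min(maxMs, baseMs * 2 ** attempt);
}

backoffDelayMs(0);  // 500
backoffDelayMs(3);  // 4000
backoffDelayMs(10); // capped at 30000
```

In production you would typically add random jitter to each delay so that many clients retrying at once do not synchronise.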

Error response shape

json
{ "error": "prompt and output are required" }

Rate limit response shape

json
{
  "error": "rate_limited",
  "retryAfterMs": 15000   // milliseconds until your window resets
}

Audit Trail

Every call to /api/v1/assess creates an immutable record in Xelurel AI's decision log. Records cannot be modified or deleted. The prompt and output are never stored in raw form — they are stored as HMAC-SHA256 hashes, scoped to your tenant, allowing cryptographic verification without retaining potentially sensitive content.
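The hash scheme is what makes later verification possible: recompute the HMAC of the content you hold and compare it to the stored hash. The exact tenant-scoped key derivation Xelurel AI uses is not documented here, so the tenantKey below is a hypothetical stand-in; the general HMAC-SHA256 computation looks like this in Node.js:

```typescript
import { createHmac } from "node:crypto";

// Hypothetical: the real tenant-scoped key derivation is internal to Xelurel AI.
function hmacSha256Hex(tenantKey: string, content: string): string {
  return createHmac("sha256", tenantKey).update(content, "utf8").digest("hex");
}

// Produces a 64-character hex digest that could be compared to a stored outputHash.
const h = hmacSha256Hex(
  "tenant-secret-key",
  "Patient prescribed 500mg amoxicillin twice daily for 7 days."
);
```

Because the key is tenant-scoped, identical content hashed for two different tenants yields different digests, so hashes cannot be correlated across tenants.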

What is logged per decision

Field | Description
id | UUID. Your audit receipt — store this on every record your system produces.
tenantId | Your tenant. All decisions are fully isolated per tenant.
promptHash | Tenant-scoped HMAC-SHA256 of the prompt. Proves what context was provided without storing PHI.
outputHash | Tenant-scoped HMAC-SHA256 of the output. Proves exactly what was assessed.
hashVersion | Hash scheme version, for forward compatibility.
decision | allow / review / block — the governance outcome.
riskScore | The aggregate score (0–100) at time of assessment.
riskScoreNormalized | The raw score (0.0–1.0) used for threshold comparison.
reasons | Human-readable rule triggers at time of assessment.
rulesTriggered | Rule IDs from your policy that fired.
policyId + policyVersion | Exact policy snapshot that governed this decision. Permanently linked.
engineVersion | Assessment engine used (policy_v1 or assess_v1 fallback).
apiKeyId / apiKeyLast4 | Which key was used. Last 4 characters only — the full key is never stored.
clientModel | Model name reported by the caller, if provided.
createdAt | Server-side timestamp. Set by Firestore — cannot be spoofed by the caller.
reviewStatus | Current review state: null, approved, rejected, or sent_for_review.
reviewedBy / reviewedByEmail | Identity of the reviewer who acted on this decision.
reviewedAtIso | ISO timestamp of the review action.
reviewNote | Optional note the reviewer attached to their action.
auditLog | Append-only array of every event on this decision — assessment, review actions, escalations. Capped at 200 entries.
When an auditor, hospital procurement team, or regulator asks "how do you ensure AI outputs don't reach users without oversight?" — you open the Xelurel AI dashboard or export the audit log and show them this record. Every output. Every decision. Every reviewer. Timestamped and immutable.