Compliance#
swarm is built for regulated ML. Every pipeline produces regulator-format evidence. Every permission decision is auditable. Every model promotion has an attributable approver.
This section maps swarm features to specific regulatory frameworks.
- RBI FREE-AI (India BFSI) – Reserve Bank of India's 7-Sutra AI governance
- HIPAA (US healthcare) – 45 CFR § 164.312 Technical Safeguards
- EU AI Act – Annex III high-risk conformity
- SOC 2 controls – Security / Availability / Confidentiality
- Reading the audit PDF – section-by-section
What "compliance by default" means#
Every pipeline run produces (whether you asked for it or not):
- permission_denials audit log
- run_events append-only journal
- Per-agent conversation JSONL
- Pipeline artefacts with SHA-256 pin
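The SHA-256 pin on artefacts is what makes the evidence chain tamper-evident: re-hash the file and compare against the recorded digest. A minimal sketch of that check (the function names here are illustrative, not swarm's API):

```python
import hashlib
from pathlib import Path

def sha256_pin(path: Path) -> str:
    """Compute the SHA-256 digest used to pin a pipeline artefact."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        # Stream in chunks so large model files don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_pin(path: Path, expected: str) -> bool:
    """Re-hash the artefact and compare against the recorded pin."""
    return sha256_pin(path) == expected
```

Any mismatch means the artefact changed after the run that produced it, which is exactly what an auditor needs to rule out.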
Adding a compliance profile (rbi_free_ai / hipaa / eu_ai_act_high_risk) adds:
- Profile-specific guardrail agents (fairness auditor, PHI redactor, drift monitor)
- Profile-specific permission policies
- Required artefacts (model card, SHAP, fairness report)
- Profile-specific audit PDF template
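Conceptually, a profile is a bundle of guardrail agents, policies, and required artefacts keyed by profile ID. The sketch below uses the profile IDs from this page, but the mapping values and helper are hypothetical, shown only to illustrate how "required artefacts" can gate a run:

```python
# Hypothetical sketch: what each compliance profile bundles.
# Profile IDs match the docs; agent and artefact names are illustrative.
PROFILES = {
    "rbi_free_ai": {
        "guardrail_agents": ["fairness_auditor", "drift_monitor"],
        "required_artefacts": ["model_card", "shap_summary", "fairness_report"],
    },
    "hipaa": {
        "guardrail_agents": ["phi_redactor"],
        "required_artefacts": ["model_card", "access_log_summary"],
    },
    "eu_ai_act_high_risk": {
        "guardrail_agents": ["drift_monitor", "fairness_auditor"],
        "required_artefacts": ["model_card", "shap_summary", "conformity_pack"],
    },
}

def missing_artefacts(profile: str, produced: set[str]) -> set[str]:
    """Return required artefacts the run has not yet produced."""
    return set(PROFILES[profile]["required_artefacts"]) - produced
```

A run under a profile would fail promotion until `missing_artefacts` comes back empty.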
What swarm does NOT claim#
We are honest about the limits:
- swarm is not SOC 2 Type II certified yet. Target: month 9 post-launch.
- swarm does not hold a HIPAA BAA. Target: month 12.
- swarm has no EU AI Act notified-body conformity assessment. Target: month 15.
We are the evidence-production platform, not the audit certifier. The customer's auditor performs the certification; swarm makes producing the evidence cheap.
The buyer-facing pitch#
Two sentences:
Every agent action in swarm is permission-checked against a unified rule engine. Every denial is queryable in SQL with rule-source attribution – the exact shape your regulator understands.
That's why BFSI + healthcare + EU-AI-Act customers pay.
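"Queryable in SQL with rule-source attribution" means each denial row carries the rule that triggered it. The schema and column names below are assumptions for illustration (using an in-memory SQLite database), not swarm's actual log format:

```python
import sqlite3

# Illustrative schema for the permission_denials audit log.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE permission_denials (
        ts TEXT, agent TEXT, action TEXT, rule_source TEXT, reason TEXT
    )
""")
conn.execute(
    "INSERT INTO permission_denials VALUES (?, ?, ?, ?, ?)",
    ("2025-01-07T10:15:00Z", "feature_gen", "write:prod_table",
     "hipaa/phi_redactor.yaml#L12", "PHI column detected"),
)

# The kind of query an auditor runs: every denial, with the exact
# rule source that produced it.
rows = conn.execute("""
    SELECT ts, agent, action, rule_source, reason
    FROM permission_denials
    ORDER BY ts
""").fetchall()
for row in rows:
    print(row)
```

Because the rule source is a column, "which policy blocked this and why" is a one-line query rather than a log-spelunking exercise.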
Next#
- Start with the profile matching your jurisdiction
- Then Reading the audit PDF to understand the output format
- Then Concepts: Permissions & audit for the engine model