
# RBI FREE-AI (India BFSI)

Reserve Bank of India's Framework for Responsible and Ethical Enablement of AI (FREE-AI) — 7 Sutras (guiding principles) for AI governance in Indian banking, NBFCs, and payment systems.

swarm's `rbi_free_ai` compliance profile maps each Sutra to concrete platform features with inspection-ready evidence.

## The 7 Sutras — complete mapping

### Sutra 1 — Trust-building

Regulatory intent: Users and regulators must be able to trust AI-driven decisions. Trust comes from auditability, transparency of process, and a clear record of who decided what.

swarm enforcement:

| Feature | Evidence |
| --- | --- |
| `audit_trail_security_events` (invariant flag) | Every DENY decision persists to the SQLite `permission_denials` table regardless of other flags |
| `run_events` append-only journal | Tamper-evident record of every tool call, LLM call, approval gate, and decision |
| Audit PDF with SHA-256 manifest | Hash-pinned evidence bundle; `swarm audit verify` proves it is untampered |
| OIDC SSO with per-action user attribution | Every action names the responsible human |
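The tamper-evidence property of an append-only journal can be sketched with a hash chain, where each entry pins the hash of its predecessor. This is a minimal illustration of the idea, not swarm's actual storage format:

```python
import hashlib
import json

def append_event(journal: list[dict], event: dict) -> None:
    """Append an event, chaining it to the previous entry's hash."""
    prev_hash = journal[-1]["entry_hash"] if journal else "0" * 64
    body = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    journal.append({"event": event, "prev_hash": prev_hash, "entry_hash": entry_hash})

def verify_chain(journal: list[dict]) -> bool:
    """Recompute every link; any rewritten entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in journal:
        body = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["entry_hash"] != expected:
            return False
        prev_hash = entry["entry_hash"]
    return True

journal: list[dict] = []
append_event(journal, {"kind": "tool_call", "tool": "deploy_serving", "user": "alice"})
append_event(journal, {"kind": "approval", "gate": "deploy", "user": "bob"})
assert verify_chain(journal)

journal[0]["event"]["user"] = "mallory"   # retroactive edit
assert not verify_chain(journal)          # the chain detects the tampering
```

Rewriting any historical entry invalidates every hash after it, which is what makes an append-only journal inspection-ready evidence rather than just a log.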

### Sutra 2 — Human oversight

Regulatory intent: Humans must be in the loop for consequential decisions. AI cannot unilaterally deploy, promote, or authorize.

swarm enforcement:

| Feature | Evidence |
| --- | --- |
| `require_approval` decorators | 6 HITL gate types: deploy, promote, approve_retrain, approve_export, approve_config, approve_schema_change |
| ApprovalStore with user attribution | Every approval records approver, timestamp, and comment |
| Permission engine ASK tier | ASK-tier rules prompt the operator before tool execution |
| Dashboard pending-approvals surface | Visual queue of decisions awaiting action |
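The gating behaviour of a `require_approval` decorator can be sketched as follows. The in-memory store, exception type, and approval-record fields here are illustrative assumptions, not swarm's real API:

```python
from functools import wraps

# Hypothetical in-memory approval store: gate name -> approval record.
APPROVALS: dict[str, dict] = {}

class ApprovalRequired(Exception):
    """Raised when an action reaches a gate with no recorded human approval."""

def require_approval(gate: str):
    """Block the wrapped action until a human records an approval for `gate`."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            record = APPROVALS.get(gate)
            if record is None:
                raise ApprovalRequired(f"gate '{gate}' is awaiting human approval")
            # Attribution: the approver travels with the action's audit record.
            print(f"{gate} approved by {record['approver']}: {record['comment']}")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@require_approval("promote")
def promote_challenger(model_id: str) -> str:
    return f"promoted {model_id}"

try:
    promote_challenger("credit-risk-v7")      # blocked: no human sign-off yet
except ApprovalRequired as exc:
    print(exc)

APPROVALS["promote"] = {"approver": "cro@bank.example", "comment": "fairness report reviewed"}
print(promote_challenger("credit-risk-v7"))   # now proceeds, with attribution
```

The point of the pattern is that the AI path physically cannot reach the consequential action without a named human decision on record.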

### Sutra 3 — Safety & robustness

Regulatory intent: Deployed models must not silently degrade. Drift, data-quality issues, and adversarial inputs must be detected.

swarm enforcement:

Feature Evidence
detect_drift tool + cron drift_check task kind Scheduled distribution drift monitoring with alert thresholds
smoke_test agent + tool Pre-deployment smoke-test suite
Shadow traffic (shadow_predictions table) Challenger observed in parallel before promotion
Retention daemon Pre-configured TTLs for PII artefacts
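One common statistic behind this kind of drift check is the Population Stability Index (PSI); the sketch below is illustrative and does not claim to be `detect_drift`'s actual implementation, and the 0.2 alert threshold is a conventional rule of thumb, not a swarm default:

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live feature sample."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0
    def frac(sample):
        counts = [0] * bins
        for v in sample:
            i = min(bins - 1, max(0, int((v - lo) / width)))
            counts[i] += 1
        # Small epsilon avoids log(0) for empty bins.
        return [max(c / len(sample), 1e-6) for c in counts]
    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]          # uniform training distribution
live     = [0.5 + i / 200 for i in range(100)]    # shifted live traffic
score = psi(baseline, live)
print(f"PSI = {score:.3f}")
if score > 0.2:   # common alerting threshold; tune per model
    print("ALERT: distribution drift detected")
```

A scheduled task would compute this per feature against the pinned baseline and raise an alert when any score crosses its threshold.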

### Sutra 4 — Fairness & non-discrimination

Regulatory intent: Models must not discriminate against protected groups. Bias must be measured, documented, and mitigated.

swarm enforcement:

| Feature | Evidence |
| --- | --- |
| `fairness_auditor` agent (injected by the `rbi_free_ai` profile) | Runs `fairness_audit` on every compliance-enabled pipeline |
| `fairness_audit` tool (fairlearn `MetricFrame`) | Demographic parity, equalized odds, equal opportunity; per-group metrics |
| `reports/fairness_audit.json` artefact | Mandatory output; the pipeline fails if it is not produced |
| Permission rule: deployment gated on fairness-audit review | RBI-001 in `permission_policies_rbi.yaml` |
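The demographic parity metric reported here can be computed by hand; this plain-Python sketch shows what the number means (the real tool uses fairlearn's `MetricFrame`, and the loan data below is invented for illustration):

```python
def demographic_parity_difference(y_pred, groups):
    """Largest gap in positive-prediction (approval) rate between any two groups."""
    tallies: dict[str, tuple[int, int]] = {}
    for pred, g in zip(y_pred, groups):
        n, pos = tallies.get(g, (0, 0))
        tallies[g] = (n + 1, pos + pred)
    selection = {g: pos / n for g, (n, pos) in tallies.items()}
    return max(selection.values()) - min(selection.values()), selection

# 1 = loan approved, 0 = rejected, grouped by a protected attribute.
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, per_group = demographic_parity_difference(y_pred, groups)
print(per_group)   # {'A': 0.75, 'B': 0.25}
print(gap)         # 0.5
```

A gap of 0.5 means group A is approved at a 50-percentage-point higher rate than group B, which is exactly the kind of figure the fairness report surfaces per protected attribute.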

### Sutra 5 — Accountability

Regulatory intent: Every AI decision must trace to an accountable party. No blind spots.

swarm enforcement:

| Feature | Evidence |
| --- | --- |
| OIDC SSO with IdP group mapping | User identity authenticated by the enterprise IdP |
| Per-action user logging in `run_events` | Every LLM call, tool call, and approval is tagged with a user |
| `rule_source` column in `permission_denials` | Every denial names the rule (and therefore the policy) that enforced it |
| Immutable audit trail (append-only) | History cannot be rewritten |
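The shape of a denial record carrying rule attribution can be sketched with stdlib `sqlite3`; the column names other than `rule_source` are illustrative, not swarm's actual schema:

```python
import sqlite3

# In-memory sketch of a denial log that attributes every denial to a rule.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE permission_denials (
        ts          TEXT DEFAULT CURRENT_TIMESTAMP,
        user        TEXT NOT NULL,
        tool        TEXT NOT NULL,
        rule_source TEXT NOT NULL   -- which rule (and hence which policy) denied it
    )
""")
con.execute(
    "INSERT INTO permission_denials (user, tool, rule_source) VALUES (?, ?, ?)",
    ("alice", "promote_challenger", "RBI-002 (permission_policies_rbi.yaml)"),
)
for row in con.execute("SELECT user, tool, rule_source FROM permission_denials"):
    print(row)
```

An auditor querying this table can answer "who was denied, doing what, under which policy" without consulting any other system.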

### Sutra 6 — Inclusive innovation

Regulatory intent: Innovation must be accessible to institutions of all sizes. Small NBFCs should not be excluded from AI capability.

swarm enforcement:

| Feature | Evidence |
| --- | --- |
| Open-core (Apache-2.0) platform | No per-user licensing on the core runtime |
| Plugin ecosystem (CC marketplace compatible) | Industry-contributed integrations, no vendor lock-in |
| Kubernetes + Docker Compose | Runs on a ₹30K/month VPS as well as a multi-region enterprise cluster |
| Bring-your-own-LLM support | No mandatory vendor tie-in |

### Sutra 7 — Explainability

Regulatory intent: Model outputs must be explainable — both globally (which features matter?) and locally (why was this specific customer rejected?).

swarm enforcement:

| Feature | Evidence |
| --- | --- |
| `explain_model` tool (SHAP-based) | Global + per-prediction feature importances |
| `reports/shap_explanation.json` artefact | Mandatory output under the `rbi_free_ai` profile |
| `generate_model_card` tool | Human-readable card per model |
| `reports/model_card.md` artefact | Algorithm, parameters, training data, performance, limitations, protected attributes |
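SHAP itself needs a trained model and the `shap` library; the intuition behind a global feature importance can be sketched with permutation importance, a simpler related technique. The scoring function and data below are invented stand-ins:

```python
import random

def model(features: dict) -> float:
    """Stand-in scoring function: income dominates the score, age barely matters."""
    return 0.8 * features["income"] + 0.05 * features["age"]

def permutation_importance(model, rows, feature, trials=20, seed=0):
    """Mean absolute score change when one feature's values are shuffled across rows."""
    rng = random.Random(seed)
    base = [model(r) for r in rows]
    deltas = []
    for _ in range(trials):
        values = [r[feature] for r in rows]
        rng.shuffle(values)
        perturbed = [dict(r, **{feature: v}) for r, v in zip(rows, values)]
        deltas.extend(abs(b - model(p)) for b, p in zip(base, perturbed))
    return sum(deltas) / len(deltas)

rows = [{"income": i % 10, "age": (i * 7) % 10} for i in range(50)]
print(f"income importance: {permutation_importance(model, rows, 'income'):.3f}")
print(f"age importance:    {permutation_importance(model, rows, 'age'):.3f}")
```

Shuffling income moves predictions far more than shuffling age, so income ranks as the globally important feature, which is the same question a "which features matter?" SHAP summary answers.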

## The profile's bundled policies

`config/permission_policies_rbi.yaml`:

```yaml
version: "1.0"
description: "RBI FREE-AI Sutras: 7-pillar AI governance"
rules:
  - id: RBI-001
    description: "Require fairness audit before deployment (Sutra 4)"
    when: { tool: deploy_serving }
    behaviour: ask
    reason: "RBI FREE-AI Sutra 4: operator must review fairness audit"
  - id: RBI-002
    description: "SHAP explanation mandatory before promotion (Sutra 7)"
    when: { tool: promote_challenger }
    behaviour: ask
    reason: "RBI FREE-AI Sutra 7: explainability artefact required"
  - id: RBI-003
    description: "Model card mandatory at package time (Sutra 7)"
    when: { tool: package_model }
    behaviour: ask
    reason: "RBI FREE-AI Sutra 7: model card + SHAP required pre-packaging"
  - id: RBI-004
    description: "Drift baseline required post-deploy (Sutra 3)"
    when: { tool: deploy_serving }
    behaviour: ask
    reason: "RBI FREE-AI Sutra 3: drift baseline must be pinned"
```
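How a rule set like this gates a tool call can be sketched as follows; the matching semantics here (first match wins, default allow) are assumptions for illustration, not documented engine behaviour:

```python
# A trimmed, in-Python mirror of the YAML rules above.
RULES = [
    {"id": "RBI-001", "when": {"tool": "deploy_serving"},     "behaviour": "ask"},
    {"id": "RBI-002", "when": {"tool": "promote_challenger"}, "behaviour": "ask"},
]

def evaluate(tool: str, rules=RULES) -> tuple:
    """Return (behaviour, rule_id) for the first rule matching this tool."""
    for rule in rules:
        if rule["when"].get("tool") == tool:
            return rule["behaviour"], rule["id"]
    return "allow", None   # assumed default when no rule matches

print(evaluate("deploy_serving"))    # ('ask', 'RBI-001')
print(evaluate("train_model"))       # ('allow', None)
```

The returned rule id is what ends up in the `rule_source` column when an ASK is answered with a denial, closing the loop between policy and audit trail.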

## What the audit PDF contains (RBI profile)

The bundle follows the RBI's expected model-risk report format:

  1. Cover — institution, model name, date, auditor, SHA-256 pin
  2. Executive summary — in Indian-English format, non-technical
  3. Sutra-by-Sutra evidence — 7 sections, one per Sutra, each citing the specific artefact
  4. Model card — algorithm + data + parameters + performance
  5. Fairness audit — per-protected-attribute metrics; definitional framework used
  6. Explainability — global + top-k-importance SHAP
  7. Drift baseline — pinned feature distributions
  8. Accountability trail — training data lineage, retraining log, approver history
  9. Governance attestations — operator signatures via approval gates
  10. Appendix — raw event log, permission denial log, environment manifest

Typical size: 15-25 pages. Designed to be read by a Model Risk Management committee.
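The SHA-256 pin on the cover is what makes the bundle verifiable; the underlying check that a command like `swarm audit verify` performs can be sketched as recomputing a digest manifest (file names and contents below are invented):

```python
import hashlib
import tempfile
from pathlib import Path

def build_manifest(paths: list) -> dict:
    """Pin each artefact to its SHA-256 digest."""
    return {p.name: hashlib.sha256(p.read_bytes()).hexdigest() for p in paths}

def verify_manifest(manifest: dict, root: Path) -> list:
    """Return the names of artefacts whose content no longer matches the pin."""
    return [
        name for name, digest in manifest.items()
        if hashlib.sha256((root / name).read_bytes()).hexdigest() != digest
    ]

root = Path(tempfile.mkdtemp())
(root / "fairness_audit.json").write_text('{"demographic_parity": 0.03}')
manifest = build_manifest(list(root.iterdir()))
assert verify_manifest(manifest, root) == []        # bundle intact

(root / "fairness_audit.json").write_text('{"demographic_parity": 0.00}')  # tampered
print(verify_manifest(manifest, root))              # ['fairness_audit.json']
```

Because the manifest itself is pinned on the cover page, any post-hoc edit to an artefact is detectable from the PDF alone.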

## Data residency (paired with compliance)

RBI Master Direction on IT Outsourcing (2023) requires core banking data to stay in India. See Deployment: Data residency for the enforcement mechanism.

## DPDPA (Digital Personal Data Protection Act, 2023)

Companion to FREE-AI. Customers typically need:

  • Data principals' rights — customer can request model inputs used in their decision. Supported via POST /api/v1/audit/lineage?principal_id=<...>.
  • Purpose limitation — model can only use data for the stated purpose. Enforced by per-agent tool allowlists + YAML policies restricting data-source tools.
  • Storage localisation — SDF (Significant Data Fiduciaries) must store data in India. See Deployment: Data residency.
  • Breach notification — 72-hour window. swarm emits structured incident reports via POST /api/v1/audit/incidents.
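The 72-hour window is simple date arithmetic on the detection timestamp; a sketch of building a structured incident payload follows (the field names are illustrative, not the documented request schema for /api/v1/audit/incidents):

```python
import json
from datetime import datetime, timedelta, timezone

def incident_report(detected_at: datetime, summary: str) -> str:
    """Build a JSON body for a breach-notification POST (fields are illustrative)."""
    deadline = detected_at + timedelta(hours=72)   # DPDPA notification window
    return json.dumps({
        "detected_at": detected_at.isoformat(),
        "notify_by": deadline.isoformat(),
        "summary": summary,
    }, indent=2)

detected = datetime(2024, 6, 1, 9, 30, tzinfo=timezone.utc)
print(incident_report(detected, "unauthorised read of shadow_predictions table"))
# notify_by lands at 2024-06-04T09:30:00+00:00
```

Carrying `notify_by` in the record lets dashboards and alerts count down against the statutory deadline rather than recomputing it ad hoc.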

## Limitations / honest disclosure

  • swarm produces the evidence RBI auditors look for. swarm itself is not an auditor — your institution's internal / external audit teams do the actual certification.
  • RBI FREE-AI is guidance, not statute. Your institution's interpretation of the 7 Sutras may add specific requirements; use additional_policies in pipeline config to layer internal governance.
  • The fairness_auditor agent requires the pipeline to know which columns are protected attributes. swarm cannot detect this automatically — you must specify in the problem statement or dataset metadata.

## Extending

Add your institution's internal model-risk committee rules as an overlay:

```yaml
# config/permission_policies_acme_bank.yaml
rules:
  - id: ACME-001
    description: "Models >₹10Cr exposure require CRO sign-off"
    when:
      tool: promote_challenger
      # Use tags from model_card metadata
      args_pattern: '"max_exposure_cr"\s*:\s*(\d+)'
    behaviour: ask
    reason: "Acme internal: high-exposure models require CRO approval"
```
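The `args_pattern` above is a regex over the tool's serialized arguments; how such a pattern extracts the exposure figure can be sketched in Python (the threshold check is an assumption about how an overlay like this would be applied):

```python
import re

# The overlay's pattern as a Python regex; captures the exposure in ₹ crore.
PATTERN = re.compile(r'"max_exposure_cr"\s*:\s*(\d+)')

def needs_cro_signoff(tool_args_json: str, threshold_cr: int = 10) -> bool:
    """True when the promoted model's declared exposure exceeds the threshold."""
    m = PATTERN.search(tool_args_json)
    return bool(m) and int(m.group(1)) > threshold_cr

print(needs_cro_signoff('{"model": "credit-v7", "max_exposure_cr": 42}'))  # True
print(needs_cro_signoff('{"model": "credit-v7", "max_exposure_cr": 5}'))   # False
```

Note the quoting: in single-quoted YAML scalars backslashes are literal, so `\s` and `\d` pass through to the regex engine unchanged.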

## Next