
EU AI Act (high-risk)#

The EU AI Act's high-risk provisions take effect 2 August 2026. Penalties up to €35M or 7% of global turnover. If your AI system falls under Annex III, you need the eu_ai_act_high_risk compliance profile.

Is your use case "high risk"?#

Annex III — High-risk AI use cases:

  1. Biometric identification + categorization
  2. Critical infrastructure management
  3. Education + vocational training (admissions, scoring)
  4. Employment + HR (CV sifting, promotion, performance)
  5. Essential public + private services:
     • 5.a — Essential public services / benefits
     • 5.b — Creditworthiness evaluation ← BFSI credit scoring
     • 5.c — Emergency response dispatching
     • 5.d — Health + life insurance risk assessment
  6. Law enforcement
  7. Migration, asylum, border control
  8. Administration of justice + democratic processes

If your model does any of the above for the EU market, you are a high-risk AI provider under the Act.
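If you are in scope, you enable the profile in pipeline configuration. A minimal sketch, assuming a top-level `compliance_profile` key — the key name and file layout are assumptions for illustration, not confirmed swarm syntax:

```shell
# Hypothetical pipeline config; the compliance_profile key is an assumption.
cat > swarm.yaml <<'EOF'
pipeline: credit_scoring
compliance_profile: eu_ai_act_high_risk
EOF
grep -c 'eu_ai_act_high_risk' swarm.yaml   # prints 1
```

Check your swarm version's configuration reference for the real key.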

swarm's eu_ai_act_high_risk profile maps to 8 Articles#

Article 9 — Risk management system#

Must be established, implemented, documented, maintained.

| swarm feature | Evidence |
| --- | --- |
| `risk_assessment` artefact (auto-generated) | Pre-deployment risk register; per-pipeline |
| Continuous drift + fairness monitoring | Nightly cron-scheduled checks |
| Updating risk register on retrain | Each new pipeline version appends to the risk history |

Article 10 — Data and data governance#

Training, validation, test datasets must meet quality criteria; data governance practices documented.

| swarm feature | Evidence |
| --- | --- |
| `data_profiler` + `data_validator` agents run every pipeline | Profile + validity checks documented |
| `reports/data_profile.json` mandatory artefact | Statistical summary, missing-ness, outlier counts |
| Data lineage in `run_events` | Source + transformation + loader all tracked |
| `feature_engineer` records transformations | Reversible + reviewable |
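A downstream gate can treat the profile artefact as a hard requirement. A sketch, where the JSON key names (`rows`, `missingness`, `outlier_counts`) are assumptions standing in for whatever the real artefact contains:

```shell
# Stand-in for a generated reports/data_profile.json; real keys may differ.
mkdir -p reports
printf '{"rows": 10000, "missingness": {"age": 0.02}, "outlier_counts": {"income": 7}}\n' \
  > reports/data_profile.json
# Fail the run if the mandatory artefact is missing or empty.
test -s reports/data_profile.json && echo "data_profile present"
```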

Article 11 — Technical documentation (Annex IV)#

A specific technical file must exist before the system is placed on the market.

| swarm feature | Evidence |
| --- | --- |
| `generate_annex_iv` tool (shipped in `eu_ai_act_high_risk` profile) | Auto-populates the required Annex IV template |
| `annex_iv.pdf` artefact | Dedicated output file; follows Commission template |
| Sections covered | 1. General description; 2. Detailed description; 3. Monitoring + test; 4. Risk management; 5. Post-market monitoring plan |

Article 12 — Record-keeping (logs)#

Automatic logs of high-risk AI use must be kept. Retention: at least as long as appropriate for the intended purpose, with a minimum of six months.

| swarm feature | Evidence |
| --- | --- |
| `run_events` journal | Append-only; default 10-year retention for EU AI Act profile |
| `permission_denials` table | Every denied action preserved |
| Conversation journals | Agent reasoning preserved |
| Retention enforcement | `SWARM_RETENTION_RUN_EVENTS_DAYS=3650` (10 years) for EU profile |
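The retention window is expressed in days; 3650 days is ten 365-day years:

```shell
# EU AI Act profile default: keep run_events for ~10 years.
export SWARM_RETENTION_RUN_EVENTS_DAYS=3650
echo $(( SWARM_RETENTION_RUN_EVENTS_DAYS / 365 ))   # prints 10
```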

Article 13 — Transparency + provision of information to deployers#

Model card + user instructions in plain language.

| swarm feature | Evidence |
| --- | --- |
| `reports/model_card.md` | Human-readable card for deployer |
| `deployer_instructions.md` | Auto-generated deployment guide including input expectations, performance bounds, intended use |
| System limitations stated | Mandatory "Limitations" section in every model card |

Article 14 — Human oversight#

Design for human oversight; operator intervention must be possible.

| swarm feature | Evidence |
| --- | --- |
| `require_approval` HITL gates | 6 gate types; configurable per pipeline |
| Permission engine ASK tier | Any tool can be gated by YAML policy |
| Dashboard approval queue | Operator visibility + action |
| Override logging | Every override + rationale recorded |
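An ASK-tier gate in YAML policy might look like the following sketch; the tool names and key layout are assumptions for illustration, not documented swarm schema:

```shell
# Hypothetical permission policy; keys and tool names are illustrative only.
cat > policy.yaml <<'EOF'
permissions:
  deploy_model: ask        # routes through the dashboard approval queue
  delete_artefact: deny    # hard-blocked; recorded in permission_denials
EOF
```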

Article 15 — Accuracy, robustness, cybersecurity#

Technical characteristics documented; resilience + security assessed.

| swarm feature | Evidence |
| --- | --- |
| `model_evaluator` + `cross_validate` | Reported accuracy, confidence intervals |
| `smoke_test` agent | Pre-deployment adversarial + edge-case testing |
| `detect_drift` cron | Ongoing robustness monitoring |
| Cybersecurity: SOC 2 controls, BYOK, audit logs | Covered by platform security posture |

Article 72 — Post-market monitoring plan#

Monitoring plan documented; operated throughout lifecycle.

| swarm feature | Evidence |
| --- | --- |
| `post_market_monitoring_plan.md` mandatory artefact | Template pre-filled with drift thresholds, incident process |
| Cron-scheduled monitoring | Drift + fairness + performance checks |
| Incident reporting via `POST /api/v1/audit/incidents` | Structured format; forwardable to notified body |
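Incident reports are plain JSON over the endpoint above. A sketch of the submission; the payload field names are assumptions, only the endpoint path comes from this page:

```shell
# Field names are illustrative assumptions; the endpoint path is documented above.
printf '{"model":"credit_scorer","version":"v12","severity":"serious"}\n' > incident.json
# Would be submitted as:
# curl -X POST "$SWARM_URL/api/v1/audit/incidents" \
#      -H 'Content-Type: application/json' --data @incident.json
cat incident.json
```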

Conformity assessment#

swarm produces the evidence bundle for the conformity assessment — it does not perform the assessment. A notified body or your internal quality management system does that.

For Annex III high-risk systems:#

  • Third-party assessment required for biometric ID systems (Art. 43.1)
  • Internal quality management system sufficient for most Annex III use cases (Art. 43.2), provided the quality management system meets Article 17 + harmonised standards

Budget + timeline#

  • Notified body audit: €20K–€50K (varies by complexity + country)
  • Lead time to book: 2-3 months in 2026 (supply constrained)
  • Re-assessment: every 3 years or on substantial modification

CE marking#

Once conformity assessed + declaration-of-conformity signed:

  • CE mark applied to the AI system (via deployment metadata)
  • Technical documentation retained for 10 years after the last unit is placed on the market
  • Notified body informed of substantial modifications

swarm supports these via the audit-PDF rollup mechanism. CE metadata is a field in the model card.
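As a sketch of what that model-card field could look like — the fragment and key names below are assumptions, not the actual swarm model-card schema:

```shell
# Hypothetical model-card fragment; key names are assumptions.
mkdir -p reports
cat >> reports/model_card.md <<'EOF'
## CE marking
- ce_marked: true
- conformity_route: internal QMS (Art. 43.2)
EOF
```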

Post-market surveillance#

Article 72 requires a documented plan, operated throughout deployment. swarm's default plan covers:

  • Drift detection (Article 72.3 — trend monitoring)
  • Incident log (Article 73 — serious incident notification within 15 days)
  • Feedback loop from deployers to provider (via POST /api/v1/deployer-feedback)
  • Corrective action tracking

GPAI (General Purpose AI) note#

If you use an LLM in your pipeline (Anthropic, OpenAI, etc.), the GPAI provider obligations apply to them — not you as a deployer. Your relationship is as deployer of the provider's GPAI model.

Your obligations (Article 25): verify that the GPAI provider has complied, and maintain documentation of your use. swarm's audit trail covers the documentation part.

Serious incident reporting (Article 73)#

If a deployed model causes:

  • Death / serious harm to a person
  • Serious damage to property / environment
  • Breach of EU law (fundamental rights)

You (the provider) must notify the market surveillance authority within 15 days (immediately for fatal incidents). Template for the report:

```shell
swarm audit incident-report \
  --model <name> \
  --version <v> \
  --incident-date <YYYY-MM-DD> \
  --harm-description "..." \
  --corrective-action "..." \
  --output incident_<date>.pdf
```

Output is formatted per Commission template.

Data residency + CLOUD Act#

See Deployment: Data residency for the EU-region + non-US-HQ considerations. The EU AI Act itself doesn't mandate EU-region data; GDPR does for personal data. But the combination often demands EU-region hosting.

Limitations / honest disclosure#

  • The Act's secondary legislation is still landing. Commission implementing acts + harmonised standards are being published through 2026. Some details on this page may firm up as standards land.
  • swarm is not the notified body. We are the evidence-production platform.
  • GPAI model providers (Anthropic, OpenAI, Microsoft) shoulder their own obligations. If they comply, your deployer obligations (Article 25) are lighter. Verify their status.

Next#