Configuration reference#
Every SWARM_* environment variable + every feature flag. Resolution order: runtime override → env var → alias → default.
Runtime#
```bash
# Database
SWARM_DB_URL=postgresql://user:pass@host:5432/swarm   # prod
# SWARM_DB_URL=sqlite:///swarm.db                     # dev (default)

# Web server
SWARM_API_HOST=0.0.0.0
SWARM_API_PORT=8000
SWARM_API_WORKERS=4                                   # uvicorn worker count

# Dashboard
SWARM_DASHBOARD_URL=http://localhost:3000             # CORS origin allowlist
NEXT_PUBLIC_API_URL=http://localhost:8000             # dashboard → API

# Object storage (for batch + audit artefacts)
SWARM_STORAGE_BACKEND=local | s3 | gcs | azure        # default: local
SWARM_STORAGE_BUCKET=swarm-runs
SWARM_STORAGE_PREFIX=prod/
```
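A sketch of how the storage settings compose into an artifact location (the local path `~/.swarm/artifacts` and the per-backend URI schemes are assumptions, not documented behaviour):

```python
import os

def artifact_uri(run_id: str, filename: str) -> str:
    backend = os.environ.get("SWARM_STORAGE_BACKEND", "local")
    bucket = os.environ.get("SWARM_STORAGE_BUCKET", "swarm-runs")
    prefix = os.environ.get("SWARM_STORAGE_PREFIX", "")
    if backend == "local":
        # Assumed local layout; the real default directory may differ.
        return os.path.join(os.path.expanduser("~/.swarm/artifacts"), run_id, filename)
    scheme = {"s3": "s3", "gcs": "gs", "azure": "az"}[backend]  # assumed mapping
    return f"{scheme}://{bucket}/{prefix}{run_id}/{filename}"
```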
Authentication#
```bash
# Local password (dev only)
SWARM_LOCAL_PASSWORD_AUTH=true    # false in prod
SWARM_JWT_SECRET=<generate-a-long-random-string>
SWARM_JWT_EXPIRY_HOURS=24

# OIDC
SWARM_OIDC_ENABLED=true
SWARM_OIDC_ISSUER=https://accounts.google.com
SWARM_OIDC_CLIENT_ID=...
SWARM_OIDC_CLIENT_SECRET=...
SWARM_OIDC_REDIRECT_URI=https://your-swarm/api/v1/auth/oidc/callback
SWARM_OIDC_SCOPES=openid email profile
SWARM_OIDC_AUTO_PROVISION=true
SWARM_OIDC_DEFAULT_ROLE=viewer
```
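One way to generate a suitably long random value for `SWARM_JWT_SECRET`, using only the standard library:

```python
import secrets

# 64 random bytes, URL-safe base64 encoded (~86 characters).
jwt_secret = secrets.token_urlsafe(64)
print(jwt_secret)
```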
LLM providers#
```bash
# Provider selection — per-tier or default
SWARM_LLM_PROVIDER_DEFAULT=anthropic | openai | azure | openai_compat
SWARM_LLM_PROVIDER_FAST=<provider>
SWARM_LLM_PROVIDER_STANDARD=<provider>
SWARM_LLM_PROVIDER_DEEP=<provider>

# Model names per tier
SWARM_LLM_MODEL_FAST=claude-haiku-4-20260315
SWARM_LLM_MODEL_STANDARD=claude-sonnet-4-5-20260315
SWARM_LLM_MODEL_DEEP=claude-opus-4-1-20260315

# Provider API keys
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-proj-...
AZURE_OPENAI_API_KEY=...
AZURE_OPENAI_ENDPOINT=https://<resource>.openai.azure.com
AZURE_OPENAI_API_VERSION=2026-02-01

# For self-hosted / OpenAI-compat (e.g. vLLM):
OPENAI_API_BASE=https://vllm.internal/v1
OPENAI_API_KEY=<internal-token>

# Retry + rate limits
SWARM_LLM_RETRY_MAX_ATTEMPTS=3
SWARM_LLM_RETRY_BACKOFF_SECONDS=2
SWARM_LLM_RETRY_ON_429=true
SWARM_LLM_TIMEOUT_SECONDS=120

# Budgets
SWARM_BUDGET_PER_PIPELINE_USD=5.00
SWARM_BUDGET_PER_DAY_USD=500.00
SWARM_BUDGET_ACTION_ON_OVER=pause | alert_only | deny
```
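The retry settings above describe an exponential-backoff policy on HTTP 429. A minimal sketch of that policy (the `call_with_retry` helper and `RateLimited` exception are illustrative, not swarm's actual internals):

```python
import time

class RateLimited(Exception):
    """Stand-in for a provider HTTP 429 response."""

def call_with_retry(fn, max_attempts=3, backoff_seconds=2.0, retry_on_429=True):
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except RateLimited:
            if not retry_on_429 or attempt == max_attempts:
                raise
            # Exponential backoff: 2s, 4s, 8s, ... with the defaults above.
            time.sleep(backoff_seconds * 2 ** (attempt - 1))
```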
See How-to: Bring your own LLM.
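The `SWARM_BUDGET_*` settings imply a guard of roughly this shape (a sketch; `budget_action` and the exact semantics of each action are assumptions):

```python
def budget_action(spent_usd: float, limit_usd: float, action: str = "pause") -> str:
    """Decide what to do when a run's spend is checked against its budget."""
    if spent_usd <= limit_usd:
        return "proceed"
    # alert_only lets the run proceed but should also fire an alert (not shown).
    return {"pause": "pause", "alert_only": "proceed", "deny": "deny"}[action]
```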
Concurrency + resources#
```bash
SWARM_MAX_CONCURRENT_PIPELINES=10    # global
SWARM_MAX_CONCURRENT_AGENTS=5        # per pipeline
SWARM_AGENT_TIMEOUT_SECONDS=600
SWARM_PIPELINE_TIMEOUT_MINUTES=60
```
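A sketch of how the per-pipeline agent cap and agent timeout could combine (illustrative only; `run_agents` is not swarm's actual scheduler):

```python
import asyncio

async def run_agents(agents, max_concurrent=5, agent_timeout_seconds=600):
    # Cap in-flight agents with a semaphore; time out each agent individually.
    sem = asyncio.Semaphore(max_concurrent)

    async def run_one(agent):
        async with sem:
            return await asyncio.wait_for(agent(), timeout=agent_timeout_seconds)

    # gather preserves input order in its results.
    return await asyncio.gather(*(run_one(a) for a in agents))
```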
Retention#
```bash
SWARM_RETENTION_ENABLED=true
SWARM_RETENTION_DAEMON_INTERVAL_HOURS=24      # run every N hours
SWARM_RETENTION_CONVERSATION_JSONL_DAYS=90
SWARM_RETENTION_RUN_EVENTS_DAYS=365
SWARM_RETENTION_SHADOW_PREDICTIONS_DAYS=30
SWARM_RETENTION_AUDIT_PDF_DAYS=2555           # 7 years (BFSI default)
```
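The retention sweep reduces to an age check per record category. A minimal sketch, assuming the category names map one-to-one onto the env vars above:

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = {  # mirrors the SWARM_RETENTION_*_DAYS defaults
    "conversation_jsonl": 90,
    "run_events": 365,
    "shadow_predictions": 30,
    "audit_pdf": 2555,
}

def expired(created_at: datetime, category: str, now: datetime) -> bool:
    """True if a record is past its category's retention window."""
    return now - created_at > timedelta(days=RETENTION_DAYS[category])
```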
Cron#
Batch#
Observability#
```bash
SWARM_LOG_LEVEL=DEBUG | INFO | WARNING | ERROR
SWARM_LOG_FORMAT=json | text                  # json in prod
SWARM_LOG_FILE=/var/log/swarm/api.log

# OpenTelemetry
OTEL_SERVICE_NAME=swarm-api
OTEL_EXPORTER_OTLP_ENDPOINT=https://otel-collector:4317
OTEL_RESOURCE_ATTRIBUTES=env=prod,region=ap-south-1

# Prometheus scrape — always on at /metrics (no config needed)
```
See Operations: Observability.
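What `SWARM_LOG_FORMAT=json` implies, sketched with the standard `logging` module (the field names are assumptions; swarm's actual JSON schema may differ):

```python
import json
import logging
import os

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log record."""
    def format(self, record):
        return json.dumps({
            "ts": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "msg": record.getMessage(),
        })

log_format = os.environ.get("SWARM_LOG_FORMAT", "text")
formatter = (JsonFormatter() if log_format == "json"
             else logging.Formatter("%(levelname)s %(name)s %(message)s"))
```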
Feature flags#
Feature flags are three-tiered:
- Invariant — cannot be disabled at runtime (security / compliance invariants)
- Flag — stable, production-toggled
- Experiment — opt-in, may disappear
Current catalogue:
| Flag | Tier | Default | Purpose |
|---|---|---|---|
| `audit_trail_security_events` | Invariant | on | Denials always persisted |
| `cron_scheduler` | Flag | on | Cron daemon |
| `batch_runner` | Flag | on | Batch run subsystem |
| `plugin_commands_enabled` | Flag | on | Plugin command registry |
| `plugin_agents_enabled` | Flag | on | Plugin agent registry |
| `retention_daemon` | Flag | on | Retention sweeps |
| `evaluator_grading` | Experiment | off | Clean-context grader on agent output |
| `hooks_enabled` | Experiment | off | Plugin lifecycle hooks |
| `plugin_shell_hooks_enabled` | Experiment | off | Shell-command hooks (security choice) |
| `context_compaction` | Experiment | on | 80%-context compaction |
| `skill_registry` | Experiment | off | Plugin skill injection |
| `tool_allowlist_enforce` | Flag | on | Per-agent tool allowlists |
| `persist_conversations` | Flag | on | JSONL conversation journals |
| `auto_fairness_audit` | Flag | on | Fairness audit in default pipeline |
| `local_password_auth` | Flag | on (dev), off (prod) | Username/password login |
| `oidc_auth` | Flag | off | OIDC SSO |
Set via env: `SWARM_HOOKS_ENABLED=true`. Set via runtime: `swarm features set hooks_enabled true` (argument syntax illustrative). List all: `swarm features list` (subcommand name illustrative).
Resolution order (detail)#
When `is_enabled("hooks_enabled")` is called:

1. Runtime override — `swarm features set` via API or CLI (cleared on restart unless persisted)
2. Environment variable — `SWARM_HOOKS_ENABLED=true`
3. Alias — legacy name if the flag was renamed (e.g. `SWARM_PLUGIN_HOOKS_V1`)
4. Declared default — in `feature_flags.py::_FEATURES_LIST`

First source that returns a value wins. The Invariant tier ignores runtime overrides (it cannot be disabled).
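The first-source-wins resolution can be sketched as follows. This is a hypothetical reconstruction: `RUNTIME_OVERRIDES`, the `ALIASES` map, and the `_FEATURES_LIST` shape mirror the docs but are assumptions about the real `feature_flags.py`:

```python
import os

RUNTIME_OVERRIDES: dict = {}                            # set via `swarm features set`
ALIASES = {"hooks_enabled": "SWARM_PLUGIN_HOOKS_V1"}    # legacy env names
_FEATURES_LIST = {"hooks_enabled": {"tier": "experiment", "default": False}}

def is_enabled(flag: str) -> bool:
    spec = _FEATURES_LIST[flag]
    # Invariant flags ignore runtime overrides entirely.
    if spec["tier"] != "invariant" and flag in RUNTIME_OVERRIDES:
        return RUNTIME_OVERRIDES[flag]
    # Env var, then legacy alias.
    for env_name in (f"SWARM_{flag.upper()}", ALIASES.get(flag)):
        if env_name and (val := os.environ.get(env_name)) is not None:
            return val.strip().lower() in ("1", "true", "yes", "on")
    return spec["default"]
```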
Configuration file (alternative to env vars)#
For complex deployments, a YAML config file:
```yaml
# ~/.swarm/config.yaml (or path via SWARM_CONFIG_FILE)
api:
  host: 0.0.0.0
  port: 8000

llm:
  provider_default: anthropic
  models:
    fast: claude-haiku-4-20260315
    standard: claude-sonnet-4-5-20260315
    deep: claude-opus-4-1-20260315

flags:
  hooks_enabled: true
  plugin_shell_hooks_enabled: false
```
Env vars still take precedence over config file.
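Env-over-file precedence, sketched (the `resolve` helper is illustrative; `file_cfg` stands in for the parsed YAML):

```python
import os

file_cfg = {"api": {"host": "0.0.0.0", "port": 8000}}  # as parsed from config.yaml

def resolve(env_var, file_value, default=None):
    # Env var beats the config file; the config file beats the declared default.
    env_value = os.environ.get(env_var)
    if env_value is not None:
        return env_value
    return file_value if file_value is not None else default

port = int(resolve("SWARM_API_PORT", file_cfg["api"]["port"]))
```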
Precedence hierarchy (complete)#
```text
runtime override (swarm features set)
  ├── env var (SWARM_*)
  ├── env var alias (legacy names)
  ├── config file (~/.swarm/config.yaml)
  ├── Helm values.yaml (if K8s)
  └── declared default
```
Top wins.
Security note#
Never commit .env files with API keys, JWT secrets, or OIDC client secrets. Use:
- Kubernetes Secrets in prod
- Docker Compose .env in .gitignore for local dev
- Vault / AWS Secrets Manager / GCP Secret Manager for managed deployments
swarm reads env vars at startup — it does not hot-reload secrets. Restart after rotation.
Next#
- Deployment — where to put each env var in each environment
- How-to: Bring your own LLM — LLM-provider-specific configs