OpenClaw daemon: configuration for server operation
The default setup is fine for solo use. For server operation with team access you need a deliberate daemon configuration. Here are the details.
openclaw.json: the central file
Located at ~/.openclaw/openclaw.json. JSON5 format, comments and trailing commas allowed. Top-level keys: providers, channels, skills, limits, logging, dashboard.
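A minimal skeleton showing all six top-level keys. The empty objects are placeholders, not defaults:

{
  "providers": {},   // LLM backends the daemon may call
  "channels": {},    // where the daemon listens
  "skills": {},      // skill definitions and routing
  "limits": {},      // token and rate limits
  "logging": {},     // log file, rotation, retention
  "dashboard": {},   // trailing comma is fine in JSON5
}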
Provider routing
Multiple LLM providers in parallel:
{
"providers": {
"openai": { "apiKey": "$OPENAI_KEY", "models": ["gpt-5", "gpt-4o-mini"] },
"anthropic": { "apiKey": "$ANTHROPIC_KEY", "models": ["claude-opus-4-7", "claude-haiku-4-5"] },
"ollama": { "host": "http://localhost:11434", "models": ["llama3.3:70b"] }
}
}
Per skill you can set the provider explicitly, or route by data class.
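A sketch of per-skill routing. The provider and dataClass keys are assumptions for illustration, not confirmed OpenClaw syntax:

{
  "skills": {
    "summarize": { "provider": "anthropic" },  // explicit provider
    "hr-lookup": { "dataClass": "internal" }   // routed by data class,
                                               // e.g. internal stays on ollama
  }
}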
Limits
Token and rate limits per user, channel and skill. Stops a runaway skill from torching your OpenAI budget.
{
"limits": {
"tokensPerUserPerDay": 500000,
"tokensPerSkillPerCall": 50000,
"rateLimitPerChannel": "30/min"
}
}
Logging and dashboard
Structured logging to ~/.openclaw/logs/openclaw.log, daily rotation, 30 days retention. The dashboard binds only to 127.0.0.1:18789, never 0.0.0.0 (CVE-2026-25253).
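The corresponding config block might look like this; the key names inside logging and dashboard are assumptions, only the values come from the text above:

{
  "logging": {
    "file": "~/.openclaw/logs/openclaw.log",
    "rotate": "daily",     // one file per day
    "maxDays": 30          // prune after 30 days
  },
  "dashboard": {
    "bind": "127.0.0.1",   // loopback only, never 0.0.0.0
    "port": 18789
  }
}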
launchd / systemd unit
On macOS, OpenClaw places a plist in ~/Library/LaunchAgents/; on Linux, a systemd user unit. Both should restart automatically on crash (KeepAlive in the plist, Restart=always with RestartSec=5s in the unit), plus resource limits (MemoryMax=4G, CPUQuota=200%).
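A sketch of the Linux side as a systemd user unit; the unit path and the binary location in ExecStart are assumptions:

# ~/.config/systemd/user/openclaw.service
[Unit]
Description=OpenClaw daemon

[Service]
ExecStart=%h/.local/bin/openclaw daemon
Restart=always
RestartSec=5s
# resource limits from above; CPUQuota=200% means two full cores
MemoryMax=4G
CPUQuota=200%

[Install]
WantedBy=default.target

Enable it with systemctl --user enable --now openclaw. Note that MemoryMax and CPUQuota require cgroups v2 delegation for user units, which is the default on current distributions.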