OpenClaw docker compose is the definitive answer to the question every developer asks after their third manual restart: “How do I keep this thing running?” If you’ve been launching your AI agent with npm start and watching it crash at 3 AM, you’re experiencing the fragility of bare-metal deployments. Consequently, this guide transforms your OpenClaw instance from a fragile experiment into a production-grade, self-healing infrastructure that survives reboots, crashes, and updates.
Stop treating your AI agent like a pet that needs constant feeding. Instead, start treating it like cattle that runs itself.
The Case for Containerization: Why “npm start” Is Obsolete
Manual process management is a relic. Here’s why your current workflow is costing you uptime:
State persistence is a myth without volumes. Every time you restart your container, your agent forgets its conversation history unless you’ve meticulously mapped storage. Developers have lost weeks of chat logs because they didn’t realize that an unmapped /root/.openclaw inside the container evaporates when the container is recreated.
Process crashes are inevitable. Memory leaks happen. Furthermore, the Node.js v22 runtime will eventually hit resource limits. Without automatic restart policies, your agent stays down until you manually intervene. In contrast, Docker Compose with restart: unless-stopped means your bot resurrects itself before you finish your coffee.
Dependencies drift over time. The npm install you ran last month pulls different package versions today. However, containers freeze your entire runtime environment. As a result, your agent runs identically on your laptop, your home server, and your production VPS.
The shift from manual execution to orchestrated containers isn’t about following trends. Rather, it’s about building systems that respect your sleep schedule. When you deploy OpenClaw docker compose, you’re implementing the same reliability patterns that power DigitalOcean’s managed infrastructure.
If you’re coming from a basic setup, the Openclaw Setup Tutorial: From Zero to First Chat in 10 Minutes (2026 Edition) shows how quick installations work. Now we’re building on that foundation with orchestration that ensures your agent never goes down.
The Hidden Cost of Manual Deployments
Every Ctrl+C you press creates technical debt. Specifically, manual starts mean:
No health monitoring. You discover failures when users complain, not when they happen.
No automatic recovery. Consequently, server reboots leave your agent offline indefinitely.
No resource limits. Therefore, a runaway process can freeze your entire system.
No deployment history. As a result, rolling back a broken update requires archaeological git skills.
Containerization solves all of these by treating your application as declarative infrastructure. Essentially, you define the desired state once. Then, the orchestrator enforces it forever.
OpenClaw Docker Compose: Setting Up Your Persistent Stack
This section contains the “Golden Config” that production teams use. First, copy it exactly. Then, understand it deeply.
Step 1: Mapping Volumes — The “Memory” Secret
The number one reason for “Agent Amnesia” is forgetting to map volumes. Specifically, your OpenClaw instance stores conversation history, model caches, and session tokens in ~/.openclaw inside the container. When the container restarts without volume mapping, this directory resets to empty.
Create your host directory structure first:
```bash
mkdir -p ~/openclaw-data/{config,logs,cache}
chmod -R 755 ~/openclaw-data
```
This creates a persistent home for your agent’s brain. Notably, the config directory holds your API keys and runtime settings. Meanwhile, the logs directory captures stdout for debugging. Finally, the cache directory stores downloaded models and embeddings.
Critical mapping rules:
Always use absolute paths for host mounts. Otherwise, relative paths break when you run docker compose from different directories.
Never mount /tmp from the host. Containers need ephemeral storage that cleans itself up.
Set ownership correctly. For instance, if your container runs as UID 1000, your host directory must allow that user to write.
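As an illustration of the first two rules, a volume fragment with absolute host paths might look like this (the paths are placeholders for your own layout):

```yaml
services:
  openclaw:
    volumes:
      # Absolute host path: works no matter which directory
      # you run `docker compose` from
      - /home/youruser/openclaw-data/config:/root/.openclaw
      # Anonymous volume: keeps /tmp ephemeral and container-local
      - /tmp
```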
When deploying to hardware like the options in OpenClaw System Requirements: 5 Best Mini PCs for Reliable Agent Hosting in 2026, ensure your storage device supports the expected IOPS. Unfortunately, cheap SD cards cause corruption under sustained write load.
Step 2: The docker-compose.yml Template — The “Golden Config”
Save this as docker-compose.yml in your project root:
```yaml
version: '3.8'

services:
  openclaw:
    image: openclaw/openclaw:latest
    container_name: openclaw-agent
    restart: unless-stopped
    environment:
      - NODE_ENV=production
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
    volumes:
      - ~/openclaw-data/config:/root/.openclaw
      - ~/openclaw-data/logs:/app/logs
      - ~/openclaw-data/cache:/app/cache
    ports:
      - "127.0.0.1:3000:3000"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    mem_limit: 4g
    cpus: 2
    networks:
      - openclaw-net
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

networks:
  openclaw-net:
    driver: bridge
```
Decoding the critical directives:
restart: unless-stopped means the container resurrects after crashes and survives host reboots. Importantly, it only stays down when you explicitly stop it. This single line eliminates the vast majority of uptime incidents.
ports: "127.0.0.1:3000:3000" binds the web interface to localhost only. Exposing 0.0.0.0:3000 without a reverse proxy is a critical security failure. More on this in the security section, and comprehensive hardening strategies are covered in OpenClaw Docker: Hardening Your AI Sandbox for Production (2026).
healthcheck implements automatic failure detection. Consequently, if your agent becomes unresponsive, Docker kills and restarts it. Meanwhile, the start_period gives slow boot processes time to initialize.
mem_limit and cpus prevent resource exhaustion. Without limits, a memory leak can crash your entire host. For hardware recommendations that match these limits, see OpenClaw Hardware Requirements: 5 Powerful PCs for AI Agents.
logging caps disk usage. Otherwise, unbounded logs fill drives and crash systems. Fortunately, three 10MB files give you enough debugging history without storage risk.
Create a .env file in the same directory to store secrets:
```bash
OPENAI_API_KEY=sk-your-key-here
ANTHROPIC_API_KEY=sk-ant-your-key-here
```
Never commit this file to version control. Instead, add it to .gitignore immediately.
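A quick, idempotent way to do that from the shell (assuming you run it from the project root — it only touches .gitignore):

```bash
# Add .env to .gitignore only if it is not already listed
grep -qxF '.env' .gitignore 2>/dev/null || echo '.env' >> .gitignore
```

Running it twice is safe: the grep guard prevents duplicate entries.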
Launch your stack:
```bash
docker compose up -d
```
The -d flag runs containers in detached mode. Next, check status:
```bash
docker compose ps
```
You should see openclaw-agent with status Up and healthy.
If you followed the Openclaw Docker: EASY SETUP GUIDE, you’re already familiar with basic Docker commands. Now we’re layering orchestration on top for true production reliability.
Step 3: Health Checks and Auto-Recovery Policies
The healthcheck directive in your compose file implements active monitoring. Specifically, Docker periodically executes the test command inside the container. If it fails three times in a row, the container is marked unhealthy and restarted.
Customizing health checks for your workflow:
If your agent takes longer than 40 seconds to boot, increase start_period. Notably, cold starts on budget hardware from OpenClaw on a $100 Budget — Complete Home Lab Hardware List (2026) can exceed one minute.
If your agent crashes frequently during model loading, adjust retries to 5 and interval to 60s. This prevents restart loops during legitimate initialization.
If you don’t have curl in your container image, use wget -q -O /dev/null http://localhost:3000/health instead.
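Combining those tweaks, a more forgiving healthcheck for slow hardware without curl might look like this (the values are starting points, not gospel):

```yaml
    healthcheck:
      test: ["CMD", "wget", "-q", "-O", "/dev/null", "http://localhost:3000/health"]
      interval: 60s
      timeout: 10s
      retries: 5
      start_period: 90s
```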
Understanding restart policies:
unless-stopped is correct for services you want persistent. Importantly, it differs from always because it respects your explicit stop commands.
on-failure only restarts after crashes. Therefore, use this for batch jobs, not long-running services.
no disables auto-restart. Never use this for production agents.
For deep monitoring beyond Docker’s built-in checks, integrate Portainer for visual management or configure Prometheus metrics export.
Secure Your Agent: Reverse Proxy & Auth
Binding to 0.0.0.0:3000 without authentication is equivalent to leaving your front door open with a sign that says “Free API Access.” Specifically, your OpenClaw instance processes API keys, chat history, and potentially sensitive user data. Therefore, exposing this directly to the internet invites abuse.
The Mandatory Reverse Proxy Layer
A reverse proxy sits between the public internet and your container. Furthermore, it handles SSL termination, authentication, and rate limiting. This is not optional for production deployments.
Installing Caddy (the easiest option):
Create a new service in your docker-compose.yml:
```yaml
  caddy:
    image: caddy:latest
    container_name: openclaw-caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy-data:/data
      - caddy-config:/config
    networks:
      - openclaw-net

volumes:
  caddy-data:
  caddy-config:
```
Create a `Caddyfile` in your project directory:
```
yourdomain.com {
    reverse_proxy openclaw:3000
    tls your-email@example.com
    basicauth /admin/* {
        admin $2a$14$YOUR_BCRYPT_HASH_HERE
    }
}
```
Caddy automatically provisions Let’s Encrypt SSL certificates. Therefore, replace yourdomain.com with your actual domain and configure DNS to point to your server’s IP.
Generate a bcrypt hash for your admin password:
```bash
docker run --rm caddy caddy hash-password --plaintext 'your-secure-password'
```
Why not just use Nginx? You can. However, Nginx requires manual SSL certificate management and more verbose configuration. In contrast, Caddy is optimized for the “just works” experience.
Alternative: Cloudflare Tunnels for zero-config SSL
If you’re running on home hardware like the setups in OpenClaw Setup Guide: The Best Home Server Hardware for 24/7 AI Agents, you might not have a static IP or port forwarding access. Fortunately, Cloudflare Tunnels create secure inbound tunnels without opening firewall ports.
Add the tunnel service to your compose file:
```yaml
  cloudflared:
    image: cloudflare/cloudflared:latest
    container_name: openclaw-tunnel
    restart: unless-stopped
    command: tunnel --no-autoupdate run
    environment:
      - TUNNEL_TOKEN=${CLOUDFLARE_TUNNEL_TOKEN}
    networks:
      - openclaw-net
```
This routes traffic through Cloudflare’s edge network, providing DDoS protection and SSL automatically.
Authentication Strategies
Basic auth is sufficient for personal deployments. For team environments, however, put an OAuth or OIDC proxy (for example, oauth2-proxy) in front of the dashboard, and rotate API keys on a regular schedule.
Never skip authentication. Unfortunately, the GitHub Issues for OpenClaw are filled with incidents of exposed instances being used for cryptocurrency mining or prompt injection attacks.
For comprehensive security hardening including container isolation, network policies, and vulnerability scanning, reference OpenClaw Docker: Hardening Your AI Sandbox for Production (2026).
Advanced Orchestration: Multi-Container Deployments
Once your basic stack runs reliably, you can extend it with supporting services.
Adding Redis for Session State
OpenClaw can use Redis for distributed session management. Specifically, add this service:
```yaml
  redis:
    image: redis:alpine
    container_name: openclaw-redis
    restart: unless-stopped
    volumes:
      - redis-data:/data
    networks:
      - openclaw-net
    command: redis-server --appendonly yes

volumes:
  redis-data:
```
Update your OpenClaw environment variables:
```yaml
    environment:
      - REDIS_URL=redis://redis:6379
```
This enables session persistence across container restarts and supports horizontal scaling if you run multiple OpenClaw instances behind a load balancer.
Integrating PostgreSQL for Long-Term Storage
For analytics or audit logs, add a database:
```yaml
  postgres:
    image: postgres:15-alpine
    container_name: openclaw-db
    restart: unless-stopped
    environment:
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_DB=openclaw
    volumes:
      - postgres-data:/var/lib/postgresql/data
    networks:
      - openclaw-net

volumes:
  postgres-data:
```
Connect OpenClaw via:
```yaml
    environment:
      - DATABASE_URL=postgresql://postgres:${DB_PASSWORD}@postgres:5432/openclaw
```
This architecture mirrors production setups at scale. Consequently, learn more about database integration patterns in the OpenClaw Docker Compose documentation.
Troubleshooting: When the Container Won’t Start
Container exits immediately:
First, check logs:
```bash
docker compose logs openclaw
```
Common causes include missing environment variables, invalid volume paths, or port conflicts. For example, if port 3000 is already in use, change the host port:
```yaml
    ports:
      - "127.0.0.1:3001:3000"
```
Health checks failing:
Inspect the health status:
```bash
docker inspect openclaw-agent | grep -A 10 Health
```
If your agent needs more startup time, increase start_period. Alternatively, if the /health endpoint doesn’t exist in your OpenClaw version, remove the healthcheck directive.
Permission denied on volumes:
Fix ownership:
```bash
sudo chown -R 1000:1000 ~/openclaw-data
```
Docker containers run as root (UID 0) by default, but many images switch to an unprivileged user such as UID 1000. If your container runs as a different UID, adjust the chown target accordingly.
Out of memory errors:
Increase the memory limit or reduce the number of concurrent models loaded. Moreover, for resource planning, consult the benchmarks in the hardware guides linked earlier.
Network connectivity issues:
If services can’t reach each other by name, verify they’re on the same Docker network:
```bash
docker network inspect openclaw-net
```
All services should appear in the containers list.
For complex debugging, enable verbose logging:
```yaml
    environment:
      - LOG_LEVEL=debug
```
Monitoring and Observability
Production systems require visibility. Therefore, integrate these tools for comprehensive monitoring:
Prometheus + Grafana stack:
Add exporters to your compose file to collect metrics. Specifically, monitor CPU usage, memory consumption, request latency, and error rates. Then, set up alerts for health check failures.
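One way to start, as a sketch: run Google’s cAdvisor exporter alongside the stack so Prometheus can scrape per-container CPU and memory metrics (the service name and port mapping are illustrative):

```yaml
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    container_name: openclaw-cadvisor
    restart: unless-stopped
    ports:
      # Bind to localhost only; scrape it from Prometheus on the same host
      - "127.0.0.1:8080:8080"
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    networks:
      - openclaw-net
```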
Centralized logging with Loki:
Aggregate logs from all containers into a searchable index. This is essential when debugging issues that span multiple services.
Uptime monitoring:
Use external services like Uptime Robot or Healthchecks.io to monitor your agent from outside your network. Consequently, this catches failures that internal health checks miss, such as DNS issues or firewall misconfigurations.
For privacy-focused home deployments using Tailscale, you can monitor internal services without exposing them to the public internet.
Performance Optimization for OpenClaw Docker Compose
Resource limits prevent noisy neighbors. If you run multiple containers on the same host, unconstrained services can starve others. Therefore, always set mem_limit and cpus.
Volume driver selection matters. The default local driver works for most cases. However, for networked storage or high-performance workloads, consider overlay or third-party drivers.
Image caching speeds rebuilds. Use multi-stage Dockerfiles and layer caching to minimize rebuild time during development.
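As a hedged sketch of that layering idea, assuming a Node.js project whose `npm run build` step emits a `dist/` directory (adjust to the actual OpenClaw build process):

```dockerfile
# Stage 1: full toolchain; the dependency layer is cached until
# package*.json changes, so code-only edits rebuild in seconds
FROM node:22-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: slim runtime image with production dependencies only
FROM node:22-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
CMD ["node", "dist/index.js"]
```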
Network mode affects latency. The default bridge network adds microseconds of overhead. In contrast, for microservices that communicate frequently, use host mode or overlay networks.
Build performance is directly tied to your hardware. Notably, systems recommended in OpenClaw Hardware Requirements: 5 Powerful PCs for AI Agents deliver rebuild times under 30 seconds even for complex multi-stage builds.
Official Setup Resources
Additional authoritative resources include the Docker Compose V2 specification, the LangChain deployment guides, and the OpenClaw GitHub repository for the latest configuration examples.
FAQ: Mastering OpenClaw Docker Compose
How do I update my agent without losing my chat history?
Pull the new image, then recreate the container while preserving volumes:
```bash
docker compose pull openclaw
docker compose up -d --force-recreate openclaw
```
Because you mapped ~/.openclaw to a host directory, all conversation history and configuration persist across updates. This is why volume mapping is non-negotiable.
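For extra safety, snapshot the data directory before updating — a one-liner sketch, assuming your volumes live under ~/openclaw-data:

```bash
# Snapshot the persistent data directory before pulling a new image
tar czf ~/openclaw-backup-$(date +%F).tar.gz -C ~ openclaw-data
```

If an update misbehaves, extract the archive back into place and recreate the container against the old image tag.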
Can I run multiple bots (Telegram/WhatsApp) in the same compose file?
Absolutely. Specifically, define separate services for each bot:
```yaml
services:
  openclaw-telegram:
    image: openclaw/openclaw:latest
    environment:
      - BOT_TYPE=telegram
      - TELEGRAM_TOKEN=${TELEGRAM_TOKEN}
    volumes:
      - ~/openclaw-telegram:/root/.openclaw

  openclaw-whatsapp:
    image: openclaw/openclaw:latest
    environment:
      - BOT_TYPE=whatsapp
      - WHATSAPP_CREDENTIALS=${WHATSAPP_CREDENTIALS}
    volumes:
      - ~/openclaw-whatsapp:/root/.openclaw
```
Each bot gets isolated storage and environment variables. Meanwhile, they share the same network and can use common databases or caches.
Why does the container stop immediately after launch?
Check exit codes:
```bash
docker compose ps
```
Exit code 0 means the process completed successfully, which shouldn’t happen for a long-running service. Exit code 1 indicates an application error, while 137 usually means the kernel killed the process, often because it exceeded mem_limit. Read logs to identify the cause. Common issues include syntax errors in configuration files or missing required environment variables.
If you’re encountering startup issues, the troubleshooting section in Openclaw Docker: EASY SETUP GUIDE covers common Docker daemon problems and image pull failures.
How do I secure my OpenClaw dashboard so only I can access it?
Implement one of these strategies:
Option 1: VPN-only access. Use Tailscale or WireGuard to restrict access to your private network. Specifically, bind OpenClaw to 127.0.0.1 and access via VPN.
Option 2: Reverse proxy with authentication. Configure Caddy or Nginx with basic auth or OAuth as shown in the security section.
Option 3: IP allowlist. Configure your reverse proxy to accept connections only from specific IP addresses.
Never rely on obscure ports for security. Unfortunately, port scanning is trivial. Always use authentication.
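As a sketch of the allowlist approach in Caddy, a named matcher can reject every address except your own (203.0.113.10 is a placeholder):

```
yourdomain.com {
    @denied not remote_ip 203.0.113.10
    respond @denied 403
    reverse_proxy openclaw:3000
}
```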