Complete Guide to OpenClaw Multi-Agent Routing: Elegant Isolation of Work, Life, and Experimental Scenarios

Monday morning at 9 AM, I opened my AI assistant to help a client write payment interface code. Suddenly it said: “Based on the Elden Ring boss strategies you researched last night, I suggest we adopt a ‘roll and slash’ approach here…”
I was stunned.
What made it worse was that this conversation happened during a screen-sharing meeting with the client. I could feel how awkward the silence was on the other end of the video call.
That moment I realized: AI assistants are getting smarter, but mine was mixing my work, entertainment, and experiments all together. Like a drawer without dividers where everything piles up—it takes forever to find things, and you often grab the wrong item.
If you’ve felt this way too, this article will help. I spent a weekend researching OpenClaw’s multi-agent routing feature and discovered it can make your AI assistant “clone” into multiple dedicated assistants: work stays with work, life stays with life, and experiments stay locked in a sandbox.
I’ll share: why you need this feature, how to configure it, and 5 scenario solutions I actually use.
Why Does Your AI Assistant “Cross-Talk”?
Three Awkward Moments with a Single Assistant
Honestly, I used to handle everything with one AI assistant until I encountered these situations:
Scenario One: Context Pollution
You ask AI to help optimize the company’s database queries, and it suddenly mentions: “Just like that Python scraper script you asked me about last week, we can use similar asynchronous processing…”—the problem is that scraper was my side project that I absolutely don’t want the company to know about.
Scenario Two: Privacy Leaks
While demonstrating a new tool to the team, you open the AI conversation history to showcase a feature, and the chat list is full of private questions like “how to ask your boss for a raise” and “should I report side income on taxes.” That social death feeling—you know what I mean.
Scenario Three: Experiments Blew Up Production
Once I was testing an automation script and wanted AI to help me batch rename files. It remembered my previous work project directory and directly renamed all source code files in the client project. Thank goodness for Git, or I might have had to flee the country.
Why Does This Happen?
Actually, the AI assistant itself isn’t wrong—it’s just too “loyal,” remembering everything you say and trying to connect all contexts. But the problem is:
- Work requires rigor and privacy protection
- Life needs relaxation and personalized advice
- Experiments need freedom but can’t affect production
These three things shouldn’t be mixed together in the first place.
Multi-Agent Routing Solution
OpenClaw’s multi-agent routing feature essentially gives each scenario its own independent “room”:
- Independent workspace: work-agent can only see the /work directory, personal-agent can only access the /personal directory—file system-level physical isolation
- Independent conversation history: What you chat about in work-agent, personal-agent has no idea, and vice versa
- Independent permission configuration: Experimental sandbox can be offline with read-only filesystem, work environment can access company Git, life environment connects to personal cloud storage
In Docker terms, this is called “container-level isolation.” In plain English, it’s like installing several independent brains for your AI assistant that don’t interfere with each other.
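To make the isolation concrete, here is a minimal docker-compose sketch of what two such agents can look like. This is not OpenClaw’s shipped compose file; the image name and paths are placeholders:

```yaml
# Illustrative sketch only — image name and paths are placeholders.
services:
  work-agent:
    image: openclaw/agent:latest      # assumed image name
    env_file: .env.work
    volumes:
      - /path/to/work:/workspace      # work-agent sees only this directory
  personal-agent:
    image: openclaw/agent:latest
    env_file: .env.personal
    volumes:
      - /path/to/personal:/workspace  # personal-agent never sees /work
```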
2026 AI security standards already clearly require: enterprise-level systems must implement strict tenant-level data isolation. The Docker sandbox approach OpenClaw uses is far more secure than tools that just “separate conversation groups.”
OpenClaw’s Three-Layer Isolation Architecture
The first time I read OpenClaw’s documentation, I was confused by terms like Gateway, Brain, and Skills. After configuring it myself, I found it’s actually quite understandable.
Three Layers Like a Delivery System
- Gateway Layer (Distribution Center): Receives messages from various platforms—messages from Telegram, Discord, Slack all arrive here first, then forwards to the appropriate agent based on your configuration
- Brain Layer (Dispatch Center): Takes the message and determines what you want to do—write code? Research? Execute commands? Then orchestrates the task workflow
- Skills Layer (Execution Warehouse): Where the actual work happens, each agent has its own independent Docker container running code and operating files
You Can Choose Three Isolation Levels
This design is quite flexible, choose based on your needs:
Session-level Isolation (Temporary Tasks)
Open a new container for each conversation, destroy it when done. Suitable for one-time tasks like “help me analyze this CSV file.” The advantage is cleanliness, the disadvantage is having to rebuild the environment each time.
Agent-level Isolation (Long-term Scenarios)
Each agent has a long-lived container. My work-agent and personal-agent are both this type—once the environment is configured, it stays there, ready to call anytime. This is my most commonly used mode.
OS User-level Isolation (Security Fanatic)
Different agents run under different system users, isolated at the operating system level. Honestly, I haven’t needed this level of strictness, but if you handle super-sensitive data (like finance, healthcare), this level is more reliable.
I’ve tested: Under Agent-level isolation, files created in agent-A are completely inaccessible to agent-B through any means, unless you actively configure a shared directory. This kind of security feels reassuring.
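If you map the three levels onto plain Docker, the difference looks roughly like this. These are illustrative docker commands, not OpenClaw’s own CLI, and the image name is a placeholder:

```bash
# Session-level: throwaway container, removed as soon as the task ends
docker run --rm -v "$PWD/task:/workspace" openclaw/agent:latest

# Agent-level: long-lived container you start once and keep reusing
docker run -d --name work-agent -v /path/to/work:/workspace openclaw/agent:latest

# OS user-level: container process runs as a dedicated, unprivileged system user
docker run -d --name finance-agent --user 1001:1001 \
  -v /srv/finance:/workspace openclaw/agent:latest
```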
My 5 Practical Scenarios
This section is the real stuff—I’m sharing my actual configuration solutions that you can directly reference.
Scenario 1: Work vs Personal Dual Environment
My pain point: Company projects and personal blog code kept mixing, and once during a commit I almost pushed company code to my personal GitHub.
Solution:
- work-agent: Working directory points to D:\work, configured with company GitLab SSH keys, can only access work-related API keys
- personal-agent: Working directory points to D:\personal, configured with personal GitHub account, can experiment freely
Configuration tips:
The two agents use different .env files, with their WORKSPACE_PATH values pointing to different directories. Here’s a small trick: on Telegram I created two bots, @my_work_bot and @my_personal_bot, so I can tell at a glance which assistant I’m chatting with.
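For reference, the handful of lines that actually differ between my two .env files look roughly like this (tokens and paths are placeholders; the variable names are the same ones used in the walkthrough later in this article):

```
# .env.work
AGENT_NAME=work-agent
WORKSPACE_PATH=D:\work
TELEGRAM_BOT_TOKEN=<token for @my_work_bot>

# .env.personal
AGENT_NAME=personal-agent
WORKSPACE_PATH=D:\personal
TELEGRAM_BOT_TOKEN=<token for @my_personal_bot>
CONTAINER_PORT=8001   # keep the ports apart so both agents can run side by side
```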
Scenario 2: Multi-Client Project Isolation
My pain point: As a freelancer handling three client projects simultaneously, previously AI’s code suggestions for client A actually used the variable naming style from client B’s project. Although nothing went wrong, I was quite nervous.
Solution:
Three agents: client-nike-agent, client-adidas-agent, client-puma-agent (aliases, of course)
Configuration tips:
- Each agent binds to independent project directories and Git repositories
- Configure a different PROJECT_NAME in each .env for easy log tracking
- Important: Each client’s API keys and database configurations are completely isolated, never cross-contaminate
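A per-client env file then looks something like this. Values are placeholders, and you repeat the pattern once per client:

```
# .env.client-nike (repeat for adidas/puma with their own values)
AGENT_NAME=client-nike-agent
PROJECT_NAME=client-nike-backend
WORKSPACE_PATH=/projects/client-nike
TELEGRAM_BOT_TOKEN=<this client's dedicated bot token>
API_KEY=<this client's own key, never reused across clients>
```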
Scenario 3: Safe Experimental Sandbox
My pain point: Wanted to test a batch file processing script, worried about incorrect logic deleting important files.
Solution:
Create sandbox-agent, enable strict sandbox mode.
Configuration tips:
```
ENABLE_NETWORK=false           # Disconnect network
HOST_FILESYSTEM_MODE=readonly  # Host filesystem read-only
TEMP_STORAGE=true              # All modifications in temp directory, cleared on container restart
```

This way you can experiment freely—worst case, delete the container and rebuild; the host environment is completely unaffected. I now test any “potentially risky” operations here first.
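If you want Docker itself to enforce the same guarantees rather than relying only on the env flags, the plain-Docker equivalents look roughly like this (illustrative sketch; image name and paths are placeholders):

```yaml
services:
  sandbox-agent:
    image: openclaw/agent:latest       # assumed image name
    env_file: .env.sandbox
    network_mode: none                 # same effect as ENABLE_NETWORK=false
    read_only: true                    # same idea as HOST_FILESYSTEM_MODE=readonly
    tmpfs:
      - /tmp                           # scratch space that vanishes with the container
    volumes:
      - /path/to/testdata:/workspace:ro
```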
Scenario 4: Team Collaboration and Personal Space
My pain point: When the team shares one AI assistant, my personal notes and to-do items are also mixed in, always feels like there’s no privacy.
Solution:
- team-agent: Team shared, configured with team codebase and documentation, everyone can access
- private-agent: My personal exclusive, records my thoughts, drafts, private to-dos
Configuration tips:
team-agent’s working directory is set to the team’s shared NAS path, while private-agent points to my local encrypted partition. On Telegram, team-agent is bound to the team group; private-agent is one only I can chat with privately.
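In compose terms the split is really just two different volume mounts. The paths below stand in for my actual NAS share and encrypted partition:

```yaml
services:
  team-agent:
    image: openclaw/agent:latest                 # assumed image name
    env_file: .env.team
    volumes:
      - /mnt/team-nas/shared:/workspace          # shared NAS path, whole team
  private-agent:
    image: openclaw/agent:latest
    env_file: .env.private
    volumes:
      - /home/me/private-encrypted:/workspace    # local encrypted partition, mine only
```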
Scenario 5: Multi-Model Comparison Testing
My pain point: I wanted to compare Claude, GPT-4, and a local Llama, but constantly switching configurations was a hassle.
Solution:
Three agents in parallel: claude-agent, gpt-agent, llama-agent
Configuration tips:
Each agent’s .env configures different LLM_PROVIDER and API_KEY. I send the same question to all three agents, then compare answer quality, speed, and cost.
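The per-agent model settings are just a couple of lines each. Something like the following works as a sketch, though the exact LLM_PROVIDER values depend on your OpenClaw version, so treat these as placeholders:

```
# .env.claude
LLM_PROVIDER=anthropic
API_KEY=<anthropic key>

# .env.gpt
LLM_PROVIDER=openai
API_KEY=<openai key>

# .env.llama
LLM_PROVIDER=ollama    # assuming a local Ollama-style endpoint is supported
API_KEY=               # local models generally need no key
```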
Interesting findings: Claude is most stable for code generation, GPT-4 is most natural for chatting, local Llama is slow but has good privacy—I use it for sensitive information.
Additional practical tips:
- Naming convention: I use a scenario-purpose-date format, like client-nike-backend-20260201, so even months later I know what it’s for from the logs
- Quick switching: Install multiple Telegram accounts on your phone, bind each account to a different agent, and swipe to switch super fast
- Resource optimization: Infrequently used agents (like llama-agent) I docker stop and start only when needed, which saves a lot of memory
Step-by-Step Guide to Configuring Your First Agent Pair
Theory covered; now for some hands-on practice. I’m assuming you’ve already installed Docker and the OpenClaw base environment (if not, check the official quick start documentation first).
Goal: configure two agents, one for work and one for personal use
Step 1: Prepare configuration files
```bash
cd openclaw
cp .env .env.work
cp .env .env.personal
```

Step 2: Modify work-agent configuration
Open .env.work, change these key parameters:
```
AGENT_NAME=work-agent
WORKSPACE_PATH=/path/to/your/work/directory
TELEGRAM_BOT_TOKEN=your_work_bot_token
ALLOWED_USERS=your_telegram_user_id
```

Step 3: Modify personal-agent configuration
Open .env.personal, similarly modify:
```
AGENT_NAME=personal-agent
WORKSPACE_PATH=/path/to/your/personal/directory
TELEGRAM_BOT_TOKEN=your_personal_bot_token
CONTAINER_PORT=8001  # Note: change the port to avoid conflicts
```

Step 4: Start both agents
```bash
docker-compose --env-file .env.work up -d
docker-compose --env-file .env.personal up -d
```

Wait a few seconds; when you see Status: Running, you’ve succeeded.
3 Tests to Verify Isolation Effectiveness
Test 1: File Isolation
- Send message to work-agent: “Create a test.txt file, write ‘work data’”
- Send message to personal-agent: “List all files in current directory”
- Result: personal-agent can’t see test.txt
Test 2: Conversation Isolation
- Chat with work-agent: “Remember, my project codename is ProjectX”
- Ask personal-agent: “What’s my project codename?”
- Result: personal-agent responds “I don’t know” because it doesn’t have work-agent’s conversation history
Test 3: Feature Isolation
- In .env.work configure ENABLE_CODE_EXECUTION=true
- In .env.personal configure ENABLE_CODE_EXECUTION=false
- Result: work-agent can execute code, personal-agent refuses to execute
If all three tests pass, congratulations, isolation is working.
Common issues:
- “Port already in use”: Check CONTAINER_PORT and ensure each agent uses a different port
- “Telegram bot not responding”: Confirm the bot token is correct and ALLOWED_USERS includes your user ID
- “File permission error”: Check the WORKSPACE_PATH directory permissions and ensure Docker has read/write access
Advanced Tips and Pitfall Guide
After using this for several months, I’ve summarized some experience to share with you.
Performance Optimization: Don’t Let Agents Eat All Your Resources
Initially I configured 5 agents and found my computer fan spinning wildly and memory usage hitting 8GB. Later I made these optimizations:
Limit resource caps
Add to docker-compose.yml:
```yaml
deploy:
  resources:
    limits:
      cpus: '0.5'
      memory: 512M
```

A single agent then uses at most half a CPU core and 512MB of memory, which is sufficient.
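One caveat: deploy.resources is honored by newer docker compose releases (and by Swarm), but some older setups ignore it. If yours does, the classic per-service keys achieve the same effect:

```yaml
services:
  work-agent:
    mem_limit: 512m   # hard memory cap
    cpus: 0.5         # at most half a CPU core
```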
Start and stop as needed
For infrequently used agents (like my experimental sandbox), write a simple start/stop script:
```bash
# start-sandbox.sh
docker start sandbox-agent-container
# After use
docker stop sandbox-agent-container
```

Share base image
All agents use the same OpenClaw base image, just different configurations. This way 5 agents only take an extra 1-2GB disk space, rather than copying everything for each.
Security Best Practices: Don’t Let Convenience Harm You
Principle of least privilege
Give each agent only the necessary permissions. For example, my personal-agent doesn’t need access to the /work directory at all, so I simply don’t mount this path in the Docker configuration.
Sensitive data handling
API keys, database passwords—always use environment variables, never hardcode in code. And I regularly check if .env files were accidentally committed to Git (.gitignore is your friend).
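Two quick git checks I run for this. These are standard git commands and assume your env files live in the repository root:

```bash
# Make sure env files are ignored going forward
echo ".env*" >> .gitignore
git check-ignore -v .env.work        # should print the matching .gitignore rule

# Check whether one ever slipped into history; any output here means rotate the keys
git log --all --oneline -- ".env*"
```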
Regular audits
Check agent operation logs weekly:
```bash
docker logs work-agent-container --since 168h | grep ERROR   # 168h covers the last 7 days
```

See if there are any abnormal operations or errors, and keep yourself informed.
Common Problem Troubleshooting
Agent won’t start
- First check the logs: docker logs <container_name>
- Nine times out of ten it’s a port conflict or a Docker permission issue
- Solution: change the port, or add your user to the docker group
Message routing failed
- Check if Telegram bot token is configured incorrectly
- Confirm the Gateway configuration’s ALLOWED_USERS includes your ID
- Use curl to test if the bot is online
File permission errors
- User ID inside Docker container doesn’t match host user ID
- Solution: Set user: "${UID}:${GID}" in docker-compose.yml
Honestly, I’ve stepped on all these pitfalls. Don’t panic when you encounter them—90% of problems can be solved by checking the logs.
Summary and Next Steps
After all this, three core points:
Why you need it: Single AI assistant mixes work, life, and experimental contexts, causing privacy leaks and reduced efficiency. Multi-agent routing solves this through physical isolation.
How to configure: Use different .env files to create multiple agents, configure independent working directories, message entries, and permissions. You can set up your first work/personal dual agent in five minutes.
Best practices: Minimum privileges, start/stop as needed, regular audits. Security and convenience can coexist—the key is not being lazy.
I now use this multi-agent system every day, and the real experience is: work is more focused (because AI won’t suddenly mention my side projects), experiments are more confident (freely experiment in sandbox), privacy is more secure (no fear of social death during demos).
Next action steps:
- Follow the tutorial above today to configure your first work/personal agent pair—it’s really not hard
- If you have questions, ask in OpenClaw’s GitHub Issues or Discord community, the people there are very friendly
- If you found this article useful, share it with friends who are also troubled by AI assistant “cross-talk”
One last thing: AI tools should make life easier, not more chaotic. Multi-agent routing is about “organizing” your AI assistants—each assistant performs its duties, and your workflow naturally becomes cleaner.
Give it a try; your future self will thank you for setting it up today.
Configure OpenClaw Dual-Agent Workflow
Start from scratch to configure two independent agents, one for work and one for personal use, achieving physical isolation between the two scenarios
⏱️ Estimated time: 10 min
Step 1: Prepare Configuration Files
Copy default environment configuration files to create independent configs for two agents:
• cd openclaw (enter project directory)
• cp .env .env.work (create work environment config)
• cp .env .env.personal (create personal environment config)
Notes:
• Ensure Docker and Docker Compose are installed
• Keep the original .env file as a template; don't modify it directly
Step 2: Configure work-agent
Edit .env.work file, modify the following key parameters:
Required parameters:
• AGENT_NAME=work-agent (unique agent identifier)
• WORKSPACE_PATH=/path/to/work (absolute path to working directory)
• TELEGRAM_BOT_TOKEN=your_work_bot_token (Telegram bot token)
• ALLOWED_USERS=your_telegram_id (authorized user ID)
Optional parameters:
• ENABLE_CODE_EXECUTION=true (allow code execution)
• GIT_CONFIG_PATH=/path/to/work/.gitconfig (Git config path)
• CONTAINER_PORT=8000 (default port)
Get Telegram bot token: Create new bot through @BotFather
Get user ID: Send any message to @userinfobot
Step 3: Configure personal-agent
Edit the .env.personal file; the key is avoiding conflicts with work-agent:
Required parameters:
• AGENT_NAME=personal-agent
• WORKSPACE_PATH=/path/to/personal (different directory)
• TELEGRAM_BOT_TOKEN=your_personal_bot_token (different bot)
• CONTAINER_PORT=8001 (different port to avoid conflicts)
• ALLOWED_USERS=your_telegram_id (same user ID)
Security recommendations:
• Personal environment can have relaxed permission restrictions
• For stricter isolation, use different API_KEY
• Ensure WORKSPACE_PATH points to a completely independent directory
Step 4: Start and Verify Isolation Effectiveness
Start both agents and test if isolation is working:
Startup commands:
• docker-compose --env-file .env.work up -d
• docker-compose --env-file .env.personal up -d
• docker ps (check container status, should see two Running)
Verification tests:
1. File isolation: Create file in work-agent, check if visible in personal-agent (should not be visible)
2. Conversation isolation: Set variable in work-agent, query in personal-agent (should have no memory)
3. Feature isolation: Test code execution, file access permissions separately
View logs:
• docker logs <container_name> (troubleshoot startup issues)
• docker logs -f <container_name> (view logs in real time)
Step 5: Daily Usage and Maintenance
Daily operations and maintenance tips after configuration:
Quick switching:
• Install multiple Telegram accounts, each account binds to different bot
• Or use different devices (mobile/computer) to access different agents
Resource management:
• For infrequently used agents execute docker stop <container_name>
• When needed execute docker start <container_name>
• Review recent logs only: docker logs --since 168h <container_name> (covers the last 7 days)
Security checks:
• Weekly check if .env files were accidentally committed to Git
• Regular audit of operation logs: docker logs <container_name> | grep ERROR
• Restart container after updating bot token: docker-compose restart
Performance optimization:
• Limit single agent resource caps in docker-compose.yml (CPU 0.5 core, memory 512MB)
• Multiple agents share base image, saving disk space
FAQ
Why recommend Agent-level isolation over Session-level isolation?
• Session-level isolation: Creates new container for each conversation, destroys when done, suitable for one-time tasks but requires repeated environment setup
• Agent-level isolation: Container lives long-term, environment configuration is persistent, available anytime—my work/personal agents are both this mode
Actual experience: Agent-level isolation runs continuously once started, fast response (no waiting for container creation), environment variables and dependencies are retained, particularly suitable for daily workflows. The only downside is some resource usage, but you can docker stop unused agents anytime.
Will multiple agents consume too much memory and disk?
Memory optimization:
• Limit single agent to 512MB memory (configure resources.limits in docker-compose.yml)
• 5 agents total about 2.5GB memory, completely sufficient for laptops
• Execute docker stop on infrequently used agents to free memory
Disk optimization:
• All agents share the same OpenClaw base image (about 500MB)
• Each agent's additional usage is only differential data (usually <100MB)
• 5 agents total 1-2GB disk space, not 5×500MB
My actual setup: 3 resident agents (work/personal/sandbox), another 2 started as needed, 16GB RAM computer has no pressure at all.
How to avoid confusion between different agents' Telegram bot tokens?
Bot naming convention:
• Work environment: @my_work_bot, @company_project_bot
• Personal environment: @my_personal_bot, @my_hobby_bot
• Experimental environment: @my_sandbox_bot
Quick switching methods:
• Install multiple Telegram accounts on mobile, each account binds to different bot (swipe to switch super fast)
• Or within same account, quickly distinguish by bot username (@work vs @personal)
• Set bot avatars with different colors (work=blue, personal=green, sandbox=yellow)
Management tips:
• Record all bot tokens centrally in password manager (1Password/Bitwarden)
• Add comments to .env files noting purposes
• Regularly check .gitignore to ensure tokens aren't committed to repository
Can personal-agent access work-agent's files?
Isolation mechanism:
• Each agent's WORKSPACE_PATH points to different directories (/work vs /personal)
• Docker container only mounts respective WORKSPACE_PATH during mounting, OS-level isolation
• Files created by agent-A are completely inaccessible to agent-B through any means
Special requirement scenarios:
• If you really need to share certain files (like common config files), can configure shared directory
• Add additional volume mount in docker-compose.yml: /shared:/shared:ro (read-only mode is safer)
• But I don't recommend this, it breaks isolation—better to manually copy needed files
My practice: Work and personal are completely isolated, when sharing is needed use Git or cloud storage as intermediary, maintaining agent purity.
Can sandbox-agent still use AI models after disconnecting network?
Disconnection limitations:
• After ENABLE_NETWORK=false, cannot access cloud APIs like OpenAI/Claude
• Suitable for testing file operations, script execution scenarios that don't need AI inference
Solutions:
• Use local models (Llama/Mistral): Deploy local inference services like Ollama inside sandbox
• Or keep network on but limit access scope: Configure firewall rules to only allow specific API domains
• My config: Sandbox enables network but HOST_FILESYSTEM_MODE=readonly, allows calling AI but can't modify host files
Applicable scenarios:
• Pure file operation testing: Disconnect network + read-only filesystem
• Experiments requiring AI assistance: Enable network but strictly limit file permissions
• Choose isolation strategy based on risk level
How to configure ALLOWED_USERS for team multi-user usage?
Configuration examples:
• Single user: ALLOWED_USERS=123456789
• Multiple users: ALLOWED_USERS=123456789,987654321,555666777
• Team scenario: team-agent configures all team member IDs, private-agent only configures personal ID
Methods to get user IDs:
1. Have each member send message to @userinfobot to get their own Telegram ID
2. Team admin collects centrally and configures in .env.work
3. Restart container after updating config: docker-compose restart
Security recommendations:
• Regularly audit ALLOWED_USERS list, remove departed members
• Use independent agents for sensitive projects, only authorize to project team members
• Operation logs record each user's ID for easy tracing
My practice: team-agent authorized to 5-person team, private-agent only myself—clear boundary between work and privacy.
What if a configuration error causes an agent to fail to start?
First step: Check container logs
• docker logs <container_name> (view startup logs)
• 90% of errors will be clearly indicated in logs (port conflicts/insufficient permissions/configuration errors)
Common errors and solutions:
• "Port already in use": Modify CONTAINER_PORT in .env to unused port (8001/8002 etc.)
• "WORKSPACE_PATH doesn't exist": Check if directory path is correct, or manually create directory
• "Invalid Telegram bot token": Confirm token is correct with @BotFather, watch for leading/trailing spaces when copying
• "Permission denied": Check WORKSPACE_PATH directory permissions, execute chmod 755 <directory>
Debugging tips:
• Compare with normally running agent config files to find differences
• Test with minimal configuration: Only modify AGENT_NAME/WORKSPACE_PATH/TELEGRAM_BOT_TOKEN
• Add optional configs gradually to locate which parameter causes problem
Really stuck: delete the container and start over (docker-compose down && docker-compose up); the advantage of Docker isolation is that you can experiment freely.