
Reviewed by Harsh Desai
NanoClaw
Your personal AI assistant on WhatsApp, Telegram, Discord and Slack
Best for
NanoClaw is a free, open-source personal AI agent built by NanoCo on top of Anthropic's official Claude Agent SDK. Instead of running one giant Node process with shared memory (the OpenClaw model), NanoClaw spins up a separate Linux container per agent group, mounts only the folders you explicitly allow, and routes credentials through OneCLI's Agent Vault so raw API keys never enter the container. The whole trunk is one process plus a handful of source files, so you can read the entire codebase in an afternoon: 28,691 GitHub stars and 122 active subscribers in 2026 suggest a lot of self-hosters wanted exactly that.
What NanoClaw does:
- **Container-isolated agents**: every agent group runs in its own Docker container (Apple Container on macOS as an opt-in, Docker Sandboxes micro-VM as an extra layer), so a compromised agent cannot read your home directory or other agents' state.
- **13+ messaging channels on demand**: WhatsApp, Telegram, Discord, Slack, Microsoft Teams, iMessage, Matrix, Google Chat, Webex, Linear, GitHub, WeChat, and email via Resend, each installed from the CLI by running `/add-telegram`, `/add-whatsapp`, etc. inside Claude Code. Trunk stays clean; you only carry channels you actually use, and a local `nanoclaw` CLI ships alongside the bash installer for direct interactive use.
- **Flexible per-channel isolation**: connect each channel to its own agent for hard privacy, share one agent across many channels for unified memory with separate threads, or fold multiple channels into one shared session. Set per channel via `/manage-channels`.
- **Scheduled jobs that talk back**: recurring tasks like "every weekday at 9am, summarise the sales pipeline from my Obsidian vault and send it to Telegram" are first-class; there is no cron file to edit.
- **Hybrid AI-native install**: `bash nanoclaw.sh` runs a deterministic scripted path (Node, pnpm, Docker, OneCLI, container build, channel pairing). When a step fails, control hands off to Claude Code automatically to diagnose and resume from the broken step.
- **Multi-provider per agent group**: Claude is the default via the Agent SDK, but `/add-codex` adds OpenAI Codex (ChatGPT subscription or API key), `/add-opencode` adds OpenRouter, Google, DeepSeek and others via OpenCode, and `/add-ollama-provider` adds local open-weight models. Each agent group can run on a different backend.
- **Per-agent CLAUDE.md and memory**: every group has its own `CLAUDE.md`, its own SQLite memory, its own container, and only the mounts you allow. Nothing crosses unless you wire it.
- **Customise by editing code, not config**: there are no settings files. To change behaviour you tell Claude Code ("change the trigger word to @Bob", "make replies shorter") and it modifies your fork. The codebase is small enough that this is safe.
- **Two-database session architecture**: `inbound.db` and `outbound.db` per session, each with exactly one writer, no IPC, no stdin piping. The host process polls `outbound.db` and delivers via the channel adapter.
- **Web search and fetch built in**: the agent-runner inside the container ships with web tools, so the AI can pull live information without you wiring up an MCP server.
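The two-database session model above can be sketched in a few lines. This is a minimal illustration under assumed names (a `messages` table with an auto-increment `id`), not NanoClaw's actual schema: the agent inside the container is the only writer, and the host only reads, tracking a high-water mark.

```python
import os
import sqlite3
import tempfile

# Hypothetical sketch of the outbound.db half of the model: the agent in
# the container is the SOLE writer; the host process only reads, keeping
# the last delivered row id in memory, so no IPC or stdin piping is needed.
db_path = os.path.join(tempfile.mkdtemp(), "outbound.db")

with sqlite3.connect(db_path) as db:
    db.execute(
        "CREATE TABLE messages (id INTEGER PRIMARY KEY AUTOINCREMENT, body TEXT)")

def agent_write(text: str) -> None:
    """Container side: append a reply (the one writer of outbound.db)."""
    with sqlite3.connect(db_path) as db:
        db.execute("INSERT INTO messages (body) VALUES (?)", (text,))

def host_poll(last_id: int, deliver) -> int:
    """Host side: read rows newer than last_id, hand each body to a
    channel adapter's deliver() callback, return the new high-water mark."""
    with sqlite3.connect(db_path) as db:
        rows = db.execute(
            "SELECT id, body FROM messages WHERE id > ? ORDER BY id",
            (last_id,)).fetchall()
    for row_id, body in rows:
        deliver(body)
        last_id = row_id
    return last_id

agent_write("Pipeline summary ready.")
sent = []
cursor = host_poll(0, sent.append)   # delivers the one pending message
```

The same shape, mirrored, covers `inbound.db`, with the host as the single writer and the agent polling.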
Pricing:
- **NanoClaw itself**: $0, MIT licensed forever. Clone it, run `bash nanoclaw.sh`, and you're done.
- **Claude API or subscription**: required to actually run the agents. Anthropic's Claude API is pay-as-you-go (about $3 per million input tokens for Sonnet in 2026), or use a Claude Pro/Max subscription via Claude Code.
- **Optional providers**: bring your own OpenAI key for Codex, an OpenRouter key for OpenCode, or run Ollama locally for free.
Limitations:
- **Docker familiarity required**: self-hosting expects basic Docker, Linux or WSL2 skills and a working Docker Desktop install. Non-technical users will struggle without a developer friend.
- **Setup needs Claude Code installed**: `/customize`, `/debug`, error recovery during setup, and every `/add-<channel>` skill all run inside Claude Code, so you must install it from claude.ai/download first.
- **Windows is WSL2-only**: there is no native Windows runtime. You run NanoClaw inside WSL2, which adds an install layer for Windows-first users.
- **No monitoring dashboard**: there is no web UI for logs, metrics, or scheduled-task status. Debugging is done by asking Claude Code in chat, which is fast for AI-native users but a hard left turn if you expect a Grafana panel.
- **Trunk only accepts security and bug fixes**: new channels and providers must land on the `channels` or `providers` branches as skills. If you need a feature merged into mainline, expect a no.
Our Verdict
NanoClaw scores 8/10 because it is the cleanest answer in 2026 to the question "how do I run a personal AI agent across WhatsApp, Telegram, Discord and Slack without trusting half a million lines of code I never read?" The MIT licence, Anthropic Agent SDK foundation, container-per-agent isolation, OneCLI credential vault, and 28,691-star GitHub momentum make it the lightweight successor to OpenClaw that security-conscious self-hosters have been waiting for. Docker's March 2026 partnership announcement adding Docker Sandboxes micro-VM isolation pushed it from "interesting alternative" into "production-ready" territory.
For the Vibe Builder, NanoClaw is the right pick if you want a personal AI on every channel you already use (Telegram, WhatsApp, Discord) and you are comfortable opening a terminal and running `bash nanoclaw.sh`. It will not hold your hand the way a SaaS chatbot would, but the install script + Claude Code error recovery makes the first run smoother than any other self-hosted agent of this size, and once running you customise it by chatting to Claude Code rather than editing config files.
For the Developer, this is the most readable Claude Agent SDK reference in the wild. The two-database session model, channel adapter registry, and Docker-isolated runner are the architecture you would want to copy for any internal AI agent product. The trunk is small enough to read end-to-end, the Agent Vault credential pattern (via OneCLI) is genuinely good, and the channels/providers branch model means you can fork it and ship a private agent without inheriting features you do not want.
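The channel adapter registry mentioned here is a pattern worth copying in its own right. A generic sketch of the idea, with all names hypothetical rather than NanoClaw's actual interfaces:

```python
from typing import Callable, Dict

# Hypothetical adapter registry: each channel registers a send function
# under its name; the host looks up the adapter for a session's channel
# and pushes outbound messages through it, never knowing channel details.
ADAPTERS: Dict[str, Callable[[str, str], None]] = {}

def register(name: str):
    def wrap(fn: Callable[[str, str], None]):
        ADAPTERS[name] = fn
        return fn
    return wrap

outbox = []  # stand-in for real network calls, so the sketch is testable

@register("telegram")
def send_telegram(chat_id: str, text: str):
    # A real adapter would call the Telegram Bot API here.
    outbox.append(("telegram", chat_id, text))

@register("slack")
def send_slack(channel: str, text: str):
    # A real adapter would call Slack's chat.postMessage here.
    outbox.append(("slack", channel, text))

def deliver(channel: str, target: str, text: str):
    """Host-side dispatch: route a message through the right adapter."""
    ADAPTERS[channel](target, text)

deliver("telegram", "42", "Morning summary ready.")
```

Adding a channel is then just one more registered function in your fork, which is exactly the property the `/add-<channel>` skill model relies on.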
Skip it if you want a hosted no-Docker chatbot or a polished web dashboard with metrics and audit logs; consider OpenClaw for the easier first install on messaging apps, or try Open WebUI for team-wide self-hosted ChatGPT with RBAC and a real admin UI.
Frequently Asked Questions
How much does NanoClaw cost?
NanoClaw itself is free and open-source under the MIT licence in 2026. You only pay for the LLM you point it at: an Anthropic Claude API key (about $3 per million input tokens for Sonnet) or a Claude Pro/Max subscription via Claude Code. Optional providers like OpenAI Codex, OpenRouter via OpenCode, or local Ollama models all use bring-your-own credentials, so you can run NanoClaw on free local models if you have the hardware.
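A rough back-of-envelope for that pay-as-you-go price, using the $3-per-million figure above; the message volume and token counts are illustrative assumptions, not measurements:

```python
# Rough monthly input-token cost at $3 per million input tokens (Sonnet).
price_per_million = 3.00
messages_per_day = 50        # assumed usage, not a measured figure
tokens_per_message = 2_000   # assumed prompt + context size per message

monthly_tokens = messages_per_day * tokens_per_message * 30
monthly_cost = monthly_tokens / 1_000_000 * price_per_million
# 50 * 2,000 * 30 = 3,000,000 input tokens -> $9.00/month, input side only
```

Output tokens are billed separately and at a higher rate, so treat this as a floor, not an estimate of your total bill.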
NanoClaw vs OpenClaw: which should I pick in 2026?
Pick OpenClaw if you want the largest community, the easiest first install, and the broadest plugin ecosystem. Pick NanoClaw if you want a codebase you can read in a sitting, true container-level isolation per agent group, and a custom-fork-by-default philosophy. NanoClaw was explicitly built as a lightweight, security-first alternative to OpenClaw's ~500k-line, 70-dependency, single-process model, and Docker, Inc. partnered with NanoCo in March 2026 to add Docker Sandboxes micro-VM isolation on top.
Which messaging apps does NanoClaw support?
Out of the box NanoClaw supports WhatsApp, Telegram, Discord, Slack, Microsoft Teams, iMessage, Matrix, Google Chat, Webex, Linear, GitHub, WeChat, and email via Resend. Each channel is installed on demand with a `/add-<channel>` skill that copies only the adapter you need into your fork, so trunk stays minimal. You can run one or many channels at the same time and pick per-channel isolation in `/manage-channels`.
Does NanoClaw run on Windows?
Yes, but only via WSL2. NanoClaw uses Docker as the default container runtime, which works natively on macOS and Linux and through WSL2 on Windows. macOS users can also opt into Apple Container for a lighter native runtime via `/convert-to-apple-container`. There is no plain-Windows install path, so Windows-only users need WSL2 set up first before running `bash nanoclaw.sh`.
Is NanoClaw safe to run on my personal machine?
NanoClaw is built around isolation: each agent group runs in its own Linux container, can only see explicitly mounted folders, and never holds raw API keys (outbound calls go through OneCLI's Agent Vault, which injects credentials at the proxy level with per-agent rate limits and policies). With Docker Sandboxes, agents run inside a micro-VM so even a container-escape zero-day is contained. The codebase is small enough that you can audit the full trunk yourself, which is the whole point of choosing NanoClaw over OpenClaw.
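The mount model can be pictured as the kind of `docker run` invocation such a runner might build. This is a hypothetical sketch (the image name and flag choices are illustrative, not NanoClaw's actual launch code):

```python
# Build (but do not execute) a per-group container launch command that
# mounts exactly one explicitly allowed folder, read-only. Everything
# else on the host stays invisible to the agent.
def container_cmd(group: str, allowed_dir: str) -> list:
    return [
        "docker", "run", "--rm",
        "--name", f"nanoclaw-{group}",         # one container per agent group
        "-v", f"{allowed_dir}:/workspace:ro",  # only host path visible inside
        "nanoclaw-agent:latest",               # hypothetical image name
    ]

cmd = container_cmd("family", "/home/me/obsidian-vault")
```

Dropping the `:ro` suffix would let that agent group write back to the folder, which is the kind of per-mount decision you make explicitly when wiring a group.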
What is NanoClaw?
NanoClaw is your personal AI assistant on WhatsApp, Telegram, Discord and Slack: a free, open-source agent built by NanoCo on Anthropic's Claude Agent SDK, with each agent group isolated in its own container.
Is NanoClaw free?
Yes. NanoClaw itself is entirely free and open-source under the MIT licence; you only pay for the LLM provider you connect, and local Ollama models cost nothing.
Who should use NanoClaw?
NanoClaw is built for vibe builders who want AI to handle the technical work and developers looking to accelerate their workflow. Common use cases include personal AI agents, messaging-app automation, self-hosted AI, scheduled AI tasks, and container-isolated agents.
What are the best alternatives to NanoClaw?
Popular alternatives to NanoClaw include OpenClaw, Claude Code, and Hermes Agent. Compare features and pricing in our AI Agents directory.
Affiliate link: we may earn a commission. How this works.