WeChat AI Agent Bridge
WeClaw installs locally, opens a QR login on first run, auto-detects ACP, CLI, and HTTP integrations, and routes real chat threads through the agent stack you already trust.
- 3 transports: ACP, CLI, HTTP
- 7 integrations: Claude to OpenClaw
- 1 thread surface: reply, route, send
Operator Flow
One-line install; first run prompts a WeChat QR login.

```shell
curl -sSL https://raw.githubusercontent.com/fastclaw-ai/weclaw/main/install.sh | sh
```

Compatibility
Use WeChat as the thread surface without replacing the tools, models, or launch path you already run locally.
Workflow
Evaluation friction is predictable: show the install path, the authentication step, and the first operator action before asking people to trust the feature list.
1. Run the shell installer, Go install command, or Docker image. WeClaw is meant to sit beside the agent binaries you already operate.
2. On first run, scan the WeChat QR code. WeClaw checks ACP, CLI, and HTTP availability, then writes the active local config.
3. Use slash commands, aliases, default-agent switching, and outbound sends to keep the conversation inside WeChat instead of learning another operator UI.
Why WeClaw
WeClaw does a small number of hard things well: message routing, transport selection, media cleanup, outbound sends, and explicit configuration.
Keep a default agent for normal traffic, override a single prompt with `/codex` or `/claude`, and switch thread defaults without leaving WeChat.
When ACP is available, WeClaw keeps a persistent JSON-RPC bridge alive so agent processes and session state stay warm.
If ACP is not available, WeClaw still routes messages through CLI invocation or an OpenAI-compatible HTTP chat endpoint.
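The HTTP fallback follows a widely used shape. As a minimal sketch, the request body for an OpenAI-compatible chat endpoint looks like the following; the model name and the endpoint URL in the comment are assumptions, only the payload structure follows the standard chat-completions format:

```python
import json

# Sketch of an OpenAI-compatible chat request body.
# "local-agent" is an assumed model identifier, not a WeClaw default.
payload = {
    "model": "local-agent",
    "messages": [{"role": "user", "content": "explain this diff"}],
}
body = json.dumps(payload)
# A bridge like WeClaw would POST this body to the configured endpoint,
# e.g. http://localhost:PORT/v1/chat/completions (URL assumed).
```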
WeClaw flattens agent markdown into WeChat-safe text, turns image URLs into uploads, and keeps files as files instead of broken links.
Operators can send text or media proactively, without waiting for a fresh inbound message, either from the CLI or from a local HTTP endpoint.
Config lives in `~/.weclaw/config.json`, default-agent selection is explicit, and environment overrides stay visible where they matter.
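Because the config is plain JSON at a documented path, reading it is ordinary file parsing. The sketch below is hypothetical: the key names are illustrative assumptions, not WeClaw's actual schema; only the file location comes from the docs.

```python
import json
from pathlib import Path

# Illustrative config shape; these key names are assumptions,
# not WeClaw's documented schema.
example = {
    "default_agent": "claude",
    "transports": {"acp": True, "cli": True, "http": False},
}

path = Path.home() / ".weclaw" / "config.json"  # documented location
# Fall back to the sketch when no bridge is installed locally.
cfg = json.loads(path.read_text()) if path.exists() else example
```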
Quick Start
The shortest path matters more than feature volume. The home page should make the first run obvious.
Use the shell script for the fastest evaluation path, or choose Go or Docker if that better matches your workstation.
install

```shell
curl -sSL https://raw.githubusercontent.com/fastclaw-ai/weclaw/main/install.sh | sh
```

Run `weclaw start`, authenticate with WeChat, and let the bridge write the local config it will reuse later.
start

```shell
weclaw start
```

Talk to the default agent, send a one-off request to a named model, or inspect bridge status without leaving the thread.
chat commands

```
/codex write a sorting function
/cc explain this diff
/status
```

Command Routing
Stay in the WeChat message box: send to the default agent, switch defaults, or force a one-off route with a slash command.
`hello`
Send a message to the current default agent without prefixing an explicit route.
`/codex write a sorting function`
Route one request to Codex without changing the default agent for the rest of the thread.
`/cc explain this diff`
Use the Claude alias when you want a shorter, chat-friendly route inside the message box.
`/claude`
Switch the default agent to Claude and persist that choice in local config.
Media
WeClaw cleans agent output for WeChat, turns image URLs into actual uploads, and keeps operators out of copy-paste cleanup.
Outbound
Use `weclaw send` or the local endpoint when an automation or handoff needs to push text, images, or files before the next inbound message.
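As a hypothetical sketch of what an automation might push to that local endpoint, the payload below is purely illustrative: the field names and the thread identifier are assumptions, not WeClaw's documented API.

```python
import json

# Hypothetical outbound-send payload; every field name here is an
# assumption for illustration, not WeClaw's actual request schema.
payload = {"to": "thread-id", "type": "text", "content": "build finished"}
body = json.dumps(payload)
# An automation would POST this to WeClaw's local endpoint, or pass
# equivalent values as arguments to `weclaw send`.
```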
FAQ Preview
Most evaluation friction is predictable: supported agents, first-run behavior, config location, and outbound sending.
Prospective users usually need four facts before they install: what WeClaw is, which stacks it supports, where config lives, and how outbound sends work. The FAQ answers those directly.
Read the full FAQ

Get Started
The upstream repository currently labels the project for personal-learning usage. Review the README, license, and your own operational requirements before moving beyond evaluation.