Installing OpenClaw ("Crayfish") on Windows with Auto-Start Configuration
A Windows installation and operations guide for OpenClaw
1. Architecture Overview
```
┌─────────────────────────────────────────────────┐
│ Windows 11                                      │
│                                                 │
│ ┌──────────────┐     ┌───────────────────────┐  │
│ │ LiteLLM Proxy│────►│ Azure OpenAI GPT      │  │
│ │ :4000        │     │ (gpt-5.4)             │  │
│ └──────┬───────┘     └───────────────────────┘  │
│        │ streaming SSE                          │
│ ┌──────┴───────┐     ┌───────────────────────┐  │
│ │ OpenClaw GW  │────►│ Feishu WebSocket      │  │
│ │ :18789       │     │ (persistent conn.)    │  │
│ └──────┬───────┘     └───────────────────────┘  │
│        │                                        │
│ ┌──────┴───────┐                                │
│ │ Control UI   │  http://127.0.0.1:18789/      │
│ │ (browser)    │                                │
│ └──────────────┘                                │
│                                                 │
│ Scheduled Tasks:                                │
│ 1. "OpenClaw Bootstrap" (ONLOGON) → start       │
│    LiteLLM + trigger the Gateway                │
│ 2. "OpenClaw Gateway" (triggered by Bootstrap)  │
└─────────────────────────────────────────────────┘
```
2. Component List
| Component | Version | Path |
|---|---|---|
| Node.js | v22.22.1 (LTS) | C:\Program Files\nodejs\ |
| OpenClaw | 2026.3.13 | %APPDATA%\npm\node_modules\openclaw\ |
| Python | 3.12 | %LOCALAPPDATA%\Programs\Python\Python312\ |
| LiteLLM Proxy (custom script) | — | C:\Users\<user>\.openclaw\litellm_proxy.py |
| Bootstrap script | — | C:\Users\<user>\.openclaw\openclaw_bootstrap.ps1 |
3. Installation Steps
3.1 Install Node.js
⚠️ Version requirement: >= 22.16.0 (LTS). Do not install non-LTS releases such as Node 24.
```shell
# Download the Node.js 22.x LTS MSI
curl -L -o node-v22.22.1-x64.msi "https://nodejs.org/dist/v22.22.1/node-v22.22.1-x64.msi"
# Install as administrator
msiexec /i node-v22.22.1-x64.msi /qn /norestart
# Verify
node --version   # should print v22.22.1
npm --version
```
3.2 Install OpenClaw
```shell
npm install -g openclaw
# Verify
openclaw --version
```
3.3 Initialize OpenClaw (interactive wizard)
```shell
openclaw setup
```
Key configuration items:
- LLM Provider: litellm
- Base URL: http://localhost:4000
- API Key: sk-openclaw-litellm-key-12345 (must match the LiteLLM config)
- Model: gpt-5.4
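Before moving on, it is worth checking that the Base URL and API key entered above actually reach the proxy's OpenAI-compatible surface. A minimal standard-library sketch (the `models_request` helper is ours, and it assumes the default values shown above):

```python
import urllib.request

def models_request(base_url="http://localhost:4000",
                   api_key="sk-openclaw-litellm-key-12345"):
    # Build the GET /v1/models request that an OpenAI-compatible client
    # would issue against the LiteLLM proxy.
    return urllib.request.Request(
        f"{base_url}/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = models_request()
```

With the proxy running, `urllib.request.urlopen(req)` should return the single gpt-5.4 entry served by `/v1/models`.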
3.4 Install Python + LiteLLM
```shell
# Install Python 3.12 (download from python.org)
# Install LiteLLM
pip install litellm uvicorn fastapi
```
3.5 Deploy the LiteLLM Proxy Script
File: C:\Users\<user>\.openclaw\litellm_proxy.py
```python
import uvicorn, os, json

os.environ["LITELLM_DONT_FETCH_COST_MAP"] = "true"
os.environ["LITELLM_LOCAL_MODEL_COST_MAP"] = "true"

from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse, StreamingResponse
from litellm import completion

app = FastAPI()

AZURE_KEY = "<your-azure-api-key>"
AZURE_BASE = "<your-azure-endpoint>"
AZURE_VERSION = "2025-04-01-preview"

def safe_serialize(obj):
    if obj is None:
        return None
    if isinstance(obj, (str, int, float, bool)):
        return obj
    if isinstance(obj, dict):
        return {k: safe_serialize(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [safe_serialize(i) for i in obj]
    if hasattr(obj, 'model_dump'):
        return safe_serialize(obj.model_dump())
    if hasattr(obj, '__dict__'):
        return safe_serialize({k: v for k, v in obj.__dict__.items() if not k.startswith('_')})
    return str(obj)

async def do_chat(request: Request):
    body = await request.json()
    model = body.pop("model", "azure/gpt-5.4")
    if not model.startswith("azure/"):
        model = "azure/gpt-5.4"
    body["model"] = model
    body["api_key"] = AZURE_KEY
    body["api_base"] = AZURE_BASE
    body["api_version"] = AZURE_VERSION
    if "max_tokens" in body:
        body["max_completion_tokens"] = body.pop("max_tokens")
    is_stream = body.get("stream", False)
    if is_stream:
        body["stream"] = True
        try:
            resp = completion(**body)
            def generate():
                for chunk in resp:
                    chunk_data = safe_serialize(chunk)
                    yield f"data: {json.dumps(chunk_data)}\n\n"
                yield "data: [DONE]\n\n"
            return StreamingResponse(
                generate(),
                media_type="text/event-stream",
                headers={"Cache-Control": "no-cache", "Connection": "keep-alive", "X-Accel-Buffering": "no"}
            )
        except Exception as e:
            return JSONResponse(status_code=500, content={"error": {"message": str(e), "type": "server_error"}})
    else:
        body["stream"] = False
        try:
            resp = completion(**body)
            return JSONResponse(content=safe_serialize(resp))
        except Exception as e:
            return JSONResponse(status_code=500, content={"error": {"message": str(e), "type": "server_error"}})

app.post("/v1/chat/completions")(do_chat)
app.post("/chat/completions")(do_chat)
app.post("/v1/completions")(do_chat)
app.post("/completions")(do_chat)

@app.get("/health")
async def health():
    return {"status": "healthy"}

@app.get("/v1/models")
@app.get("/models")
async def models():
    return {"object": "list", "data": [{"id": "gpt-5.4", "object": "model", "owned_by": "azure"}]}

if __name__ == "__main__":
    uvicorn.run(app, host="127.0.0.1", port=4000)
```
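Two quirks in this script are easy to miss: non-`azure/` model ids get pinned to the Azure deployment, and `max_tokens` is renamed to `max_completion_tokens`. A self-contained sketch of just that normalization step (the helper name is ours):

```python
def normalize_body(body: dict) -> dict:
    # Mirror the request normalization litellm_proxy.py performs before
    # calling litellm.completion (credentials omitted here).
    model = body.pop("model", "azure/gpt-5.4")
    if not model.startswith("azure/"):
        model = "azure/gpt-5.4"  # pin non-Azure ids to the deployment
    body["model"] = model
    # The Azure deployment expects max_completion_tokens rather than
    # max_tokens, so the proxy renames the field.
    if "max_tokens" in body:
        body["max_completion_tokens"] = body.pop("max_tokens")
    return body

out = normalize_body({"model": "gpt-5.4",
                      "messages": [{"role": "user", "content": "hi"}],
                      "max_tokens": 256})
# out["model"] is now "azure/gpt-5.4"; max_tokens became max_completion_tokens
```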
3.6 Deploy the Bootstrap Script
File: C:\Users\<user>\.openclaw\openclaw_bootstrap.ps1
```powershell
# OpenClaw Bootstrap - Start LiteLLM Proxy then OpenClaw Gateway
$ErrorActionPreference = "Continue"
$logFile = "$env:TEMP\openclaw\bootstrap.log"

function Log($msg) {
    $ts = Get-Date -Format "yyyy-MM-dd HH:mm:ss"
    "$ts $msg" | Out-File -Append -FilePath $logFile -Encoding utf8
}

New-Item -ItemType Directory -Force -Path "$env:TEMP\openclaw" | Out-Null
Log "=== Bootstrap starting ==="

# Start LiteLLM if not running
try {
    Invoke-WebRequest -Uri "http://localhost:4000/health" -TimeoutSec 3 -ErrorAction Stop
    Log "LiteLLM already running"
} catch {
    Log "Starting LiteLLM Proxy..."
    Start-Process -FilePath "<python-path>" -ArgumentList "<proxy-script-path>" -WindowStyle Hidden
    $maxAttempts = 30
    $attempt = 0
    $ready = $false
    while ($attempt -lt $maxAttempts) {
        Start-Sleep -Seconds 2
        $attempt++
        try {
            Invoke-WebRequest -Uri "http://localhost:4000/health" -TimeoutSec 3 -ErrorAction Stop
            $ready = $true; break
        } catch {
            Log "LiteLLM not ready (attempt $attempt/$maxAttempts)"
        }
    }
    if (-not $ready) { Log "ERROR: LiteLLM failed"; exit 1 }
    Log "LiteLLM healthy"
}

# Trigger OpenClaw Gateway
Log "Triggering OpenClaw Gateway..."
schtasks /Run /TN "OpenClaw Gateway" 2>&1 | Out-Null
Log "=== Bootstrap complete ==="
```
3.7 Configure Scheduled Tasks
```powershell
# 1. OpenClaw Bootstrap (starts LiteLLM + triggers the Gateway)
schtasks /Create /TN "OpenClaw Bootstrap" `
  /TR "powershell.exe -ExecutionPolicy Bypass -WindowStyle Hidden -File C:\Users\<user>\.openclaw\openclaw_bootstrap.ps1" `
  /SC ONLOGON /F /RL HIGHEST

# Remove the 72-hour execution time limit
Set-ScheduledTask -TaskName "OpenClaw Bootstrap" -Settings (New-ScheduledTaskSettingsSet -ExecutionTimeLimit "00:00:00")

# 2. OpenClaw Gateway (managed by openclaw itself, but the time limit still has to go)
Set-ScheduledTask -TaskName "OpenClaw Gateway" -Settings (New-ScheduledTaskSettingsSet -ExecutionTimeLimit "00:00:00" -RestartCount 3 -RestartInterval (New-TimeSpan -Minutes 1))
```
Startup order: user logon → Bootstrap → LiteLLM Proxy (:4000) → Gateway (:18789)
4. Configuration Files
Key settings in openclaw.json
```json
{
  "models": {
    "providers": {
      "litellm": {
        "baseUrl": "http://localhost:4000",
        "apiKey": "sk-openclaw-litellm-key-12345",
        "api": "openai-completions",
        "models": [
          {
            "id": "gpt-5.4",
            "name": "GPT-5.4 (Azure via LiteLLM)",
            "reasoning": false,
            "contextWindow": 200000,
            "maxTokens": 16000
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": { "model": { "primary": "litellm/gpt-5.4" } }
  },
  "channels": {
    "feishu": {
      "enabled": true,
      "appId": "<feishu-app-id>",
      "appSecret": "<feishu-app-secret>",
      "connectionMode": "websocket"
    }
  }
}
```
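A typo in the `primary` reference (provider prefix or model id) silently breaks routing, so a small sanity check pays off. A sketch, assuming `primary` takes the `provider/modelId` form shown above (the function name is ours):

```python
import json

def check_primary_model(cfg: dict) -> bool:
    # Split "litellm/gpt-5.4" into provider key and model id, then verify
    # that the provider actually declares a model with that id.
    provider, _, model_id = cfg["agents"]["defaults"]["model"]["primary"].partition("/")
    models = cfg["models"]["providers"].get(provider, {}).get("models", [])
    return any(m.get("id") == model_id for m in models)

cfg = json.loads("""{
  "models": {"providers": {"litellm": {"models": [{"id": "gpt-5.4"}]}}},
  "agents": {"defaults": {"model": {"primary": "litellm/gpt-5.4"}}}
}""")
ok = check_primary_model(cfg)   # True for the config above
```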
5. Pitfalls
5.1 ❌ Node.js 24 incompatibility
Symptom: the Gateway starts and the RPC probe passes, but chat never responds; the assistant content is an empty array []
Cause: Node 24 changed fetch/ReadableStream API behavior, which breaks SSE streaming parsing
Fix: downgrade to Node 22 LTS (>= 22.16.0)
OpenClaw 2026.3.13 requires Node >= 22.16.0; v22.14.0 is not sufficient.
5.2 ❌ LiteLLM Proxy did not support streaming (the core issue)
Symptom: the Gateway got empty content back from the LLM (content: [], usage: {input: 0, output: 0})
Cause: the custom litellm_proxy.py hard-coded body["stream"] = False, while OpenClaw calls the LLM in streaming mode by default. OpenClaw's SSE parser received a plain JSON response, failed to parse it, and returned empty content.
Fix: update the proxy script to honor stream=true requests and return SSE-formatted frames (data: {...}\n\n)
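The failure mode is easy to reproduce: an SSE consumer extracts content only from `data:` frames, so a plain JSON body yields nothing at all. A minimal sketch of such a parser (ours, not OpenClaw's actual implementation):

```python
import json

def sse_deltas(raw: str):
    # Walk "data: {...}\n\n" frames and yield each content delta;
    # anything that is not an SSE frame is skipped entirely.
    for frame in raw.split("\n\n"):
        if not frame.startswith("data: "):
            continue
        payload = frame[len("data: "):]
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        for choice in chunk.get("choices", []):
            delta = choice.get("delta", {}).get("content")
            if delta:
                yield delta

stream = ('data: {"choices": [{"delta": {"content": "Hel"}}]}\n\n'
          'data: {"choices": [{"delta": {"content": "lo"}}]}\n\n'
          'data: [DONE]\n\n')
assembled = "".join(sse_deltas(stream))           # "Hello"
dropped = "".join(sse_deltas('{"choices": []}'))  # "" - plain JSON is ignored
```

This is why the fix has to land on the proxy side: the consumer cannot recover content from a non-SSE response.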
5.3 ❌ Gateway zombie process + stale lock
Symptom: openclaw gateway start reports gateway already running (pid XXXX); lock timeout
Cause: the Gateway process crashed without releasing its pid file/port (common on Windows), or the Scheduled Task was triggered multiple times
Fix:
```shell
schtasks /End /TN "OpenClaw Gateway"
taskkill /F /IM node.exe
# wait 3 seconds
schtasks /Run /TN "OpenClaw Gateway"
```
5.4 ❌ Scheduled Task 72-hour timeout
Symptom: the Gateway stops on its own after running for 3 days
Cause: Windows Scheduled Tasks default to ExecutionTimeLimit = 72:00:00
Fix:
```powershell
Set-ScheduledTask -TaskName "OpenClaw Gateway" -Settings (New-ScheduledTaskSettingsSet -ExecutionTimeLimit "00:00:00")
```
5.5 ❌ Variables lost in remote PowerShell over SSH
Symptom: when running PowerShell commands over SSH, variables such as $env:TEMP get swallowed by the shell
Cause: the bash SSH client expands $ as a variable reference
Fix: use cmd equivalents instead (set TEMP), or escape as \$
5.6 ❌ openclaw errors after gateway.cmd is modified
Symptom: Service config looks out of date or non-standard
Cause: openclaw validates the contents of gateway.cmd, so it must not be modified by hand
Fix: leave gateway.cmd alone and manage startup ordering with a separate Bootstrap script
5.7 ❌ The model reasoning setting
Symptom: may change the request format
Note: in testing, reasoning: true did not affect streaming on Azure GPT-5.4, but if you see empty responses, set it to false first to rule it out
6. Operations Commands
```shell
# Gateway status
openclaw gateway status
# Full status
openclaw status
# Restart the Gateway
openclaw gateway restart
# View logs
type %TEMP%\openclaw\openclaw-YYYY-MM-DD.log
# View the Bootstrap log
type %TEMP%\openclaw\bootstrap.log
# Manually start the whole stack
schtasks /Run /TN "OpenClaw Bootstrap"
# Manually stop the whole stack
schtasks /End /TN "OpenClaw Gateway"
taskkill /F /IM node.exe
taskkill /F /IM python.exe
# Test LiteLLM connectivity
curl http://localhost:4000/health
curl http://localhost:4000/v1/chat/completions -H "Authorization: Bearer sk-openclaw-litellm-key-12345" -H "Content-Type: application/json" -d "{\"model\": \"gpt-5.4\", \"messages\": [{\"role\": \"user\", \"content\": \"hi\"}]}"
# Test LiteLLM streaming
curl -N http://localhost:4000/v1/chat/completions -H "Authorization: Bearer sk-openclaw-litellm-key-12345" -H "Content-Type: application/json" -d "{\"model\": \"gpt-5.4\", \"messages\": [{\"role\": \"user\", \"content\": \"hi\"}], \"stream\": true}"
# Repair (note: doctor overwrites gateway.cmd)
openclaw doctor --fix
```
7. Troubleshooting Checklist
| Check | Command / reference |
|---|---|
| Is the Gateway listening on :18789? | `netstat -ano \| findstr 18789` |
| Chat replies are empty (`content: []`) | see pitfalls 5.1 (Node version) and 5.2 (streaming) |
| Is the LiteLLM Proxy healthy? | `curl localhost:4000/health` |
| Is Node a supported version (>= 22.16.0 LTS)? | `node --version` |
夜雨聆风