Leveling Up OpenClaw: An Advanced Automation Guide
This guide is a blueprint of my current setup. I’ll walk you through how to leverage NVIDIA Build for zero-cost, top-tier LLMs (like DeepSeek-V4-Flash and MiniMax-m2.7), how to equip your agent with four core skills, and finally, how to wire it all together into a bulletproof daily automated pipeline using Linux systemd.
If you already have OpenClaw installed and running, this is your next step.
1. The Pitfall: LLM Memory is Not a Task Scheduler
When I first set out to build an automated daily news aggregator, I made a rookie mistake: I relied entirely on the agent’s memory and prompt instructions to execute scheduled tasks.
The result was a disaster.
- The model would inevitably "hallucinate" or forget steps.
- Searches were chaotic, and execution was highly unstable.
- Babysitting the agent took more time than doing the task manually.
After weeks of trial and error, I arrived at a fundamental rule of agentic engineering:
Never rely on an LLM’s memory for long-term, periodic task scheduling. Recurring tasks must be anchored by system-level infrastructure.
Once I accepted this, the architecture designed itself.
2. Zero-Cost Compute: Hardening the NVIDIA Build Config
As your workflows become more complex, API token costs can skyrocket. Fortunately, NVIDIA Build currently offers free OpenAI-compatible endpoints for top open-weight models.
Step 1: Grab Your API Key
Head over to NVIDIA Build, log in, and generate an API key from your profile dashboard. It will start with nvapi-. Export this to your environment variables so OpenClaw can read it securely:
```bash
export NVIDIA_API_KEY="nvapi-xxxx..."
```
Step 2: Wire up openclaw.json
Open your configuration file at ~/.openclaw/openclaw.json (OpenClaw supports JSON5, meaning comments are perfectly valid). We need to inject the NVIDIA provider and whitelist our models.
🚨 Crucial Configuration Detail: When adding custom OpenAI-compatible endpoints, you must explicitly define the contextWindow and maxTokens. If you omit them, OpenClaw’s gateway might fall back to a conservative default (like 8K). For complex agentic tasks, this causes premature context pruning—meaning your agent will suddenly “forget” its instructions halfway through a job.
Here is the hardened, production-ready configuration:
```json5
// ~/.openclaw/openclaw.json
{
  "models": {
    "mode": "merge",
    "providers": {
      "nvidia-build": {
        "baseUrl": "https://integrate.api.nvidia.com/v1",
        "apiKey": "${NVIDIA_API_KEY}", // Safely loaded from your OS env
        "api": "openai", // Instructs OpenClaw to parse this using OpenAI endpoint standards
        "models": [
          {
            "id": "deepseek-ai/deepseek-v4-flash",
            "alias": "ds-v4-flash",
            "contextWindow": 128000, // Prevents memory pruning during long tasks
            "maxTokens": 8192
          },
          {
            "id": "minimax/minimax-m2.7",
            "alias": "minimax",
            "contextWindow": 200000, // MiniMax is great for massive context payloads
            "maxTokens": 8192
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      // Whitelist format must strictly be: providerName/modelId
      "models": [
        "nvidia-build/deepseek-ai/deepseek-v4-flash",
        "nvidia-build/minimax/minimax-m2.7"
      ],
      // Set your preferred default brain on startup
      "model": "nvidia-build/deepseek-ai/deepseek-v4-flash"
    }
  }
}
```
Because OpenClaw supports Hot Reloading, you usually don’t even need to restart the gateway. Just save the file, type /model ds-v4-flash in your chat interface, and your agent is instantly running on DeepSeek.
3. From Brains to Hands: Building the 4 Core Skills
An LLM is just a brain. To get actual work done, your agent needs hands—in OpenClaw, these are called Skills.
I built a 4-step data pipeline to scrape, translate, save, and distribute F1 racing news. Each skill lives in its own folder under ~/.openclaw/workspace/skills/ and is governed by a SKILL.md file.
Skill 1: Tavily Search (Data Ingestion)
OpenClaw’s native search can be hit-or-miss. Tavily provides a much more robust, agent-optimized search engine.
- The Goal: Query bilingual keywords ("F1 2026 latest news" and its Chinese equivalent) to grab breaking news.
- The Logic: In the SKILL.md, instruct the agent to deduplicate results, prioritize news from the last 48 hours, and output a structured Markdown list.
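As a sketch, the SKILL.md for this step might look like the following. The frontmatter field names and file layout here are illustrative assumptions, so check them against your OpenClaw version's skill schema:

```markdown
---
name: tavily-f1-search
description: >
  Search Tavily for F1 news from the last 48 hours using bilingual
  keywords ("F1 2026 latest news" plus the Chinese equivalent). Use ONLY
  for fetching fresh F1 racing news, never for general web search.
---

## Steps
1. Query Tavily with both keyword sets.
2. Deduplicate results by URL and title.
3. Keep only items published within the last 48 hours.
4. Output a structured Markdown list: `- [Title](URL): one-line summary`.
```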
Skill 2: DeepSeek Translation (Localization)
- The Goal: Batch translate English F1 news into fluent Chinese.
- The Logic: We don't even need a separate API here; we just route the task back through the ds-v4-flash model we configured earlier. The prompt strictly enforces keeping terminology (like "Undercut" or "Safety Car") in English.
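Since the endpoint is OpenAI-compatible, the translation call is just a chat-completions request. Here is a minimal sketch of how that payload could be assembled; the term list and prompt wording are my own illustration, not OpenClaw internals:

```python
# Hypothetical payload builder for the translation step. The model id matches
# the earlier openclaw.json; everything else is an illustrative assumption.
KEEP_IN_ENGLISH = ["Undercut", "Safety Car", "DRS"]

def build_translation_request(articles: list[str]) -> dict:
    """Assemble one chat-completions payload that batch-translates articles."""
    system = (
        "Translate the following F1 news items into fluent Chinese. "
        "Keep these terms in English: " + ", ".join(KEEP_IN_ENGLISH) + "."
    )
    return {
        "model": "deepseek-ai/deepseek-v4-flash",
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": "\n\n".join(articles)},
        ],
        "max_tokens": 8192,
    }
```

Batching all items into a single request keeps costs and latency down versus one call per article.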
Skill 3: Obsidian Archiving (Local Storage)
- The Goal: Save the aggregated data locally to avoid vendor lock-in with cloud providers.
- The Logic: We give the agent access to the fs (file system) and bash tools. The SOP instructs it to create a daily Markdown file (YYYY-MM-DD-F1-Daily.md), write the YAML frontmatter, append the news, and update a master INDEX.md file.
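The SOP above can be sketched as a small Python routine. The frontmatter fields and index format are assumptions on my part; adapt them to your vault's conventions:

```python
# Hypothetical sketch of the archiving SOP: write a dated note with YAML
# frontmatter, then append a wiki-link to a master INDEX.md.
from datetime import date
from pathlib import Path

def archive_daily_news(vault: Path, news_markdown: str, day: date) -> Path:
    """Create YYYY-MM-DD-F1-Daily.md in the vault and register it in INDEX.md."""
    note = vault / f"{day:%Y-%m-%d}-F1-Daily.md"
    frontmatter = f"---\ndate: {day:%Y-%m-%d}\ntags: [f1, daily-news]\n---\n\n"
    note.write_text(frontmatter + news_markdown, encoding="utf-8")
    # Append mode creates INDEX.md on first run and never clobbers history.
    with (vault / "INDEX.md").open("a", encoding="utf-8") as fh:
        fh.write(f"- [[{note.stem}]]\n")
    return note
```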
Skill 4: Feishu/Lark Sync (Cloud Distribution)
- The Goal: Push the polished Markdown file to a shared Feishu/Lark Wiki space.
- The Logic: The agent uses bash to trigger a local Python script (push_to_feishu.py), passing the daily Obsidian file path as an argument.
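A skeleton of that script's command-line interface might look like this. The `--wiki-space` flag is a hypothetical parameter of my own; the actual Feishu API call is deliberately omitted since endpoint and auth details depend on your tenant setup:

```python
# Hypothetical skeleton of push_to_feishu.py: take the daily note path from
# argv and read its contents. The upload step itself is left to you.
import argparse
from pathlib import Path

def parse_args(argv: list[str]) -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Push a daily note to Feishu/Lark")
    parser.add_argument("note_path", type=Path, help="Path to the daily Obsidian file")
    parser.add_argument("--wiki-space", default="f1-daily",
                        help="Target wiki space (assumed name)")
    return parser.parse_args(argv)
```

The agent would then invoke it as `python3 push_to_feishu.py <path-to-daily-note>` from its bash tool.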
💡 Pro Tip on Writing Skills:
- Nail the Description:
The descriptionblock in your YAML frontmatter is the only thing the routing agent uses to decide whether to trigger the skill. Be hyper-specific. - One Skill, One Job:
Keep it modular. Don’t build a monolithic “Do Everything” skill. Chain simple ones together.
4. Tying It All Together: The systemd Pipeline
Now we have a highly capable agent, but we need it to run completely unattended every day at 8:00 PM.
This is where Linux systemd shines. We use a 3-tier architecture:
Tier 1: The Timer (f1-news-collect.timer)
This acts as our alarm clock.
- Setting OnCalendar=*-*-* 20:00:00 fires the job daily.
- Setting Persistent=true ensures that if the server happens to be down at 8 PM, the job will execute the next time it boots up. Zero dropped tasks.
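Put together, the timer unit is only a few lines. The unit description and install location are my assumptions (a user-level unit under ~/.config/systemd/user/ also works if you enable lingering):

```ini
# /etc/systemd/system/f1-news-collect.timer (path assumed)
[Unit]
Description=Daily 8 PM trigger for the F1 news pipeline

[Timer]
OnCalendar=*-*-* 20:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now f1-news-collect.timer` and check the next firing time with `systemctl list-timers`.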
Tier 2: The Service (f1-news-collect.service)
This is the executor. It waits for the network to be online (After=network-online.target), loads an .env file containing all your API keys, and triggers a master Python script.
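A matching service unit, assuming the env file and script paths described in this guide (adjust both to your layout):

```ini
# /etc/systemd/system/f1-news-collect.service (paths assumed)
[Unit]
Description=F1 news collection pipeline
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
EnvironmentFile=/opt/f1-pipeline/.env
ExecStart=/usr/bin/python3 /opt/f1-pipeline/collect_f1_news.py
```

`Type=oneshot` fits a run-to-completion job: the timer fires, the script runs once, and systemd records success or failure in the journal.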
Tier 3: The Python Controller (collect_f1_news.py)
This script acts as the orchestrator. It doesn’t do the heavy semantic lifting; instead, it triggers the API calls and OpenClaw skills in sequence:Search -> Translate -> Save to Obsidian -> Push to Feishu.
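The control flow of that orchestrator can be sketched as below. Each stage is injected as a callable so the sequencing logic stays testable; in the real script each callable would wrap an API call or an OpenClaw skill invocation (this is my own structuring, not the script's required shape):

```python
# Hypothetical orchestrator sketch: run the four stages in a fixed order,
# feeding each stage's output into the next. An exception in any stage
# aborts the run, which systemd then reports as a failed unit.
from typing import Callable

PIPELINE_ORDER = ("search", "translate", "archive", "push")

def run_pipeline(stages: dict[str, Callable[[str], str]], seed: str = "") -> list[str]:
    """Execute stages in order; return the names of completed stages."""
    completed, payload = [], seed
    for name in PIPELINE_ORDER:
        payload = stages[name](payload)
        completed.append(name)
    return completed
```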
(Note: You don’t need to write the Python or systemd config by hand. You can literally just ask OpenClaw to generate them based on the architecture described above!)
Why systemd beats cron for Agents:
- Native Logging: journalctl captures all standard output and errors effortlessly. You don't need messy >> /var/log/cron.log redirects.
- Network Awareness: cron fires blindly. systemd can wait until the internet is actually connected before waking up your agent.
- Missed Job Catch-up: cron misses a job if the server is off. systemd catches up automatically.
Final Thoughts
The real magic of building AI agents isn’t about finding the perfect prompt; it’s about codifying your personal Standard Operating Procedures (SOPs) into digital assets.
Models will inevitably get smarter, and frameworks will evolve. But the modular skill libraries and automated pipelines you architect today will remain yours.
Stop hand-coding every script. Define the architecture, write clear constraints, and let your agent do the typing. That is the true paradigm shift of 2026.
夜雨聆风