See INSTALLATION.md for the standard setup guide and README for a full feature overview.
## The Problem
Running `npm install -g profclaw` on a 512MB device will get OOM-killed. Node's package installer is memory-hungry; it can spike past 400MB during dependency resolution alone.
You have three options depending on what you have available.
## Hardware Requirements
| Mode | Minimum RAM | Swap Needed | Typical Device |
|---|---|---|---|
| pico | 256MB | 256MB+ | Raspberry Pi Zero, $5 VPS |
| mini | 512MB | optional | Raspberry Pi 3, $10 VPS |
| pro | 2GB | none | Raspberry Pi 4+, $20+ VPS |
## Option 1: Docker Pico Image (Recommended)
No npm install. Pull a pre-built image that runs in under 200MB of RAM.

### Verify it works
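A sketch of the pull-run-verify flow. The image name and tag here are assumptions — substitute the one the project actually publishes. The `/health` endpoint and port 3000 come from the troubleshooting section below.

```shell
# Hypothetical image name and data mount — check the project's registry page.
docker run -d --name profclaw \
  -p 3000:3000 \
  -v profclaw-data:/data \
  profclaw/profclaw:pico

# Verify: expect 200 OK with {"status":"ok"}.
curl -i http://localhost:3000/health
```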
A `200 OK` response with `{"status":"ok"}` means the container is up and accepting requests.
### Connect an AI provider
Pass API keys as environment variables for whichever provider you use.

Pico mode skips authentication: there is no login screen and no user session. Whoever can reach port 3000 can use the API, so do not expose the port publicly without a reverse proxy or firewall rule.
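A hypothetical sketch of passing keys at container start — the image name is an assumption, and the variable names follow each provider's common convention (verify against profClaw's provider docs). Binding to `127.0.0.1` keeps the unauthenticated port off the network:

```shell
# Hypothetical image name; pass only the provider keys you actually use.
# Binding to 127.0.0.1 because pico mode has no authentication.
docker run -d --name profclaw \
  -p 127.0.0.1:3000:3000 \
  -e ANTHROPIC_API_KEY="sk-ant-..." \
  -e OPENAI_API_KEY="sk-..." \
  profclaw/profclaw:pico
```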
## Option 2: Add Swap Before npm Install
Works on Raspberry Pi OS and most Debian-based systems, and gives the installer the memory headroom it needs.

On SD card devices (Pi Zero, Pi 3), heavy swap use will wear out your card faster. Use a USB SSD or a ramdisk-backed swap if running long-term.
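On a Debian-based system, creating a swap file could look like this. The 1GB size is an assumption — size it against the table above (256MB+ for pico-class hardware):

```shell
# Create and enable a 1GB swap file (requires root).
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Make it persistent across reboots.
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

# With the extra headroom, retry the install.
npm install -g profclaw
```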
## Option 3: Cross-Install from Another Machine
Install profClaw on a machine with enough RAM, then copy the installed binary from the build machine to your low-memory device.

## Pico Mode: What Works, What Doesn’t
### Works in pico
- Agent engine (full reasoning loop)
- 72 core tools (file, git, shell, HTTP, memory, cron)
- CLI chat (`profclaw chat`)
- REST API on port 3000
- Cron jobs and scheduled tasks
- One chat channel (your choice: Slack, Telegram, Discord, etc.)
### Not available in pico
| Feature | Why |
|---|---|
| Web UI dashboard | Removed to save ~80MB at idle |
| Redis queue | Replaced with SQLite-backed queue |
| Browser/Playwright tools | Chromium won’t fit in memory |
| Multiple simultaneous channels | Single channel limit |
| Real-time multi-user sessions | No WebSocket broadcast layer |
## Using a Separate Ollama Instance
Running inference on a Pi Zero is impractical. Instead, point profClaw at an Ollama instance running on another machine on your network, configured in `settings.yml`.
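A sketch of what that config might look like. The key names here are hypothetical — check profClaw's settings reference; `11434` is Ollama's default server port:

```yaml
# Hypothetical key names — verify against profClaw's settings reference.
provider: ollama
ollama:
  base_url: http://192.168.1.50:11434   # the machine actually running Ollama
  model: llama3.2
```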
## Troubleshooting
### Container won’t start
Check the logs before anything else. Typical culprits are an invalid `PROFCLAW_MODE` value or a bad volume mount path; the logs will point to the specific error.
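Assuming a container named `profclaw` (substitute yours):

```shell
# The last lines before exit usually name the failing setting or path.
docker logs --tail 100 profclaw
```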
### OOM during npm install (Options 2 and 3)
If the install gets killed midway, you do not have enough memory headroom. Use the swap method from Option 2 first, then retry `npm install -g profclaw`.

### “No AI provider configured”
profClaw requires at least one AI provider key at startup. Set the env var for the provider you want to use: for Docker, pass it with `-e` as shown in the install command above; for the npm install path, add it to the `.env` file in the profClaw data directory.
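For example, on the npm-install path — the variable name below is the common Anthropic convention, an assumption to verify against profClaw's provider docs:

```shell
# Hypothetical variable name — substitute your provider's.
export ANTHROPIC_API_KEY="sk-ant-..."

# For a persistent setup, put the same line (without 'export')
# in the .env file in the profClaw data directory instead.
```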
### Can’t reach the API
- Check that the container or process is actually running.
- Confirm the port binding.
- Try curl from inside the container to rule out a network issue.
If `curl http://localhost:3000/health` works from inside the container but not from your host, the problem is the port binding or a local firewall rule, not profClaw itself.
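The three checks above could look like this for a Docker setup (the container name `profclaw` is an assumption):

```shell
# 1. Is the container running?
docker ps --filter name=profclaw

# 2. Is port 3000 actually bound on the host?
docker port profclaw

# 3. Does the API answer from inside the container itself?
#    (assumes curl is present in the image)
docker exec profclaw curl -s http://localhost:3000/health
```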
For the standard installation path on a full-spec machine, see the main installation guide.