See INSTALLATION.md for the standard setup guide and README for a full feature overview.

The Problem

npm install -g profclaw on a 512MB device will get OOM-killed. Node’s package installer is memory-hungry — it can spike past 400MB during dependency resolution alone. You have three options depending on what you have available.

Hardware Requirements

| Mode | Minimum RAM | Swap Needed | Typical Device             |
|------|-------------|-------------|----------------------------|
| pico | 256MB       | 256MB+      | Raspberry Pi Zero, $5 VPS  |
| mini | 512MB       | optional    | Raspberry Pi 3, $10 VPS    |
| pro  | 2GB         | none        | Raspberry Pi 4+, $20+ VPS  |
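The cutoffs in the table can be checked from the command line. A sketch that reads total RAM from /proc/meminfo and picks a mode — the thresholds mirror the table above, but the script itself is illustrative and not something profClaw ships:

```shell
#!/bin/sh
# Pick a profClaw mode from total RAM (illustrative; thresholds from the table above).
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
if [ "$total_kb" -ge 2000000 ]; then
  mode=pro        # 2GB+: everything fits
elif [ "$total_kb" -ge 500000 ]; then
  mode=mini       # ~512MB: web UI and multi-channel, no Redis
else
  mode=pico       # under 512MB: core agent only
fi
echo "$mode"
```

Note that /proc/meminfo reports slightly less than the nominal RAM size, which is why the cutoffs sit a little below the round numbers.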

Option 1: Pre-Built Docker Image (Recommended)

No npm install needed. Pull a pre-built image that runs in under 200MB of RAM:
docker run -d \
  --name profclaw \
  --restart unless-stopped \
  -p 3000:3000 \
  -v profclaw-data:/data \
  -e PROFCLAW_MODE=pico \
  ghcr.io/profclaw/profclaw:pico
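If you prefer Docker Compose, the same container can be expressed as a compose file. A sketch — the image tag, port, volume, and environment variable match the docker run command above:

```yaml
services:
  profclaw:
    image: ghcr.io/profclaw/profclaw:pico
    container_name: profclaw
    restart: unless-stopped
    ports:
      - "3000:3000"
    volumes:
      - profclaw-data:/data
    environment:
      PROFCLAW_MODE: pico

volumes:
  profclaw-data:
```

Start it with docker compose up -d.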
Point to a remote Ollama instance on a beefier machine:
docker run -d \
  --name profclaw \
  --restart unless-stopped \
  -p 3000:3000 \
  -v profclaw-data:/data \
  -e PROFCLAW_MODE=pico \
  -e OLLAMA_BASE_URL=http://192.168.1.50:11434 \
  ghcr.io/profclaw/profclaw:pico

Verify it works

curl http://localhost:3000/health
A 200 OK with {"status":"ok"} means the container is up and accepting requests.

Connect an AI provider

Pass API keys as environment variables. Add whichever provider you use:
docker run -d \
  --name profclaw \
  --restart unless-stopped \
  -p 3000:3000 \
  -v profclaw-data:/data \
  -e PROFCLAW_MODE=pico \
  -e ANTHROPIC_API_KEY=sk-ant-... \
  -e GOOGLE_GENERATIVE_AI_API_KEY=AIza... \
  -e CEREBRAS_API_KEY=csk-... \
  -e OLLAMA_BASE_URL=http://192.168.1.50:11434 \
  ghcr.io/profclaw/profclaw:pico
You only need one provider. Set the one you have and skip the rest.
Pico mode skips authentication. There is no login screen and no user session required. Whoever can reach port 3000 can use the API. Do not expose the port publicly without a reverse proxy or firewall rule.
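Since pico mode has no authentication, put something in front of port 3000 before exposing it beyond your LAN. A minimal nginx sketch with HTTP basic auth — the hostname is hypothetical, and the htpasswd file is assumed to already exist:

```nginx
server {
    listen 80;
    server_name profclaw.example.com;  # hypothetical hostname

    location / {
        auth_basic           "profClaw";
        # create with: htpasswd -c /etc/nginx/.htpasswd youruser
        auth_basic_user_file /etc/nginx/.htpasswd;

        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Add TLS (e.g. via certbot) before sending real API keys or chat traffic over it.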

Option 2: Add Swap Before npm Install

Works on Raspberry Pi OS and most Debian-based systems. Gives the installer the memory headroom it needs.
# Create a 1GB swap file
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Verify swap is active
free -h

# Now install
npm install -g profclaw

# Initialize and start
profclaw init
profclaw serve --mode pico
Make swap permanent across reboots:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
On SD card devices (Pi Zero, Pi 3), heavy swap use will wear out the card faster. For long-term operation, put the swap file on a USB SSD or use zram (compressed swap in RAM) instead.
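You can also reduce SD wear by telling the kernel to swap less aggressively. A sketch for a sysctl drop-in — the value 10 is a common conservative choice on Pi-class hardware, not a profClaw requirement:

```
# /etc/sysctl.d/90-swappiness.conf
# Swap only under real memory pressure instead of proactively.
vm.swappiness=10
```

Apply it immediately with sudo sysctl -p /etc/sysctl.d/90-swappiness.conf, or just reboot.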

Option 3: Cross-Install from Another Machine

Install profClaw on a machine with enough RAM, then copy the installed tree to your low-memory device. The build machine's CPU architecture (and ideally Node version) should match the target, or native modules will fail to load. On the build machine:
# Install into a local directory (not global)
mkdir profclaw-bundle && cd profclaw-bundle
npm install --prefix ./install profclaw

# Copy to target device
scp -r ./install pi@raspberrypi.local:/opt/profclaw
On the target device:
# Run directly from the copied install
/opt/profclaw/node_modules/.bin/profclaw init
/opt/profclaw/node_modules/.bin/profclaw serve --mode pico
Optionally symlink it:
sudo ln -s /opt/profclaw/node_modules/.bin/profclaw /usr/local/bin/profclaw
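To start it at boot, you could wrap the copied binary in a systemd unit. A sketch, assuming the /opt/profclaw path from above and a pi user — adjust both to your setup:

```ini
# /etc/systemd/system/profclaw.service
[Unit]
Description=profClaw agent (pico mode)
After=network-online.target
Wants=network-online.target

[Service]
User=pi
ExecStart=/opt/profclaw/node_modules/.bin/profclaw serve --mode pico
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with sudo systemctl enable --now profclaw.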

Pico Mode: What Works, What Doesn’t

Works in pico

  • Agent engine (full reasoning loop)
  • 72 core tools (file, git, shell, HTTP, memory, cron)
  • CLI chat (profclaw chat)
  • REST API on port 3000
  • Cron jobs and scheduled tasks
  • One chat channel (your choice: Slack, Telegram, Discord, etc.)

Not available in pico

| Feature                        | Why                               |
|--------------------------------|-----------------------------------|
| Web UI dashboard               | Removed to save ~80MB at idle     |
| Redis queue                    | Replaced with SQLite-backed queue |
| Browser/Playwright tools       | Chromium won’t fit in memory      |
| Multiple simultaneous channels | Single channel limit              |
| Real-time multi-user sessions  | No WebSocket broadcast layer      |
Switch to mini mode when you have 512MB+:
profclaw serve --mode mini
Mini adds the web UI and multi-channel support. Pro adds Redis, browser tools, and full concurrency.

Using a Separate Ollama Instance

Running inference on a Pi Zero is impractical. Point profClaw at an Ollama instance running on another machine on your network.
# In .env or environment
OLLAMA_BASE_URL=http://192.168.1.50:11434
Or in settings.yml:
ai:
  provider: ollama
  baseUrl: http://192.168.1.50:11434
  model: llama3.2:3b
The Pi Zero acts as the agent runtime — tool execution, memory, scheduling, API surface — while a beefier machine handles inference. Works well with a Pi 5 or an old laptop as the Ollama host. Verify the connection:
profclaw doctor
The doctor check will confirm the Ollama endpoint is reachable and the model is loaded.

Troubleshooting

Container won’t start

Check the logs before anything else:
docker logs profclaw
Common causes: port 3000 already in use, missing PROFCLAW_MODE, or a bad volume mount path. The logs will point to the specific error.
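To rule out the port-conflict case quickly, check what is already listening on 3000. A sketch using ss from iproute2 (present on most Linux systems):

```shell
# Show any listener on TCP port 3000, or confirm the port is free
ss -ltnp 2>/dev/null | grep ':3000' || echo "port 3000 is free"
```

If something else owns the port, either stop it or remap profClaw with -p 3001:3000 and use port 3001 from the host.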

OOM during npm install (Options 2 and 3)

If the install gets killed mid-way, you do not have enough memory headroom. Use the swap method from Option 2 first, then retry:
sudo swapon /swapfile
npm install -g profclaw
If you already added swap and it still fails, the swap file may be too small. Extend it:
sudo swapoff /swapfile
sudo fallocate -l 2G /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

“No AI provider configured”

profClaw requires at least one AI provider key at startup. Set the env var for the provider you want to use:
# Anthropic
export ANTHROPIC_API_KEY=sk-ant-...

# Google Gemini
export GOOGLE_GENERATIVE_AI_API_KEY=AIza...

# Cerebras
export CEREBRAS_API_KEY=csk-...

# Local Ollama
export OLLAMA_BASE_URL=http://192.168.1.50:11434
For Docker, pass it with -e as shown in the install command above. For the npm install path, add it to your .env file in the profClaw data directory.
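For the npm path, the same keys go into that .env file. A sketch — set only the provider you actually use, and substitute your real key values:

```
# .env in the profClaw data directory — one provider is enough
ANTHROPIC_API_KEY=sk-ant-...
# GOOGLE_GENERATIVE_AI_API_KEY=AIza...
# CEREBRAS_API_KEY=csk-...
# OLLAMA_BASE_URL=http://192.168.1.50:11434
```

Restart profclaw serve after editing so the new keys are picked up.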

Can’t reach the API

  1. Check the container or process is actually running:
docker ps | grep profclaw
  2. Confirm the port binding:
docker port profclaw
  3. Try curl from inside the container to rule out a network issue:
docker exec profclaw curl -s http://localhost:3000/health
If that works but curl http://localhost:3000/health from your host does not, the problem is the port binding or a local firewall rule, not profClaw itself.
For the standard installation path on a full-spec machine, see the main installation guide.