# Installation Guide
This guide covers multiple ways to install and run profClaw.

## Table of Contents
- npm Install (Recommended)
- Docker Installation
- One-Line Install
- From Source (Development)
- Production Deployment
- Environment Configuration
- First-Time Setup
- Troubleshooting
- FAQ
## npm Install (Recommended)

Requires Node.js 22+.

- Install the package globally:
- Run the setup wizard (configures AI provider, admin account, and registration mode):
- Start the server:

The server listens at http://localhost:3000.

Verify it works:
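The steps above might be sketched as follows. The package name `profclaw` is taken from the low-memory section later in this guide; the `profclaw setup`/`profclaw start` subcommands and the `/health` endpoint are assumptions, not commands confirmed by this document:

```shell
# Install globally (requires Node.js 22+)
npm install -g profclaw

# Interactive wizard: AI provider, admin account, registration mode
# (subcommand name is an assumption)
profclaw setup

# Start the server on port 3000 (subcommand name is an assumption)
profclaw start

# Quick check that the server responds (health path is an assumption)
curl -sf http://localhost:3000/health && echo "server is up"
```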
## One-Line Install

Auto-detects npm/pnpm or Docker and installs accordingly:

- Run the install script:
- Follow the prompts, then start the server as directed by the script output.
- Verify the server is up:
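A sketch of the steps above. The script URL is a placeholder, since this guide does not state the real one; substitute the project's published install script:

```shell
# Placeholder URL -- replace with the project's actual install script
curl -fsSL https://example.com/profclaw/install.sh | bash

# After starting the server as the script directs, verify it responds
curl -sf http://localhost:3000 >/dev/null && echo "server is up"
```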
## From Source (Development)

### Prerequisites

- Node.js 22+ — Download
- pnpm — installed via corepack
- Redis — for the job queue (optional in development)

### Steps

- Clone and install dependencies:
- Configure your environment:
- Start the dev server:

The app runs at http://localhost:3000 with hot reload enabled.

Verify it works:
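The steps above might look like this; the repository URL is a placeholder, and the `pnpm dev` script name is an assumption based on common conventions:

```shell
# Clone (placeholder URL) and install dependencies
git clone https://github.com/example/profclaw.git
cd profclaw
corepack enable        # makes pnpm available
pnpm install

# Configure your environment
cp .env.example .env   # then edit .env and add a provider key

# Start the dev server with hot reload (script name is an assumption)
pnpm dev
```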
## Docker Installation

### Using Docker Compose (Recommended)

- Clone the repository:
- Create and configure your environment file:
- Start all services:
- Verify the server is up:
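A sketch of the Compose workflow above, assuming the repository ships a `docker-compose.yml` and a `.env.example` (the clone URL is a placeholder):

```shell
git clone https://github.com/example/profclaw.git   # placeholder URL
cd profclaw

cp .env.example .env   # set at least one AI provider key

docker compose up -d   # starts the app and its services in the background

curl -sf http://localhost:3000 >/dev/null && echo "server is up"
```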
### Available Profiles

### Using Pre-built Image

- Pull the image:
- Run the container (requires Redis):
- Verify it works:
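Roughly, the pre-built image steps could look like this. The image name, tag, and data path are placeholders, since this guide does not state them:

```shell
# Placeholder image name and tag
docker pull ghcr.io/example/profclaw:latest

# Run the container; a reachable Redis is required
docker run -d --name profclaw \
  -p 3000:3000 \
  -e ANTHROPIC_API_KEY=sk-ant-... \
  -e REDIS_URL=redis://host.docker.internal:6379 \
  -v profclaw-data:/data \
  ghcr.io/example/profclaw:latest

curl -sf http://localhost:3000 >/dev/null && echo "server is up"
```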
## Production Deployment

### Requirements

- Redis — required for the job queue
- Persistent storage — for the SQLite database
- Reverse proxy — Nginx or Cloudflare for HTTPS

### Docker Compose Production

- Set environment variables in `.env`:
- Deploy:
- Verify the deployment:
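An illustrative production `.env` and deploy sequence, using variable names from the Environment Configuration section (values are examples, not defaults from this guide):

```shell
# Append illustrative production settings to .env
cat >> .env <<'EOF'
NODE_ENV=production
PORT=3000
REDIS_URL=redis://redis:6379
STORAGE_TIER=file
ANTHROPIC_API_KEY=sk-ant-...
EOF

docker compose up -d

curl -sf http://localhost:3000 >/dev/null && echo "deployment is up"
```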
### Cloudflare Tunnel

For secure exposure without port forwarding:

- Install cloudflared:
- Authenticate and create a tunnel:
- Create a config file (`~/.cloudflared/config.yml`):
- Start the tunnel:
- In the Cloudflare dashboard, create a DNS CNAME record pointing profclaw.yourdomain.com to <TUNNEL_ID>.cfargotunnel.com.
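The tunnel steps above can be sketched with standard `cloudflared` commands; the tunnel name `profclaw` is illustrative, and the apt install assumes Cloudflare's package repository is already configured:

```shell
# Debian/Ubuntu shown; cloudflared is also available via brew and rpm
sudo apt-get install -y cloudflared

cloudflared tunnel login            # opens a browser to authenticate
cloudflared tunnel create profclaw  # prints the tunnel ID

# Minimal ingress config; replace <TUNNEL_ID> with the ID printed above
cat > ~/.cloudflared/config.yml <<'EOF'
tunnel: <TUNNEL_ID>
credentials-file: /root/.cloudflared/<TUNNEL_ID>.json
ingress:
  - hostname: profclaw.yourdomain.com
    service: http://localhost:3000
  - service: http_status:404
EOF

cloudflared tunnel run profclaw
```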
## Environment Configuration

### Required Variables

| Variable | Description |
|---|---|
| At least one AI provider key | ANTHROPIC_API_KEY, OPENAI_API_KEY, GOOGLE_GENERATIVE_AI_API_KEY, or GROQ_API_KEY |

### Recommended Variables

| Variable | Default | Description |
|---|---|---|
| PORT | 3000 | HTTP server port |
| NODE_ENV | development | Environment mode |
| REDIS_URL | — | Redis URL for job queue |
| STORAGE_TIER | file | Storage backend: memory, file, libsql |
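A minimal `.env` combining the variables above might look like this (values are illustrative):

```shell
# .env -- illustrative values only
PORT=3000
NODE_ENV=development
REDIS_URL=redis://localhost:6379
STORAGE_TIER=file
ANTHROPIC_API_KEY=sk-ant-...   # at least one provider key is required
```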
### AI Providers

Configure at least one:

### Integrations

See .env.example for the complete list.
## First-Time Setup

After starting the server, create your admin account using the setup wizard.

### Option 1: Docker CLI (Recommended)

The wizard configures:

- AI provider (Anthropic, OpenAI, Ollama, or skip)
- Admin account with recovery codes
- Registration mode (invite-only or open)
- GitHub OAuth (optional)

### Option 2: Local CLI

### Option 3: Web UI

Visit http://localhost:3000/setup and follow the on-screen wizard.
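The CLI options might be invoked roughly as follows; the `profclaw setup` subcommand and the Compose service name are assumptions, so check `profclaw --help` and your compose file for the real names:

```shell
# Option 1: run the wizard inside the running container
# (service and subcommand names are assumptions)
docker compose exec profclaw profclaw setup

# Option 2: run the wizard from a local install
profclaw setup
```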
## Low-Memory Devices (Raspberry Pi Zero, 512MB VPS)

On devices with 512MB RAM or less, `npm install -g profclaw` will be killed by the OOM killer before it finishes. Three options:
### Option 1: Docker pico image (recommended)

The pico image is pre-built and skips npm install entirely. It runs the agent engine, tools, and one chat channel in roughly 140MB RAM with no UI and no Redis.

- Pull and run the pico image:
- Or build locally from Dockerfile.pico:
- Verify it works:
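A sketch of the pico steps; the image name is a placeholder, while `Dockerfile.pico` comes from this guide:

```shell
# Pull and run the pre-built pico image (placeholder image name)
docker run -d --name profclaw-pico \
  -p 3000:3000 \
  -e ANTHROPIC_API_KEY=sk-ant-... \
  ghcr.io/example/profclaw:pico

# Or build locally from the repo's Dockerfile.pico
docker build -f Dockerfile.pico -t profclaw:pico .

docker logs profclaw-pico   # confirm the agent engine started
```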
### Option 2: Add swap before npm install

- Extend swap to 1GB so npm has enough memory:
- Install and start in pico mode:
- Verify it works:
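The swap step uses standard Linux commands; the pico-mode flag at the end is an assumption, since this guide does not name it:

```shell
# Create and enable a 1GB swap file
sudo fallocate -l 1G /swapfile   # use dd if fallocate is unavailable
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
free -h                          # confirm the extra 1GB of swap

# Install and start in pico mode (flag name is an assumption)
npm install -g profclaw
profclaw start --mode pico
```

To keep the swap across reboots, add `/swapfile none swap sw 0 0` to `/etc/fstab`.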
### Option 3: Cross-install from another machine

- Install on a machine with more RAM:
- Copy to the target device and run:
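One way to sketch the cross-install, assuming both machines run the same Node.js version and architecture; the hostnames are placeholders, and you may need to recreate the executable symlink on the device:

```shell
# On the machine with more RAM: install, then pack the installed tree
npm install -g profclaw
tar -C "$(npm root -g)" -czf profclaw.tgz profclaw

# Copy to the target device (placeholder host)
scp profclaw.tgz pi@raspberrypi.local:~

# On the device: unpack into the global node_modules
tar -C "$(npm root -g)" -xzf ~/profclaw.tgz
```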
### Hardware requirements by mode
| Mode | Min RAM | Swap needed? | What you get |
|---|---|---|---|
| pico | 256MB | Yes (512MB+) | Agent + tools + 1 channel, no UI |
| mini | 512MB | Recommended | Dashboard, integrations, 3 channels |
| pro | 1GB+ | No | Everything including Redis, browser tools |
## Troubleshooting

### Redis Connection Issues
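A quick connectivity check, assuming `redis-cli` is installed and `REDIS_URL` matches your `.env`:

```shell
# A healthy Redis answers PONG
redis-cli -u "${REDIS_URL:-redis://localhost:6379}" ping
```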
### Database Errors

### AI Provider Errors

Check that your API keys are valid:

### Port Already in Use

### Docker Build Issues
## Upgrading

### Docker

### Manual Installation
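The upgrade paths above might look like this, assuming a Compose deployment and a global npm install respectively:

```shell
# Docker: pull the newer image and recreate the container
docker compose pull
docker compose up -d

# Manual (npm) install
npm update -g profclaw
```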
## FAQ

### npm install gets killed on low-memory devices

The OOM killer terminates `npm install` when RAM runs out. Two fixes: add 1GB of swap before installing (see Option 2 in the low-memory guide), or skip npm install entirely by using the Docker pico image (see Option 1).
### Port 3000 is already in use

Find what is listening on it:

Then either stop that process or set PORT=3001 in your .env.
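For example, either of these standard commands shows which process holds the port:

```shell
lsof -i :3000       # macOS/Linux: lists the PID holding port 3000
ss -ltnp 'sport = :3000'   # Linux alternative
```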
### No AI provider configured

Set at least one of these environment variables before starting the server:

To use a local model via Ollama instead:

- Install Ollama on your machine.
- Pull a model:
- Set OLLAMA_BASE_URL in your .env:
- Restart profClaw. The Ollama provider shows up automatically in the model list.

If profClaw runs inside Docker, use http://host.docker.internal:11434 instead of localhost.
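The Ollama steps above could look like this; the model name is just an example:

```shell
# Pull a model with Ollama's own CLI (model name is an example)
ollama pull llama3.1

# Point profClaw at the local Ollama server
echo 'OLLAMA_BASE_URL=http://localhost:11434' >> .env
# In Docker, use http://host.docker.internal:11434 instead
```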
## Getting Help
- GitHub Issues
- API Documentation (Swagger UI)
- Discord Community (coming soon)