
The Onboarding Wizard

The profclaw onboard command launches an interactive wizard that configures your instance in 5 steps. Run it after profclaw init:
profclaw init
profclaw onboard
You can re-run profclaw onboard at any time to change your configuration. Existing settings are preserved; the wizard only overwrites what you explicitly change.

Step 1: Environment Detection

profClaw automatically detects your runtime environment and applies appropriate defaults:
Environment        | Detection Method                     | Applied Defaults
Docker             | /.dockerenv present                  | Persistent volume paths, no hot reload
VPS / Cloud        | Non-interactive TTY, public IP       | Systemd service hints, WEBHOOK_BASE_URL prompt
Local Machine      | Interactive TTY, macOS/Linux desktop | Hot reload, localhost defaults
Raspberry Pi / ARM | CPU architecture check               | Pico mode defaults, memory limits
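The detection logic can be sketched roughly as a shell check. This is illustrative only; profClaw's actual detection is internal and may use additional signals:

```shell
# Rough sketch of runtime-environment detection (not profClaw's actual code)
detect_env() {
  if [ -f /.dockerenv ]; then
    echo "docker"                          # container marker file
  elif uname -m | grep -qE 'arm|aarch64'; then
    echo "arm"                             # Raspberry Pi / ARM boards
  elif [ -t 0 ]; then
    echo "local"                           # interactive TTY on a desktop
  else
    echo "vps"                             # non-interactive, likely a server
  fi
}
detect_env
```

Each branch maps to one row of the table above; the first match wins, so a Docker container on an ARM host is still treated as Docker.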

Step 2: Deployment Mode

Choose your deployment mode based on available hardware resources. For example, mini mode:
Best for: IoT, edge devices, Raspberry Pi, personal use on constrained hardware.
  • 512MB RAM, 1 CPU core
  • Up to 3 AI providers
  • 2 chat channels
  • 15 essential tools
  • In-memory queue only (no Redis required)
Set via environment variable to skip the wizard prompt:
export PROFCLAW_MODE=mini
See Deployment Modes for a detailed feature comparison.
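If you keep settings in a .env file rather than your shell profile, the same variable can live there. This assumes profClaw loads a .env file at startup, as is common for Node.js applications; verify against your version's configuration docs:

```shell
# .env (hypothetical example; requires profClaw to load this file at startup)
PROFCLAW_MODE=mini
```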

Step 3: AI Provider Setup

Configure at least one AI provider. The wizard prompts for API keys:
? Select your primary AI provider:
  > Anthropic (Claude)
    OpenAI (GPT-4o)
    Google (Gemini)
    Ollama (Local)
    Other...

? Enter your Anthropic API key: sk-ant-***
? Test connection? Yes
  ✓ Connected to Anthropic - Claude Sonnet 4.6 available
If you want to run fully offline without cloud API costs, choose Ollama (Local). Make sure Ollama is installed and running first (on macOS: brew install ollama && ollama serve). See the Local LLM guide.
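Before selecting Ollama in the wizard, you can verify the server is reachable on its default port (11434). This snippet is a convenience check, not part of profClaw:

```shell
# Probe the default Ollama endpoint; succeeds only if the server is up.
if curl -sf --max-time 2 http://localhost:11434/api/tags > /dev/null; then
  echo "Ollama is running"
else
  echo "Ollama is not reachable; start it with: ollama serve"
fi
```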
You can add more providers later:
profclaw config providers add openai --key sk-...
See AI Providers Overview for the full list of supported providers and configuration options.

Step 4: Chat Channel (Optional)

Connect a chat channel for conversational access from outside the web UI:
? Set up a chat channel now?
  > Webchat (built-in, no setup needed)
    Slack
    Discord
    Telegram
    Skip for now
WebChat is enabled by default at http://localhost:3000 and requires no additional configuration. For Slack, Discord, or other platforms, the wizard walks you through credential collection. You can also skip this step and configure channels later. See Chat Providers Overview for setup guides per platform.

Step 5: Security Policy

Choose the security posture that fits your deployment:
? Select security mode:
  > Standard (recommended)
    Permissive (development only)
    Strict (production/enterprise)
Mode       | Description                                                | Best For
permissive | Tools execute without approval prompts                     | Local development, trusted users only
standard   | Destructive operations require approval                    | Most deployments
strict     | All write operations require approval, extra guards active | Production, shared environments
Do not use permissive mode in any deployment accessible by untrusted users. Anyone who can message the agent can execute file operations and shell commands.
See Security Overview for a full description of all five security modes and the defense-in-depth architecture.
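If you script onboarding, a preselected security mode could plausibly be supplied the same way as the deployment mode. Note that PROFCLAW_SECURITY is a hypothetical variable here, named by analogy with the documented PROFCLAW_MODE, and is not confirmed by this page:

```shell
# Hypothetical: preselect the security mode non-interactively,
# by analogy with PROFCLAW_MODE above. Verify against your version's docs.
export PROFCLAW_SECURITY=standard
```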

Verify Setup

After onboarding, confirm everything works:
profclaw doctor
Expected output:
profClaw Doctor v2.x.x
--------------------------
✓ Node.js 22.x detected
✓ Configuration valid
✓ AI Provider: Anthropic connected
✓ Chat: Webchat ready on :3000
✓ Security: Standard mode active
✓ Storage: SQLite initialized
--------------------------
All checks passed!
If any checks fail, the doctor command prints the specific issue and remediation steps.

Next: First Run

Start profClaw and send your first message to the agent.