profClaw routes requests to the right AI provider based on which API keys are configured. Every provider is lazily loaded, so unused providers add zero startup cost.
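The lazy-loading idea can be sketched as a registry that stores factories and only constructs a provider client on first use. This is an illustrative sketch, not profClaw's actual internals; the class and method names are hypothetical:

```python
class ProviderRegistry:
    """Lazily construct providers: a factory is stored at registration
    time, but nothing runs until the provider is first requested."""

    def __init__(self):
        self._factories = {}   # name -> zero-arg factory
        self._instances = {}   # name -> constructed provider (cache)

    def register(self, name, factory):
        # Registering is cheap: the factory is not called here.
        self._factories[name] = factory

    def get(self, name):
        if name not in self._instances:   # first use pays the cost
            self._instances[name] = self._factories[name]()
        return self._instances[name]
```

Registering all 30+ providers this way costs only a dictionary insert each; a provider's SDK is touched only when a request is actually routed to it.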

Provider List

| Provider | Type | Status | Key Variable |
| --- | --- | --- | --- |
| Anthropic | Cloud | Stable | ANTHROPIC_API_KEY |
| OpenAI | Cloud | Stable | OPENAI_API_KEY |
| Azure OpenAI | Cloud | Stable | AZURE_OPENAI_API_KEY |
| Google Gemini | Cloud | Stable | GOOGLE_API_KEY |
| Ollama | Local | Stable | OLLAMA_BASE_URL |
| OpenRouter | Gateway | Stable | OPENROUTER_API_KEY |
| Groq | Cloud | Stable | GROQ_API_KEY |
| Mistral | Cloud | Stable | MISTRAL_API_KEY |
| DeepSeek | Cloud | Beta | DEEPSEEK_API_KEY |
| xAI / Grok | Cloud | Beta | XAI_API_KEY |
| Cohere | Cloud | Beta | COHERE_API_KEY |
| Perplexity | Cloud | Beta | PERPLEXITY_API_KEY |
| Together AI | Cloud | Beta | TOGETHER_API_KEY |
| Fireworks AI | Cloud | Beta | FIREWORKS_API_KEY |
| Cerebras | Cloud | Experimental | CEREBRAS_API_KEY |
| AWS Bedrock | Cloud | Stable | AWS_ACCESS_KEY_ID |
| LM Studio | Local | Beta | LM_STUDIO_BASE_URL |
| Zhipu AI | Cloud | Beta | ZHIPU_API_KEY |
| Moonshot (Kimi) | Cloud | Beta | MOONSHOT_API_KEY |
| Qwen | Cloud | Beta | QWEN_API_KEY |
| Replicate | Cloud | Beta | REPLICATE_API_KEY |
| GitHub Models | Cloud | Beta | GITHUB_TOKEN |
| Volcengine (Doubao) | Cloud | Beta | VOLCENGINE_API_KEY |
| BytePlus | Cloud | Beta | BYTEPLUS_API_KEY |
| Baidu Qianfan | Cloud | Beta | QIANFAN_API_KEY |
| ModelStudio | Cloud | Experimental | MODELSTUDIO_API_KEY |
| Minimax | Cloud | Beta | MINIMAX_API_KEY |
| Xiaomi MiLM | Cloud | Experimental | XIAOMI_API_KEY |
| HuggingFace | Cloud | Beta | HUGGINGFACE_API_TOKEN |
| NVIDIA NIM | Cloud | Beta | NVIDIA_NIM_API_KEY |
| Venice AI | Cloud | Beta | VENICE_API_KEY |
| Kilocode | Cloud | Beta | KILOCODE_API_KEY |
| Vercel AI Gateway | Gateway | Beta | VERCEL_AI_API_KEY |
| Cloudflare AI | Gateway | Beta | CLOUDFLARE_AI_API_KEY |
| IBM Watsonx | Cloud | Beta | WATSONX_API_KEY |
| GitHub Copilot | Proxy | Experimental | COPILOT_API_URL |
| SambaNova | Cloud | Beta | SAMBANOVA_API_KEY |
“Stable” providers are fully tested and used in production deployments. “Beta” providers work but may have edge cases. “Experimental” providers are early integrations that may change.

Auto-Selection

When multiple providers are configured, profClaw picks the best available one based on a priority order. Cloud providers with tool-calling support are preferred over local models for full tool tier access.
```
# Default priority order (first configured wins):
# anthropic -> openai -> azure -> google -> groq -> xai -> mistral
# -> deepseek -> cohere -> perplexity -> together -> fireworks
# -> bedrock -> openrouter -> ... -> ollama (always last)
```
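The selection logic amounts to a first-match walk down the priority list. Here is a minimal sketch with an abbreviated list and key mapping; the function name and structure are illustrative, not profClaw's actual code:

```python
import os

# Abbreviated priority order and key mapping (the full order is shown above).
PRIORITY = ["anthropic", "openai", "azure", "google", "groq", "ollama"]
KEY_VARS = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "azure": "AZURE_OPENAI_API_KEY",
    "google": "GOOGLE_API_KEY",
    "groq": "GROQ_API_KEY",
    "ollama": "OLLAMA_BASE_URL",
}

def pick_provider(env=None):
    """Return the first provider in priority order whose key variable is set."""
    env = os.environ if env is None else env
    for name in PRIORITY:
        if env.get(KEY_VARS[name]):
            return name
    return None
```

Because the walk stops at the first configured provider, setting only GROQ_API_KEY selects Groq even though it sits mid-list, while adding ANTHROPIC_API_KEY would take precedence.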
Override the default at any time:

```bash
profclaw config set provider anthropic
```

Or per-session with the --model flag:

```bash
profclaw chat --model groq
```

Configuration

Set API keys in your environment or .env file. profClaw reads these at startup and configures each provider:
```
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
GOOGLE_API_KEY=AIza...
GROQ_API_KEY=gsk_...
```
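Reading a .env file is simple enough to sketch. This is a hypothetical minimal reader, not profClaw's actual loader, and it ignores quoting and multi-line values that full dotenv parsers handle:

```python
def load_env_file(path=".env"):
    """Minimal .env reader: KEY=VALUE lines; blanks and '#' comments skipped."""
    values = {}
    try:
        with open(path) as fh:
            for raw in fh:
                line = raw.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                values[key.strip()] = value.strip()
    except FileNotFoundError:
        pass  # a missing .env file simply means no extra keys
    return values
```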
See Environment Variables for the complete list.

Model Aliases

Model aliases let you reference models by short names instead of full IDs:
```bash
# These are equivalent:
profclaw chat --model opus
profclaw chat --model claude-opus-4-6

# Provider/model shorthand also works:
profclaw chat --model anthropic/claude-opus-4-6
```
Common aliases:
| Alias | Provider | Model |
| --- | --- | --- |
| opus | Anthropic | claude-opus-4-6 |
| sonnet | Anthropic | claude-sonnet-4-5 |
| haiku | Anthropic | claude-haiku-4-5 |
| gpt | OpenAI | gpt-4o |
| gemini | Google | gemini-1.5-pro |
| groq | Groq | llama-3.3-70b-versatile |
| local | Ollama | llama3.2 |
| grok | xAI | grok-2 |
| mistral | Mistral | mistral-large-latest |
| deepseek | DeepSeek | deepseek-chat |
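Resolution follows a three-step rule: try the alias table, then provider/model shorthand, then treat the input as a bare model ID. A sketch of that rule, with an abbreviated alias table and a hypothetical function name:

```python
# Abbreviated alias table mirroring the rows above.
ALIASES = {
    "opus": ("anthropic", "claude-opus-4-6"),
    "sonnet": ("anthropic", "claude-sonnet-4-5"),
    "haiku": ("anthropic", "claude-haiku-4-5"),
    "groq": ("groq", "llama-3.3-70b-versatile"),
    "local": ("ollama", "llama3.2"),
}

def resolve_model(spec, default_provider="anthropic"):
    """Resolve an alias, a 'provider/model' pair, or a bare model ID
    into a (provider, model) tuple."""
    if spec in ALIASES:                       # 1. known short name
        return ALIASES[spec]
    if "/" in spec:                           # 2. provider/model shorthand
        provider, _, model = spec.partition("/")
        return provider, model
    return default_provider, spec             # 3. bare model ID
```

Note how all three forms from the example above land on the same (provider, model) pair.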

Local Models

For fully offline usage without API costs, profClaw supports Ollama and LM Studio. Both run models locally on your hardware.
Local models receive the Essential tool tier (10 tools) by default. Model-aware routing ensures small models are not overwhelmed with too many tool choices. See Tools Overview for details on tier routing.
See the Local LLM guide for setup instructions.
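The routing decision above can be sketched in a few lines. The tier names and tool count come from this page; the function itself and its `override` parameter are hypothetical:

```python
ESSENTIAL_TOOL_COUNT = 10  # size of the Essential tier, per the docs above

def tool_tier(provider_type, override=None):
    """Pick a tool tier: local backends default to the Essential tier so
    small models aren't flooded with choices; cloud and gateway providers
    get the full set. An explicit override wins."""
    if override is not None:
        return override
    return "essential" if provider_type == "local" else "full"
```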

Resilience

All providers include automatic retry with exponential backoff for transient errors (429, 503, network timeouts):
```
AI_MAX_RETRIES=2               # Default: 2 retries
AI_PROVIDER_TIMEOUT_MS=120000  # Default: 120 seconds
```
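Retry with exponential backoff is a standard pattern; the following is a generic sketch of it, not profClaw's implementation. The `TransientError` type and delay constants are assumptions for illustration:

```python
import random
import time

class TransientError(Exception):
    """Raised for retryable failures (429, 503, network timeouts)."""

def with_retries(call, max_retries=2, base_delay=1.0, sleep=time.sleep):
    """Run `call`, retrying transient failures with exponential backoff
    plus a little jitter. Gives up after `max_retries` retries."""
    attempt = 0
    while True:
        try:
            return call()
        except TransientError:
            if attempt >= max_retries:
                raise                      # retries exhausted: re-raise
            # 1s, 2s, 4s, ... plus jitter to avoid synchronized retries
            sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.25))
            attempt += 1
```

With the default of 2 retries, a request is attempted at most three times before the error propagates to the caller.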

Health Check

```bash
profclaw doctor --providers
```
This checks connectivity for all configured providers and reports latency. Any provider that fails to respond within the timeout is flagged.
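The check itself is just a timed probe per provider. A minimal sketch, assuming each provider exposes some zero-arg ping callable (the function shape and report keys here are illustrative):

```python
import time

def check_providers(pings, timeout_s=120.0):
    """Probe each configured provider and report latency. `pings` maps
    a provider name to a zero-arg callable that raises on failure."""
    report = {}
    for name, ping in pings.items():
        start = time.monotonic()
        try:
            ping()
            latency = time.monotonic() - start
            # A probe that exceeds the timeout is flagged as not ok.
            report[name] = {"ok": latency <= timeout_s,
                            "latency_s": round(latency, 3)}
        except Exception as exc:
            report[name] = {"ok": False, "error": str(exc)}
    return report
```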