## Provider List
| Provider | Type | Status | Key Variable |
|---|---|---|---|
| Anthropic | Cloud | Stable | ANTHROPIC_API_KEY |
| OpenAI | Cloud | Stable | OPENAI_API_KEY |
| Azure OpenAI | Cloud | Stable | AZURE_OPENAI_API_KEY |
| Google Gemini | Cloud | Stable | GOOGLE_API_KEY |
| Ollama | Local | Stable | OLLAMA_BASE_URL |
| OpenRouter | Gateway | Stable | OPENROUTER_API_KEY |
| Groq | Cloud | Stable | GROQ_API_KEY |
| Mistral | Cloud | Stable | MISTRAL_API_KEY |
| DeepSeek | Cloud | Beta | DEEPSEEK_API_KEY |
| xAI / Grok | Cloud | Beta | XAI_API_KEY |
| Cohere | Cloud | Beta | COHERE_API_KEY |
| Perplexity | Cloud | Beta | PERPLEXITY_API_KEY |
| Together AI | Cloud | Beta | TOGETHER_API_KEY |
| Fireworks AI | Cloud | Beta | FIREWORKS_API_KEY |
| Cerebras | Cloud | Experimental | CEREBRAS_API_KEY |
| AWS Bedrock | Cloud | Stable | AWS_ACCESS_KEY_ID |
| LM Studio | Local | Beta | LM_STUDIO_BASE_URL |
| Zhipu AI | Cloud | Beta | ZHIPU_API_KEY |
| Moonshot (Kimi) | Cloud | Beta | MOONSHOT_API_KEY |
| Qwen | Cloud | Beta | QWEN_API_KEY |
| Replicate | Cloud | Beta | REPLICATE_API_KEY |
| GitHub Models | Cloud | Beta | GITHUB_TOKEN |
| Volcengine (Doubao) | Cloud | Beta | VOLCENGINE_API_KEY |
| BytePlus | Cloud | Beta | BYTEPLUS_API_KEY |
| Baidu Qianfan | Cloud | Beta | QIANFAN_API_KEY |
| ModelStudio | Cloud | Experimental | MODELSTUDIO_API_KEY |
| Minimax | Cloud | Beta | MINIMAX_API_KEY |
| Xiaomi MiLM | Cloud | Experimental | XIAOMI_API_KEY |
| HuggingFace | Cloud | Beta | HUGGINGFACE_API_TOKEN |
| NVIDIA NIM | Cloud | Beta | NVIDIA_NIM_API_KEY |
| Venice AI | Cloud | Beta | VENICE_API_KEY |
| Kilocode | Cloud | Beta | KILOCODE_API_KEY |
| Vercel AI Gateway | Gateway | Beta | VERCEL_AI_API_KEY |
| Cloudflare AI | Gateway | Beta | CLOUDFLARE_AI_API_KEY |
| IBM Watsonx | Cloud | Beta | WATSONX_API_KEY |
| GitHub Copilot | Proxy | Experimental | COPILOT_API_URL |
| SambaNova | Cloud | Beta | SAMBANOVA_API_KEY |
“Stable” providers are fully tested and used in production deployments. “Beta” providers work but may have edge cases. “Experimental” providers are early integrations that may change.
## Auto-Selection

When multiple providers are configured, profClaw picks the best available one based on a priority order. Cloud providers with tool-calling support are preferred over local models for full tool tier access. You can override the automatic choice with the `--model` flag.
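As a rough sketch (not profClaw's actual code), priority-based selection amounts to walking an ordered list and picking the first provider whose key or base URL is set. The variable names come from the provider table above; the ordering here is illustrative, not profClaw's real internal default:

```python
import os

# Illustrative priority: tool-calling cloud providers first, then a
# gateway, then local runtimes (variable names from the provider table).
PRIORITY = [
    "ANTHROPIC_API_KEY",
    "OPENAI_API_KEY",
    "GROQ_API_KEY",
    "OPENROUTER_API_KEY",
    "OLLAMA_BASE_URL",
]

def pick_provider(env=None):
    """Return the key variable of the first configured provider, or None."""
    env = os.environ if env is None else env
    for var in PRIORITY:
        if env.get(var):
            return var
    return None
```

With only `GROQ_API_KEY` and `OLLAMA_BASE_URL` set, the Groq cloud provider wins, matching the cloud-over-local preference described above.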
## Configuration

profClaw can be configured in three ways:

- Environment Variables
- settings.yml
- CLI
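For the environment-variable route, a minimal setup might look like the following (a sketch: the variable names come from the provider table above, and all values are placeholders):

```shell
# Cloud providers take an API key (placeholder values shown)
export ANTHROPIC_API_KEY="sk-ant-placeholder"
export OPENAI_API_KEY="sk-placeholder"
export GROQ_API_KEY="gsk_placeholder"
# Local providers take a base URL instead of a key
# (11434 is Ollama's default port)
export OLLAMA_BASE_URL="http://localhost:11434"
```

The same variables can live in a `.env` file without the `export` keyword.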
Set API keys in your environment or `.env` file. profClaw reads these at startup and configures each provider. See Environment Variables for the complete list.

## Model Aliases
Model aliases let you reference models by short names instead of full IDs:

| Alias | Provider | Model |
|---|---|---|
| opus | Anthropic | claude-opus-4-6 |
| sonnet | Anthropic | claude-sonnet-4-5 |
| haiku | Anthropic | claude-haiku-4-5 |
| gpt | OpenAI | gpt-4o |
| gemini | Google | gemini-1.5-pro |
| groq | Groq | llama-3.3-70b-versatile |
| local | Ollama | llama3.2 |
| grok | xAI | grok-2 |
| mistral | Mistral | mistral-large-latest |
| deepseek | DeepSeek | deepseek-chat |
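Alias resolution can be pictured as a simple lookup that falls back to treating the input as a full model ID (a sketch, with the mappings taken from the table above; the fallback behavior is an assumption):

```python
# Alias -> (provider, model), mirroring the table above
ALIASES = {
    "opus": ("Anthropic", "claude-opus-4-6"),
    "sonnet": ("Anthropic", "claude-sonnet-4-5"),
    "haiku": ("Anthropic", "claude-haiku-4-5"),
    "gpt": ("OpenAI", "gpt-4o"),
    "gemini": ("Google", "gemini-1.5-pro"),
    "groq": ("Groq", "llama-3.3-70b-versatile"),
    "local": ("Ollama", "llama3.2"),
    "grok": ("xAI", "grok-2"),
    "mistral": ("Mistral", "mistral-large-latest"),
    "deepseek": ("DeepSeek", "deepseek-chat"),
}

def resolve(name):
    """Map an alias to (provider, model); pass full model IDs through."""
    if name in ALIASES:
        return ALIASES[name]
    return (None, name)  # not an alias -- assume a full model ID
```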
## Local Models
For fully offline usage without API costs, profClaw supports Ollama and LM Studio. Both run models locally on your hardware. See the Local LLM guide for setup instructions.

## Resilience
All providers include automatic retry with exponential backoff for transient errors (429, 503, network timeouts).

## Health Check
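As a rough Python sketch (not profClaw's actual implementation), the retry-with-backoff policy for transient errors might look like this, and the same helper can back a basic health check that issues a single probe request:

```python
import random
import time

class TransientError(Exception):
    """Stand-in for a 429/503 response or network timeout."""

def with_retry(call, max_attempts=5, base_delay=0.01):
    """Run `call`, retrying transient errors with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return call()
        except TransientError:
            if attempt == max_attempts - 1:
                raise  # out of attempts -- surface the error
            # delay doubles each attempt, plus a little jitter
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))

def health_check(provider_call):
    """Basic health check: one probe request under the retry policy."""
    try:
        with_retry(provider_call, max_attempts=2)
        return True
    except Exception:
        return False
```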
## Related
- profclaw provider - Add, remove, and test AI providers from the CLI
- profclaw models - List available models and manage aliases
- Configuration Overview - settings.yml and environment variables
- Local LLM Guide - Run profClaw fully offline with Ollama or LM Studio