## Supported Models

Ollama supports hundreds of models. Popular choices in profClaw:

| Alias | Model | Best For |
|---|---|---|
| local / llama | llama3.2 | General purpose |
| deepseek-local | deepseek-r1:7b | Reasoning tasks |
| qwen | qwen2.5:14b | Multilingual |
| mistral-local | mistral:7b | Fast inference |
Any other model installed locally (listed by `ollama list`) can be used by its full name.
Most local Ollama models do not support native tool calling. profClaw automatically falls back to manual tool prompting for these models.
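The fallback above amounts to describing the tools in the prompt and parsing a structured reply. A minimal sketch of that idea, assuming a hypothetical prompt format and tool names (this is illustrative, not profClaw's actual implementation):

```python
import json
import re

# Hypothetical tool schema embedded in the system prompt; the tool
# names and the JSON convention are illustrative assumptions.
TOOL_PROMPT = """You can call a tool by replying with a single JSON object:
{"tool": "<name>", "arguments": {...}}
Available tools: read_file(path), web_search(query).
If no tool is needed, answer normally."""

def extract_tool_call(reply: str):
    """Parse a JSON tool call out of a model reply, if one is present."""
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    if not match:
        return None
    try:
        obj = json.loads(match.group(0))
    except json.JSONDecodeError:
        return None  # braces present but not valid JSON: treat as plain text
    if isinstance(obj, dict) and "tool" in obj:
        return obj
    return None
```

A reply like `{"tool": "read_file", "arguments": {"path": "notes.txt"}}` is parsed into a tool call, while an ordinary prose answer returns `None` and is passed through unchanged.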
## Setup
### Environment Variables

- Ollama server URL. Defaults to `http://localhost:11434`. Override for remote Ollama instances.

### Configuration Example
- .env (remote Ollama)
- settings.yml
- Docker Compose
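For the Docker Compose case, a minimal sketch of running Ollama as a service (the image name and port are Ollama's published defaults; the volume name is illustrative):

```yaml
# Sketch: containerized Ollama exposing the default API port.
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama   # persist pulled models across restarts
volumes:
  ollama_data:
```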
### Model Aliases

| Alias | Model |
|---|---|
| local | llama3.2 |
| llama | llama3.2 |
| deepseek-local | deepseek-r1:7b |
| qwen | qwen2.5:14b |
| mistral-local | mistral:7b |
## Usage Examples
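As a starting point, here is a small Python client for Ollama's documented `/api/chat` endpoint, which is what any local model behind the aliases above is served through. The endpoint and payload shape are from Ollama's REST API; how profClaw itself invokes it is not shown here:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default; override for remote servers

def build_chat_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for a single complete response
    }

def chat(model: str, prompt: str) -> str:
    """Send one chat turn to a running Ollama server and return the reply text."""
    body = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/chat",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

# Example (requires a running Ollama server with the model pulled):
#   print(chat("llama3.2", "Say hello in one word."))
```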
## Notes
- Ollama is always lowest priority in auto-selection. If any cloud key is set, it takes precedence.
- Local models work without internet access, which makes them useful for air-gapped environments.
- GPU acceleration significantly improves performance. Ollama auto-detects CUDA/Metal.
- Tool calling is available via manual prompting fallback for models that don’t support it natively.
## Related
- AI Providers Overview - Compare all 37 supported providers
- LM Studio - Alternative local model runner with a GUI
- Local LLM Guide - Run profClaw with fully local models
- profclaw provider - Add and test providers from the CLI