## Supported Models
| Model | ID | Context | Max Output | Tools | Vision | Input $/1M | Output $/1M |
|---|---|---|---|---|---|---|---|
| Gemini 1.5 Pro | gemini-1.5-pro | 2M | 65K | Yes | Yes | $1.25 | $5.00 |
| Gemini 1.5 Flash | gemini-1.5-flash | 1M | 8K | Yes | Yes | $0.075 | $0.30 |
| Gemini 2.0 Flash Thinking | gemini-2.0-flash-thinking-exp | 1M | 8K | Yes | Yes | $0.075 | $0.30 |
## Setup

### Get an API key
Go to aistudio.google.com and create an API key (free tier available).
### Environment Variables

Either `GOOGLE_API_KEY` or `GOOGLE_GENERATIVE_AI_API_KEY` is accepted.

| Variable | Description |
|---|---|
| `GOOGLE_API_KEY` | Your Google AI API key. |
| `GOOGLE_GENERATIVE_AI_API_KEY` | Alternative variable name for the Google API key. Takes precedence over `GOOGLE_API_KEY` if both are set. |

### Configuration Example
Configuration can go in a `.env` file or in `settings.yml`.
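A minimal sketch of both files. The `.env` keys come from the Environment Variables table above; the `settings.yml` field names (`provider`, `model`, `api_key`) are assumptions for illustration, since the exact profClaw schema isn't shown here.

```shell
# .env
GOOGLE_API_KEY=your-api-key-here
```

```yaml
# settings.yml (field names assumed; check your profClaw schema)
provider: google
model: gemini-1.5-pro
api_key: ${GOOGLE_API_KEY}
```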
## Model Aliases

| Alias | Model |
|---|---|
| gemini | gemini-1.5-pro |
| gemini-flash | gemini-1.5-flash |
| gemini-thinking | gemini-2.0-flash-thinking-exp |
## Usage Examples
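As a sketch of what happens under the hood, the snippet below builds the raw request for Google's `generativelanguage` v1beta REST API (the endpoint profClaw's Google provider would target). The helper name `build_generate_request` is hypothetical; profClaw handles this for you once the provider is configured, and the API key would normally be sent via the `x-goog-api-key` header or a `key=` query parameter.

```python
import json

# Base URL of the Google AI (Generative Language) REST API.
API_BASE = "https://generativelanguage.googleapis.com/v1beta"

def build_generate_request(model: str, prompt: str) -> tuple[str, bytes]:
    """Return the (url, json_body) pair for a generateContent call.

    Hypothetical helper for illustration; no network request is made.
    """
    url = f"{API_BASE}/models/{model}:generateContent"
    body = json.dumps(
        {"contents": [{"role": "user", "parts": [{"text": prompt}]}]}
    ).encode("utf-8")
    return url, body

url, body = build_generate_request("gemini-1.5-flash", "Say hello in one word.")
print(url)
print(json.loads(body))
```

Swapping in `gemini-1.5-pro` or an alias-resolved model ID changes only the URL; the body shape is the same for all models in the table above.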
## Notes
- Gemini 1.5 Pro has a 2M token context window - the largest of any profClaw provider.
- Gemini 1.5 Flash is very cheap at $0.075/1M input tokens, good for high-volume tasks.
- Free tier is available via Google AI Studio with rate limits.
- For Google Workspace / enterprise use, see the Vertex AI option via a custom `base_url`.
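For the Vertex AI route mentioned above, a hedged `settings.yml` fragment might look like the following. The `base_url` field name comes from the note; the other field names and the placeholder endpoint shape are assumptions, since Vertex AI URLs depend on your project and region.

```yaml
# settings.yml (sketch; field names and endpoint shape assumed)
provider: google
model: gemini-1.5-pro
base_url: https://REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/REGION/publishers/google
```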
## Related
- AI Providers Overview - Compare all 37 supported providers
- Anthropic - Claude models with native tool calling
- OpenRouter - Access Gemini via OpenRouter for routing flexibility
- `profclaw provider` - Add and test providers from the CLI