Google Gemini models offer some of the largest context windows available - up to 2M tokens for Gemini 1.5 Pro. All Gemini models support native function calling and vision.

Supported Models

| Model                     | ID                            | Context | Max Output | Tools | Vision | Input $/1M | Output $/1M |
|---------------------------|-------------------------------|---------|------------|-------|--------|------------|-------------|
| Gemini 1.5 Pro            | gemini-1.5-pro                | 2M      | 65K        | Yes   | Yes    | $1.25      | $5.00       |
| Gemini 1.5 Flash          | gemini-1.5-flash              | 1M      | 8K         | Yes   | Yes    | $0.075     | $0.30       |
| Gemini 2.0 Flash Thinking | gemini-2.0-flash-thinking-exp | 1M      | 8K         | Yes   | Yes    | $0.075     | $0.30       |
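The per-1M rates in the table make request costs easy to estimate. As a quick sketch (the token counts here are just an illustrative workload, not profclaw output):

```shell
# Estimate the cost of one gemini-1.5-pro call:
# 100K input tokens + 2K output tokens at the table rates above.
awk 'BEGIN {
  input_tokens = 100000; output_tokens = 2000
  in_rate = 1.25; out_rate = 5.00   # $ per 1M tokens, from the table
  cost = input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate
  printf "$%.3f\n", cost            # → $0.135
}'
```

The same workload on gemini-1.5-flash would cost roughly 16x less on input, which is why Flash is the usual choice for high-volume tasks.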

Setup

1. Get an API key

   Go to aistudio.google.com and create an API key (a free tier is available).

2. Set the environment variable

   # Either variable name works:
   export GOOGLE_API_KEY=AIza...
   export GOOGLE_GENERATIVE_AI_API_KEY=AIza...

3. Verify

   profclaw doctor --provider google
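You can also sanity-check the key directly against Google's Generative Language REST API, independently of profclaw (the endpoint below is Google's public `models` listing route):

```shell
# List available models with your key; a JSON "models" array means the key works.
curl -s "https://generativelanguage.googleapis.com/v1beta/models?key=${GOOGLE_API_KEY}" \
  | head -n 5
```

An error payload mentioning `API_KEY_INVALID` means the key was rejected.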

Environment Variables

GOOGLE_API_KEY
  string, required unless GOOGLE_GENERATIVE_AI_API_KEY is set
  Your Google AI API key. Either GOOGLE_API_KEY or GOOGLE_GENERATIVE_AI_API_KEY is accepted.

GOOGLE_GENERATIVE_AI_API_KEY
  string, optional
  Alternative name for the Google API key. Takes precedence over GOOGLE_API_KEY if both are set.
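The precedence rule can be sketched in shell (a sketch of the lookup order described above, not profclaw's actual source):

```shell
# GOOGLE_GENERATIVE_AI_API_KEY wins when both variables are set.
GOOGLE_API_KEY="key-a"
GOOGLE_GENERATIVE_AI_API_KEY="key-b"
RESOLVED_KEY="${GOOGLE_GENERATIVE_AI_API_KEY:-$GOOGLE_API_KEY}"
echo "$RESOLVED_KEY"   # prints key-b
```

If GOOGLE_GENERATIVE_AI_API_KEY is unset, the `${var:-fallback}` expansion falls through to GOOGLE_API_KEY.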

Configuration Example

GOOGLE_API_KEY=AIzaSy...

Model Aliases

| Alias           | Model                         |
|-----------------|-------------------------------|
| gemini          | gemini-1.5-pro                |
| gemini-flash    | gemini-1.5-flash              |
| gemini-thinking | gemini-2.0-flash-thinking-exp |

Usage Examples

# Large context analysis
profclaw chat --model gemini "Analyze this entire codebase"

# Fast and cheap
profclaw chat --model gemini-flash "Write a quick summary"

# Thinking model
profclaw chat --model gemini-thinking "Walk me through this proof"

Notes

  • Gemini 1.5 Pro has a 2M token context window - the largest of any profClaw provider.
  • Gemini 1.5 Flash is very cheap at $0.075/1M input tokens, good for high-volume tasks.
  • Free tier is available via Google AI Studio with rate limits.
  • For Google Workspace / enterprise use, see the Vertex AI option via a custom base_url.
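For the Vertex AI route, the endpoint follows Google's documented regional pattern. The variable name below is hypothetical (this page does not show profclaw's config schema), and the project and region are placeholders:

```shell
# Hypothetical base_url override for Vertex AI; the GOOGLE_BASE_URL name,
# project ID, and region are assumptions — check your profclaw config reference.
GOOGLE_API_KEY=AIzaSy...
GOOGLE_BASE_URL=https://us-central1-aiplatform.googleapis.com/v1/projects/my-project/locations/us-central1/publishers/google
```

Vertex AI also normally authenticates with IAM credentials rather than an AI Studio API key, so expect additional auth setup on the enterprise path.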