Perplexity’s online models have built-in internet access and return grounded answers with citations. Unlike standard LLMs, they can answer questions about current events.
## Supported Models
| Model | ID | Context | Web Search | Notes |
|---|---|---|---|---|
| Sonar Huge | llama-3.1-sonar-huge-128k-online | 128K | Yes | Best quality |
| Sonar Large | llama-3.1-sonar-large-128k-online | 128K | Yes | Balanced |
| Sonar Small | llama-3.1-sonar-small-128k-online | 128K | Yes | Fastest/cheapest |
## Setup
Set the environment variable:

```bash
export PERPLEXITY_API_KEY=pplx-...
```

Verify the provider is reachable:

```bash
profclaw doctor --provider perplexity
```
## Environment Variables

| Variable | Description |
|---|---|
| PERPLEXITY_API_KEY | Your Perplexity API key. Format: pplx-... |
## Configuration Example

```bash
PERPLEXITY_API_KEY=pplx-...
```

```yaml
providers:
  perplexity:
    api_key: "${PERPLEXITY_API_KEY}"
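The `${PERPLEXITY_API_KEY}` placeholder is standard shell-style substitution. How profclaw itself expands it is not specified here; a minimal Python sketch of the same pattern, using only the standard library, looks like:

```python
import os

# Stand-in value for illustration; in practice the variable
# comes from your shell or environment file.
os.environ["PERPLEXITY_API_KEY"] = "pplx-example"

# os.path.expandvars performs shell-style ${VAR} substitution.
raw = 'api_key: "${PERPLEXITY_API_KEY}"'
print(os.path.expandvars(raw))  # → api_key: "pplx-example"
```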
## Model Aliases

| Alias | Model |
|---|---|
| perplexity | llama-3.1-sonar-huge-128k-online |
| pplx-fast | llama-3.1-sonar-small-128k-online |
## Usage Examples

```bash
# Current events / web search
profclaw chat --model perplexity "What are the latest changes to the TypeScript spec?"

# Fast web lookup
profclaw chat --model pplx-fast "Current Node.js LTS version?"
```
## Notes

- Status: Beta. Perplexity models have unique behavior (web search, citations) that differs from standard chat models.
- API endpoint: https://api.perplexity.ai (OpenAI-compatible).
- All Sonar Online models have real-time internet access built in.
- Responses include source citations automatically.
- Not recommended for tasks requiring deterministic, non-web outputs.
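Because the endpoint is OpenAI-compatible, you can also call it directly without profclaw. The sketch below uses only the Python standard library; the request body follows the standard chat-completions schema, the model ID comes from the table above, and the exact response fields (e.g. where citations appear) should be checked against Perplexity's own API docs:

```python
import json
import os
import urllib.request

# Standard OpenAI-style chat-completions payload; model ID from
# the Supported Models table in this document.
payload = {
    "model": "llama-3.1-sonar-small-128k-online",
    "messages": [{"role": "user", "content": "Current Node.js LTS version?"}],
}

api_key = os.environ.get("PERPLEXITY_API_KEY")
if api_key:  # only send a request when a key is actually configured
    req = urllib.request.Request(
        "https://api.perplexity.ai/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    print(body["choices"][0]["message"]["content"])
```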