Craft Agents supports multiple API providers through a built-in preset system. You can connect to Anthropic directly, use aggregators like OpenRouter, run local models via Ollama, or point to any API endpoint compatible with the Anthropic Messages format.
Documentation Index
Fetch the complete documentation index at: https://agents.craft.do/docs/llms.txt
Use this file to discover all available pages before exploring further.
This page covers Anthropic-compatible providers. For Codex/OpenAI connections and multi‑connection setup, see LLM Connections.
Supported Providers
| Provider | Base URL | API Key Required | Notes |
|---|---|---|---|
| Anthropic | https://api.anthropic.com | Yes | Default provider. No model override needed. |
| OpenRouter | https://openrouter.ai/api | Yes | Access multiple AI providers through one API. |
| Vercel AI Gateway | https://ai-gateway.vercel.sh | Yes | Unified gateway for AI model routing. |
| Ollama | http://localhost:11434 | No | Run models locally. Requires Ollama 0.14+. |
| Custom | Any URL | Depends on provider | Any Anthropic-compatible endpoint. |
Setting Up a Provider
During First Launch
- In the setup wizard, select API Key
- Enter your API key
- Select a Base URL preset from the dropdown (Anthropic, OpenRouter, Vercel AI Gateway, or Custom)
- Optionally specify a Model name (required for non-Anthropic providers)
- The connection is tested automatically before saving
In Settings
- Open Settings (gear icon or `Cmd+,`)
- Click on the API Connection section
- Change your API key, base URL, or model as needed
Model Names
For Anthropic, no model override is needed — Craft Agents uses its built-in model routing (Sonnet, Opus, Haiku) automatically. For OpenRouter and Vercel AI Gateway, models use the `provider/model-name` format (e.g. `anthropic/claude-sonnet-4`).
When the Model field is left empty for non-Anthropic providers, Craft Agents falls back to Anthropic model name formatting. This works for providers that accept Anthropic model names natively, but not for all providers.
Provider Details
OpenRouter
OpenRouter gives you access to hundreds of AI models through a single API key. It handles billing, rate limiting, and fallbacks across providers.
- Get your API key at openrouter.ai/keys
- Select the OpenRouter preset in the Base URL dropdown
- Set your model (e.g. `anthropic/claude-sonnet-4`)
Ollama (Local Models)
Ollama runs open-source models locally on your machine. No API key is required, and data never leaves your computer.
Requirements:
- Ollama 0.14 or newer (for Anthropic-compatible API format)
- A model pulled locally
Setup:
- Select the Custom preset in the Base URL dropdown
- Enter `http://localhost:11434` as the URL
- Leave the API key empty
- Set the model name (e.g. `llama3.2`)
Ollama requires version 0.14+ for compatibility with Craft Agents. Earlier versions do not support the Anthropic Messages API format. Update with `ollama update` if needed.
Vercel AI Gateway
Vercel AI Gateway provides a unified endpoint for routing requests to multiple AI providers with built-in observability and caching.
- Get your API key from your Vercel dashboard
- Select the Vercel AI Gateway preset
- Set your model using the `provider/model-name` format
Custom Endpoint
For any API that implements the Anthropic Messages format:
- Select the Custom preset
- Enter the full base URL of your endpoint
- Enter your API key (if required)
- Specify the model name your endpoint expects
The endpoint must implement the Anthropic Messages API at the `/v1/messages` path.
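For reference, a minimal Anthropic-style Messages request that a compatible endpoint should accept looks like this (the model name and prompt are placeholders; see the Anthropic Messages API documentation for the full schema):

```json
{
  "model": "my-model-name",
  "max_tokens": 1024,
  "messages": [
    { "role": "user", "content": "Hello" }
  ]
}
```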
Image Input for Custom Endpoints
Custom endpoints are text-only by default. If your endpoint serves a multimodal model — for example Gemma 4 via Ollama or another OpenAI-compatible proxy — you must opt it into image support explicitly. There is no automatic capability detection: a model that silently strips images would otherwise produce confusing answers (the user sees the image in their bubble, but the model never receives it).
Toggle from the chat input (recommended)
For everyday use, enable image input directly from the chat input model picker:
- Open the model dropdown above the chat input.
- Each model row on a custom-endpoint (`pi_compat`) connection shows a small image icon on the right. The icon is dim when image input is off, bright when on.
- Click the icon to flip the per-model `supportsImages` override. The change persists immediately to your config.
The picker toggle and the pre-flight banner write to per-model overrides only (`models[i].supportsImages`). The endpoint-wide default (`customEndpoint.supportsImages`) is set via JSON config — see below.
JSON config (automation / advanced)
For headless setups, automation scripts, or when you want every model on an endpoint multimodal at once, write the connection config directly. These examples use the low-level LLM connection schema from LLM Connections, where `customEndpoint.api` selects the endpoint wire format.
Per-model opt-in
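A sketch of a per-model opt-in. The `pi_compat` connection type, `customEndpoint.api`, and `models[i].supportsImages` come from this page; the remaining field names (`type`, `baseUrl`, `id`) and the `"anthropic"` wire-format value are assumptions about the connection schema — check LLM Connections for the exact shape:

```json
{
  "type": "pi_compat",
  "customEndpoint": {
    "api": "anthropic",
    "baseUrl": "http://localhost:11434"
  },
  "models": [
    { "id": "gemma3", "supportsImages": true }
  ]
}
```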
Whole-endpoint opt-in
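A sketch of the endpoint-wide default, which enables image input for every model on the connection at once. As above, only `customEndpoint.supportsImages` and `customEndpoint.api` are documented on this page; the other field names are assumptions:

```json
{
  "type": "pi_compat",
  "customEndpoint": {
    "api": "anthropic",
    "baseUrl": "http://localhost:11434",
    "supportsImages": true
  }
}
```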
Per-model overrides take precedence over the endpoint-wide default, so you can opt one specific model out of vision (`supportsImages: false`) even if the endpoint default is true.
How It Works
When you configure a non-default provider, Craft Agents stores:
- The API key in the encrypted credentials file (`~/.craft-agent/credentials.enc`)
- The base URL and default model in the LLM connection configuration
At runtime, the base URL is passed via the `ANTHROPIC_BASE_URL` environment variable to the underlying Claude Code SDK.
Troubleshooting
Connection test fails
Verify:
- The base URL is correct and accessible from your machine
- Your API key is valid and has sufficient permissions
- The endpoint supports the Anthropic Messages API format (`/v1/messages`)
Model not found errors
Check that the model name matches exactly what your provider expects:
- OpenRouter/Vercel: Use the `provider/model-name` format (e.g. `anthropic/claude-sonnet-4`)
- Ollama: Use the local model name (e.g. `llama3.2`)
- Custom: Check your provider’s documentation for valid model identifiers
Authentication errors
- Ensure your API key is correct and hasn’t expired
- For Ollama: no API key should be set (leave it empty)
- Check if your key has available credits/quota
Ollama not connecting
- Verify Ollama is running: `ollama list`
- Check you’re on version 0.14+: `ollama --version`
- Ensure the model is pulled: `ollama pull llama3.2`
- Verify the URL is `http://localhost:11434` (note: HTTP, not HTTPS)
Rate limiting
If you hit rate limits, check your provider’s usage limits and consider upgrading your plan or using a different provider.