Models & Providers
Connect AI providers to give your agent access to language models, image generation, and more.
How It Works
Your agent needs at least one AI provider to function. ClawManager supports cloud providers (like Anthropic and OpenAI) and local models (like Ollama). You can connect as many as you want and switch between them.
In the Models tab of ClawManager, you'll see a provider grid showing connection status at a glance - green for connected, a warning if auth is missing or expired. You can filter providers by type: OAuth, API Key, or Local.
Connecting a Provider
1. Open ClawManager and go to the Models tab.
2. In the Providers section, click the provider you want to set up.
3. For API Key providers: paste your key and save. For OAuth providers: click the connect button and authorize in the popup.
4. The provider card turns green when connected. You're ready to go.
Supported Providers
Anthropic
Auth: API Key or OAuth. Claude models (Opus, Sonnet, Haiku). The default choice for many users.
Setup: Paste your API key from console.anthropic.com, or use one-click OAuth to connect your Anthropic account directly.
OpenAI
Auth: API Key or ChatGPT Plus/Pro OAuth. GPT-4o, o1, o3, DALL-E, Whisper, and more.
Setup: Enter your API key from platform.openai.com. If you have ChatGPT Plus or Pro, you can also connect via OAuth for free usage through your subscription.
Google Gemini
Auth: API Key or OAuth. Gemini models via Google AI Studio.
Setup: Paste an API key from aistudio.google.com/apikey (free tier: 60 req/min), or connect with Google OAuth via the Gemini CLI flow.
OpenRouter
Auth: API Key. Access hundreds of models from multiple providers through one API key.
Setup: Get your key from openrouter.ai and paste it in ClawManager.
Groq
Auth: API Key. Ultra-fast inference for Llama, Mixtral, and other open models.
Setup: Get your key from console.groq.com.
Mistral
Auth: API Key. Mistral and Codestral models from Mistral AI.
Setup: Get your key from console.mistral.ai.
xAI (Grok)
Auth: API Key. Grok models from xAI.
Setup: Enter your xAI API key.
GitHub Copilot
Auth: OAuth. Route through your GitHub Copilot subscription. Two setup methods: direct OAuth or via VS Code proxy.
Setup: Connect via OAuth through GitHub, or use the VS Code Copilot extension as a proxy (requires VS Code with Copilot installed and signed in).
Ollama
Auth: Local (no key needed). Run open-source models locally on your machine. No cloud, no cost.
Setup: Install Ollama from ollama.com and start the server. ClawManager auto-detects it when running.
LM Studio
Auth: Local (no key needed). Another popular option for running models locally via a GUI.
Setup: Start LM Studio's local server. ClawManager connects to it the same way as Ollama.
Custom providers: You can also add any OpenAI-compatible endpoint using the Add Provider button. This works with Azure OpenAI, AWS Bedrock proxies, and other compatible APIs.
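OpenAI-compatible endpoints all accept the same /v1/chat/completions request shape, which is why one "custom provider" option covers so many services. A quick way to sanity-check an endpoint before adding it to ClawManager is to build that request yourself; here is a minimal sketch (the base URL, API key, and model name are placeholders for your own endpoint, not real values):

```python
import json

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    """Build the pieces of an OpenAI-compatible /v1/chat/completions call.

    Returns (url, headers, body) so you can send it with any HTTP client.
    All argument values below are placeholders - substitute your own.
    """
    url = f"{base_url.rstrip('/')}/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

# Hypothetical endpoint for illustration:
url, headers, body = build_chat_request(
    "https://my-proxy.example.com", "sk-test", "my-model", "Hello"
)
print(url)  # https://my-proxy.example.com/v1/chat/completions
```

If a plain HTTP POST of that body to your endpoint returns a completion, the endpoint should work as a custom provider.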
Choosing Your Model
Once providers are connected, configure which models your agent uses:
Primary Model
The main model your agent uses for conversations. Choose the best balance of quality and cost for your needs.
Fallback Models
If the primary model is unavailable or rate-limited, your agent automatically falls back to these models in order.
Image Model
The model that handles image generation requests (e.g., DALL-E).
Heartbeat Model
Optionally use a cheaper/faster model for background heartbeat tasks to save costs.
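The fallback behavior described above amounts to trying models in order until one responds. A rough sketch of the idea, under stated assumptions (the model names and the `call_model` function are illustrative stand-ins, not ClawManager internals):

```python
class ModelUnavailable(Exception):
    """Raised when a model is down or rate-limited (illustrative)."""

def complete(prompt, models, call_model):
    """Try each model in order; return (model_used, response) from the
    first one that succeeds. `models` is [primary, fallback1, ...]."""
    last_error = None
    for model in models:
        try:
            return model, call_model(model, prompt)
        except ModelUnavailable as err:
            last_error = err  # rate-limited or down: try the next one
    raise last_error or ModelUnavailable("no models configured")

# Demo with a fake backend where the primary is rate-limited:
def fake_call(model, prompt):
    if model == "primary-model":
        raise ModelUnavailable("429 rate limited")
    return f"{model} says hi"

used, reply = complete("hello", ["primary-model", "fallback-model"], fake_call)
print(used)  # fallback-model
```

The practical takeaway: configure at least one fallback from a different provider than your primary, so a single provider outage doesn't take your agent down.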
Favorites
Click the heart icon (♥) next to any model to add it to your favorites. Favorited models appear at the top of selection dropdowns, making it quick to switch between models you use often.
AI Settings
The Models page also lets you tune AI behavior settings:
- Thinking Level - Control how much the model "thinks" before responding (extended thinking / reasoning).
- Channel Pins - Pin specific models to specific channels (e.g., use GPT-4o for Discord, Claude for Telegram).
- Custom Models - Add any model by specifying provider and model ID manually.
Tips
- Use /model in chat to switch models mid-conversation.
- Connecting multiple providers gives your agent resilience - if one goes down, fallbacks kick in.
- Local models (Ollama) are free and private, great for testing or sensitive work.
- Per-agent model overrides let you assign different models to different agents (e.g., a cheaper model for routine tasks).
Next Steps
With your models connected, head to Connections to set up a messaging platform, or check out Agents to customize your agent's personality.
