AI Agents
Cainty's AI agent system lets you automate content creation, with support for six LLM providers and more than 24 models.
How It Works
- Create an agent — Choose a provider, model, and system prompt
- Execute — Run manually or on a schedule
- Review — Generated content enters the queue for review
- Publish — Approve, edit, or reject from the content queue
Supported Providers
| Provider | Example Models | Config Key |
|---|---|---|
| Anthropic | Claude Sonnet 4.5, Claude Haiku | ANTHROPIC_API_KEY |
| OpenAI | GPT-4o, GPT-4o-mini | OPENAI_API_KEY |
| Google | Gemini 2.0 Flash, Gemini Pro | GOOGLE_API_KEY |
| DeepSeek | DeepSeek Chat, DeepSeek Reasoner | DEEPSEEK_API_KEY |
| xAI | Grok 2, Grok 3 | XAI_API_KEY |
| Ollama | Any local model | OLLAMA_BASE_URL |
Setting Up API Keys
You can configure API keys in two ways:
- .env file — Add keys directly to your configuration
- Admin UI — Go to Admin > Settings > LLM Keys to manage keys through the interface (stored encrypted)
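For the `.env` option, use the config keys from the provider table above and set only the providers you plan to use. The values below are placeholders; `OLLAMA_BASE_URL` is shown with Ollama's default local address:

```
# Set only the providers you use; values are placeholders
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
GOOGLE_API_KEY=...
DEEPSEEK_API_KEY=...
XAI_API_KEY=...
# Ollama takes a base URL instead of an API key
OLLAMA_BASE_URL=http://localhost:11434
```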
Creating an Agent
Navigate to Admin > Agents > New Agent and configure:
- Name — A descriptive name for the agent
- Provider — Which LLM service to use
- Model — Specific model from the provider
- System Prompt — Instructions that define the agent's behavior
- User Prompt Template — Template for generating content (supports variables)
- Target Category — Where generated posts will be categorized
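Conceptually, an agent is a small record of the fields above. The sketch below is illustrative only (the field names and the `{topic}` variable are assumptions, not Cainty's actual schema); it shows how a prompt template with variables gets filled in at run time:

```python
# Hypothetical agent definition -- field names mirror the form
# above, not Cainty's internal schema.
agent = {
    "name": "Weekly tech roundup",
    "provider": "anthropic",
    "model": "claude-sonnet-4.5",  # placeholder model id
    "system_prompt": "You are a concise tech blogger.",
    "user_prompt_template": "Write a 300-word post about {topic}.",
    "target_category": "Technology",
}

# At run time, template variables are substituted before the
# prompt is sent to the provider:
prompt = agent["user_prompt_template"].format(topic="local LLMs")
print(prompt)  # Write a 300-word post about local LLMs.
```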
Content Queue
All AI-generated content goes through a review queue before publishing:
- Pending — Newly generated, awaiting review
- Approved — Published as a post
- Rejected — Discarded
Review content at Admin > Content Queue. You can edit the title, body, and category before approving.
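The review flow is a one-way transition out of `pending`. A minimal sketch, using the statuses listed above (the class and method names are illustrative, not Cainty's API):

```python
# Sketch of the content-queue lifecycle: items start as "pending"
# and can be edited until they are approved or rejected.
class QueueItem:
    def __init__(self, title, body, category):
        self.title = title
        self.body = body
        self.category = category
        self.status = "pending"   # newly generated, awaiting review

    def approve(self, **edits):
        # Optional edits (title/body/category) are applied on approval
        for field, value in edits.items():
            setattr(self, field, value)
        self.status = "approved"  # published as a post

    def reject(self):
        self.status = "rejected"  # discarded

item = QueueItem("Draft title", "Generated body...", "Technology")
item.approve(title="Edited title")
print(item.status)  # approved
```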
Agent Memory
Agents maintain memory across runs. This helps them:
- Avoid generating duplicate topics
- Build on previous content
- Maintain consistent voice and style
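How Cainty stores memory internally isn't specified here, but the duplicate-topic check can be pictured as remembering the topics from previous runs and filtering new candidates against them. A sketch under that assumption, not the actual implementation:

```python
# Sketch: agent "memory" as a set of previously covered topics.
memory = {"rust async", "vector databases"}

def pick_new_topic(candidates, memory):
    """Return the first candidate the agent hasn't written about."""
    for topic in candidates:
        if topic not in memory:
            memory.add(topic)  # remember it for future runs
            return topic
    return None  # every candidate was a duplicate

topic = pick_new_topic(["vector databases", "local LLMs"], memory)
print(topic)  # local LLMs
```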
Run History
Every agent execution is logged. View run history at Admin > Agents > Runs to see:
- Execution timestamp
- Provider and model used
- Success or failure status
- Token usage and response time
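A run record carries the fields listed above. One hypothetical shape (field names are assumptions, not Cainty's schema):

```python
from dataclasses import dataclass

# Hypothetical run-history record based on the fields listed above.
@dataclass
class AgentRun:
    timestamp: str     # execution time (ISO 8601)
    provider: str      # e.g. "openai"
    model: str         # e.g. "gpt-4o-mini"
    success: bool      # success or failure status
    tokens_used: int   # total prompt + completion tokens
    response_ms: int   # provider response time

run = AgentRun("2025-01-01T12:00:00Z", "openai", "gpt-4o-mini",
               True, 1420, 2300)
```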