The Ultimate AI Integration Package for Laravel
Enterprise-grade, multi-provider AI SDK with caching, cost tracking, and production-ready features
Laravel AI Integration provides a unified, elegant API to interact with multiple AI providers including OpenAI, Anthropic (Claude), Google (Gemini), Ollama, and Groq. Built specifically for Laravel 11+, it abstracts provider complexity while offering powerful features like streaming, function calling, embeddings, and more.
Installation • Usage • Features • FAQ • Examples
- **5 AI Providers:** OpenAI, Anthropic (Claude), Google (Gemini), Ollama, Groq
- **Chat Completion:** Standard and streaming responses
- **Embeddings:** Generate vector embeddings for semantic search
- **Image Generation:** DALL-E and compatible APIs
- **Function Calling:** Tool/function use support
- **Streaming:** Real-time SSE streaming for chat
- **Response Caching:** Intelligent caching with Redis/database support (v2.0)
- **Cost Tracking:** Token counting and cost calculation (v2.0)
- **Retry Logic:** Exponential backoff with circuit breaker (v2.0)
- **Prompt Templates:** Reusable prompt system (v2.0)
- **Eloquent Integration:** Traits for AI-powered models
- **Task Abstraction:** Pre-built tasks for common operations
- **Artisan Commands:** CLI for code generation, cache management, usage stats
- **Jobs:** Queue support for background processing
Install via Composer:
```shell
composer require rahasistiyak/laravel-ai-integration
```

Publish the configuration file:
```shell
php artisan vendor:publish --tag=ai-config
```

Add your API keys to `.env`:
```ini
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...
GROQ_API_KEY=...
OLLAMA_BASE_URL=http://localhost:11434
AI_DEFAULT_PROVIDER=openai

# Optional: Enable Caching & Tracking (v2.0)
AI_CACHE_ENABLED=true
AI_TRACKING_ENABLED=true
```

Edit `config/ai.php` to customize provider settings:
```php
return [
    'default' => env('AI_DEFAULT_PROVIDER', 'openai'),

    'providers' => [
        'openai' => [
            'driver' => 'openai',
            'api_key' => env('OPENAI_API_KEY'),
            'base_url' => env('OPENAI_BASE_URL', 'https://api.openai.com/v1'),
            'timeout' => 30,
            'models' => [
                'chat' => ['gpt-4', 'gpt-3.5-turbo'],
                'embedding' => ['text-embedding-ada-002'],
            ],
        ],
        // Additional providers...
    ],
];
```

Then make your first request through the `AI` facade:

```php
use Rahasistiyak\LaravelAiIntegration\Facades\AI;

$response = AI::chat()
    ->messages([
        ['role' => 'user', 'content' => 'Explain quantum computing in simple terms'],
    ])
    ->get();

echo $response->content();
```

That's it! You're now using AI in Laravel with caching and cost tracking enabled by default.
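Under the hood, response caching needs a deterministic key per request. The package's actual key scheme isn't documented here; the sketch below is one plausible approach in plain PHP (the `cacheKey()` helper is hypothetical, not part of the package), hashing the provider, model, and normalized payload:

```php
<?php
// Illustrative only: derive a deterministic cache key for a chat request.
// The real package may use a different scheme entirely.

/**
 * Build a cache key from everything that affects the response.
 * Identical requests produce identical keys; any change busts the cache.
 */
function cacheKey(string $provider, string $model, array $messages, array $params = []): string
{
    // Sort parameters so ['a' => 1, 'b' => 2] and ['b' => 2, 'a' => 1] match.
    ksort($params);
    $payload = json_encode([$provider, $model, $messages, $params]);

    return 'ai:' . hash('sha256', $payload);
}

$key = cacheKey('openai', 'gpt-4', [
    ['role' => 'user', 'content' => 'Explain quantum computing in simple terms'],
]);
// $key is stable across identical calls, so repeated requests hit the cache
```

Note that message order is deliberately left significant, since reordering a conversation changes the response.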
| Provider | Chat | Streaming | Embeddings | Images | Function Calling |
|---|---|---|---|---|---|
| OpenAI | ✅ | ✅ | ✅ | ✅ | ✅ |
| Anthropic (Claude) | ✅ | ✅ | ❌ | ❌ | ✅ |
| Google (Gemini) | ✅ | ✅ | ✅ | ❌ | ✅ |
| Groq | ✅ | ✅ | ❌ | ❌ | ✅ |
| Ollama | ✅ | ✅ | ✅ | ❌ | ✅ |
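Not every provider covers every capability, and any single provider can fail at runtime. A generic fallback wrapper is easy to build in plain PHP; the `withFallback()` helper below is illustrative, not part of the package API:

```php
<?php
// Illustrative only: try a list of callables (e.g. one per provider) in order,
// returning the first successful result.

/**
 * @param array<string, callable> $attempts  Map of label => callable
 * @return mixed  Result of the first callable that does not throw
 * @throws RuntimeException when every attempt fails
 */
function withFallback(array $attempts)
{
    $errors = [];
    foreach ($attempts as $label => $attempt) {
        try {
            return $attempt();
        } catch (\Throwable $e) {
            $errors[$label] = $e->getMessage(); // remember why this one failed
        }
    }
    throw new \RuntimeException('All providers failed: ' . json_encode($errors));
}

// Usage sketch: each closure would wrap a real driver call, e.g.
// fn () => AI::driver('openai')->chat($messages)
$result = withFallback([
    'primary'  => fn () => throw new \RuntimeException('rate limited'),
    'fallback' => fn () => 'response from fallback provider',
]);
// $result === 'response from fallback provider'
```

Because the helper only sees closures, it works the same whether the attempts target different providers, different models, or a cached copy.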
Send a standard chat request:

```php
$response = AI::chat()
    ->messages([
        ['role' => 'system', 'content' => 'You are a helpful assistant'],
        ['role' => 'user', 'content' => 'Explain Laravel in one sentence'],
    ])
    ->get();

echo $response->content();
// "Laravel is a modern PHP framework..."
```

Stream responses in real-time:
```php
AI::chat()
    ->messages([
        ['role' => 'user', 'content' => 'Write a short story about AI'],
    ])
    ->stream(function ($chunk) {
        echo $chunk; // Output each chunk as it arrives
    });
```

Switch providers per request with `driver()`:

```php
// Use Anthropic (Claude)
$response = AI::driver('anthropic')
    ->chat([
        ['role' => 'user', 'content' => 'Hello Claude!'],
    ]);

// Use Google Gemini
$response = AI::driver('google')
    ->chat([
        ['role' => 'user', 'content' => 'Hello Gemini!'],
    ]);

// Use Groq
$response = AI::driver('groq')
    ->chat([
        ['role' => 'user', 'content' => 'Hello Groq!'],
    ]);

// Use local Ollama
$response = AI::driver('ollama')
    ->chat([
        ['role' => 'user', 'content' => 'Hello Llama!'],
    ]);
```

Generate vector embeddings for semantic search:
```php
$embedding = AI::embed()->generate('Your text here');
// Returns: [0.0123, -0.0234, 0.0156, ...]
```

Add AI capabilities to your models:
```php
use Rahasistiyak\LaravelAiIntegration\Traits\HasAiEmbeddings;

class Article extends Model
{
    use HasAiEmbeddings;
}

// Generate embeddings
$article = Article::find(1);
$embedding = $article->generateEmbedding();
```

Use function calling for structured outputs:
```php
$response = AI::chat()
    ->withTools([
        [
            'type' => 'function',
            'function' => [
                'name' => 'get_weather',
                'description' => 'Get the current weather for a location',
                'parameters' => [
                    'type' => 'object',
                    'properties' => [
                        'location' => [
                            'type' => 'string',
                            'description' => 'City name',
                        ],
                        'unit' => [
                            'type' => 'string',
                            'enum' => ['celsius', 'fahrenheit'],
                        ],
                    ],
                    'required' => ['location'],
                ],
            ],
        ],
    ])
    ->messages([
        ['role' => 'user', 'content' => 'What\'s the weather in Tokyo?'],
    ])
    ->get();
```

Use pre-built tasks for common operations:
```php
// Text classification
$category = AI::task()->classify(
    'This new GPU delivers incredible performance for AI workloads',
    ['Technology', 'Fashion', 'Sports', 'Politics']
);
// Returns: "Technology"
```

Generate images:

```php
$image = AI::image()->generate('A futuristic city at sunset', [
    'size' => '1024x1024',
    'quality' => 'hd',
]);
// Returns: ['url' => 'https://...']
```

Generate code via Artisan:
```shell
php artisan ai:generate-code "Create a UserObserver that logs model events" --language=php
```

Process AI tasks in the background:
```php
use Rahasistiyak\LaravelAiIntegration\Jobs\ProcessAiTask;

ProcessAiTask::dispatch('classify', $text, [
    'labels' => ['Positive', 'Negative', 'Neutral'],
]);
```

Use the `model()` method on the chat builder to specify a different model for the request:
```php
// Use a specific model with the default provider
AI::chat()
    ->model('gpt-4-turbo')
    ->messages([...])
    ->get();
```

Tune request parameters with `withParameters()`:

```php
AI::chat()
    ->withParameters([
        'temperature' => 0.9,
        'max_tokens' => 500,
        'top_p' => 0.95,
    ])
    ->messages([...])
    ->get();
```

These options can be combined on a single request:

```php
$response = AI::chat()
    ->model('gpt-4')
    ->withParameters(['temperature' => 0.7])
    ->withTools([...])
    ->messages([...])
    ->get();
```

Load reusable prompt templates:

```php
use Rahasistiyak\LaravelAiIntegration\Support\PromptTemplate;

$prompt = PromptTemplate::load('classification')
    ->with(['text' => $userInput, 'categories' => 'Tech, Sports'])
    ->toMessages();

$response = AI::chat()->messages($prompt)->get();
```

Save 60-80% on API costs automatically:
```php
// First call - hits the API
$response = AI::chat()->messages([...])->get();

// Second identical call - served instantly from the cache
$cached = AI::chat()->messages([...])->get();
```

Track usage and costs:
```shell
php artisan ai:usage --provider=openai
```

Automatic retries with exponential backoff and a circuit breaker keep the integration resilient to transient provider failures.
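The package's retry internals aren't shown here, but exponential backoff itself is simple: double the delay after each failed attempt and cap it (real implementations usually also add random jitter). A minimal sketch in plain PHP, independent of the package:

```php
<?php
// Illustrative only: compute the delay schedule for exponential backoff.

/**
 * @return int[] Delays in milliseconds for each retry attempt.
 */
function backoffDelays(int $attempts, int $baseMs = 100, int $capMs = 10000): array
{
    $delays = [];
    for ($i = 0; $i < $attempts; $i++) {
        // base * 2^i, capped so late retries don't wait forever
        $delays[] = min($baseMs * (2 ** $i), $capMs);
    }
    return $delays;
}

// backoffDelays(5) returns [100, 200, 400, 800, 1600]
```

A circuit breaker sits one level above this: after enough consecutive failures it stops issuing requests entirely for a cooldown period, rather than retrying each one.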
This package is open-source software licensed under the MIT License.
- Author: Rahasistiyak
- Package: rahasistiyak/laravel-ai-integration
Made with ❤️ for the Laravel community