kent013 / laravel-prism-prompt
Laravel Mailable-like API for LLM prompts with Prism
Installs: 7
Dependents: 0
Suggesters: 0
Security: 0
Stars: 0
Watchers: 0
Forks: 0
Open Issues: 0
pkg:composer/kent013/laravel-prism-prompt
Requires
- php: ^8.2
- echolabsdev/prism: ^0.10|^0.99|^1.0
- illuminate/support: ^10.0|^11.0|^12.0
- illuminate/view: ^10.0|^11.0|^12.0
- symfony/yaml: ^6.0|^7.0
- webmozart/assert: ^1.11|^2.0
Requires (Dev)
- larastan/larastan: ^2.0|^3.0
- laravel/pint: ^1.0
- orchestra/testbench: ^8.0|^9.0|^10.0
- pestphp/pest: ^2.0|^3.0
Suggests
- react/promise: Required for async execution with executeAsync()
README
Laravel Mailable-like API for LLM prompts with Prism.
Structure your LLM prompts with YAML templates + PHP classes, just like Laravel's Mailable.
Features
- YAML-driven prompt management — Manage prompt text, model settings, and variable definitions in YAML files. Change prompts without touching code
- System / User role separation — Separate `system_prompt` and `prompt` in YAML, sent as proper roles via Prism's `withMessages()`
- Blade templating — Full Blade syntax (`{{ $var }}`, `@if`, etc.) in both `system_prompt` and `prompt`
- 3-level message override — Customize message structure at three levels: `buildMessages()` / `buildSystemMessage()` / `buildConversationMessages()`. Supports conversation history injection and multi-turn dialogue
- Structured response parsing — Convert LLM text responses to DTOs via `parseResponse()` + `extractJson()`
- Multiple provider fallback — Automatic provider selection based on available API keys using the YAML `models` list and `withApiKeys()`
- Mailable-like testing — Mock LLM calls with `Prompt::fake()`. Verify message contents with `assertSystemMessageContains()` / `assertUserMessageContains()` and more
- Embedding support — Vector generation via `EmbeddingPrompt` using `Prism::embeddings()`
- Performance logging — Log execution time and token usage, with optional debug file output
Installation
```bash
composer require kent013/laravel-prism-prompt
```
Configuration
Publish the config file:
```bash
php artisan vendor:publish --tag=prism-prompt-config
```
Settings Priority
Settings are resolved in the following priority (high to low):
- Class property
- YAML template
- Config default
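As a hypothetical illustration of that order (the property name below is an assumption for this sketch, not confirmed base-class API; the YAML field and config key are the documented ones):

```php
// Hypothetical sketch of the resolution order for temperature.
// NOTE: the $temperature property name is assumed, not documented API.
class GreetingPrompt extends Prompt
{
    // 1. Class property: wins over the YAML template and the config default
    protected ?float $temperature = 0.2;
}

// 2. YAML template: used when the class does not set a value
//      temperature: 0.7
// 3. Config default: used when neither of the above is set
//      config('prism-prompt.default_temperature')
```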
Usage
Quick Start with load()
Just write a YAML template and use Prompt::load() — no PHP class needed:
```yaml
# resources/prompts/greeting.yaml
name: greeting
provider: anthropic
model: claude-sonnet-4-5-20250929
max_tokens: 1024
temperature: 0.7
system_prompt: |
  You are a friendly greeting assistant.
  Always respond in JSON format with "message" and "tone" fields.
prompt: |
  Say hello to {{ $userName }}.
```
```php
use Kent013\PrismPrompt\Prompt;

$result = Prompt::load('greeting', ['userName' => 'Alice'])->executeSync();
// Returns raw text string
```
load() resolves YAML from {config('prism-prompt.prompts_path')}/{name}.yaml.
Subclass for Custom Response Parsing
When you need DTO mapping or custom logic, create a subclass:
```php
use Kent013\PrismPrompt\Prompt;

class GreetingPrompt extends Prompt
{
    public function __construct(
        public readonly string $userName,
    ) {
        parent::__construct();
    }

    protected function parseResponse(string $text): GreetingResponse
    {
        $data = $this->extractJson($text);

        return new GreetingResponse($data['message'], $data['tone']);
    }
}

$result = (new GreetingPrompt('Alice'))->executeSync();
```
YAML Template Resolution
YAML template is resolved in the following priority:
- `$promptName` property — relative path from `prompts_path`
- Naming convention — derived from the class name (`GreetingPrompt` → `greeting.yaml`)
```php
// 1. $promptName: resources/prompts/standard/greeting.yaml
class GreetingPrompt extends Prompt
{
    protected string $promptName = 'standard/greeting';

    // ...
}

// 2. Naming convention: resources/prompts/greeting.yaml
class GreetingPrompt extends Prompt
{
    // No $promptName needed — auto-derived from class name
    // ...
}
```
Use $promptsDirectory to group prompts in a subdirectory:
```php
// resources/prompts/training/hint_generation.yaml
class HintGenerationPrompt extends Prompt
{
    protected string $promptsDirectory = 'training';

    // Naming convention: hint_generation.yaml
    // ...
}
```
You can still override getTemplatePath() for full path control.
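A minimal sketch of such an override, assuming getTemplatePath() returns the absolute path to the YAML file as a string (the return type is an assumption here):

```php
class LegacyGreetingPrompt extends Prompt
{
    // Point this prompt at a YAML file outside the configured prompts_path.
    // Assumes getTemplatePath() returns an absolute file path as a string.
    protected function getTemplatePath(): string
    {
        return base_path('legacy/prompts/greeting.yaml');
    }
}
```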
System Prompt and Message Structure
YAML templates support a system_prompt field that is sent as a separate system-role message to the LLM, distinct from the user-role prompt. This enables proper role separation for better instruction following.
```yaml
system_prompt: |
  You are {{ $npcName }}, a {{ $npcRole }}.
  Always respond in character.
prompt: |
  {{ $conversationHistory }}
  User: {{ $userMessage }}
```
Both system_prompt and prompt support Blade syntax with the same template variables.
When sent to the LLM via Prism's withMessages(), this becomes:
| Role | Content |
|---|---|
| `SystemMessage` | Rendered `system_prompt` |
| `UserMessage` | Rendered `prompt` |
If system_prompt is omitted, only a UserMessage is sent (backward compatible).
Customizing Message Structure
Override these methods in your Prompt subclass for fine-grained control:
```php
// Prism v1 message value objects
use Prism\Prism\ValueObjects\Messages\AssistantMessage;
use Prism\Prism\ValueObjects\Messages\SystemMessage;
use Prism\Prism\ValueObjects\Messages\UserMessage;

class MyPrompt extends Prompt
{
    // Full control over all messages
    protected function buildMessages(): array
    {
        return [
            new SystemMessage('You are a helpful assistant.'),
            new UserMessage($this->previousQuestion),
            new AssistantMessage($this->previousAnswer),
            new UserMessage($this->render()),
        ];
    }

    // Or override just the system message
    protected function buildSystemMessage(): ?SystemMessage
    {
        return new SystemMessage('Custom system prompt');
    }

    // Or override just the conversation messages
    protected function buildConversationMessages(): array
    {
        return [
            new UserMessage($this->previousQuestion),
            new AssistantMessage($this->previousAnswer),
            new UserMessage($this->render()),
        ];
    }
}
```
Override hierarchy:
| Method | Scope | Default behavior |
|---|---|---|
| `buildMessages()` | Full message array | Calls `buildSystemMessage()` + `buildConversationMessages()` |
| `buildSystemMessage()` | System message only | Renders `system_prompt` from YAML |
| `buildConversationMessages()` | User/assistant messages | Returns `[new UserMessage($this->render())]` |
Runtime API Key Configuration
You can provide a custom API key at runtime using fluent methods:
```php
// Set custom API key
$result = (new GreetingPrompt('Alice'))
    ->withApiKey('user-provided-api-key')
    ->executeSync();

// Or use withProviderConfig for more options
$result = (new GreetingPrompt('Alice'))
    ->withProviderConfig([
        'api_key' => 'custom-api-key',
        'url' => 'https://custom-endpoint.example.com',
    ])
    ->executeSync();
```
Note: Do not reuse Prompt instances after calling these methods. Use one instance per request.
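A small illustration of the one-instance-per-request guidance ($aliceKey and $bobKey are placeholder variables):

```php
// Safe: one Prompt instance per request
$first  = (new GreetingPrompt('Alice'))->withApiKey($aliceKey)->executeSync();
$second = (new GreetingPrompt('Bob'))->withApiKey($bobKey)->executeSync();

// Avoid: reusing an instance after withApiKey()/withProviderConfig()
$prompt = (new GreetingPrompt('Alice'))->withApiKey($aliceKey);
$prompt->executeSync();
// $prompt->withApiKey($bobKey)->executeSync(); // don't reuse this instance
```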
Multiple Provider Fallback
You can configure multiple models with automatic selection based on available API keys.
YAML Configuration
Add a `models` field to specify available models in priority order:
```yaml
name: greeting

# System default (used when no user API keys provided)
provider: anthropic
model: claude-sonnet-4-5-20250929
max_tokens: 1024
temperature: 0.7

# Available models (used when multiple API keys provided via withApiKeys)
models:
  - provider: anthropic
    model: claude-sonnet-4-5-20250929
    priority: 1
  - provider: openai
    model: gpt-4o
    priority: 2
  - provider: google
    model: gemini-2.0-flash-exp
    priority: 3

prompt: |
  Say hello to {{ $userName }}.
```
Runtime Usage
System use (no user API keys):
```php
// Uses provider/model from YAML
$result = Prompt::load('greeting', ['userName' => 'Alice'])->executeSync();
```
Single user API key:
```php
// Uses provider/model from YAML with provided key
$result = Prompt::load('greeting', ['userName' => 'Alice'])
    ->withApiKey($userApiKey)
    ->executeSync();
```
Multiple user API keys (automatic selection):
```php
use Kent013\PrismPrompt\Prompt;

// Method 1: withApiKeys (simple)
$result = Prompt::load('greeting', ['userName' => 'Alice'])
    ->withApiKeys([
        'anthropic' => 'sk-ant-...',
        'openai' => 'sk-...',
        'google' => 'API_KEY...',
    ])
    ->executeSync();

// Method 2: withProviderConfigs (with additional options)
$result = Prompt::load('greeting', ['userName' => 'Alice'])
    ->withProviderConfigs([
        'anthropic' => ['api_key' => 'sk-ant-...'],
        'openai' => [
            'api_key' => 'sk-...',
            'url' => 'https://custom-openai-endpoint.com',
        ],
    ])
    ->executeSync();
```
When multiple API keys are provided, the package automatically selects the highest-priority model from the `models` list for which an API key is available. If an Anthropic key is provided, it is used; otherwise the selection falls back to OpenAI, and so on.
Use Cases
User-provided API Keys
When users provide their own API keys, you may not know which provider they prefer. By specifying models, the system will automatically select the best available option.
```php
// User has only an OpenAI key, but the prompt prefers Anthropic
$result = Prompt::load('greeting', ['userName' => $userName])
    ->withApiKeys([
        'openai' => $userApiKey, // Only OpenAI key available
    ])
    ->executeSync();
// Automatically uses OpenAI since the Anthropic key is not available
```
Provider Redundancy
If you want to ensure high availability, configure fallback models in case the primary provider is unavailable.
Backward Compatibility
Existing YAML files without models continue to work as before. The feature is entirely opt-in.
Embedding
EmbeddingPrompt provides embedding generation via Prism::embeddings().
Quick Start with load()
```yaml
# resources/prompts/document-embedding.yaml
provider: openai
model: text-embedding-3-small
```
```php
use Kent013\PrismPrompt\EmbeddingPrompt;

$embedding = EmbeddingPrompt::load('document-embedding')
    ->withApiKey($userApiKey)
    ->executeSync('Text to embed');
// Returns array<int, float>
```
Testing
```php
use Kent013\PrismPrompt\EmbeddingPrompt;
use Kent013\PrismPrompt\Testing\EmbeddingResponseFake;

$fake = EmbeddingPrompt::fake([
    EmbeddingResponseFake::make()->withEmbedding([0.1, 0.2, 0.3]),
]);

$result = EmbeddingPrompt::load('document-embedding')->executeSync('test');

$fake->assertCallCount(1);
$fake->assertTextContains('test');
$fake->assertProvider('openai');

EmbeddingPrompt::stopFaking();
```
Testing with Fake
Similar to Prism::fake(), you can mock prompt executions in tests:
```php
use Kent013\PrismPrompt\Prompt;
use Kent013\PrismPrompt\Testing\TextResponseFake;

// Set up fake responses
$fake = Prompt::fake([
    TextResponseFake::make()->withText('{"message": "Hello!", "tone": "friendly"}'),
    TextResponseFake::make()->withText('{"message": "Goodbye!", "tone": "warm"}'),
]);

// Execute prompts - they will return fake responses in sequence
$result1 = (new GreetingPrompt('Alice'))->executeSync();
$result2 = (new GreetingPrompt('Bob'))->executeSync();

// Make assertions
$fake->assertCallCount(2);
$fake->assertPromptContains('Alice');            // Searches all messages
$fake->assertUserMessageContains('Alice');       // User message only
$fake->assertHasSystemMessage();                 // System message exists
$fake->assertSystemMessageContains('greeting');  // System message content
$fake->assertMessageCount(2);                    // system + user
$fake->assertProvider('anthropic');
$fake->assertModel('claude-sonnet-4-5-20250929');

// Stop faking when done
Prompt::stopFaking();
```
Available Assertions
| Method | Description |
|---|---|
| `assertCallCount(int $count)` | Assert number of prompt executions |
| `assertPromptContains(string $text)` | Assert any message contains specific text |
| `assertSystemMessageContains(string $text)` | Assert system message contains specific text |
| `assertUserMessageContains(string $text)` | Assert user message contains specific text |
| `assertHasSystemMessage()` | Assert a system message was sent |
| `assertMessageCount(int $count)` | Assert number of messages sent |
| `assertPrompt(string $prompt)` | Assert exact prompt text was sent |
| `assertPromptClass(string $class)` | Assert specific prompt class was used |
| `assertProvider(string $provider)` | Assert provider was used |
| `assertModel(string $model)` | Assert model was used |
| `assertRequest(Closure $fn)` | Custom assertion with recorded requests |
TextResponseFake Builder
```php
TextResponseFake::make()
    ->withText('response text')
    ->withUsage(100, 50); // promptTokens, completionTokens
```
Debug Logging
Enable performance logging for debugging LLM calls:
```
PRISM_PROMPT_DEBUG=true
PRISM_PROMPT_LOG_CHANNEL=prism-prompt
PRISM_PROMPT_SAVE_FILES=true
```
When enabled, logs include:
- Execution ID
- Prompt class
- Provider and model
- Duration (ms)
- Token usage (prompt/completion/total)
When `save_files` is enabled, debug files are saved to `storage/prism-prompt-debug/{date}/{execution-id}/`:
- `prompt.txt` - The rendered prompt
- `response.txt` - The LLM response
- `metadata.json` - Execution metadata
Custom Logger
You can provide a custom logger by extending Prompt and overriding getPerformanceLogger():
```php
use Kent013\PrismPrompt\Contracts\PerformanceLoggerInterface;

class MyPrompt extends Prompt
{
    protected function getPerformanceLogger(): ?PerformanceLoggerInterface
    {
        return app(MyCustomLogger::class);
    }
}
```
Response Parsing
JSON Response
```php
protected function parseResponse(string $text): SomeDto
{
    $data = $this->extractJson($text);

    return new SomeDto($data);
}
```
Plain Text Response
```php
protected function parseResponse(string $text): string
{
    return trim($text);
}
```
Traits
ValidatesPromptVariables
For validating required variables:
```php
use Kent013\PrismPrompt\Traits\ValidatesPromptVariables;

class MyService
{
    use ValidatesPromptVariables;

    public function process(PromptTemplate $template, array $variables): void
    {
        $this->validateVariables($variables, $template);
    }
}
```
YAML Template Reference
Basic Fields
| Field | Required | Description |
|---|---|---|
| `name` | No | Template name (informational) |
| `version` | No | Template version (informational) |
| `description` | No | Template description (informational) |
| `provider` | No | Default LLM provider (e.g., anthropic, openai, google) |
| `model` | No | Default model name |
| `max_tokens` | No | Maximum tokens in response |
| `temperature` | No | Response randomness (0.0 - 1.0) |
| `system_prompt` | No | Blade template for the system-role message (instructions, role definitions, constraints) |
| `prompt` | Yes | Blade template for the user-role message (dynamic data, task description) |
Multiple Models Support
The models field allows automatic selection when multiple API keys are provided:
```yaml
# System default
provider: anthropic
model: claude-sonnet-4-5-20250929

# Available models (for withApiKeys)
models:
  - provider: anthropic        # Provider name (required)
    model: claude-sonnet-4-5   # Model name (required)
    priority: 1                # Priority (lower = higher priority, optional, default: 999)
  - provider: openai
    model: gpt-4o
    priority: 2
```
models fields:
| Field | Required | Description |
|---|---|---|
| `provider` | Yes | Provider name (e.g., anthropic, openai) |
| `model` | Yes | Model identifier |
| `priority` | No | Selection priority (lower number = higher priority, default: 999) |
Priority behavior:
- Lower values have higher priority (e.g., `priority: 1` is selected before `priority: 2`)
- If not specified, defaults to `999`
- When multiple API keys are provided via `withApiKeys()`, the system selects the available model with the lowest priority value
- When no API keys are provided, or only a single key is set via `withApiKey()`, the system uses the top-level `provider`/`model` fields
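As a worked example of these rules, with the three-model list shown earlier, supplying only OpenAI and Google keys skips the Anthropic entry and selects the priority-2 OpenAI model (keys below are placeholders):

```php
// anthropic (priority 1) has no key, so it is skipped;
// openai (priority 2) is the available model with the lowest priority value.
$result = Prompt::load('greeting', ['userName' => 'Alice'])
    ->withApiKeys([
        'openai' => 'sk-...',     // selected (priority 2)
        'google' => 'API_KEY...', // available, but lower preference (priority 3)
    ])
    ->executeSync();
```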
Meta Section
The meta section supports custom application metadata:
```yaml
meta: # Custom metadata for your application
  variables:
    runtime:
      - userName
      - npcName
```
Complete Example
```yaml
name: generate_greeting
version: 1.0.0
description: Generate personalized greeting message

# System default settings
provider: anthropic
model: claude-sonnet-4-5-20250929
max_tokens: 500
temperature: 0.8

# Available models (for withApiKeys)
models:
  - provider: anthropic
    model: claude-sonnet-4-5-20250929
    priority: 1
  - provider: openai
    model: gpt-4o
    priority: 2

# Custom application metadata
meta:
  variables:
    runtime:
      - userName
      - userRole
      - scenarioTitle

# System-role message (instructions, constraints)
system_prompt: |
  You are a professional greeter for {{ $scenarioTitle }}.
  Always respond in JSON format with "message" and "tone" fields.
  Keep the tone warm and professional.

# User-role message (dynamic data, task)
prompt: |
  Generate a greeting for {{ $userName }} ({{ $userRole }}).
```
Configuration Reference
| Key | Default | Description |
|---|---|---|
| `default_provider` | `anthropic` | Default LLM provider for text generation |
| `default_model` | `claude-sonnet-4-5-20250929` | Default model for text generation |
| `default_max_tokens` | `4096` | Maximum tokens in LLM response |
| `default_temperature` | `0.7` | Response randomness (0.0 - 1.0) |
| `default_embedding_provider` | `openai` | Default provider for embeddings (separate since not all providers support embeddings) |
| `default_embedding_model` | `text-embedding-3-small` | Default model for embeddings |
| `prompts_path` | `resource_path('prompts')` | Base path for YAML templates. Used by `load()`, `$promptName`, and naming convention |
| `cache.enabled` | `true` | Enable YAML template caching |
| `cache.ttl` | `3600` | Cache TTL in seconds |
| `cache.store` | `null` | Cache store (null = default) |
| `debug.enabled` | `false` | Enable performance logging |
| `debug.log_channel` | `prism-prompt` | Log channel for performance logs |
| `debug.save_files` | `false` | Save prompt/response/metadata files to disk |
| `debug.storage_path` | `storage_path('prism-prompt-debug')` | Directory for debug files |
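For orientation, a sketch of what the published config file might contain, assembled from the keys above (the file actually published by vendor:publish may differ; only the three debug env variable names come from this README):

```php
<?php

// config/prism-prompt.php - a sketch based on the documented keys,
// not the verbatim published file.
return [
    'default_provider' => 'anthropic',
    'default_model' => 'claude-sonnet-4-5-20250929',
    'default_max_tokens' => 4096,
    'default_temperature' => 0.7,

    'default_embedding_provider' => 'openai',
    'default_embedding_model' => 'text-embedding-3-small',

    'prompts_path' => resource_path('prompts'),

    'cache' => [
        'enabled' => true,
        'ttl' => 3600,
        'store' => null,
    ],

    'debug' => [
        'enabled' => env('PRISM_PROMPT_DEBUG', false),
        'log_channel' => env('PRISM_PROMPT_LOG_CHANNEL', 'prism-prompt'),
        'save_files' => env('PRISM_PROMPT_SAVE_FILES', false),
        'storage_path' => storage_path('prism-prompt-debug'),
    ],
];
```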
Examples
The examples/ directory contains runnable samples for common use cases:
| File | Description |
|---|---|
| 01-basic-system-prompt.php | Prompt::load() with system_prompt — simplest pattern, no PHP class needed |
| 02-json-dto-response.php | Subclass with extractJson() → DTO mapping, JSON schema in system_prompt |
| 03-conversation-history.php | Override buildConversationMessages() to send chat history as native UserMessage/AssistantMessage |
| 04-testing.php | Testing patterns with message-aware assertions (assertSystemMessageContains, assertUserMessageContains, etc.) |
License
MIT