forgeomni / superagent
AI Agent SDK: standalone CLI or Laravel integration. Run the full agentic loop in-process with tool-use support.
Requires
- php: ^8.1
- guzzlehttp/guzzle: ^7.0
- psr/log: ^3.0
Requires (Dev)
- illuminate/support: ^10.0|^11.0|^12.0
- orchestra/testbench: ^8.0|^9.0|^10.0
- phpunit/phpunit: ^10.0|^11.0
- symfony/console: ^6.0|^7.0
Suggests
- illuminate/support: Required for Laravel service provider, facades, and Artisan commands
- symfony/console: Required for standalone CLI mode (superagent command)
This package is auto-updated.
Last update: 2026-05-01 18:57:52 UTC
README
Language: English | 中文 | Français · Docs: Installation · 安装 · Installation FR · Advanced usage · API docs
An AI agent SDK for PHP: run the full agentic loop (LLM turn → tool call → tool result → next turn) in-process, with fourteen providers, real-time streaming, multi-agent orchestration, and a machine-readable wire protocol. Usable as a standalone CLI or as a Laravel library.
superagent "fix the login bug in src/Auth/"
$agent = new SuperAgent\Agent([
    'provider' => 'openai-responses',
    'model' => 'gpt-5',
]);
$result = $agent->run('Summarise docs/ADVANCED_USAGE.md in one paragraph');
echo $result->text();
Table of Contents
- Quick Start
- Providers & Authentication
- OpenAI Responses API
- Cross-provider handoff
- DeepSeek V4
- Agent Loop
- Tools & Multi-Agent
- Agent Definitions
- Skills
- MCP Integration
- Wire Protocol
- Retry, Errors & Observability
- Guardrails & Checkpoints
- Standalone CLI
- Laravel Integration
- Configuration reference
Every feature section ends with a Since line pointing at the release that introduced it. Full release notes live in CHANGELOG.md.
Quick Start
Install:
# As a standalone CLI:
composer global require forgeomni/superagent

# Or as a Laravel dependency:
composer require forgeomni/superagent
See INSTALL.md for the full matrix (system requirements, auth setup, IDE bridges, CI integration).
Smallest possible agent run:
$agent = new SuperAgent\Agent(['provider' => 'anthropic']);
$result = $agent->run('what day is it?');
echo $result->text();
Smallest agent run with tools:
$agent = (new SuperAgent\Agent(['provider' => 'openai']))
    ->loadTools(['read', 'write', 'bash']);
$result = $agent->run('inspect composer.json and tell me what PHP version this project targets');
echo $result->text();
One-shot via CLI:
export ANTHROPIC_API_KEY=sk-...
superagent "inspect composer.json and tell me what PHP version this project targets"
Providers & Authentication
Fourteen registry-backed providers, with region-aware base URLs and multiple auth modes per provider. All implement the same LLMProvider contract, so swapping one for another is one line.
| Registry key | Provider | Notes |
|---|---|---|
| anthropic | Anthropic | API key or stored Claude Code OAuth |
| openai | OpenAI Chat Completions (/v1/chat/completions) | API key, OPENAI_ORGANIZATION / OPENAI_PROJECT |
| openai-responses | OpenAI Responses API (/v1/responses) | Dedicated section below |
| openrouter | OpenRouter | API key |
| gemini | Google Gemini | API key |
| kimi | Moonshot Kimi | API key; regions intl / cn / code (OAuth) |
| qwen | Alibaba Qwen (OpenAI-compat default) | API key; regions intl / us / cn / hk / code (OAuth + PKCE) |
| qwen-native | Alibaba Qwen (DashScope-native body) | Kept for parameters.thinking_budget callers |
| glm | BigModel GLM | API key; regions intl / cn |
| minimax | MiniMax | API key; regions intl / cn |
| deepseek | DeepSeek V4 | API key; regions default / beta (since v0.9.6) |
| bedrock | AWS Bedrock | AWS SigV4 |
| ollama | Local Ollama daemon | No auth; localhost:11434 by default |
| lmstudio | Local LM Studio server | Placeholder auth; localhost:1234 by default (since v0.9.1) |
Auth options, by priority:
- API key from environment: ANTHROPIC_API_KEY, OPENAI_API_KEY, KIMI_API_KEY, QWEN_API_KEY, GLM_API_KEY, MINIMAX_API_KEY, DEEPSEEK_API_KEY, OPENROUTER_API_KEY, GEMINI_API_KEY.
- Stored OAuth credentials at ~/.superagent/credentials/<name>.json. Device-code flow: run superagent auth login <name>:
  - claude-code: reuses an existing Claude Code login
  - codex: reuses a Codex CLI login
  - gemini: reuses a Gemini CLI login
  - kimi-code: RFC 8628 device flow against auth.kimi.com (since v0.9.0)
  - qwen-code: device flow with PKCE S256 + per-account resource_url (since v0.9.0)
- Explicit config: api_key / access_token / account_id on the agent options.
OAuth refresh is serialised across processes via CredentialStore::withLock(), so parallel queue workers sharing one credential file don't race on refresh (since v0.9.0).
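The cross-process serialisation can be pictured with a plain flock() sketch. This is a hypothetical helper for illustration only; the SDK's actual CredentialStore::withLock() signature and locking strategy may differ.

```php
// Hypothetical sketch of cross-process locking around a credential refresh.
// The real CredentialStore::withLock() may differ in detail.
function withCredentialLock(string $credFile, callable $fn)
{
    $lock = fopen($credFile . '.lock', 'c'); // sidecar lock file, created on demand
    try {
        flock($lock, LOCK_EX); // blocks until no other worker holds the lock
        return $fn();          // refresh the token, rewrite the file, etc.
    } finally {
        flock($lock, LOCK_UN);
        fclose($lock);
    }
}
```

The point of the pattern: whichever worker wins the lock performs the refresh; the losers block, then re-read the already-refreshed file instead of refreshing again.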
Declarative headers
new Agent([
    'provider' => 'openai',
    'env_http_headers' => [
        'OpenAI-Project' => 'OPENAI_PROJECT',           // sent only when env set + non-empty
        'OpenAI-Organization' => 'OPENAI_ORGANIZATION',
    ],
    'http_headers' => [
        'x-app' => 'my-host-app', // static header
    ],
]);
Since v0.9.1
Model catalog
Every provider ships with model-id + pricing metadata bundled in resources/models.json. Refresh it from the vendor's live /models endpoint at any time:
superagent models refresh         # every provider with env creds
superagent models refresh openai  # one provider
superagent models list            # show merged catalog
superagent models status          # catalog source + age
Since v0.9.0
OpenAI Responses API
Dedicated provider at provider: 'openai-responses'. Hits /v1/responses with the full modern OpenAI shape.
Why use it over openai:
| Feature | Responses | Chat Completions |
|---|---|---|
| previous_response_id continuation | ✓ server holds state; new turn skips resending context | ✗ must re-send messages[] every turn |
| reasoning.effort (minimal / low / medium / high / xhigh) | ✓ native | ✗ requires model-id hacks for o-series |
| reasoning.summary | ✓ native | ✗ |
| prompt_cache_key (server-side cache pinning) | ✓ native | ✗ |
| text.verbosity (low / medium / high) | ✓ native | ✗ |
| service_tier (priority / default / flex / scale) | ✓ native | ✗ |
| Classified error types | ✓ via response.failed event codes | Pattern-matched on HTTP body |
$agent = new Agent([
    'provider' => 'openai-responses',
    'model' => 'gpt-5',
]);

$result = $agent->run('analyse this codebase and propose refactors', [
    'reasoning' => ['effort' => 'high', 'summary' => 'auto'],
    'verbosity' => 'low',
    'prompt_cache_key' => 'session:42',
    'service_tier' => 'priority',
    'store' => true, // required to use previous_response_id next turn
]);

// Continue the conversation without resending history:
$provider = $agent->getProvider();
$nextAgent = new Agent([
    'provider' => 'openai-responses',
    'options' => ['previous_response_id' => $provider->lastResponseId()],
]);
$nextResult = $nextAgent->run('now go one level deeper on the auth layer');
ChatGPT subscription routing
Pass access_token (or set auth_mode: 'oauth') to auto-route through chatgpt.com/backend-api/codex, so Plus / Pro / Business subscribers bill against their subscription instead of getting rejected at api.openai.com.
new Agent([
    'provider' => 'openai-responses',
    'access_token' => $token,
    'account_id' => $accountId, // adds chatgpt-account-id header
]);
Azure OpenAI
Six base-URL markers auto-flip the provider into Azure mode. The api-version query string is added (default 2025-04-01-preview, overridable), and the api-key header is set alongside Authorization.
new Agent([
    'provider' => 'openai-responses',
    'base_url' => 'https://my-resource.openai.azure.com/openai/deployments/gpt-5',
    'api_key' => $azureKey,
    'azure_api_version' => '2024-12-01-preview', // optional override
]);
Trace-context passthrough
Inject W3C traceparent into client_metadata so OpenAI-side logs correlate with your distributed trace:
$tc = SuperAgent\Support\TraceContext::fresh(); // mint fresh
// OR: SuperAgent\Support\TraceContext::parse($headerValue); // from incoming HTTP header

$agent->run($prompt, ['trace_context' => $tc]);
// OR: $agent->run($prompt, ['traceparent' => '00-0af7-...', 'tracestate' => 'v=1']);
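For reference, a W3C traceparent header is just version-traceid-parentid-flags in hex. A minimal hand-minting sketch (illustrative only; TraceContext::fresh() handles this inside the SDK, and the helper name here is hypothetical):

```php
// Mint a W3C traceparent header by hand.
// Format: 2-hex version, 32-hex trace-id, 16-hex parent-id, 2-hex flags.
function mintTraceparent(): string
{
    return sprintf(
        '00-%s-%s-01',
        bin2hex(random_bytes(16)), // 32-hex trace-id
        bin2hex(random_bytes(8))   // 16-hex parent-id
    );
}
```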
Since v0.9.1
Cross-provider handoff
Agent::switchProvider($name, $config, $policy) swaps the active provider mid-conversation. The message history is preserved and re-encoded into the new provider's wire format on the next request, so a tool history that ran against Claude can continue under Kimi without losing parallel tool calls or tool_use_id correlation.
use SuperAgent\Conversation\HandoffPolicy;

$agent = new Agent(['provider' => 'anthropic', 'api_key' => $key, 'model' => 'claude-opus-4-7']);
$agent->run('analyse this codebase');

// Hand off to a cheaper / faster model for the next phase:
$agent->switchProvider('kimi', ['api_key' => $kimiKey, 'model' => 'kimi-k2-6'])
    ->run('write the unit tests');

// Token-window check after switching: different tokenizers count
// the same history differently (Anthropic vs GPT-4 drift 20–30%):
$status = $agent->lastHandoffTokenStatus();
if ($status !== null && ! $status['fits']) {
    // Trigger your existing IncrementalContext compression before the next call.
}
Handoff policy
HandoffPolicy::default()     // keep tool history, drop signed thinking, append handoff marker
HandoffPolicy::preserveAll() // keep everything; useful when the swap is temporary and you'll come back
HandoffPolicy::freshStart()  // collapse history to the latest user turn; fresh shot at a stuck conversation
Provider-only artifacts the new wire shape can't carry (Anthropic signed thinking, Kimi prompt_cache_key, Responses-API encrypted reasoning, Gemini cachedContent refs) get parked under AssistantMessage::$metadata['provider_artifacts'][$providerKey]. HandoffPolicy::preserveAll() keeps them around so a later swap back to the originating family can re-stitch them; default() keeps them stashed but invisible to the new provider.
Atomic swap
switchProvider() constructs the new provider before mutating any state. If construction fails (missing api_key, unknown region, network probe rejection) the agent stays on the old provider with its history untouched.
Six wire-format families share one Transcoder
All conversion goes through Conversation\Transcoder, which dispatches by WireFamily enum: Anthropic (also bedrock's anthropic.* invocations), OpenAIChat (OpenAI/Kimi/GLM/MiniMax/Qwen/OpenRouter/LMStudio), OpenAIResponses, Gemini (the only family that correlates tool calls by name+order, no ids), DashScope, Ollama. Useful directly for offline transcoding:
use SuperAgent\Conversation\Transcoder;
use SuperAgent\Conversation\WireFamily;

$wire = (new Transcoder())->encode($messages, WireFamily::Gemini);
Since v0.9.5
DeepSeek V4
DeepSeek V4 (released 2026-04-24) ships two MoE models, deepseek-v4-pro (1.6T total / 49B active) and deepseek-v4-flash (284B / 13B active), with 1M context as the default and a single-model thinking / non-thinking toggle. The same backend exposes both an OpenAI-wire and an Anthropic-wire endpoint, so the SDK supports two routes:
// OpenAI-wire: native DeepSeekProvider
$agent = new Agent([
    'provider' => 'deepseek',
    'api_key' => getenv('DEEPSEEK_API_KEY'),
    'model' => 'deepseek-v4-pro', // or 'deepseek-v4-flash'
]);

// Anthropic-wire: reuse AnthropicProvider with a custom base_url
$agent = new Agent([
    'provider' => 'anthropic',
    'api_key' => getenv('DEEPSEEK_API_KEY'),
    'base_url' => 'https://api.deepseek.com/anthropic',
    'model' => 'deepseek-v4-pro',
]);
Reasoning channel. V4-thinking, R1, Kimi-thinking, Qwen-reasoning, and any future OpenAI-compat reasoner stream their internal monologue on delta.reasoning_content. The shared ChatCompletionsProvider SSE parser now surfaces it as a separate ContentBlock::thinking() block prepended to the assistant turn, so callers render or hide it deliberately rather than mixing it into the user-facing answer.
$result = $agent->run('hard reasoning prompt', ['thinking' => true]);

foreach ($result->message()->content as $block) {
    if ($block->type === 'thinking') {
        // model's reasoning chain
    } elseif ($block->type === 'text') {
        // user-facing answer
    }
}
Deprecation lane. deepseek-chat and deepseek-reasoner retire 2026-07-24. The catalog flags both with deprecated_until and replaced_by fields; ModelResolver emits a one-shot warning per process recommending deepseek-v4-flash / deepseek-v4-pro respectively. Set SUPERAGENT_SUPPRESS_DEPRECATION=1 to silence.
Cache-aware billing. OpenAI-compat backends report prompt_tokens as gross (cache hits + misses). The parser now subtracts the cached portion before populating Usage::inputTokens, so the cache discount lands correctly: CostCalculator charges 10% of the input price for read hits instead of effectively 110%. This affects every OpenAI-compat backend with caching (DeepSeek, Kimi, OpenAI itself).
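The arithmetic, sketched with hypothetical helper names and a made-up price (the real logic lives in the SDK's parser and CostCalculator):

```php
// Gross prompt_tokens from an OpenAI-compat backend includes cache hits;
// net input is gross minus the cached portion.
function netInputTokens(int $grossPromptTokens, int $cachedTokens): int
{
    return max(0, $grossPromptTokens - $cachedTokens);
}

// Bill net tokens at full input price, cached reads at a discounted rate
// (10% here, matching the discount described above). Price is per 1M tokens.
function promptCostUsd(int $gross, int $cached, float $inputPricePerM, float $cacheReadRate = 0.10): float
{
    $perToken = $inputPricePerM / 1_000_000;
    return netInputTokens($gross, $cached) * $perToken
         + $cached * $perToken * $cacheReadRate;
}
```

Charging the gross count at full price and then the cached count again at 10% is where the "effectively 110%" bug came from; subtracting first fixes it.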
Beta endpoint. Set region: 'beta' to route to https://api.deepseek.com/beta for FIM / prefix completion access on the same auth.
Since v0.9.6
Agent Loop
Agent::run($prompt, $options) drives the full turn loop until the model stops emitting tool_use blocks. Each turn's cost, usage, and messages flow into AgentResult.
$result = $agent->run('...', [
    'model' => 'claude-sonnet-4-5-20250929', // per-call override
    'max_tokens' => 8192,
    'temperature' => 0.3,
    'response_format' => ['type' => 'json_schema', 'json_schema' => [...]],
    'idempotency_key' => 'job-42:turn-7', // since v0.9.1
    'system_prompt' => 'You are a precise analyst.',
]);

echo $result->text();
$result->turns();        // turn count
$result->totalUsage();   // Usage{inputTokens, outputTokens, cache*}
$result->totalCostUsd;   // float, across all turns
$result->idempotencyKey; // passthrough for usage-log dedup (since v0.9.1)
Budget + turn caps
$agent = (new Agent(['provider' => 'openai']))
    ->withMaxTurns(50)
    ->withMaxBudget(5.00); // USD hard cap; aborts mid-loop if breached
Streaming
foreach ($agent->stream('...') as $assistantMessage) {
    echo $assistantMessage->text();
}
For machine-readable event streams (JSON / NDJSON for IDE / CI consumers) see the Wire Protocol section.
Auto-mode (task detection)
new Agent([
    'provider' => 'anthropic',
    'auto_mode' => true, // delegates to TaskAnalyzer to pick model + tools
]);
Idempotency
$result = $agent->run($prompt, ['idempotency_key' => $queueJobId . ':' . $turnNumber]);
// $result->idempotencyKey is truncated to 80 chars; it surfaces on the AgentResult
// so hosts that write ai_usage_logs can dedupe on it.
Since v0.9.1
Tools & Multi-Agent
Tools are subclasses of SuperAgent\Tools\Tool. Built-in tools (read / write / edit / bash / glob / grep / search / fetch) auto-load unless the caller opts out. Custom tools register via $agent->registerTool(new MyTool()).
$agent = (new Agent(['provider' => 'anthropic']))
    ->loadTools(['read', 'write', 'bash'])
    ->registerTool(new MyDomainTool());

$result = $agent->run('apply the refactor plan in ./plan.md');
Multi-agent orchestration (AgentTool)
Dispatch sub-agents in parallel by emitting multiple agent tool_use blocks in one assistant message:
$agent->registerTool(new AgentTool());

$result = $agent->run(<<<PROMPT
Run these three investigations in parallel:
1. Read CHANGELOG.md and summarise the last three releases
2. Read composer.json and list all runtime dependencies
3. Grep for TODO comments in src/
Collate the three reports.
PROMPT);
Each sub-agent runs in its own PHP process (via ProcessBackend); blocking I/O in one child doesn't block siblings. When proc_open is disabled, fibers take over.
Productivity evidence
Every AgentTool result carries hard evidence of what the child actually did, not just success: true:
[
'status' => 'completed', // or 'completed_empty' / 'async_launched'
'filesWritten' => ['/abs/path/a.md'], // deduped absolute paths
'toolCallsByName' => ['Read' => 3, 'Write' => 1],
'totalToolUseCount' => 4, // observed, not self-reported turn count
'productivityWarning' => null, // or advisory string (CJK-localised, since v0.9.1)
'outputWarnings' => [], // filesystem audit findings (since v0.9.1)
]
completed_empty: zero tool calls observed. Re-dispatch or pick a stronger model.
completed with a non-empty productivityWarning: the child invoked tools but wrote no files (often fine for advisory consults; check the text).
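A dispatcher can branch on this payload directly. A minimal sketch, with a hypothetical classifyChildRun helper (field names as documented above):

```php
// Classify a child run from its evidence payload.
// 'redispatch' = hard failure, 'review' = advisory warning, 'ok' = productive.
function classifyChildRun(array $evidence): string
{
    if (($evidence['status'] ?? null) === 'completed_empty') {
        return 'redispatch'; // zero tool calls observed
    }
    if (($evidence['productivityWarning'] ?? null) !== null) {
        return 'review'; // tools ran but nothing was written; inspect the text
    }
    return 'ok';
}
```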
Productivity instrumentation since v0.8.9. CJK localisation + filesystem audit since v0.9.1.
Output-directory audit + guard injection
Pass output_subdir to opt into both (a) a CJK-aware guard-block prepended to the child's prompt and (b) a post-exit filesystem scan:
$agent->run('...', [
    'output_subdir' => '/abs/path/to/reports/analyst-1',
]);

// Audit catches:
// - non-whitelisted extensions (defaults to .md / .csv / .png)
// - consolidator-reserved filenames (summary.md / 摘要.md / mindmap.md / ...)
// - sibling-role sub-dirs (ceo / cfo / cto / marketing / ... or kebab-case role slugs)
// Configurable via AgentOutputAuditor constructor. Never modifies disk.
Since v0.9.1
Provider-native tools
Any main brain can call these as regular tools; no provider switch needed.
Moonshot server-hosted builtins (execute server-side; results inlined in the assistant reply):
| Tool | Attributes | Since |
|---|---|---|
| KimiMoonshotWebSearchTool ($web_search) | network | v0.9.0 |
| KimiMoonshotWebFetchTool ($web_fetch) | network | v0.9.1 |
| KimiMoonshotCodeInterpreterTool ($code_interpreter) | network, cost, sensitive | v0.9.1 |
Other provider-native tool families:
- Kimi: KimiFileExtractTool, KimiBatchTool, KimiSwarmTool, KimiMediaUploadTool
- Qwen: QwenLongFileTool + the dashscope_cache_control feature
- GLM: glm_web_search, glm_web_reader, glm_ocr, glm_asr
- MiniMax: minimax_tts, minimax_music, minimax_video, minimax_image
Agent Definitions (YAML / Markdown)
Auto-loaded from ~/.superagent/agents/ (user scope) and <project>/.superagent/agents/ (project scope). Three formats: .yaml, .yml, .md. Cross-format extend: inheritance.
# ~/.superagent/agents/reviewer.yaml
name: reviewer
description: Code reviewer with strict style enforcement
extend: base-coder # can be .yaml / .yml / .md
system_prompt: |
  You review PRs with a focus on correctness and hidden state.
allowed_tools: [read, grep, glob]
disallowed_tools: [write, edit, bash]
model: claude-sonnet-4-5-20250929
<!-- ~/.superagent/agents/analyst.md -->
---
name: analyst
extend: reviewer
model: gpt-5
---
Your job is to surface architectural risks. Write findings as Markdown.
Tool-list fields (allowed_tools, disallowed_tools, exclude_tools) accumulate through extend: chains. Cycles are guarded by a depth limit.
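The accumulation can be pictured as a plain array merge. A sketch with a hypothetical mergeToolLists helper (the SDK's loader may differ in detail):

```php
// Merge a child definition's tool lists into its parent's, deduplicating
// while preserving order of first appearance.
function mergeToolLists(array $parent, array $child): array
{
    foreach (['allowed_tools', 'disallowed_tools', 'exclude_tools'] as $key) {
        $parent[$key] = array_values(array_unique(array_merge(
            $parent[$key] ?? [],
            $child[$key] ?? []
        )));
    }
    return $parent;
}
```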
Since v0.9.0
Skills
Markdown-based capabilities you can register globally and pull into any agent run:
superagent skills install ./my-skill.md
superagent skills list
superagent skills show review
superagent skills remove review
superagent skills path # show install directory
Skill markdown supports frontmatter with name, description, allowed_tools, system_prompt. Skill runs inherit the caller's provider.
MCP Integration
Server registration
superagent mcp list
superagent mcp add sqlite stdio uvx --arg mcp-server-sqlite
superagent mcp add brave stdio npx --arg @brave/mcp --env BRAVE_API_KEY=...
superagent mcp remove sqlite
superagent mcp status
superagent mcp path
Config persists atomically at ~/.superagent/mcp.json.
OAuth-gated MCP servers
superagent mcp auth <name>        # run RFC 8628 device flow
superagent mcp reset-auth <name>  # clear stored token
superagent mcp test <name>        # probe availability (stdio `command -v` or HTTP reachability)
Servers declaring an oauth: {client_id, device_endpoint, token_endpoint} block in their config use this flow. Since v0.9.0.
Declarative catalog + non-destructive sync
Drop a catalog at .mcp-servers/catalog.json (or .mcp-catalog.json) in your project root:
{
"mcpServers": {
"sqlite": {"command": "uvx", "args": ["mcp-server-sqlite"]},
"brave": {"command": "npx", "args": ["@brave/mcp"], "env": {"BRAVE_API_KEY": "k"}}
},
"domains": {
"baseline": ["sqlite"],
"all": ["sqlite", "brave"]
}
}
Sync to a project .mcp.json:
superagent mcp sync                         # full catalog
superagent mcp sync --domain=baseline       # only the "baseline" domain
superagent mcp sync --servers=sqlite,brave  # explicit subset
superagent mcp sync --dry-run               # preview, no disk writes
Non-destructive contract: if the disk hash is byte-equal, the file is reported unchanged; a user-edited file is kept as user-edited; first-time writes or files matching our last-written hash become written. A manifest at <project>/.superagent/mcp-manifest.json tracks the sha256 of every file we've written, so stale entries clean up automatically.
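The contract boils down to a three-way hash comparison. A sketch with a hypothetical syncDecision helper (the real sync logic lives in the SDK):

```php
// Decide what `mcp sync` should do with one target file.
// $diskHash        = sha256 of the file currently on disk (null if absent)
// $newHash         = sha256 of the content we want to write
// $lastWrittenHash = sha256 recorded in the manifest from our last write (null if none)
function syncDecision(?string $diskHash, string $newHash, ?string $lastWrittenHash): string
{
    if ($diskHash === $newHash)         return 'unchanged';   // byte-equal, nothing to do
    if ($diskHash === null)             return 'written';     // first-time write
    if ($diskHash === $lastWrittenHash) return 'written';     // we wrote it last; safe to overwrite
    return 'user-edited';                                     // someone else touched it; never clobber
}
```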
Since v0.9.1
Wire Protocol
v1 is line-delimited JSON (NDJSON): one event per line, self-describing via wire_version + type top-level fields. It is the foundation for IDE bridges, CI integrations, and structured logs.
superagent --output json-stream "summarise src/"

# Emits events like:
# {"wire_version":1,"type":"turn.begin","turn_number":1}
# {"wire_version":1,"type":"text.delta","delta":"I'll start by..."}
# {"wire_version":1,"type":"tool.call","name":"read","input":{"path":"src/"}}
# {"wire_version":1,"type":"turn.end","turn_number":1,"usage":{...}}
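A consumer can split the stream on newlines and dispatch per event type. A minimal sketch, assuming the event shapes shown above (dispatchWireLine is a hypothetical helper, not part of the SDK):

```php
// Minimal NDJSON wire-event dispatcher for a consumer process.
// Skips malformed lines and unknown wire versions instead of crashing.
function dispatchWireLine(string $line, array $handlers): void
{
    $event = json_decode(trim($line), true);
    if (!is_array($event) || ($event['wire_version'] ?? null) !== 1) {
        return; // malformed or future-version line
    }
    $type = $event['type'] ?? '';
    ($handlers[$type] ?? function (array $e): void {})($event);
}
```

Feed it each stdout line (or any transport below) with one handler per event type you care about; unregistered types fall through to the no-op default.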
Transport (since v0.9.1)
Choose where the stream goes via a DSN:
| DSN | Meaning |
|---|---|
| stdout (default) / stderr | Standard streams |
| file:///path/to/log.ndjson | Append-mode file write |
| tcp://host:port | Connect to a listening TCP peer |
| unix:///path/to/sock | Connect to a listening unix socket |
| listen://tcp/host:port | Listen on TCP, accept one client |
| listen://unix//path/to/sock | Listen on unix socket, accept one client |
Programmatic use:
$factory = new SuperAgent\CLI\AgentFactory();
[$emitter, $transport] = $factory->makeWireEmitterForDsn('listen://unix//tmp/agent.sock');

// IDE plugin attaches, then:
$agent->run($prompt, ['wire_emitter' => $emitter]);
$transport->close();
Non-blocking peer socket means a dropped IDE doesn't stall the agent loop.
Wire Protocol v1 since v0.9.0. Socket / TCP / file transport since v0.9.1.
Retry, Errors & Observability
Layered retry
new Agent([
    'provider' => 'openai',
    'request_max_retries' => 4,          // HTTP connect / 4xx / 5xx (default 3)
    'stream_max_retries' => 5,           // reserved for mid-stream resume (Responses API)
    'stream_idle_timeout_ms' => 60_000,  // cURL low-speed cutoff on SSE (default 300 000)
]);
Jittered exponential backoff (0.9–1.1× multiplier) prevents thundering-herd retries from parallel workers. A Retry-After header is honoured exactly, with no jitter; the server knows best.
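The delay schedule can be sketched in a few lines. Base delay and cap here are hypothetical values for illustration; only the exponential-plus-0.9–1.1×-jitter shape matches the text:

```php
// Jittered exponential backoff: base * 2^attempt, capped, then scaled
// by a uniform multiplier in [0.9, 1.1]. Returns milliseconds.
function retryDelayMs(int $attempt, int $baseMs = 500, int $capMs = 30_000): float
{
    $exp = min($capMs, $baseMs * (2 ** $attempt));
    $jitter = 0.9 + (mt_rand() / mt_getrandmax()) * 0.2; // uniform in [0.9, 1.1]
    return $exp * $jitter;
}
```

A server-provided Retry-After value would bypass this function entirely, per the exact-honour rule above.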
Since v0.9.1
Classified errors
Six subclasses of ProviderException emitted by OpenAIErrorClassifier against the response body's error.code / error.type / HTTP status:
try {
    $agent->run($prompt);
} catch (\SuperAgent\Exceptions\Provider\ContextWindowExceededException $e) {
    // prompt was too long; compact history or swap models
} catch (\SuperAgent\Exceptions\Provider\QuotaExceededException $e) {
    // monthly cap hit; notify operator
} catch (\SuperAgent\Exceptions\Provider\UsageNotIncludedException $e) {
    // ChatGPT plan doesn't include this model; upgrade or switch to API key
} catch (\SuperAgent\Exceptions\Provider\CyberPolicyException $e) {
    // policy rejection; don't retry
} catch (\SuperAgent\Exceptions\Provider\ServerOverloadedException $e) {
    // retryable with backoff; check $e->retryAfterSeconds
} catch (\SuperAgent\Exceptions\Provider\InvalidPromptException $e) {
    // malformed body; inspect and fix
} catch (\SuperAgent\Exceptions\ProviderException $e) {
    // catch-all base; every subclass above extends this
}
All subclasses extend ProviderException, so pre-existing catch (ProviderException) sites keep working unchanged.
Since v0.9.1
Health dashboard
superagent health         # 5s cURL probe of every configured provider
superagent health --all   # include providers with no env key ("what did I forget to set?")
superagent health --json  # machine-readable table; exits non-zero on any failure
Wraps ProviderRegistry::healthCheck(), which distinguishes auth rejection (401/403) from network timeout from "no API key", so an operator can fix the right thing without guessing.
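The three failure classes reduce to a small decision function. A sketch with a hypothetical classifyProbe helper (the real healthCheck() return shape may differ):

```php
// Classify one provider probe into the categories the dashboard reports.
function classifyProbe(?string $apiKey, ?int $httpStatus, bool $timedOut): string
{
    if ($apiKey === null)                         return 'no-api-key';      // nothing to probe with
    if ($timedOut)                                return 'network-timeout'; // endpoint unreachable
    if (in_array($httpStatus, [401, 403], true))  return 'auth-rejected';   // key present but refused
    return 'ok';
}
```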
Since v0.9.1
SSE parser hardening (since v0.9.0)
- Per-index tool-call assembly: one streamed call split across N chunks now produces one tool-use block, not N fragments.
- finish_reason: error_finish detection: DashScope-compat throttles raise StreamContentError (retryable, HTTP 429) instead of silently polluting the message body.
- Truncated tool-call JSON repair: a one-shot attempt to close unbalanced braces before falling back to an empty arg dict.
- Dual-shape cached-token reads: usage.prompt_tokens_details.cached_tokens (the current OpenAI shape) and usage.cached_tokens (legacy) both populate Usage::cacheReadInputTokens.
Guardrails & Checkpoints
Loop detection (since v0.9.0)
Five detectors observe the streaming event bus; first trigger is sticky:
| Detector | Signal |
|---|---|
| TOOL_LOOP | Same tool + same normalised args 5× in a row |
| STAGNATION | Same tool name 8× regardless of args |
| FILE_READ_LOOP | ≥ 8 of the last 15 tool calls are read-like, with cold-start exemption |
| CONTENT_LOOP | Same 50-char rolling window appears 10× in streamed text |
| THOUGHT_LOOP | Same thinking-channel text appears 3× |
new Agent([
    'provider' => 'openai',
    'loop_detection' => true, // defaults
    // OR per-detector overrides:
    // 'loop_detection' => ['TOOL_LOOP' => 10, 'STAGNATION' => 15],
]);
Violations fan out as loop_detected wire events; the agent keeps running, and the host decides whether to intervene.
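For intuition, the CONTENT_LOOP signal can be approximated with a rolling-window counter. This is illustrative only; the SDK's detector works incrementally on the stream rather than rescanning a buffer:

```php
// Approximate CONTENT_LOOP: does any fixed-width window of the streamed
// text recur at least $threshold times?
function detectContentLoop(string $streamed, int $window = 50, int $threshold = 10): bool
{
    $counts = [];
    $limit = strlen($streamed) - $window;
    for ($i = 0; $i <= $limit; $i++) {
        $chunk = substr($streamed, $i, $window);
        $counts[$chunk] = ($counts[$chunk] ?? 0) + 1;
        if ($counts[$chunk] >= $threshold) {
            return true; // same 50-char window seen $threshold times
        }
    }
    return false;
}
```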
Checkpoints + shadow-git (since v0.9.0)
Every turn snapshots the agent state (messages, cost, usage). Attach a GitShadowStore and file-level snapshots land alongside in a separate bare git repo at ~/.superagent/history/<project-hash>/shadow.git, which never touches the user's own .git.
use SuperAgent\Checkpoint\CheckpointManager;
use SuperAgent\Checkpoint\GitShadowStore;

$mgr = new CheckpointManager(shadowStore: new GitShadowStore('/path/to/project'));
$mgr->createCheckpoint($agentState, label: 'after-refactor');

// Later:
$checkpoints = $mgr->list();
$mgr->restore($checkpoints[0]->id);
$mgr->restoreFiles($checkpoints[0]); // plays back the shadow commit
Restore reverts tracked files and leaves untracked files in place for safety. The project's own .gitignore is respected (the shadow's worktree IS the project dir).
Permission modes
new Agent([
    'provider' => 'anthropic',
    'permission_mode' => 'ask', // or 'default' / 'plan' / 'bypassPermissions'
]);
ask prompts the caller's PermissionCallbackInterface before any write-class tool. Wrap it in WireProjectingPermissionCallback to surface the request as a wire event for IDE prompts.
Standalone CLI
superagent                        # interactive REPL
superagent "fix the login bug"    # one-shot
superagent init                   # initialize ~/.superagent/
superagent auth login <provider>  # import OAuth login
superagent auth status            # show stored credentials
superagent models list / update / refresh / status / reset
superagent mcp list / add / remove / sync / auth / reset-auth / test / status / path
superagent skills install / list / show / remove / path
superagent swarm <prompt>         # plan + execute a swarm
superagent health [--all] [--json] [--providers=a,b,c]  # provider reachability
Options:
-m, --model <model> Model name
-p, --provider <provider> Provider key (openai, anthropic, openai-responses, ...)
--max-turns <n> Maximum agent turns (default 50)
-s, --system-prompt <prompt> Custom system prompt
--project <path> Project working directory
--json Output results as JSON
--output json-stream Emit NDJSON wire events
--verbose-thinking Show full thinking stream
--no-thinking Hide thinking
--plain Disable ANSI colours
--no-rich Legacy minimal renderer
-V, --version Show version
-h, --help Show help
Interactive commands (inside the REPL):
/help available commands
/model <name> switch model
/cost show cost tracking
/compact force context compaction
/session save|load|list|delete
/clear clear conversation
/quit exit
Standalone CLI since v0.8.6.
Laravel Integration
The service provider auto-registers when you composer require forgeomni/superagent:
// config/superagent.php
return [
    'default_provider' => env('SUPERAGENT_PROVIDER', 'anthropic'),
    'providers' => [
        'anthropic' => ['api_key' => env('ANTHROPIC_API_KEY')],
        'openai' => ['api_key' => env('OPENAI_API_KEY')],
        'openai-responses' => ['api_key' => env('OPENAI_API_KEY'), 'model' => 'gpt-5'],
        // ...
    ],
    'agent' => [
        'max_turns' => 50,
        'max_budget_usd' => 5.00,
    ],
];
use SuperAgent\Facades\SuperAgent;

$result = SuperAgent::agent(['provider' => 'openai'])
    ->run('summarise this week\'s commits');
Artisan commands mirror the CLI:
php artisan superagent:chat "fix the bug"
php artisan superagent:mcp sync
php artisan superagent:models refresh
php artisan superagent:health --json
See docs/LARAVEL.md for queue integration, job dispatching, and the ai_usage_logs schema.
Host Integrations
Frameworks that embed SuperAgent (typically multi-tenant platforms that store encrypted provider credentials in a database row and spin up an agent per request) use ProviderRegistry::createForHost() instead of create(). The host passes a normalised shape and the SDK dispatches to the right constructor via per-provider adapters.
use SuperAgent\Providers\ProviderRegistry;

// One call, every provider: no `match ($type)` on the host side.
$agent = ProviderRegistry::createForHost($sdkKey, [
    'api_key' => $aiProvider->decrypted_api_key,
    'base_url' => $aiProvider->base_url,
    'model' => $resolvedModel,
    'max_tokens' => $extra['max_tokens'] ?? null,
    'region' => $extra['region'] ?? null,
    'credentials' => $extra, // opaque blob; adapter picks what it needs
    'extra' => $extra,       // provider-specific passthrough (organization, reasoning, verbosity, ...)
]);
Every ChatCompletions-style provider (Anthropic, OpenAI, OpenAI-Responses, OpenRouter, Ollama, LM Studio, Gemini, Kimi, Qwen, Qwen-native, GLM, MiniMax) uses the default pass-through adapter. Bedrock ships a built-in adapter that splits credentials.aws_access_key_id / aws_secret_access_key / aws_region into the AWS SDK's shape.
Plugins or hosts that need to customise an adapter register their own:
ProviderRegistry::registerHostConfigAdapter('my-custom-provider', function (array $host): array {
    return [
        'api_key' => $host['credentials']['my_custom_token'] ?? null,
        'model' => $host['model'] ?? 'default-model',
        // ... arbitrary transform
    ];
});
New SDK provider keys in future releases register their own adapter (or ride the default one), so the host-side factory code never needs to grow a new match arm per release.
Since v0.9.2
Configuration reference
Every option accepted by the Agent constructor, grouped. Defaults in parentheses.
Provider selection
| Key | Accepts |
|---|---|
| provider | Registry key or an LLMProvider instance |
| model | Model id; overrides provider default |
| base_url | URL; overrides provider default; also triggers auto-detection (Azure) |
| region | intl / cn / us / hk / code (provider-specific) |
| api_key | Provider API key |
| access_token + account_id | OAuth (OpenAI ChatGPT / Anthropic Claude Code) |
| auth_mode | 'api_key' (default) or 'oauth' |
| organization | OpenAI org id (adds OpenAI-Organization header) |
Agent loop
| Key | Default |
|---|---|
| max_turns | 50 |
| max_budget_usd | 0.0 (no cap) |
| system_prompt | null |
| auto_mode | false |
| allowed_tools / denied_tools | null / [] |
| permission_mode | 'default' |
| options | [] (per-call defaults forwarded to provider) |
Per-call options ($agent->run($prompt, $options))
| Key | Since | Notes |
|---|---|---|
| model / max_tokens / temperature / tool_choice / response_format | v0.1.0 | Standard Chat Completions knobs |
| features | v0.8.8 | thinking / prompt_cache_key / dashscope_cache_control / ... routed via FeatureDispatcher |
| extra_body | v0.9.0 | Power-user escape hatch; deep-merged into the request body |
| loop_detection | v0.9.0 | true (defaults), false, or threshold overrides |
| idempotency_key | v0.9.1 | Passthrough to AgentResult::$idempotencyKey |
| reasoning | v0.9.1 | Responses API {effort, summary} |
| verbosity | v0.9.1 | Responses API low / medium / high |
| prompt_cache_key | v0.9.0 | Cache key for Kimi + OpenAI Responses |
| previous_response_id | v0.9.1 | Responses API continuation |
| store / include / service_tier / parallel_tool_calls | v0.9.1 | Responses API |
| client_metadata | v0.9.1 | Responses API opaque key-value map |
| trace_context / traceparent / tracestate | v0.9.1 | W3C Trace Context injection |
| output_subdir | v0.9.1 | AgentTool guard-block + post-exit audit |
Retry + transport (provider-level)
| Key | Default | Since |
|---|---|---|
| max_retries | 3 | v0.1.0 (legacy single knob) |
| request_max_retries | 3 (inherits max_retries) | v0.9.1 |
| stream_max_retries | 5 | v0.9.1 |
| stream_idle_timeout_ms | 300_000 | v0.9.1 |
| env_http_headers | [] | v0.9.1 |
| http_headers | [] | v0.9.1 |
| experimental_ws_transport | false | v0.9.1 (scaffold) |
| azure_api_version | '2025-04-01-preview' | v0.9.1 (Azure only) |
Links
- CHANGELOG: full per-release notes
- INSTALL: install + first-run setup
- Advanced usage: patterns, sample agents, debugging
- Native providers: region maps + capability matrix
- Wire protocol: v1 spec
- Features matrix: which provider supports which feature
License
MIT; see LICENSE.