forgeomni / superaicore
Laravel package for AI execution engine: pluggable backends (Claude CLI / Codex CLI / SuperAgent / Anthropic API / OpenAI API), provider/service/routing management UI, MCP server management, usage tracking, cost analytics, process monitor. Works standalone in a fresh Laravel install; UI is optional
Requires
- php: ^8.1
- forgeomni/superagent: ^0.9.5
- guzzlehttp/guzzle: ^7.0
- illuminate/console: ^10.0|^11.0|^12.0
- illuminate/database: ^10.0|^11.0|^12.0
- illuminate/http: ^10.0|^11.0|^12.0
- illuminate/support: ^10.0|^11.0|^12.0
- symfony/console: ^6.0|^7.0
- symfony/process: ^6.0|^7.0
Requires (Dev)
- orchestra/testbench: ^8.0|^9.0|^10.0
- phpunit/phpunit: ^10.0|^11.0
This package is auto-updated.
Last update: 2026-04-29 02:13:20 UTC
README
Laravel package for unified AI execution across seven execution engines — Claude Code CLI, Codex CLI, Gemini CLI, GitHub Copilot CLI, AWS Kiro CLI, Moonshot Kimi Code CLI, and SuperAgent SDK. Ships with a framework-agnostic CLI, a capability-based dispatcher, MCP server management, usage tracking, cost analytics, and a complete admin UI.
Works standalone in a fresh Laravel install. The UI is optional and fully overridable — embed it inside a host application (e.g. SuperTeam) or disable it entirely when only the services are needed.
Table of contents
- Relationship to SuperAgent
- Features
- Execution engines + provider types
- Skill & sub-agent runner
- Skill engine — telemetry, ranking, evolution
- CLI installer & health
- Dispatcher & streaming
- Model catalog
- Provider type system
- Usage tracking & cost
- Idempotency & tracing
- MCP server manager
- SuperAgent SDK integration
- Agent-spawn hardening
- Process monitor & admin UI
- Host integration
- Requirements
- Install
- CLI quick start
- PHP quick start
- Architecture
- Advanced usage
- Configuration
- License
Relationship to SuperAgent
forgeomni/superaicore and forgeomni/superagent are sibling packages, not a parent and a child:
- SuperAgent is a minimal in-process PHP SDK that drives a single LLM tool-use loop (one agent, one conversation).
- SuperAICore is a Laravel-wide orchestration layer — it picks a backend, resolves provider credentials, routes by capability, tracks usage, calculates cost, manages MCP servers, and ships an admin UI.
SuperAICore does not require SuperAgent to function. The SDK is one of several backends. The six CLI engines and the three HTTP backends work without it, and `SuperAgentBackend` gracefully reports itself as unavailable (a `class_exists(Agent::class)` check) when the SDK is absent. Set `AI_CORE_SUPERAGENT_ENABLED=false` in your `.env` and the Dispatcher falls back to the remaining backends.
The `forgeomni/superagent` entry in `composer.json` is there so the SuperAgent backend compiles out of the box. If you never use it, remove it from `composer.json` before `composer install` in your host app — nothing else in SuperAICore imports the SuperAgent namespace.
Features
Each feature below is tagged with the version it landed in. Features without a tag have been there since before 0.6.0.
Execution engines + provider types
- Seven execution engines unified behind a single `Dispatcher` contract:
  - Claude Code CLI — provider types: `builtin` (local login), `anthropic`, `anthropic-proxy`, `bedrock`, `vertex`.
  - Codex CLI — `builtin` (ChatGPT login), `openai`, `openai-compatible`.
  - Gemini CLI — `builtin` (Google OAuth), `google-ai`, `vertex`.
  - GitHub Copilot CLI — `builtin` only (the `copilot` binary owns OAuth/keychain/refresh). Reads `.claude/skills/` natively (zero-translation skill pass-through). Subscription billed — tracked separately on the dashboard.
  - AWS Kiro CLI (since 0.6.1) — `builtin` (local `kiro-cli login`), `kiro-api` (stored key injected as `KIRO_API_KEY` for headless). Ships the richest out-of-the-box CLI feature set — native agents, skills, MCP, and subagent DAG orchestration (no `SpawnPlan` emulation). Reads Claude's `SKILL.md` format verbatim. Subscription billed — credit-based Pro / Pro+ / Power plans.
  - Moonshot Kimi Code CLI (since 0.6.8) — `builtin` (`kimi login` OAuth via `auth.kimi.com`). Complements the SDK's direct-HTTP `KimiProvider` by covering the OAuth-subscription agentic-loop path, mirroring the `anthropic_api` ↔ `claude_cli` split. Native `Agent` fanout is honoured by default; opt into AICore's three-phase Pipeline via `use_native_agents=false`. Subscription billed — Moonshot Pro / Power.
  - SuperAgent SDK — provider types: `anthropic`, `anthropic-proxy`, `openai`, `openai-compatible`, plus `openai-responses` (since 0.7.0) and `lmstudio` (since 0.7.0).
- `openai-responses` provider type (since 0.7.0) — routes through the SDK's `OpenAIResponsesProvider` against `/v1/responses`. Auto-detects Azure OpenAI deployments from the `base_url` pattern (adds an `api-version=2025-04-01-preview` query string; override via `extra_config.azure_api_version`). When the row stores an `access_token` from a host-app ChatGPT-OAuth flow instead of an API key, the SDK flips the base URL to `chatgpt.com/backend-api/codex` so Plus / Pro / Business subscribers hit their subscription quota.
- `lmstudio` provider type (since 0.7.0) — local LM Studio server (default `http://localhost:1234`). OpenAI-compat wire; no real API key needed — the SDK synthesises a placeholder `Authorization` header.
- Ten dispatcher adapters behind the seven engines (`claude_cli`, `codex_cli`, `gemini_cli`, `copilot_cli`, `kiro_cli`, `kimi_cli`, `superagent`, `anthropic_api`, `openai_api`, `gemini_api`). CLI adapters when a provider uses `builtin` / `kiro-api`; HTTP adapters when it uses an API key. Addressable directly from the CLI when needed.
- `EngineCatalog` single source of truth — engine labels, icons, dispatcher backends, supported provider types, available models, and the declarative `ProcessSpec` (binary, version/auth-status args, prompt/output/model flags, default flags) live in one PHP service. Adding a new CLI engine means editing `EngineCatalog::seed()` and every picker updates automatically. Host apps override per-engine fields via `super-ai-core.engines` config. `modelOptions($key)` / `modelAliases($key)` (since 0.5.9) drive host-app model dropdowns.
Skill & sub-agent runner
- Skill & sub-agent discovery — auto-discovers Claude Code skills (`.claude/skills/<name>/SKILL.md`) and sub-agents (`.claude/agents/<name>.md`) from three sources for skills (project > plugin > user) and two for agents. Exposes each as a first-class CLI subcommand (`skill:list`, `skill:run`, `agent:list`, `agent:run`).
- Cross-backend native execution — `--exec=native` runs a skill on the selected backend's CLI; `CompatibilityProbe` flags incompatible skills; `SkillBodyTranslator` rewrites canonical tool names (`Read` → `read_file`, …) and injects backend preamble (Gemini / Codex).
- Side-effect-locking fallback chain — `--exec=fallback --fallback-chain=gemini,claude` tries hops in order, skips incompatible ones, and hard-locks on the first hop that writes to cwd (mtime diff + stream-json `tool_use` events).
- `gemini:sync` — mirrors skills/agents into Gemini custom commands (`/skill:init`, `/agent:reviewer`). Respects manual edits via `~/.gemini/commands/.superaicore-manifest.json`.
- `copilot:sync` — mirrors agents into `~/.copilot/agents/*.agent.md`. Auto-fires before `agent:run --backend=copilot`.
- `copilot:sync-hooks` — merges Claude-style hooks (`.claude/settings.json:hooks`) into Copilot's `~/.copilot/config.json:hooks`.
- `copilot:fleet` — runs the same task across N Copilot sub-agents concurrently, aggregates results, and registers each child in the Process Monitor.
- `kiro:sync` (since 0.6.1) — translates Claude agent frontmatter into `~/.kiro/agents/*.json` for native Kiro DAG execution.
- `kimi:sync` (since 0.6.8) — translates `.claude/agents/*.md` tool lists into `~/.kimi/agents/*.yaml` + `~/.kimi/mcp.json`. `claude:mcp-sync` fans out to Kimi automatically.
Skill engine — telemetry, ranking, evolution
Three orthogonal services (since 0.8.6) that turn the static skill catalog into a feedback loop. Borrowed in spirit from HKUDS/OpenSpace's skill_engine, trimmed to the safe subset for production use — DERIVED / CAPTURED modes intentionally omitted (humans curate new skills on Day 0); cloud registry omitted (no cross-project sharing need yet).
- `SkillTelemetry` (since 0.8.6) — one row per Claude Code Skill tool invocation in `sac_skill_executions`. PreToolUse hook → `php artisan skill:track-start` (inserts an `in_progress` row, returns its id). Stop hook → `php artisan skill:track-stop` (closes every still-open row for the session). Both commands accept Claude Code's hook JSON payload on stdin so the wiring lives in `.claude/settings.local.json`, not in PHP. Aggregation seam: `SkillTelemetry::metrics(?since, ?skillName)` returns per-skill `applied / completed / failed / orphaned / interrupted / completion_rate / failure_rate / last_used_at`. `sweepOrphaned(maxAgeSeconds=7200)` recovers from crashed sessions.
- `SkillRanker` (since 0.8.6) — pure-PHP BM25 over the `SkillRegistry` catalog (Robertson-Walker `K1=1.5`, `B=0.75`, BM25-Plus IDF). CJK-aware tokeniser emits each Han character as its own token (Chinese skill descriptions are short — char-grams suffice), tiny EN+zh stopword list; hyphenated words yield their parts. Confidence-weighted telemetry boost: `final = bm25 * (1 + 0.4 * (success_rate - 0.5) * applied_signal)`, where `applied_signal = min(1, applied / 10)` saturates near 10 runs. Skills with no telemetry get `boost = 1.0`. Drives `php artisan skill:rank "your task description"` — table or JSON, with a full per-term IDF×TF breakdown for debugging.
- `SkillEvolver` (since 0.8.6) — FIX-mode only. Reads recent failures + the current SKILL.md, builds a constrained LLM prompt ("smallest possible patch", "do not invent failures the evidence does not support", "do not restructure sections / rename / change frontmatter `name` / add new tools to `allowed-tools` unless evidence demands it"), and persists a `SkillEvolutionCandidate` row in `pending` status. Never modifies SKILL.md directly — humans review via `php artisan skill:candidates --id=N --show-prompt --show-diff`. `--dispatch` mode (off by default — costs tokens) routes the prompt through the Dispatcher with `capability: 'reasoning'`, parses the fenced `diff` block, and stores both `proposed_body` and `proposed_diff`. `--sweep --threshold=0.30 --min-applied=5` queues candidates for every skill that exceeds the threshold; de-duped against existing pending rows so it's safe to run daily. Triggers: `manual` / `failure` / `metric_degradation`.
- Six artisan commands: `skill:track-start`, `skill:track-stop`, `skill:stats`, `skill:rank`, `skill:evolve`, `skill:candidates`. All registered through `SuperAICoreServiceProvider::boot()` — `php artisan skill:*` works in any host that mounts the package.
- Two new tables: `sac_skill_executions` (skill_name, host_app, session_id, status, started_at, completed_at, duration_ms, transcript_path, error_summary, cwd, metadata json) and `sac_skill_evolution_candidates` (skill_name, trigger_type, execution_id, status, rationale, proposed_diff, proposed_body, llm_prompt, context json, reviewed_at, reviewed_by). Both honour `super-ai-core.table_prefix` via `HasConfigurablePrefix`. Run `php artisan migrate` to pick them up.
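The telemetry boost formula above is easy to sanity-check in isolation. Here is the documented formula re-implemented in Python purely for illustration (the package's actual `SkillRanker` is PHP):

```python
def telemetry_boost(bm25: float, success_rate: float, applied: int) -> float:
    """Confidence-weighted boost, as documented:
    final = bm25 * (1 + 0.4 * (success_rate - 0.5) * applied_signal)

    applied_signal saturates at 1.0 near 10 recorded runs, so a skill
    with 2 lucky successes moves the score far less than one with 10.
    Skills with no telemetry keep boost = 1.0 (bm25 unchanged).
    """
    if applied == 0:
        return bm25  # no telemetry -> boost factor is exactly 1.0
    applied_signal = min(1.0, applied / 10)
    return bm25 * (1 + 0.4 * (success_rate - 0.5) * applied_signal)

# A skill at exactly 50% success is neither boosted nor penalised:
assert telemetry_boost(2.0, 0.5, 10) == 2.0
# A perfect skill with saturated telemetry gets the full +20%:
assert abs(telemetry_boost(2.0, 1.0, 10) - 2.4) < 1e-9
```

Note how the ±0.5 centring means sub-50% skills are actively demoted, capped at −20% of the raw BM25 score.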
CLI installer & health
- `cli:status` — shows which engine CLIs are installed / logged in, plus install hints for anything missing.
- `cli:install [backend] [--all-missing]` — shells out to the canonical package manager (npm/brew/script) with confirmation by default. Explicit by design — no CLI ever auto-installs as a dispatch side-effect.
- `api:status` (since 0.6.8) — 5-second cURL probe against every direct-HTTP API provider (anthropic / openai / openrouter / gemini / kimi / qwen / glm / minimax). Returns `{ok, latency_ms, reason}` per provider so operators can tell auth rejections (401/403), network timeouts, and missing keys apart at a glance. `--all` / `--providers=a,b,c` / `--json` flags. Parallel sibling of `cli:status` for direct-HTTP providers.
Dispatcher & streaming
- Capability-based routing — `Dispatcher::dispatch(['task_type' => 'tasks.run', 'capability' => 'summarise'])` resolves the right backend + provider credentials via `RoutingRepository` → `ProviderResolver` → fallback chain.
- `Contracts\StreamingBackend` (since 0.6.6) — every CLI backend streams chunks through an `onChunk` callback while tee'ing to disk and registering an `ai_processes` row for the Monitor UI. `Dispatcher::dispatch(['stream' => true, ...])` opts in transparently. Honours per-call `timeout` / `idle_timeout` / `mcp_mode` (`'empty'` for claude prevents global MCPs from blocking exit). See `docs/streaming-backends.md`.
- `Runner\TaskRunner` — one-call task execution (since 0.6.6) — drop-in wrapper around `Dispatcher::dispatch(['stream' => true, ...])` that returns a typed `TaskResultEnvelope` (success / output / summary / usage / cost / log file / spawn report). Replaces ~150 lines of host-side "build prompt → spawn → tee log → extract usage → wrap result" glue with one call. Identical across all 6 CLIs. See `docs/task-runner-quickstart.md`.
- `AgentSpawn\Pipeline` — spawn-plan protocol for codex/gemini (since 0.6.6) — three-phase choreography (preamble → parallel fanout → consolidation re-call) upstream in SuperAICore. `TaskRunner` activates it when `spawn_plan_dir` is passed. New CLIs that need the protocol implement `BackendCapabilities::spawnPreamble()` + `consolidationPrompt()` once and inherit the rest. See `docs/spawn-plan-protocol.md`.
- Per-call `cwd` on every CLI (since 0.6.7) — hosts whose PHP process runs from `web/public` can still spawn a `claude` that finds `artisan` + `.claude/` at the project root. Claude-only options (`permission_mode`, `allowed_tools`, `session_id`) let headless callers bypass interactive approval prompts and restrict the tool surface.
- Headless Claude from PHP-FPM now works (since 0.6.7) — `ClaudeCliBackend` scrubs `CLAUDECODE` / `CLAUDE_CODE_ENTRYPOINT` / … from the child env so a Laravel server launched from a parent `claude` shell no longer trips claude's recursion guard. On macOS, `builtin` auth falls back to reading the OAuth token via `security find-generic-password` and injecting it as `ANTHROPIC_API_KEY` — the only path that works for web workers.
- `Contracts\ScriptedSpawnBackend` (since 0.7.1) — sibling of `StreamingBackend` for hosts that detach the child (nohup / background job) and poll the log asynchronously. `prepareScriptedProcess([...])` returns a configured `Symfony\Component\Process\Process` that reads `prompt_file` via stdin, tees combined stdout+stderr to `log_file`, applies env scrub + capability transforms (Gemini tool-name rewrite), and honours `timeout` / `idle_timeout`. `streamChat($prompt, $onChunk, $options)` is the blocking one-shot sibling — the backend owns argv composition, prompt-vs-argv passing, output parsing, and ANSI stripping (Kiro/Copilot). All six CLI backends (claude/codex/gemini/copilot/kiro/kimi) implement the contract on 0.7.1; hosts collapse a per-backend `match` statement into one polymorphic call via `BackendRegistry::forEngine($engineKey)`.
- `Support\CliBinaryLocator` (singleton) centralises filesystem probing for CLI binaries (`~/.npm-global/bin`, `/opt/homebrew/bin`, nvm paths, Windows `%APPDATA%/npm`). The `Backends\Concerns\BuildsScriptedProcess` trait supplies shared wrapper-script helpers for implementers. See `docs/host-spawn-uplift-roadmap.md`.
Model catalog
- Dynamic model catalog (since 0.6.0) — `CostCalculator`, `ClaudeModelResolver`, `GeminiModelResolver`, and `EngineCatalog::seed()`'s `available_models` all fall through to SuperAgent's `ModelCatalog` (bundled `resources/models.json` + user override at `~/.superagent/models.json`).
- `super-ai-core:models update` (since 0.6.0) — fetches `$SUPERAGENT_MODELS_URL` and refreshes pricing + model lists for every Anthropic / OpenAI / Gemini / Bedrock / OpenRouter row without `composer update`.
- `super-ai-core:models refresh [--provider <p>]` (since 0.6.9) — pulls each provider's live `GET /models` endpoint into a per-provider overlay cache at `~/.superagent/models-cache/<provider>.json`. Supports anthropic / openai / openrouter / kimi / glm / minimax / qwen. The overlay sits above the user override but below runtime `register()`, so bundled pricing is preserved when the vendor's `/models` omits rates (usually the case). `status` gains a `refresh cache` row.
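The layered precedence (bundled file → user override → refresh overlay → runtime `register()`) amounts to a per-field merge where a later layer wins only for fields it actually supplies. A minimal sketch, assuming illustrative field names (`input_per_1m`, `context`) that are not the catalog's real schema:

```python
def merge_catalog(*layers: dict) -> dict:
    """Merge model-catalog layers, lowest precedence first.

    Later layers win per field, but a missing/None field never clobbers
    an earlier one — this is why a live GET /models overlay that omits
    pricing still leaves the bundled per-token rates intact.
    """
    merged: dict = {}
    for layer in layers:
        for model, fields in layer.items():
            row = merged.setdefault(model, {})
            for key, value in fields.items():
                if value is not None:
                    row[key] = value
    return merged

bundled = {'kimi-k2': {'input_per_1m': 0.60, 'context': 128_000}}
overlay = {'kimi-k2': {'context': 256_000, 'input_per_1m': None}}  # vendor omits rates
merged = merge_catalog(bundled, overlay)
assert merged['kimi-k2'] == {'input_per_1m': 0.60, 'context': 256_000}
```

Field-level (rather than row-level) merging is the design choice that makes the overlay cache safe to refresh blindly.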
Provider type system
- `ProviderTypeRegistry` + `ProviderEnvBuilder` (since 0.6.2) — every provider type (Anthropic / OpenAI / Google / Kiro / …) lives in a single bundled registry carrying its label, icon, form fields, env-var name, base-url env, allowed backends, and `extra_config → env` map. One source of truth for the `/providers` UI + CLI backend env injection + `AiProvider::requiresApiKey()`. Host apps override via `super-ai-core.provider_types`. New types surface on `composer update` with zero code changes.
- `sdkProvider` on the descriptor (since 0.7.0) — wrapper types (`anthropic-proxy`, `openai-compatible`) now declare which SDK `ProviderRegistry` key they route to. `SuperAgentBackend::buildAgent()` consults the descriptor when `provider_config.provider` isn't set, fixing a long-standing gap where wrapper types silently defaulted to `'anthropic'`.
- `http_headers` / `env_http_headers` on the descriptor (since 0.7.0) — declarative HTTP-header injection via the SDK's 0.9.1 `ChatCompletionsProvider` knobs. `http_headers` are literal; `env_http_headers` reference env vars and are silently dropped when the env var isn't set. Host apps inject `OpenAI-Project`, `LangSmith-Project`, `OpenRouter-App` etc. without package code changes.
Usage tracking & cost
- `ai_usage_logs` — every call persists prompt/response tokens, duration, and cost. Rows also carry `shadow_cost_usd` + `billing_model` (since 0.6.2) so subscription engines (Copilot, Kiro, Claude Code builtin) surface a meaningful pay-as-you-go USD estimate instead of a $0 row.
- Cache-aware shadow cost (since 0.6.5) — `cache_read_tokens` priced at 0.1× and `cache_write_tokens` at 1.25× the base `input` rate (falls back to explicit catalog rows). Heavy-cache Claude sessions now match the real Anthropic invoice instead of over-reporting by ~10×.
- CLI-reported `total_cost_usd` (since 0.6.5) — when the backend envelope carries its own `total_cost_usd` (Claude CLI does), Dispatcher uses that figure as the billed cost and marks the row with `metadata.cost_source=cli_envelope`. This matters because only the CLI knows whether a given session is on a subscription or an API key.
- `UsageRecorder` for host-side runners (since 0.6.2) — thin façade over `UsageTracker` + `CostCalculator` that host apps spawning CLIs directly (e.g. `App\Services\ClaudeRunner`, PPT stage jobs) call after each turn to drop one `ai_usage_logs` row with `cost_usd` / `shadow_cost_usd` / `billing_model` auto-filled from the catalog.
- `CliOutputParser` — extracts `{text, model, input_tokens, output_tokens, …}` from captured stdout (`parseClaude()` / `parseCodex()` / `parseCopilot()` / `parseGemini()`) without constructing a full backend object.
- `MonitoredProcess::runMonitoredAndRecord()` (since 0.6.5) — opt-in trait method that buffers stdout, parses it, and writes an `ai_usage_logs` row on process exit. Parser failures never propagate — plain-text Codex/Copilot output gets a `debug`-level note instead of a row.
- Cost dashboard — per-model pricing, USD rollups, a "By Task Type" card + per-row `usage`/`sub` billing-model badge + shadow-cost column on every breakdown (since 0.6.2). Dashboards hide 0-token and `test_connection` rows by default.
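The cache-aware shadow-cost rule (cache reads at 0.1× and cache writes at 1.25× the base input rate) can be written out directly. A language-neutral sketch in Python — the rate arguments are illustrative, and the catalog-override path mentioned above is intentionally not modelled:

```python
def shadow_cost_usd(input_tokens: int, output_tokens: int,
                    cache_read_tokens: int, cache_write_tokens: int,
                    input_per_1m: float, output_per_1m: float) -> float:
    """Pay-as-you-go USD estimate for a subscription-billed turn.

    Cache reads at 0.1x and cache writes at 1.25x the base input rate,
    per the documented multipliers. Explicit catalog cache rates (the
    documented fallback) are omitted from this sketch.
    """
    per_in = input_per_1m / 1_000_000
    per_out = output_per_1m / 1_000_000
    return (input_tokens * per_in
            + output_tokens * per_out
            + cache_read_tokens * per_in * 0.10
            + cache_write_tokens * per_in * 1.25)

# 1M cached-read tokens cost a tenth of 1M fresh input tokens at $3/1M:
assert abs(shadow_cost_usd(0, 0, 1_000_000, 0, 3.0, 15.0) - 0.30) < 1e-9
```

This is also why uncorrected accounting over-reports heavy-cache sessions by ~10×: counting every cache-read token at the full input rate inflates exactly the term that dominates a long Claude session.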
Idempotency & tracing
- `ai_usage_logs.idempotency_key` 60s dedup window (since 0.6.6) — `EloquentUsageRepository::record()` honours an `idempotency_key`; matching keys within 60s return the existing row id instead of inserting. `Dispatcher::dispatch()` auto-generates `"{backend}:{external_label}"` so hosts that accidentally double-record the same logical turn auto-collapse to one row. Migration: `php artisan migrate` adds a nullable column + composite index. See `docs/idempotency.md`.
- Round-trip key through the SDK (since 0.7.0) — Dispatcher now computes the key before `generate()` and forwards it to `SuperAgentBackend`, which threads it through `Agent::run($prompt, ['idempotency_key' => $k])` → `AgentResult::$idempotencyKey` (SDK 0.9.1). The backend echoes it back onto the envelope as `idempotency_key`; Dispatcher's write to `ai_usage_logs` prefers the envelope-echoed value. Net effect: hosts whose Dispatcher runs on a different PHP process than the write-through still observe the same key the SDK saw.
- W3C `traceparent` / `tracestate` passthrough (since 0.7.0) — pass `traceparent: '<w3c-string>'` in `Dispatcher::dispatch()` options. `SuperAgentBackend` forwards it to `Agent::run()` options; the SDK projects it onto the Responses API's `client_metadata` envelope so OpenAI-side logs correlate with the host's distributed trace. `tracestate` and pre-built `TraceContext` instances are also accepted. Empty strings are filtered.
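The 60-second dedup window reduces to one rule: a `record()` carrying a key already seen inside the window echoes the existing row id instead of inserting. A minimal in-memory sketch (class and row shape are illustrative, not the package's Eloquent schema):

```python
import time

class UsageRepositorySketch:
    """Illustrative 60s idempotency window, mirroring the documented
    behaviour of the real EloquentUsageRepository::record()."""

    WINDOW_SECONDS = 60

    def __init__(self) -> None:
        self._rows: list[tuple[int, str | None, float]] = []  # (id, key, at)

    def record(self, idempotency_key=None, now=None) -> int:
        now = time.time() if now is None else now
        if idempotency_key is not None:
            # Newest-first scan: a matching key inside the window dedups.
            for row_id, key, at in reversed(self._rows):
                if key == idempotency_key and now - at < self.WINDOW_SECONDS:
                    return row_id
        row_id = len(self._rows) + 1
        self._rows.append((row_id, idempotency_key, now))
        return row_id

repo = UsageRepositorySketch()
first = repo.record('claude_cli:task:42', now=0)
assert repo.record('claude_cli:task:42', now=30) == first   # inside window: collapsed
assert repo.record('claude_cli:task:42', now=90) != first   # window elapsed: new row
```

Keys shaped like `"{backend}:{external_label}"` make accidental double-records of the same logical turn collide by construction.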
MCP server manager
- UI-driven manager — install, enable, and configure MCP servers from the admin UI.
- Catalog-driven sync (since 0.6.8) — `claude:mcp-sync` reads `.mcp-servers/mcp-catalog.json` + a thin `.claude/mcp-host.json` mapping and fans the right server set out to the project `.mcp.json`, per-agent `mcpServers:` frontmatter blocks inside `.claude/agents/*.md`, and every installed CLI backend's user-scope config. `mcp:sync-backends` is the standalone entry point for hand-edited `.mcp.json` or file-watcher auto-sync. Non-destructive: user-edited files are flagged `user-edited` via a sha256 manifest and left alone. See `docs/mcp-sync.md`.
- OAuth helpers for mcp.json servers (since 0.6.9) — `McpManager::oauthStatus(key)` / `oauthLogin(key)` / `oauthLogout(key)` wrap SDK 0.9.0's `McpOAuth` for MCP servers declaring an `oauth: {client_id, device_endpoint, token_endpoint, scope?}` block. Host UIs render an OAuth button per server.
- Portable `.mcp.json` writes (since 0.8.1) — opt in by setting `AI_CORE_MCP_PORTABLE_ROOT_VAR=SUPERTEAM_ROOT` (or any env var name your MCP runtime exports) and every `McpManager::install*()` writer emits bare commands (`node`, `php`, `uvx`, `uv`, `python`) plus `${SUPERTEAM_ROOT}/<rel>` placeholders for paths under the project root, so the generated file survives being copied / synced across machines / users / containers without re-pollution from `which()` and `PHP_BINARY`. Egress to per-machine targets (Codex `~/.codex/config.toml`, Gemini / Claude / Copilot / Kiro / Kimi user-scope MCP configs, `codex exec -c` runtime flags) materialises the placeholders back to absolute paths so backends that don't expand `${VAR}` still spawn correctly. Default stays `null` — the legacy "absolute path everywhere" behaviour is preserved for hosts that haven't opted in. See `docs/advanced-usage.md` §13.
SuperAgent SDK integration
- Real agentic loop (since 0.6.8) — `SuperAgentBackend` honours `max_turns`, `max_cost_usd` → `Agent::withMaxBudget()`, `allowed_tools` / `denied_tools` filters, `mcp_config_file` (loads a `.mcp.json`, auto-disconnects in `finally {}`), and `provider_config.region` for Kimi / Qwen / GLM / MiniMax regions. The envelope gains `usage.cache_read_input_tokens`, `usage.cache_creation_input_tokens`, `cost_usd` (SDK turn-summed), and `turns`.
- `AgentTool` productivity forwarded (since 0.6.8) — when callers opt into SDK sub-agent dispatch (`load_tools: ['agent', …]`), the envelope forwards `AgentTool` productivity info (`filesWritten`, `toolCallsByName`, `productivityWarning`, `status: completed|completed_empty`) under an optional `subagents` key.
- Three 0.9.0 options forwarded (since 0.6.9) — `extra_body` (deep-merged at the top level of every `ChatCompletionsProvider` request body), `features` (routed through the SDK's `FeatureDispatcher`; useful keys: `prompt_cache_key.session_id`, `thinking.*`, `dashscope_cache_control`), `loop_detection: true|array` (wraps the streaming handler in `LoopDetectionHarness`). Convenience shim: `prompt_cache_key: '<sessionId>'` accepted as session-id shorthand.
- Classified `ProviderException` subclasses (since 0.7.0) — `SuperAgentBackend::generate()` catches six typed SDK subclasses (`ContextWindowExceeded`, `QuotaExceeded`, `UsageNotIncluded`, `CyberPolicy`, `ServerOverloaded`, `InvalidPrompt`), each logged with a stable `error_class` tag + `retryable` flag. Contract unchanged (still returns `null`); a `logProviderError()` seam lets subclasses route on the classification.
- `createForHost` host-config adapter migration complete (since 0.8.5) — `SuperAgentBackend::buildAgent()` collapses to a single `ProviderRegistry::createForHost($sdkKey, $hostConfig)` call instead of branching on `region` and hand-rolling the constructor shape per provider. The SDK-side per-key adapter (default for ChatCompletions-style; a dedicated one for `bedrock` that splits AWS credentials; built-in Azure auto-detection on `openai-responses`; LM Studio synthetic auth) owns the constructor-shape mapping. Future SDK provider keys land here without backend changes — the adapter is the extension point.
- SDK pinned to 0.9.5 (since 0.8.5) — Composer constraint `^0.9.5`. Multi-turn tool-use replays against non-Anthropic providers now work correctly (pre-0.9.5, `ChatCompletionsProvider::convertMessage()` early-returned on the first `tool_use` block and read nonexistent `ContentBlock` properties — every replayed tool call against Kimi / GLM / MiniMax / Qwen / OpenAI / OpenRouter / LM Studio went out as `{id: null, name: null, arguments: "null"}`); the SDK's six-encoder `Conversation\Transcoder` puts every wire family behind one canonical converter so the bug fix lands in one place. Plus `Agent::switchProvider($name, $config, $policy)` is available for in-process mid-conversation handoff (with `HandoffPolicy::default() / preserveAll() / freshStart()` presets) — useful for hosts that wrap `SuperAgentBackend` directly.
Agent-spawn hardening
Five layered defences (since 0.6.8) so weak children (Gemini Flash, GLM Air) can't pollute the consolidator's view:
- `SpawnPlan::appendGuards()` — host-injected per-agent guard block appended to every child's `task_prompt` (six rules: stay in lane, no consolidator filenames, language uniformity, extension whitelist, canonical `_signals/<name>.md` path, don't apologise for tool failures). Language-aware via CJK regex.
- `SpawnPlan::fromFile()` canonical ASCII `output_subdir` — forces `output_subdir = agent.name` so Flash emitting 首席执行官 instead of `ceo-bezos` no longer breaks the audit walk.
- `Pipeline::cleanPrematureConsolidatorFiles()` — before fanout, deletes any early 摘要.md / 思维导图.md / 流程图.md / English variants at the `$outputDir` top level that the first-pass model wrote in violation of "emit plan and STOP".
- `Orchestrator::auditAgentOutput()` — post-fanout, flags non-whitelisted extensions, consolidator-reserved filenames inside agent subdirs, and sibling-role sub-directories; warnings land in `report[N].warnings[]` without modifying disk. Per-agent plumbing (`run.log` / prompt / exec script) moves out of the user-facing output dir into `$TMPDIR`.
- Language-aware `SpawnConsolidationPrompt::build()` — hard-codes the English → Chinese section-heading map for zh runs and bans fabricated error-filenames like `Error_No_Agent_Outputs_Found.md`.
- `GeminiCliBackend::parseJson()` tolerates Gemini's "YOLO mode is enabled." / "MCP issues detected." preamble.
Process monitor & admin UI
- Live-only Process Monitor (since 0.6.7) — `AiProcessSource::list()` queries the live `ps aux` snapshot first and only emits `ai_processes` rows whose PID is alive. Finished / failed / killed runs disappear from the Monitor UI the moment their subprocess exits.
- `host_owned_label_prefixes` (since 0.6.7) — hosts with their own `ProcessSource` (e.g. SuperTeam's `task:` rows) claim a namespace so `AiProcessSource` doesn't double-render the same logical run.
- Admin pages — `/integrations`, `/providers`, `/services`, `/ai-models`, `/usage`, `/costs`, `/processes`. `/processes` is admin-only and disabled by default.
Host integration
- Trilingual UI — English, Simplified Chinese, French, runtime-switchable.
- Disable routes / views — embed inside a parent app, swap the Blade layout, or reuse the back-link + locale switcher.
- `BackendCapabilitiesDefaults` trait (since 0.6.6) — host implementers `use` the trait to inherit safe no-op defaults for methods added in future minor releases. The host class keeps satisfying the interface without code changes. See `docs/api-stability.md` for the full SemVer contract.
Requirements
- PHP ≥ 8.1
- Laravel 10, 11, or 12
- Guzzle 7, Symfony Process 6/7
Optional, only when the respective backend is enabled:
- `claude` CLI on `$PATH` — `npm i -g @anthropic-ai/claude-code`
- `codex` CLI on `$PATH` — `brew install codex`
- `gemini` CLI on `$PATH` — `npm i -g @google/gemini-cli`
- `copilot` CLI on `$PATH` — `npm i -g @github/copilot` (then `copilot login`)
- `kiro-cli` on `$PATH` — install from kiro.dev (then `kiro-cli login`, or set `KIRO_API_KEY` for headless Pro/Pro+/Power)
- `kimi` CLI on `$PATH` (since 0.6.8) — install from kimi.com (then `kimi login`)
- An Anthropic / OpenAI / Google AI Studio API key for the HTTP backends
Don't want to remember package names? Run `./vendor/bin/superaicore cli:status` to see what's missing and `./vendor/bin/superaicore cli:install --all-missing` to bootstrap everything (confirmation prompt by default).
Install
```bash
composer require forgeomni/superaicore
php artisan vendor:publish --tag=super-ai-core-config
php artisan vendor:publish --tag=super-ai-core-migrations
php artisan migrate
```
Full step-by-step guide: INSTALL.md.
CLI quick start
```bash
# List Dispatcher adapters and their availability
./vendor/bin/superaicore list-backends

# Drive the seven engines from the CLI
./vendor/bin/superaicore call "Hello" --backend=claude_cli   # Claude Code CLI (local login)
./vendor/bin/superaicore call "Hello" --backend=codex_cli    # Codex CLI (ChatGPT login)
./vendor/bin/superaicore call "Hello" --backend=gemini_cli   # Gemini CLI (Google OAuth)
./vendor/bin/superaicore call "Hello" --backend=copilot_cli  # GitHub Copilot CLI (subscription)
./vendor/bin/superaicore call "Hello" --backend=kiro_cli     # AWS Kiro CLI (subscription)
./vendor/bin/superaicore call "Hello" --backend=kimi_cli     # Moonshot Kimi Code CLI (OAuth subscription)
./vendor/bin/superaicore call "Hello" --backend=superagent --api-key=sk-ant-...  # SuperAgent SDK

# Skip the CLI wrapper and hit the HTTP APIs directly
./vendor/bin/superaicore call "Hello" --backend=anthropic_api --api-key=sk-ant-...  # Claude engine, HTTP mode
./vendor/bin/superaicore call "Hello" --backend=openai_api --api-key=sk-...         # Codex engine, HTTP mode
./vendor/bin/superaicore call "Hello" --backend=gemini_api --api-key=AIza...        # Gemini engine, HTTP mode

# Health + install
./vendor/bin/superaicore cli:status                 # table of installed / version / auth / hint
./vendor/bin/superaicore api:status                 # 5s probe against every direct-HTTP API (0.6.8+)
./vendor/bin/superaicore cli:install --all-missing  # npm/brew/script install with confirmation

# Model catalog
./vendor/bin/superaicore super-ai-core:models status                     # sources, override mtime, total rows
./vendor/bin/superaicore super-ai-core:models list --provider=anthropic  # per-1M pricing + aliases
./vendor/bin/superaicore super-ai-core:models update                     # fetch $SUPERAGENT_MODELS_URL (0.6.0+)
./vendor/bin/superaicore super-ai-core:models refresh --provider=kimi    # live GET /models overlay (0.6.9+)
```
Skill & sub-agent CLI
```bash
# Discover what's installed
./vendor/bin/superaicore skill:list
./vendor/bin/superaicore agent:list

# Run a skill on Claude (default)
./vendor/bin/superaicore skill:run init

# Native on Gemini — probe + translate + preamble
./vendor/bin/superaicore skill:run simplify --backend=gemini --exec=native

# Try Gemini first, fall back to Claude on incompatibility; hard-lock on cwd-touching hop
./vendor/bin/superaicore skill:run simplify --exec=fallback --fallback-chain=gemini,claude

# Run a sub-agent; backend inferred from its `model:` frontmatter
./vendor/bin/superaicore agent:run security-reviewer "audit this diff"

# Sync engines
./vendor/bin/superaicore gemini:sync          # expose skills/agents as Gemini custom commands
./vendor/bin/superaicore copilot:sync         # ~/.copilot/agents/*.agent.md
./vendor/bin/superaicore copilot:sync-hooks   # merge Claude-style hooks into Copilot
./vendor/bin/superaicore kiro:sync --dry-run  # ~/.kiro/agents/*.json (0.6.1+)
./vendor/bin/superaicore kimi:sync            # ~/.kimi/agents/*.yaml + mcp.json (0.6.8+)

# Run the same task across N Copilot agents in parallel
./vendor/bin/superaicore copilot:fleet "refactor auth" --agents planner,reviewer,tester
```
Skill engine — telemetry / ranking / evolution (artisan, since 0.8.6)
Mounted on Laravel artisan via the package service provider — invoke with `php artisan` from any host:
```bash
# Hook targets — read Claude Code hook payload on stdin
php artisan skill:track-start --json  # PreToolUse(Skill) — insert in_progress row
php artisan skill:track-stop --json   # Stop — close session's open rows

# Read the table
php artisan skill:stats --since=7d --sort=failure_rate
php artisan skill:stats --skill=research --format=json

# Rank skills against a task description (BM25 + telemetry boost)
php artisan skill:rank "estimate effort for an outsource project"
php artisan skill:rank "重构认证模块" --no-telemetry --format=json

# Queue a FIX-mode candidate (review-only — never auto-applied)
php artisan skill:evolve --skill=research                          # manual trigger
php artisan skill:evolve --skill=research --dispatch               # also invoke LLM (costs tokens)
php artisan skill:evolve --sweep --threshold=0.30 --min-applied=5  # all degraded skills

# Inspect the queue
php artisan skill:candidates                                    # list pending
php artisan skill:candidates --id=42 --show-prompt --show-diff  # full detail
```
PHP quick start
Long-running task — TaskRunner (since 0.6.6)
For anything where you want a tail-able log, a Process Monitor row, live UI previews, automatic usage recording, and optional spawn-plan emulation for codex/gemini:
```php
use SuperAICore\Runner\TaskRunner;

$envelope = app(TaskRunner::class)->run('claude_cli', $prompt, [
    'log_file'       => $logFile,
    'timeout'        => 7200,
    'idle_timeout'   => 1800,
    'mcp_mode'       => 'empty',
    'spawn_plan_dir' => $outputDir,
    'task_type'      => 'tasks.run',
    'capability'     => $task->type,
    'user_id'        => auth()->id(),
    'external_label' => "task:{$task->id}",
    'metadata'       => ['task_id' => $task->id],
    'onChunk'        => fn ($chunk) => $taskResult->updateQuietly(['preview' => $chunk]),
]);

if ($envelope->success) {
    $taskResult->update([
        'content'    => $envelope->summary,
        'raw_output' => $envelope->output,
        'metadata'   => ['usage' => $envelope->usage, 'cost_usd' => $envelope->costUsd],
    ]);
}
```
Returns a typed `TaskResultEnvelope` with `success` / `output` / `summary` / `usage` / `costUsd` / `shadowCostUsd` / `billingModel` / `logFile` / `usageLogId` / `spawnReport` / `error`. The API is identical across all 6 CLI engines.
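The failure branch of the example above can be handled from the same envelope — a minimal sketch, assuming only the fields listed here (the shape of `error` beyond being set on failure is an assumption):

```php
use Illuminate\Support\Facades\Log;

// Continuation of the TaskRunner example above — failure branch.
if (! $envelope->success) {
    Log::warning('AI task failed', [
        'error'    => $envelope->error,    // classified error from the engine
        'log_file' => $envelope->logFile,  // tail-able log for post-mortem
    ]);
    $taskResult->update(['status' => 'failed']);
}
```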
Short call — Dispatcher::dispatch()
```php
use SuperAICore\Services\BackendRegistry;
use SuperAICore\Services\CostCalculator;
use SuperAICore\Services\Dispatcher;

$dispatcher = new Dispatcher(new BackendRegistry(), new CostCalculator());

$result = $dispatcher->dispatch([
    'prompt'          => 'Hello',
    'backend'         => 'anthropic_api',
    'provider_config' => ['api_key' => 'sk-ant-...'],
    'model'           => 'claude-sonnet-4-5-20241022',
    'max_tokens'      => 200,
]);

echo $result['text'];
```
Also accepts `'stream' => true` to opt into the same streaming path TaskRunner uses internally.
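A sketch of the streaming variant — identical payload to the example above with the documented flag added; how chunks surface under streaming is covered in the streaming-backends doc, so nothing beyond the flag itself should be read as authoritative:

```php
// Streaming variant — same dispatch payload as above, plus 'stream' => true.
// The shape of $result under streaming follows the streaming-backends doc.
$result = $dispatcher->dispatch([
    'prompt'          => 'Hello',
    'backend'         => 'anthropic_api',
    'provider_config' => ['api_key' => 'sk-ant-...'],
    'model'           => 'claude-sonnet-4-5-20241022',
    'max_tokens'      => 200,
    'stream'          => true,  // opt into the TaskRunner streaming path
]);
```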
Advanced options (idempotency, tracing, SDK features, classified errors): see docs/advanced-usage.md.
Architecture
```
Engines (user-facing)       Provider types                  Dispatcher adapters
────────────────────        ──────────────────────          ───────────────────
Claude Code CLI ─────────▶  builtin                 ────▶   claude_cli
                            anthropic / bedrock /
                            vertex / anthropic-proxy ───▶   anthropic_api
Codex CLI       ─────────▶  builtin                 ────▶   codex_cli
                            openai / openai-compat  ────▶   openai_api
Gemini CLI      ─────────▶  builtin / vertex        ────▶   gemini_cli
                            google-ai               ────▶   gemini_api
Copilot CLI     ─────────▶  builtin                 ────▶   copilot_cli
Kiro CLI        ─────────▶  builtin / kiro-api      ────▶   kiro_cli
Kimi Code CLI   ─────────▶  builtin                 ────▶   kimi_cli
SuperAgent SDK  ─────────▶  anthropic(-proxy) /
                            openai(-compatible) /
                            openai-responses (0.7.0+) /
                            lmstudio (0.7.0+)       ────▶   superagent

Dispatcher ← BackendRegistry   (owns the 10 adapters above)
           ← ProviderResolver  (active provider from ProviderRepository)
           ← RoutingRepository (task_type + capability → service)
           ← UsageTracker      (writes to UsageRepository)
           ← CostCalculator    (model pricing → USD)
```
All repositories are interfaces. The service provider auto-binds Eloquent implementations; swap them for JSON files, Redis, or an external API without touching the dispatcher.
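A host could rebind one of those interfaces in its own service provider — a minimal sketch; the interface path (`SuperAICore\Contracts\UsageRepository`) and the replacement class are illustrative names, not the package's actual symbols:

```php
use Illuminate\Support\ServiceProvider;

class HostAiServiceProvider extends ServiceProvider
{
    public function register(): void
    {
        // Hypothetical binding — swap the auto-bound Eloquent usage store
        // for a JSON-file implementation. Names are illustrative; check
        // the package's contracts for the real interface.
        $this->app->bind(
            \SuperAICore\Contracts\UsageRepository::class,
            \App\Ai\JsonUsageRepository::class,
        );
    }
}
```

Because the dispatcher only depends on the interface, no dispatcher code changes when the binding does.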
Advanced usage
- Advanced usage guide — idempotency round-trip, W3C trace context, classified provider exceptions, openai-responses + Azure OpenAI + ChatGPT OAuth, LM Studio, `http_headers`/`env_http_headers` overrides, SDK features (`extra_body`/`features`/`loop_detection`), `ScriptedSpawnBackend` host migration, skill engine telemetry / BM25 ranker / FIX-mode evolution (since 0.8.6).
- Task runner quickstart — full `TaskRunner` option reference.
- Streaming backends — `mcp_mode`, per-backend stream formats, `onChunk`.
- Spawn plan protocol — codex/gemini agent emulation.
- Host spawn uplift roadmap — why `ScriptedSpawnBackend` exists + the 700-line glue it replaces.
- Idempotency — 60s dedup window, auto-key derivation.
- MCP sync — catalog + host map → every backend.
- API stability — the SemVer contract.
Configuration
The published config (`config/super-ai-core.php`) covers host integration, locale switcher, route/view registration, per-backend toggles, default backend, usage retention, MCP directory, process monitor toggle, and per-model pricing. See inline comments for every key.
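A trimmed sketch of the shape that file takes — the key names below are illustrative where this README does not spell them out, so treat the published file's inline comments as authoritative:

```php
// config/super-ai-core.php — illustrative excerpt, not the published file.
return [
    'default_backend' => env('SUPERAICORE_BACKEND', 'claude_cli'),
    'routes'          => ['enabled' => true],   // route registration toggle
    'views'           => ['enabled' => true],   // UI is optional / overridable
    'process_monitor' => ['enabled' => true],
    'usage'           => ['retention_days' => 90],
    'mcp'             => ['directory' => storage_path('mcp')],
    'pricing'         => [
        // per-model pricing consumed by CostCalculator (USD, per 1M tokens)
        'claude-sonnet-4-5-20241022' => ['input' => 3.00, 'output' => 15.00],
    ],
];
```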
License
MIT. See LICENSE.