1tomany / llm-sdk
A single, unified, framework-independent library for integration with many popular AI platforms and large language models
Requires
- php: >=8.4
- ext-fileinfo: *
- fakerphp/faker: ^1.24
- psr/container: ^2.0
- symfony/http-client: ^7.2|^8.0
- symfony/property-access: ^7.2|^8.0
- symfony/property-info: ^7.2|^8.0
- symfony/serializer: ^7.2|^8.0
Requires (Dev)
- friendsofphp/php-cs-fixer: ^3.93
- phpdocumentor/reflection-docblock: ^5.6
- phpstan/phpstan: ^2.1
- phpunit/phpunit: ^12.5
This package is auto-updated.
Last update: 2026-04-07 18:26:04 UTC
README
This library provides a single, unified, framework-independent interface for integrating with several popular AI platforms and large language models.
Installation
Install the library using Composer:
composer require 1tomany/llm-sdk
Usage
There are two ways to use this library:
- Direct: Instantiate the AI client you wish to use and send a request object to it. This method is easier to use, but at the cost of making your application less flexible and testable.
- Actions: Register the clients you wish to use with a `OneToMany\LlmSdk\Factory\ClientFactory` instance, inject that instance into each action you wish to take, and interact with the action instead of directly with the client.
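As a rough illustration of the Actions approach, the sketch below shows the register-then-resolve pattern with stand-in classes. Only `OneToMany\LlmSdk\Factory\ClientFactory` is a real name from this library; the interface, registry, and mock client here are hypothetical stand-ins, not the SDK's actual API.

```php
<?php
// Hypothetical stand-ins illustrating the register-then-resolve pattern;
// the real SDK's interfaces and method names may differ.

interface ClientInterface
{
    public function generate(string $prompt): string;
}

// A minimal registry resembling a client factory: clients are registered
// under a platform name and resolved later by the action that needs them.
final class ClientRegistry
{
    /** @var array<string, ClientInterface> */
    private array $clients = [];

    public function register(string $platform, ClientInterface $client): void
    {
        $this->clients[$platform] = $client;
    }

    public function get(string $platform): ClientInterface
    {
        if (!isset($this->clients[$platform])) {
            throw new InvalidArgumentException("Unknown platform: {$platform}");
        }

        return $this->clients[$platform];
    }
}

// The action depends on the registry, not on a concrete client, which keeps
// it testable: a mock client can be swapped in without touching the action.
final class GenerateOutputAction
{
    public function __construct(private ClientRegistry $clients)
    {
    }

    public function __invoke(string $platform, string $prompt): string
    {
        return $this->clients->get($platform)->generate($prompt);
    }
}

$registry = new ClientRegistry();
$registry->register('mock', new class implements ClientInterface {
    public function generate(string $prompt): string
    {
        return 'mock output for: '.$prompt;
    }
});

$action = new GenerateOutputAction($registry);
echo $action('mock', 'Hello'), "\n"; // prints "mock output for: Hello"
```

Because the action only sees the registry, tests can register the Mock platform while production code registers Anthropic, Gemini, or OpenAI.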
Note: A Symfony bundle is available if you wish to integrate this library into your Symfony applications with autowiring and configuration support.
Examples
Review the examples below to get an idea of how the library works.
Embeddings
- `examples/embeddings/create.php`: Creates an embedding vector from a prompt sent to an LLEM (large language embedding model)
Files
- `examples/files/upload.php`: Uploads a file to an LLM vendor
- `examples/files/delete.php`: Deletes a file from an LLM vendor
Outputs
- `examples/outputs/generate.php`: Generates output from a prompt sent to an LLM
Search Stores
- `examples/search-stores/create.php`: Creates a search store for RAG outputs
- `examples/search-stores/read.php`: Displays information about a search store
- `examples/search-stores/search.php`: Searches a store with a given prompt
- `examples/search-stores/files/import.php`: Imports an uploaded file to a search store
Supported platforms
- Anthropic
- Gemini
- Mock
- OpenAI
Platform feature support
Note: Each platform refers to generating output (inference) differently; OpenAI uses the word "Responses" while Gemini uses the word "Content". This library uses the word "Output" for what a generative model produces and "Embedding" for what an embedding model produces.
To generate output or create an embedding, you must first compile a "Query". A query is made up of different input components: text prompts, files, a JSON schema, and/or system instructions.
This library allows you to compile a query before sending it to the model for two reasons:
- You can log/analyze the request payload before sending it to the model.
- You can compile individual requests for batching.
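The compile-then-send flow can be pictured with a small stand-in value object: the query is assembled from its input components, inspected or logged as a payload, and only then dispatched or accumulated into a batch. All class and method names below are hypothetical illustrations of the concept, not this library's actual API.

```php
<?php
// Hypothetical sketch of a compiled query: assemble the input components
// first, so the payload can be logged or batched before any model call.
final class Query
{
    public function __construct(
        public readonly string $prompt,
        public readonly ?string $instructions = null,
        public readonly ?array $schema = null,
    ) {
    }

    /** Compile the query into the payload that would be sent to the model. */
    public function compile(): array
    {
        return array_filter([
            'prompt' => $this->prompt,
            'instructions' => $this->instructions,
            'schema' => $this->schema,
        ], fn ($value) => null !== $value);
    }
}

$query = new Query(
    prompt: 'Summarize this invoice.',
    instructions: 'Respond in plain English.',
    schema: ['type' => 'object', 'properties' => ['total' => ['type' => 'number']]],
);

// Reason 1: log or inspect the exact payload before sending it to the model.
$payload = $query->compile();
echo json_encode($payload), "\n";

// Reason 2: compile many queries up front and submit them as one batch.
$batch = array_map(fn (Query $q) => $q->compile(), [$query, new Query('Hello')]);
echo count($batch), "\n"; // prints "2"
```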
| Feature | Anthropic | Gemini | Mock | OpenAI |
|---|---|---|---|---|
| Batches | | | | |
| Create | ❌ | ✅ | ✅ | ✅ |
| Read | ❌ | ✅ | ✅ | ✅ |
| Cancel | ❌ | ❌ | ❌ | ❌ |
| Embeddings | | | | |
| Create | ❌ | ✅ | ✅ | ✅ |
| Files | | | | |
| Upload | ✅ | ✅ | ✅ | ✅ |
| Read | ❌ | ❌ | ❌ | ❌ |
| List | ❌ | ❌ | ❌ | ❌ |
| Download | ❌ | ❌ | ❌ | ❌ |
| Delete | ✅ | ✅ | ✅ | ✅ |
| Outputs | | | | |
| Generate | ❌ | ✅ | ✅ | ✅ |
| Queries | | | | |
| Compile | ❌ | ✅ | ✅ | ✅ |
| Search Stores | | | | |
| Create | ❌ | ✅ | ❌ | ❌ |
| Read | ❌ | ✅ | ❌ | ❌ |
| Search | ❌ | ✅ | ❌ | ❌ |
| Import File | ❌ | ✅ | ❌ | ❌ |
Credits
License
The MIT License