kambo / llama-cpp-php
The package enables the use of the llama.cpp library in PHP, allowing you to set up and run LLM models on your local machine.
Requires
- php: ^8.1
- symfony/event-dispatcher: ^6.2
Requires (Dev)
- phpunit/phpunit: ^9.5
- slevomat/coding-standard: ^8.8
README
This is highly experimental and not suitable for production use!
Use at your own risk!
Only Linux is supported!
Installation
You can install the package via composer:
composer require kambo/llama-cpp-php kambo/llama-cpp-php-linux-lib
Note: the kambo/llama-cpp-php-linux-lib package contains a binary library for Linux.
Usage
First, download a model. For example, you can use this command:
wget https://huggingface.co/LLukas22/gpt4all-lora-quantized-ggjt/resolve/main/ggjt-model.bin
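The usage example below loads the model from a models/ directory next to the script, so move the downloaded file there. If you prefer to stay in PHP, here is a minimal sketch of the same download, assuming allow_url_fopen is enabled (the target path simply mirrors the one used in the example):

// Sketch: download the model with PHP instead of wget.
if (!is_dir(__DIR__ . '/models')) {
    mkdir(__DIR__ . '/models', 0755, true);
}
copy(
    'https://huggingface.co/LLukas22/gpt4all-lora-quantized-ggjt/resolve/main/ggjt-model.bin',
    __DIR__ . '/models/ggjt-model.bin'
);

With the model in place, you can run it: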
<?php

// Namespaces are assumed from the package name; adjust them to the actual class locations.
use Kambo\LLamaCPP\Context;
use Kambo\LLamaCPP\LLamaCPP;
use Kambo\LLamaCPP\Parameters\GenerationParameters;
use Kambo\LLamaCPP\Parameters\ModelParameters;

require __DIR__ . '/vendor/autoload.php';

$template = "You are a programmer, write PHP class that will add two numbers and print the result. Stop at class end.";

// Load the model and create the generation context.
$context = Context::createWithParameter(new ModelParameters(__DIR__ . '/models/ggjt-model.bin'));
$llama   = new LLamaCPP($context);

echo "Prompt: \033[0;32m" . $template . "\033[0m" . PHP_EOL;

// Tokens are streamed one by one as they are generated.
foreach ($llama->generate($template, new GenerationParameters(predictLength: 200)) as $token) {
    echo $token;
}
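Because generate() yields tokens one at a time, you can also accumulate the output into a single string instead of echoing it as it streams. A minimal sketch reusing the objects from the example above (the output file name is illustrative):

// Accumulate the streamed tokens into a single string and save it.
$output = '';
foreach ($llama->generate($template, new GenerationParameters(predictLength: 200)) as $token) {
    $output .= $token;
}

file_put_contents(__DIR__ . '/result.txt', $output);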
License
The MIT License (MIT). Please see License File for more information.