sagarsdeshmukh / swayam-ai-chatbot
WordPress Swayam AI Chatbot plugin using LLPhant, Llama 3.2, and Elasticsearch
Package info
github.com/sagarsdeshmukh/swayam-ai-chatbot
Type: wordpress-plugin
pkg:composer/sagarsdeshmukh/swayam-ai-chatbot
Requires
- php: ^8.2
- elasticsearch/elasticsearch: ^8.19
- theodo-group/llphant: ^0.11.12
This package is not auto-updated.
Last update: 2026-03-11 06:31:34 UTC
README
A WordPress plugin that provides an AI-powered chatbot using RAG (Retrieval-Augmented Generation) architecture with LLPhant, Llama 3.2 (via Ollama), and Elasticsearch.
Why "Swayam"?
Swayam (स्वयं)—an ancient Sanskrit word meaning "self." Your content. Your knowledge. Autonomously intelligent.
Features
- RAG-Powered Q&A: Answers questions based on your WordPress content
- Automatic Content Indexing: Syncs posts, pages, and custom post types to Elasticsearch
- Auto-Sync on Publish: Automatically updates the index when content is published/updated/deleted
- Customizable Chat Interface: Shortcode and floating widget options
- Admin Dashboard: Easy configuration with connection testing
- REST API: Programmatic access to the chatbot
- Rate Limiting: Built-in protection against spam
- PHP 8.2+ Compatible: Works with PHP 8.2, 8.3, and later versions
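The shortcode and REST API features above could be exercised roughly as follows. Note that the shortcode tag and endpoint path shown here are illustrative guesses, not confirmed by this README — check the plugin's admin dashboard or source for the actual names:

```shell
# Hypothetical shortcode usage (tag name is an assumption):
# place [swayam_chatbot] in any post or page to render the chat interface.

# Hypothetical REST call (the /wp-json/swayam/v1/chat path is an assumption):
curl -s -X POST https://example.com/wp-json/swayam/v1/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "What topics does this site cover?"}'
```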
Download plugin on wordpress.org
Requirements
- PHP: 8.2 or higher
- WordPress: 6.0 or higher
- Ollama: Running locally with Llama 3.2 model
- Elasticsearch: 9.x with vector search support
- Composer: For dependency management
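A quick sanity check of the local toolchain against the requirements above, assuming `php` and `composer` are on your PATH:

```shell
# Verify the local PHP version meets the 8.2+ requirement
php -r 'echo PHP_VERSION, PHP_EOL;'

# Verify Composer is available for dependency management
composer --version
```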
Installation
1. Install the Plugin
# Navigate to your WordPress plugins directory
cd /path/to/wordpress/wp-content/plugins/

# Clone or copy the plugin
cp -r /path/to/swayam-ai-chatbot ./

# Install dependencies
cd swayam-ai-chatbot
composer install
2. Install and Start Ollama
You can install Llama 3.2 using Ollama.
To install Ollama on Linux, run the following command:
curl -fsSL https://ollama.com/install.sh | sh
For macOS or Windows, use the download page.
Installing the Llama 3.2 1B or 3B model is recommended to keep CPU/GPU and RAM usage manageable.
To download and run Llama 3.2 3B, use the following command:
ollama run llama3.2:3b
You can then interact with the Llama 3.2 model in a chat session. To exit, type /bye in the chat.
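Before configuring the plugin, it is worth confirming that the Ollama API is reachable. Ollama serves its HTTP API on port 11434 by default, and the /api/tags endpoint lists locally installed models:

```shell
# List the models installed in the local Ollama instance (default port 11434);
# llama3.2:3b should appear in the output if the pull above succeeded
curl -s http://localhost:11434/api/tags
```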
3. Install and Start Elasticsearch
curl -fsSL https://elastic.co/start-local | sh
This script installs Elasticsearch and Kibana using a docker-compose.yml file stored in the elastic-start-local folder.
Elasticsearch and Kibana will run locally at http://localhost:9200 and http://localhost:5601, respectively.
All the settings of Elasticsearch and Kibana are stored in the elastic-start-local/.env file.
You can use the start and stop commands available in the elastic-start-local folder.
To stop the Elasticsearch and Kibana Docker services, use the stop command:
cd elastic-start-local
./stop.sh
To start the Elasticsearch and Kibana Docker services, use the start command:
cd elastic-start-local
./start.sh
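To confirm Elasticsearch is up, you can query its root endpoint. The start-local script enables security and writes generated credentials to elastic-start-local/.env; the ES_LOCAL_PASSWORD variable name below is an assumption — verify it against your generated .env file:

```shell
# Load the generated credentials (variable name is an assumption --
# check elastic-start-local/.env for the actual name)
source elastic-start-local/.env

# Query the root endpoint as the elastic user; a JSON cluster-info
# response confirms Elasticsearch is running on port 9200
curl -s -u "elastic:${ES_LOCAL_PASSWORD}" http://localhost:9200
```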
License
Credits
- LLPhant - PHP LLM framework
- Ollama - Local LLM runtime
- Elasticsearch - Vector database