<img src="https://sij.ai/sij/llux/raw/branch/main/ai_selfportrait.jpg" width="400" alt="llux self-portrait">
# llux
llux is an AI chatbot for the [Matrix](https://matrix.org/) chat protocol. It uses local LLMs via [Ollama](https://ollama.ai/) for chat and image recognition, generates images via [Diffusers](https://github.com/huggingface/diffusers), specifically [FLUX.1](https://github.com/black-forest-labs/flux), and supports text-to-speech through an OpenAI-compatible API (e.g. [Kokoro FastAPI by remsky](https://github.com/remsky/Kokoro-FastAPI)). Each user in a Matrix room can set a unique personality (or system prompt), and conversations are kept per user, per channel. Model switching is also supported if you have multiple models installed and configured.
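The per-user, per-channel bookkeeping can be pictured as a mapping keyed by channel and user. This is a hypothetical sketch for illustration only, not llux's actual internals; the names and structure here are assumptions:

```python
from collections import defaultdict

# Hypothetical sketch: one history list per (channel, user) pair,
# mirroring llux's "conversations kept per user, per channel" behavior.
histories = defaultdict(list)

def add_message(channel, user, role, text):
    histories[(channel, user)].append({"role": role, "content": text})

# Two users in the same room each get an independent history.
add_message("#ai:we2.ee", "@alice:example.org", "user", "hello")
add_message("#ai:we2.ee", "@bob:example.org", "user", "hi")
```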
You're welcome to try the bot out on [We2.ee](https://we2.ee/about) at [#ai:we2.ee](https://we2.ee/@@ai).
## Getting Started
1. **Install Ollama**
You'll need [Ollama](https://ollama.ai/) to run local LLMs (text and multimodal). A quick install:
```bash
curl https://ollama.ai/install.sh | sh
```
Choose your preferred models. For base chat functionality, good options include [llama3.3](https://ollama.com/library/llama3.3) and [phi4](https://ollama.com/library/phi4). For multimodal chat, you'll need a vision model; I recommend [llama3.2-vision](https://ollama.com/library/llama3.2-vision). This can be, but doesn't have to be, the same as your base chat model.
Pull your chosen model(s) with:
```bash
ollama pull <modelname>
```
2. **Create a Python Environment (Recommended)**
You can use either `conda/mamba` or `venv`:
```bash
# Using conda/mamba:
mamba create -n llux python=3.10
conda activate llux
# or using Python's built-in venv:
python3 -m venv venv
source venv/bin/activate
```
3. **Install Dependencies**
Install all required Python libraries from `requirements.txt`:
```bash
pip install -r requirements.txt
```
This will install:
- `matrix-nio` for Matrix connectivity
- `diffusers` for image generation
- `ollama` for local LLMs
- `torch` for the underlying deep learning framework
- `pillow` for image manipulation
- `markdown`, `pyyaml`, etc.
4. **Set Up Your Bot**
- Create a Matrix account for your bot (on a server of your choice).
- Record the server, username, and password.
- **Copy `config.yaml-example` to `config.yaml`** (e.g., `cp config.yaml-example config.yaml`).
- In your new `config.yaml`, fill in the relevant fields (Matrix server, username, password, channels, admin usernames, etc.). Also configure the Ollama section for your model settings and the Diffusers section for image generation (model, device, steps, etc.).
**Note**: this bot was designed for macOS on Apple Silicon and has not been tested on Linux. It should work there, but may require minor changes, particularly for image generation. At minimum, change `device` in `config.yaml` from `mps` to your torch device, e.g., `cuda`.
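As a rough illustration, a filled-in config might look like the sketch below. The exact keys are defined in `config.yaml-example`; the field names here are assumptions for illustration, not the real schema:

```yaml
# Hypothetical sketch — follow the keys in config.yaml-example.
matrix:
  server: "https://matrix.example.org"
  username: "@llux:example.org"
  password: "changeme"
  channels:
    - "#ai:example.org"
  admins:
    - "@you:example.org"
ollama:
  model: "llama3.3"              # base chat model
  vision_model: "llama3.2-vision" # multimodal model
diffusers:
  model: "FLUX.1"
  device: "mps"   # "cuda" or "cpu" on Linux
  steps: 4
```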
5. **Run llux**
```bash
python3 llux.py
```
If you're using a virtual environment, ensure it's activated first.
## Usage
- **.ai message** or **botname: message**
Basic conversation or roleplay prompt. Reply with this prompt to an image attachment on Matrix to engage your multimodal/vision model and ask questions about the image.
- **.img prompt**
Generate an image from the given prompt.
- **.tts text**
Convert the provided text to speech.
- **.x username message**
Interact with another user's chat history (use that user's display name).
- **.persona personality**
Set or change to a specific roleplaying personality.
- **.custom prompt**
Override the default personality with a custom system prompt.
- **.reset**
Clear your personal conversation history and revert to the preset personality.
- **.stock**
Clear your personal conversation history, but do not apply any system prompt.
### Admin Commands
- **.model modelname**
  - Omit `modelname` to show the current model and available options.
  - Include `modelname` to switch to that model.
- **.clear**
Reset llux for everyone, clearing all stored conversations, deleting image cache, and returning to the default settings.
## License & Attribution
**llux** is based in part on [ollamarama-matrix](https://github.com/h1ddenpr0cess20/ollamarama-matrix) by [h1ddenpr0cess20](https://github.com/h1ddenpr0cess20). For that reason it is covered by the same [AGPL-3.0 license](https://github.com/h1ddenpr0cess20/ollamarama-matrix/raw/refs/heads/main/LICENSE).