llux is an AI chatbot for the [Matrix](https://matrix.org/) chat protocol. It uses local LLMs via [Ollama](https://ollama.ai/) for chat and image recognition, and offers image generation via [Diffusers](https://github.com/huggingface/diffusers), specifically [FLUX.1](https://github.com/black-forest-labs/flux). Each user in a Matrix room can set a unique personality (or system prompt), and conversations are kept per user, per channel. Model switching is also supported if you have multiple models installed and configured.
> **Note**: For image recognition in multimodal chat, you’ll need a vision model. This can be—but doesn’t have to be—the same as your primary chat model.
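Conversations being kept per user, per channel means history is effectively keyed by `(user, room)` pairs, so two people in the same room get independent contexts and personalities. A minimal sketch of that idea (hypothetical structure, not llux's actual internals):

```python
from collections import defaultdict

# Hypothetical illustration: history keyed by (user_id, room_id), so each
# user gets an independent conversation and system prompt in each room.
histories: dict[tuple[str, str], list[dict]] = defaultdict(list)
personalities: dict[tuple[str, str], str] = {}

def set_personality(user_id: str, room_id: str, prompt: str) -> None:
    """Set a per-user, per-room system prompt and reset that history."""
    key = (user_id, room_id)
    personalities[key] = prompt
    histories[key] = [{"role": "system", "content": prompt}]

def add_message(user_id: str, room_id: str, role: str, content: str) -> None:
    """Append a chat turn to one user's history in one room."""
    histories[(user_id, room_id)].append({"role": role, "content": content})

set_personality("@alice:example.org", "!room:example.org", "You are terse.")
add_message("@alice:example.org", "!room:example.org", "user", "hi")
# Alice's history in this room now holds her system prompt plus one message;
# every other (user, room) pair is untouched.
```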
2. **Create a Python Environment (Recommended)**
You can use either `conda/mamba` or `venv`:
```bash
# Using conda/mamba:
mamba create -n llux python=3.10
conda activate llux
# or using Python's built-in venv:
python3 -m venv venv
source venv/bin/activate
```
3. **Install Dependencies**
Install all required Python libraries from `requirements.txt`:
```bash
pip install -r requirements.txt
```
This will install:
- `matrix-nio` for Matrix connectivity
- `diffusers` for image generation
- `ollama` for local LLMs
- `torch` for the underlying deep learning framework
4. **Set Up Your Configuration**
   - Copy `config.yaml-example` to `config.yaml` (e.g., `cp config.yaml-example config.yaml`).
   - In your new `config.yaml`, fill in the relevant fields (Matrix server, username, password, channels, admin usernames, etc.). Also configure the Ollama section for your model settings and the Diffusers section for image generation (model, device, steps, etc.).
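As a rough sketch of the shape such a file might take — the key names below are illustrative, so consult `config.yaml-example` for the actual keys your version expects:

```yaml
# Illustrative only — real key names live in config.yaml-example
matrix:
  server: "https://matrix.example.org"
  username: "@llux:example.org"
  password: "changeme"
  channels:
    - "#my-room:example.org"
  admins:
    - "@admin:example.org"

ollama:
  model: "llama3"        # primary chat model
  vision_model: "llava"  # vision model for image recognition (may be the same)

diffusers:
  model: "black-forest-labs/FLUX.1-schnell"
  device: "cuda"         # or "mps" / "cpu"
  steps: 4
```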