From c221a2149aee3a0443825d878d33017e000fcb49 Mon Sep 17 00:00:00 2001
From: sij <sij@sij.law>
Date: Thu, 9 Jan 2025 18:02:15 +0000
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index b18da0b..067ae25 100644
--- a/README.md
+++ b/README.md
@@ -60,7 +60,7 @@ You're welcome to try the bot out on [We2.ee](https://we2.ee/about) at [#ai:we2.
 - **Copy `config.yaml-example` to `config.yaml`** (e.g., `cp config.yaml-example config.yaml`).
 - In your new `config.yaml`, fill in the relevant fields (Matrix server, username, password, channels, admin usernames, etc.). Also configure the Ollama section for your model settings and the Diffusers section for image generation (model, device, steps, etc.).
- **Note**: this bot was designed for macOS on Apple Silicon. It has not been tested on Linux. It should work on Linux but might require some minor changes, particularly for image generation. At the very least you will need to change `device` in config.yaml from `mps` to your torch device.
+ **Note**: this bot was designed for macOS on Apple Silicon. It has not been tested on Linux. It should work on Linux but might require some minor changes, particularly for image generation. At the very least you will need to change `device` in config.yaml from `mps` to your torch device, e.g., `cuda`.
 5. **Run llux**
    ```bash
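
The `device` change called out in the diff above could look roughly like the excerpt below. This is a minimal, hypothetical sketch of the Diffusers section of `config.yaml`; the actual key names and layout are defined by `config.yaml-example` in the repository and may differ.

```yaml
# Hypothetical config.yaml excerpt (key names assumed; follow config.yaml-example for the real layout)
diffusers:
  model: some/diffusers-model-id   # placeholder model identifier
  device: cuda                     # `mps` on Apple Silicon; `cuda` (or `cpu`) on Linux
  steps: 30                        # inference steps for image generation
```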