Mirror of https://github.com/khoj-ai/khoj.git, synced 2024-11-27 17:35:07 +01:00
Set >=6 GB RAM required for offline chat
Llama v2 7B with 4-bit quantization technically needs ~3.5 GB of RAM (7B parameters * 0.5 bytes per parameter); in practice, a system with 6 GB of RAM should suffice.
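As a back-of-the-envelope check of the figure above, the sketch below (not part of the Khoj codebase; function and variable names are illustrative) computes the weight memory of a quantized model from its parameter count and bit width:

```python
# Rough memory estimate for quantized model weights, ignoring runtime overhead
# such as the KV cache and activations. Names here are illustrative assumptions.
def quantized_weight_memory_gb(num_params: float, bits_per_param: float) -> float:
    """Approximate RAM needed to hold the model weights, in GB."""
    bytes_per_param = bits_per_param / 8
    return num_params * bytes_per_param / 1e9


if __name__ == "__main__":
    # Llama v2 7B at 4-bit quantization: 7e9 params * 0.5 bytes ≈ 3.5 GB
    print(f"{quantized_weight_memory_gb(7e9, 4):.1f} GB")
```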
This commit is contained in:
parent
8346e1193c
commit
d93395ae48
1 changed file with 1 addition and 1 deletion
@@ -10,7 +10,7 @@
 Offline chat stays completely private and works without internet. But it is slower, lower quality and more compute intensive.
 
 > **System Requirements**:
-> - You need at least **16 GB of RAM** and **4 GB of Disk**
+> - Machine with at least **6 GB of RAM** and **4 GB of Disk** available
 > - A CPU supporting [AVX or AVX2 instructions](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions) is required
 > - A Mac M1+ or [Vulkan supported GPU](https://vulkan.gpuinfo.org/) should significantly speed up chat response times
 
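For reference, a minimal sketch of how one might verify the documented minimums (>= 6 GB of RAM, a CPU with AVX/AVX2) on Linux. The thresholds come from the requirements above; the checking approach itself (psutil plus /proc/cpuinfo) is an assumption, not a Khoj utility:

```python
# Linux-oriented sanity check for the offline chat system requirements above.
# Requires the third-party psutil package: pip install psutil
import psutil

MIN_RAM_BYTES = 6 * 1024**3  # 6 GB minimum from the system requirements


def has_enough_ram() -> bool:
    """Return True if total system RAM meets the documented 6 GB minimum."""
    return psutil.virtual_memory().total >= MIN_RAM_BYTES


def cpu_supports_avx() -> bool:
    """Best-effort AVX/AVX2 detection via /proc/cpuinfo (Linux only)."""
    try:
        with open("/proc/cpuinfo") as f:
            return "avx" in f.read()  # matches both "avx" and "avx2" flags
    except OSError:
        return False  # unknown on non-Linux systems


if __name__ == "__main__":
    print(f"RAM >= 6 GB: {has_enough_ram()}")
    print(f"CPU has AVX/AVX2: {cpu_supports_avx()}")
```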