- Document how to pip install khoj to run offline chat on GPU
After the migration to llama-cpp-python, more GPU types are supported,
but they require a build step, so document how to do that (see the
first sketch below)
- New default offline chat model
- Where to get supported chat models on HuggingFace
This allows using open or commercial, local or hosted LLM models that
are not supported in Khoj by default.
It also lets users point Khoj at other local LLM API servers that
support their GPU (see the second sketch below)
Closes #407
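
A minimal sketch of what the documented GPU setup might look like, using
llama-cpp-python's published API. The model repo, filename pattern, and
CMake flags below are illustrative assumptions, not Khoj's actual defaults:

```python
# Hypothetical example of running offline chat on GPU via llama-cpp-python.
# The install-time CMake flags select the GPU backend and vary by hardware
# and llama-cpp-python version, e.g.:
#   CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python   # NVIDIA CUDA
#   CMAKE_ARGS="-DLLAMA_METAL=on"  pip install llama-cpp-python   # Apple Metal
# Downloading via from_pretrained requires huggingface-hub to be installed.
from llama_cpp import Llama

# Fetch a GGUF chat model from HuggingFace (repo and quantization are examples)
llm = Llama.from_pretrained(
    repo_id="TheBloke/Mistral-7B-Instruct-v0.2-GGUF",
    filename="*Q4_K_M.gguf",
    n_gpu_layers=-1,  # offload all model layers to the GPU
    n_ctx=4096,
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello from offline chat"}]
)
print(response["choices"][0]["message"]["content"])
```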
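
And a sketch of pointing a chat client at a local OpenAI-compatible LLM
API server instead. The base URL and model name are assumptions that
depend on whichever server is running (e.g. a llama.cpp server):

```python
# Hypothetical example: any local server exposing the OpenAI chat API
# can be reached with the standard openai client.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # assumed local server address
    api_key="not-needed-for-local",       # local servers usually ignore the key
)

response = client.chat.completions.create(
    model="local-model",  # placeholder; depends on the server's loaded model
    messages=[{"role": "user", "content": "Hello from a local LLM server"}],
)
print(response.choices[0].message.content)
```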
* Add a page about privacy and organize some of the documentation
* Add notice about telemetry
* Improve copy for privacy section, link to telemetry section
- Make the docs overview page available at the docs.khoj.dev root instead
of under the docs.khoj.dev/docs path
- Remove the new landing page; it is unnecessary
- Remove the /docs path prefix from links to internal doc pages
- Remove the .md path suffix from internal doc pages for consistency