Commit graph

6 commits

Author SHA1 Message Date
Debanjum Singh Solanky
dcdd1edde2 Update docs to show how to set up llama-cpp with Khoj
- How to pip install khoj to run offline chat on GPU
  After the migration to llama-cpp-python, more GPU types are supported but
  require a build step, so mention how
- New default offline chat model
- Where to get supported chat models from on HuggingFace
2024-03-26 22:33:01 +05:30
Debanjum Singh Solanky
7211eb9cf5 Default to gpt-4-turbo-preview for chat model and extract questions actor
GPT-4 is more expensive and generally less capable than gpt-4-turbo-preview
2024-03-14 01:22:33 +05:30
Debanjum Singh Solanky
42d4bc6b14 Document installing Khoj on Phone as a Progressive Web App (PWA)
2024-03-08 21:18:06 +05:30
Debanjum Singh Solanky
c40f642afa Move "Use OpenAI Compatible LLM Server" section to existing advanced page
Add a footnote on supported chat models to the self-hosting section
2024-02-04 16:16:55 +05:30
sabaimran
9ad44f0e77 Include info about privacy in the docs (#631)
* Add a page about privacy and organize some of the documentation
* Add notice about telemetry
* Improve copy for privacy section, link to telemetry section
2024-01-29 17:47:23 +05:30
sabaimran
9b991eb4fe Migrate to using Docusaurus rather than Docsify for documentation (#603)
* Add Docusaurus documentation (to replace the Docsify setup)
* Remove older docs
* Specify documentation as the gh pages build action working directory
2024-01-07 20:28:15 +05:30