---
sidebar_position: 4
---
# Credits
Many open source projects are used to power Khoj. Here are a few of them:
- [Multi-QA MiniLM Model](https://huggingface.co/sentence-transformers/multi-qa-MiniLM-L6-cos-v1), [All MiniLM Model](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) for Text Search. See the [SBert Documentation](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) and the sketch after this list
- [OpenAI CLIP Model](https://github.com/openai/CLIP) for Image Search. See [SBert Documentation](https://www.sbert.net/examples/applications/image-search/README.html)
- Charles Cave for [OrgNode Parser](http://members.optusnet.com.au/~charles57/GTD/orgnode.html)
- [Org.js](https://mooz.github.io/org-js/) to render Org-mode results on the Web interface
- [Markdown-it](https://github.com/markdown-it/markdown-it) to render Markdown results on the Web interface
- [Llama.cpp](https://github.com/ggerganov/llama.cpp) to chat with local LLMs offline
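
To give a sense of how the text search models credited above are typically used, here is a minimal, illustrative sketch of semantic search with the `sentence-transformers` library. The note snippets and query are made up for this example; this is not Khoj's actual indexing code, just the bi-encoder retrieval pattern described in the SBert documentation linked above.

```python
from sentence_transformers import SentenceTransformer, util

# Load the bi-encoder model credited above for text search.
model = SentenceTransformer("sentence-transformers/multi-qa-MiniLM-L6-cos-v1")

# Hypothetical corpus of note snippets to search over.
notes = [
    "Meeting notes: discuss quarterly roadmap with the team.",
    "Recipe: slow-cooked lentil soup with cumin and garlic.",
    "Idea: use embeddings to search personal notes semantically.",
]

# Encode the corpus once; encode each query at search time.
corpus_embeddings = model.encode(notes, convert_to_tensor=True)
query_embedding = model.encode("how do I search my notes by meaning?", convert_to_tensor=True)

# Retrieve the top matches by cosine similarity.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.3f}  {notes[hit['corpus_id']]}")
```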