Commit graph

1509 commits

Author SHA1 Message Date
sabaimran
8dd5756ce9 Add new director tests for the offline chat model with llama v2 2023-07-31 20:24:52 -07:00
sabaimran
209975e065 Resolve merge conflicts: let Khoj fail if the model tokenizer is not found 2023-07-31 19:12:26 -07:00
sabaimran
2d6c3cd4fa Misc. quality improvements for Llama V2
- Fix download url -- was mapping to q3_K_M, but fixed to use q4_K_S
- Use a proper Llama Tokenizer for counting tokens for truncation with Llama
- Add additional null checks when running
2023-07-31 19:11:20 -07:00
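The tokenizer fix in the commit above swaps a rough token estimate for a real Llama tokenizer when truncating prompts. As a hedged illustration only (the tokenizer repo id and helper name are placeholders, not Khoj's actual code), counting tokens with Hugging Face transformers might look like this:

```python
# Illustrative sketch only: count tokens with a Llama tokenizer so prompt
# truncation is driven by actual token counts rather than a rough estimate.
# The tokenizer repo id and helper name are assumptions, not Khoj's own code.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/llama-tokenizer")

def count_tokens(text: str) -> int:
    # encode() returns the token ids for the text; its length is the token count
    return len(tokenizer.encode(text))

print(count_tokens("Notes about my weekend hiking trip"))
```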
sabaimran
ca195097d7 Update chat hint message at first run 2023-07-31 17:46:09 -07:00
Debanjum Singh Solanky
ded606c7cb Fix format of user query during general conversation with Llama 2 2023-07-31 17:21:14 -07:00
Debanjum Singh Solanky
48e5ac0169 Do not drop system message when truncating context to max prompt size
Previously the system message was getting dropped when the context
size with chat history would be more than the max prompt size
supported by the chat model

Now only the previous chat messages are dropped or the current
message is truncated but the system message is kept to provide
guidance to the chat model
2023-07-31 17:21:14 -07:00
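A minimal sketch of the truncation behaviour described above: previous chat turns are dropped oldest-first, the current message is truncated only as a last resort, and the system message is always kept. All names here are hypothetical, not Khoj's actual implementation.

```python
# Hypothetical sketch of the described strategy: never drop the system message;
# drop oldest chat turns first, then truncate the current message if needed.
def truncate_conversation(system_msg, chat_history, current_msg, max_tokens, count_tokens):
    messages = list(chat_history)

    def total_tokens():
        return count_tokens(system_msg) + sum(map(count_tokens, messages)) + count_tokens(current_msg)

    # Drop previous chat messages, oldest first, but keep the system message
    while messages and total_tokens() > max_tokens:
        messages.pop(0)

    # If the current message alone still overflows, truncate it (crude character cut)
    if total_tokens() > max_tokens:
        budget = max(max_tokens - count_tokens(system_msg), 0)
        current_msg = current_msg[:budget]

    return [system_msg, *messages, current_msg]
```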
Saba
02e216c135 Clarify usage in telemetry.md 2023-07-30 22:37:20 -07:00
Saba
7eabf8ab0f Add instructions for installing the desktop app and opting out of telemetry 2023-07-30 22:26:52 -07:00
sabaimran
88ef86ad5c
Fix typing issues for mypy (#372) 2023-07-30 19:27:48 -07:00
sabaimran
ca2c942b65 Add typing to compiled_references and inferred_queries 2023-07-30 19:10:30 -07:00
sabaimran
dbb54cfcfa Merge branch 'master' of github.com:khoj-ai/khoj 2023-07-30 18:52:17 -07:00
sabaimran
3646fd1449 Add a warning to indicate that Khoj is not configured to work with personal data sources 2023-07-30 18:52:10 -07:00
sabaimran
996832dc72 Allow user to chat even if content types aren't configured - use empty references 2023-07-30 18:47:45 -07:00
Debanjum
41d36a5ecc
Merge pull request #371 from felixonmars/patch-1
Correct typos in setup.md in the Khoj documentation
2023-07-30 18:37:22 -07:00
Felix Yan
f4fdfe8d8c
Correct typos in setup.md 2023-07-31 03:32:56 +03:00
Debanjum Singh Solanky
28df08b907 Fix configure openai processor for khoj docker
Store khoj search models and embeddings in the default location in the
docker container under /root/.khoj
2023-07-30 02:07:33 -07:00
Debanjum Singh Solanky
dffbfee62b Fix sample khoj docker config to index test data using new schema 2023-07-30 01:48:18 -07:00
Debanjum Singh Solanky
53810a0ff7 Create khoj config dir if non-existent, before writing to khoj env file 2023-07-30 01:35:36 -07:00
Debanjum Singh Solanky
56394d2879 Update demo video to configure offline chat via the web interface 2023-07-29 19:17:40 -07:00
Debanjum Singh Solanky
b32673db8e Fix link to Docs website in Khoj readme on Github 2023-07-29 12:50:39 -07:00
Debanjum Singh Solanky
a3d1212e79 Align docs landing page with updated github readme
- Screenshots of khoj search, chat
- Put quickstart on landing page
- Put miscellaneous pages under separate section
- Move credits to separate page under miscellaneous
2023-07-29 12:42:36 -07:00
Debanjum Singh Solanky
d7205aed36 Update docs with setup instructions for Offline and Online Chat 2023-07-29 11:18:12 -07:00
Debanjum
0404e33437
Add screenshots, style content in README 2023-07-29 01:22:48 -07:00
sabaimran
f65d157244 Release Khoj version 0.10.0 2023-07-28 19:27:47 -07:00
Debanjum Singh Solanky
f76af869f1 Do not log the gpt4all chat response stream in khoj backend
Stream floods stdout and does not provide useful info to user
2023-07-28 19:14:04 -07:00
sabaimran
5ccb01343e
Add Offline chat to Obsidian (#359)
* Add support for configuring/using offline chat from within Obsidian
* Fix type checking for search type
* If Github is not configured, /update call should fail
* Fix regenerate tests same as the update ones
* Update help text for offline chat in obsidian
* Update relevant description for Khoj settings in Obsidian
* Simplify configuration logic and use smarter defaults
2023-07-28 18:47:56 -07:00
Debanjum
b3c1507708
Merge pull request #361 from khoj-ai/configure-offline-chat-from-emacs
- Configure using Offline Chat from Emacs:
  - Enable, Disable Offline Chat from Emacs
- Use: Enable offline chat with `(setq khoj-chat-offline t)' during khoj setup
- Benefits: Offline chat models are better for privacy but not great at answering questions
2023-07-28 18:06:58 -07:00
sabaimran
9f78db0579
Let Offline chat override OpenAI API settings (#362)
* Let Offline chat override OpenAI API settings
* Download the offline model whenever offline chat is enabled
* Add progressbar for download for llamav2 model to track progress
* Change ordering of n due to switch of default processor
* Flip ordering of offline/openai checks when extracting questions from query
2023-07-28 17:26:20 -07:00
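A rough sketch of the precedence this PR describes, with offline chat checked before any OpenAI settings; the attribute names are illustrative, not Khoj's real config schema.

```python
# Hypothetical illustration of the check ordering: offline chat, when enabled,
# overrides configured OpenAI settings; attribute names are not Khoj's schema.
def pick_chat_backend(config):
    if getattr(config, "enable_offline_chat", False):
        return "offline"  # Llama V2 via gpt4all
    if getattr(config, "openai_api_key", None):
        return "openai"
    return None  # chat is not configured
```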
Debanjum Singh Solanky
ebfbef1f68 Configure using offline chat from Emacs
Closes #358
2023-07-28 16:07:33 -07:00
Debanjum Singh Solanky
9b1048caf7 Remove asymmetric from name of remaining text search tests
Asymmetric search is the only search type used now in khoj.el, so
making a distinction between symmetric and asymmetric search
isn't necessary anymore
2023-07-28 15:33:22 -07:00
sabaimran
12cfb48f16
Fix gpt4all import error in Desktop builds (#356)
* Add gpt4all to imports via sysconfig path
2023-07-28 11:54:18 -07:00
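As a hedged illustration of the approach named in this PR (not the exact patch), adding the interpreter's site-packages directory via sysconfig before importing gpt4all in a packaged desktop build could look like:

```python
# Illustrative only: in a frozen desktop build, site-packages may be missing
# from sys.path, so add it via sysconfig before importing gpt4all.
import sys
import sysconfig

site_packages = sysconfig.get_path("purelib")
if site_packages not in sys.path:
    sys.path.append(site_packages)

from gpt4all import GPT4All  # import after the path fix is intentional
```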
Debanjum
4b0639cfbd
Merge pull request #354 from ducksblock/master
Fix #353: Remove references to localhost:8000 in docs
2023-07-28 11:00:12 -07:00
ducksblock
cbecd7b66f Fix #353: Remove references to localhost:8000 2023-07-28 13:57:00 +05:30
sabaimran
702486dab7 Add gpt4all for copying metadata 2023-07-27 22:22:24 -07:00
sabaimran
29081f4429 Adjust parameters for offline chat 2023-07-27 22:22:09 -07:00
sabaimran
124d97c26d
Replace Falcon 🦅 model with Llama V2 🦙 for offline chat (#352)
* Working example with LlamaV2 running locally on my machine

- Download from huggingface
- Plug in to GPT4All
- Update prompts to fit the llama format

* Add appropriate prompts for extracting questions from a query, based on the llama format

* Rename Falcon to Llama and make some improvements to the extract_questions flow

* Do further tuning to extract question prompts and unit tests

* Disable extracting questions dynamically from Llama, as results are still unreliable
2023-07-27 20:51:20 -07:00
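For reference, Llama 2 chat models expect a specific prompt layout; a rough sketch of wrapping a system prompt and user message in that format follows. Khoj's actual prompt templates may differ in wording and whitespace.

```python
# Rough sketch of the Llama 2 chat prompt layout the commit refers to; Khoj's
# shipped prompt templates may differ in exact wording and whitespace.
def format_llama2_prompt(system_prompt: str, user_message: str) -> str:
    return (
        "<s>[INST] <<SYS>>\n"
        f"{system_prompt}\n"
        "<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )

print(format_llama2_prompt("You are Khoj, a personal assistant.", "Summarize my notes on Llama 2."))
```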
sabaimran
55965eea7d
Delete FUNDING.yml
Instead of this file, use an organization-level file: https://github.com/khoj-ai/.github
2023-07-27 15:28:47 -07:00
sabaimran
925177b150
Update FUNDING.yml
Change to use a single organization (remove list brackets)
2023-07-27 15:19:20 -07:00
sabaimran
78197bb5c3
Create FUNDING.yml
- Add github sponsor information directly to khoj project. Closes #302
2023-07-27 15:16:45 -07:00
Debanjum Singh Solanky
da3f4dc7e4 Fix test config to run OpenAI Chat Actor, Director tests
The OpenAI conversation processor schema had been updated but conftest
hadn't been updated to reflect the change.

Update the conftest setup of the conversation processor to fix this
2023-07-27 11:30:04 -07:00
Debanjum Singh Solanky
715d56d4f0 Use new schema to update khoj.yml config from khoj.el 2023-07-26 17:34:16 -07:00
sabaimran
8b2af0b5ef
Add support for our first Local LLM 🤖🏠 (#330)
* Add support for gpt4all's falcon model as an additional conversation processor
- Update the UI pages to allow the user to point to the new endpoints for GPT
- Update the internal schemas to support both GPT4 models and OpenAI
- Add unit tests benchmarking some of the Falcon model's performance
* Add exc_info to include stack trace in error logs for text processors
* Pull shared functions into utils.py to be used across gpt4 and gpt
* Add migration for new processor conversation schema
* Skip GPT4All actor tests due to typing issues
* Fix Obsidian processor configuration in auto-configure flow
* Rename enable_local_llm to enable_offline_chat
2023-07-26 16:27:08 -07:00
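For context on the gpt4all integration this PR adds, here is a minimal usage sketch of the gpt4all Python bindings, not Khoj's wrapper code; the model filename is illustrative and generate() keyword arguments vary between gpt4all versions.

```python
# Minimal, illustrative use of the gpt4all Python bindings; the model filename
# is an example and generate() arguments differ across gpt4all versions.
from gpt4all import GPT4All

model = GPT4All("ggml-model-gpt4all-falcon-q4_0.bin")  # downloads the model on first use
response = model.generate("What is Khoj?", max_tokens=200)
print(response)
```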
sabaimran
23d77ee338
Fix import issues in desktop image builds (#343) 2023-07-26 15:45:52 -07:00
Justin Bassett-Green
8dcc21052f
Add chat-model param in sample config yml and document (#341)
* add chat-model config param to docs

* add chat-model param to sample config yml
2023-07-22 16:53:08 -07:00
Debanjum Singh Solanky
5bb42e56a8 Fix formatting of khoj test config and unused references in conftests 2023-07-22 00:29:26 -07:00
Debanjum Singh Solanky
7722a9c347 Default to using the gpt-3.5-turbo model for chat from khoj.el 2023-07-22 00:29:26 -07:00
Saba
36d25c4f1d Center the title, add table headers 2023-07-21 23:36:38 -07:00
Saba
01b6a10cd1 Simplify readme 2023-07-21 23:30:44 -07:00
sabaimran
4ce072c4b3
Make the README on our Github minimal (#334)
* Make the README on our Github minimal
* Add a bit of formatting and more background
2023-07-21 23:29:04 -07:00
Debanjum Singh Solanky
4089e38283 Fix links to demos and screenshots in docs 2023-07-21 20:01:19 -07:00