Debanjum Singh Solanky
1812473d27
Extract new schema version for each migration script into a variable
...
This should ease readability and indicate which version this
migration script will update the schema to once applied
2023-08-01 21:41:08 -07:00
Debanjum Singh Solanky
b9937549aa
Simplify migration script management. Make them use a static version
...
- Only make them update the config when their run conditions are satisfied
- Use static schema version to simplify reasoning about run conditions
2023-08-01 21:28:20 -07:00
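Taken together, these two migration commits describe a simple pattern: each script declares the schema version it migrates to as a module-level variable and only touches the config when its run condition is satisfied. A minimal sketch, with hypothetical names (FROM_VERSION, TO_VERSION, migrate_config) rather than Khoj's actual ones:

```python
# Hypothetical migration script layout; names and versions are illustrative.
FROM_VERSION = "0.10.0"  # run condition: only migrate configs at this version
TO_VERSION = "0.10.1"    # schema version this script upgrades the config to


def migrate_config(raw_config: dict) -> dict:
    # Run condition: skip unless the config is at the expected source version
    if raw_config.get("version") != FROM_VERSION:
        return raw_config

    # ... apply the actual schema changes here ...

    raw_config["version"] = TO_VERSION
    return raw_config
```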
Debanjum Singh Solanky
185a1fbed7
Remove old chat setup timer. It is mislabelled and has been irrelevant since streaming
2023-08-01 20:52:00 -07:00
Debanjum Singh Solanky
95acb1583d
Update local Chat Actor and Director tests that are expected to fail
2023-08-01 20:52:00 -07:00
Debanjum Singh Solanky
c2b7a14ed5
Fix context, response size for Llama 2 to stay within max token limits
...
Create regression test to ensure it does not throw the prompt size
exceeded context window error
2023-08-01 20:52:00 -07:00
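The fix amounts to a token budget: the prompt must leave room for the response inside the model's context window. A back-of-the-envelope sketch; Llama 2's 4096-token window is real, the reserved response size is an assumed figure:

```python
CONTEXT_WINDOW = 4096  # Llama 2's context window, in tokens
RESPONSE_TOKENS = 512  # assumed reservation for the model's reply

# System message + chat history + current query must fit in this budget,
# otherwise the backend raises a "prompt size exceeded context window" error.
MAX_PROMPT_SIZE = CONTEXT_WINDOW - RESPONSE_TOKENS  # 3584 tokens
```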
Debanjum Singh Solanky
6e4050fa81
Make Llama 2 stop generating response on hitting specified stop words
...
It would previously sometimes start generating fake dialogue with
its internal prompt patterns of <s>[INST] in responses.
This was a jarring experience. Stop generating the response when <s> is hit
Resolves #398
2023-08-01 20:52:00 -07:00
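A minimal sketch of stop-word handling over a streamed response. The stop strings and function name are assumptions for illustration; the commit only specifies that generation halts on `<s>`:

```python
STOP_WORDS = ("<s>", "[INST]")  # assumed stop strings for Llama 2's prompt format


def generate_until_stop_word(token_stream) -> str:
    """Accumulate streamed tokens, ending generation at the first stop word."""
    response = ""
    for token in token_stream:
        response += token
        if any(stop in response for stop in STOP_WORDS):
            # Trim from the first stop word onwards so no fake dialogue leaks out
            first_hit = min(response.find(stop) for stop in STOP_WORDS if stop in response)
            return response[:first_hit]
    return response
```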
Debanjum Singh Solanky
aa6846395d
Fix offline model migration script to run for version < 0.10.1
...
- Use same batch_size in extract question actor as the chat actor
- Log the final location the chat model will be stored in, instead of
  its temp filename while it is being downloaded
2023-08-01 20:51:53 -07:00
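Gating a migration on `version < 0.10.1` needs a semantic comparison; a naive string comparison gets it wrong (lexically, "0.9.0" sorts after "0.10.1"). A sketch using packaging.version, with the function name as an assumption:

```python
from packaging import version


def should_run_migration(config_version: str) -> bool:
    # Semantic comparison: version.parse("0.9.0") < version.parse("0.10.1") is True,
    # while the plain string comparison "0.9.0" < "0.10.1" is False.
    return version.parse(config_version) < version.parse("0.10.1")
```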
sabaimran
d8fa967b43
Update chat actor unit tests for greater accuracy and benchmarking
2023-08-01 12:24:43 -07:00
sabaimran
f409e16137
Update some of the extract question prompts for llamav2
2023-08-01 12:23:36 -07:00
sabaimran
b11b00a9ff
Add log line for time to first response
2023-08-01 10:57:38 -07:00
sabaimran
778df6be71
Add a logline when the offline model migration script runs
2023-08-01 09:27:42 -07:00
sabaimran
48363ec861
Add additional check for chat_messages length in UT
2023-08-01 09:25:52 -07:00
sabaimran
3a5d93d673
Add migration script for getting the new offline model
2023-08-01 09:25:05 -07:00
sabaimran
90efc2ea7a
Update comments and add explanations
2023-08-01 09:24:03 -07:00
sabaimran
f7e03f6d63
Switch spinner snake case -> camel case
2023-08-01 08:52:25 -07:00
sabaimran
1c52a6993f
Add a lock around chat operations to prevent the offline model from being bombarded by concurrent requests and hogging compute resources
...
- This also solves #367
2023-08-01 00:23:17 -07:00
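A sketch of the locking idea with an assumed model object and function name; a single process-wide lock serializes offline inference so concurrent chats queue instead of competing for compute:

```python
import threading

chat_lock = threading.Lock()  # one lock shared by all chat requests


def chat(model, prompt: str) -> str:
    # Only one generation runs at a time; other requests block here
    with chat_lock:
        return model.generate(prompt)
```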
sabaimran
6c3074061b
Disable the input bar when chat response is in flight
2023-08-01 00:21:39 -07:00
sabaimran
c14cbe926a
Add a loading symbol to web chat. Closes #392
2023-07-31 23:35:48 -07:00
sabaimran
8054bdc896
Use n_batch parameter to increase resource consumption on host machine (and implicitly engage GPU)
2023-07-31 23:25:08 -07:00
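In the gpt4all Python bindings this is the `n_batch` argument to `generate()`; the model filename and batch value below are assumptions:

```python
from gpt4all import GPT4All

model = GPT4All("llama-2-7b-chat.ggmlv3.q4_K_S.bin")  # filename is an assumption
# A larger n_batch ingests more prompt tokens per step: faster prompt
# processing at the cost of more memory/CPU (and GPU, where available).
response = model.generate("What is Khoj?", n_batch=256)
```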
sabaimran
e55e9a7b67
Fix unit tests and truncation logic
2023-07-31 21:37:59 -07:00
sabaimran
2335f11b00
Add better error handling for download processes in case of failure
2023-07-31 21:07:38 -07:00
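One common shape for this, sketched with the requests library (an assumption; Khoj's actual downloader may differ): stream to a temp file, rename on success, and delete the partial file on failure so a corrupt download is never mistaken for a complete model:

```python
import os

import requests


def download_model(url: str, dest_path: str) -> None:
    tmp_path = dest_path + ".part"
    try:
        with requests.get(url, stream=True, timeout=10) as response:
            response.raise_for_status()
            with open(tmp_path, "wb") as f:
                for chunk in response.iter_content(chunk_size=2 ** 20):
                    f.write(chunk)
        os.replace(tmp_path, dest_path)  # atomic rename once fully downloaded
    except Exception:
        if os.path.exists(tmp_path):
            os.remove(tmp_path)  # clean up the partial download
        raise
```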
sabaimran
95c7b07c20
Make the fake message longer
2023-07-31 20:55:19 -07:00
sabaimran
8dd5756ce9
Add new director tests for the offline chat model with llama v2
2023-07-31 20:24:52 -07:00
sabaimran
209975e065
Resolve merge conflicts: let Khoj fail if the model tokenizer is not found
2023-07-31 19:12:26 -07:00
sabaimran
2d6c3cd4fa
Misc. quality improvements for Llama V2
...
- Fix download URL: it was mapping to q3_K_M, now fixed to use q4_K_S
- Use a proper Llama Tokenizer for counting tokens for truncation with Llama
- Add additional null checks when running
2023-07-31 19:11:20 -07:00
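Counting tokens with a real Llama tokenizer (rather than, say, a GPT tokenizer) keeps truncation aligned with what the model actually sees. A sketch with transformers; the tokenizer id is an assumption:

```python
from transformers import AutoTokenizer

# Tokenizer id is an assumption; any Llama-compatible tokenizer would do
tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/llama-tokenizer")


def count_tokens(text: str) -> int:
    return len(tokenizer.encode(text))
```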
sabaimran
ca195097d7
Update chat hint message at first run
2023-07-31 17:46:09 -07:00
Debanjum Singh Solanky
ded606c7cb
Fix format of user query during general conversation with Llama 2
2023-07-31 17:21:14 -07:00
Debanjum Singh Solanky
48e5ac0169
Do not drop system message when truncating context to max prompt size
...
Previously the system message was getting dropped when the context
size with chat history exceeded the max prompt size supported by the
chat model.
Now only the previous chat messages are dropped, or the current
message is truncated, but the system message is kept to provide
guidance to the chat model
2023-07-31 17:21:14 -07:00
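A sketch of truncation that protects the system message. The message layout (newest-first history plus a separate system message) and helper names are assumptions, and the sketch omits the commit's fallback of truncating the current message itself:

```python
def truncate_messages(history, system_message, max_prompt_size, count_tokens):
    """Keep the system message; drop the oldest chat history first.

    `history` is ordered newest-first, so the current query survives longest.
    """
    budget = max_prompt_size - count_tokens(system_message)
    kept = []
    for message in history:
        cost = count_tokens(message)
        if cost > budget:
            break  # this message and everything older gets dropped
        kept.append(message)
        budget -= cost
    return kept + [system_message]
```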
Saba
02e216c135
Clarify usage in telemetry.md
2023-07-30 22:37:20 -07:00
Saba
7eabf8ab0f
Add instructions for installing the desktop app and opting out of telemetry
2023-07-30 22:26:52 -07:00
sabaimran
88ef86ad5c
Fix typing issues for mypy (#372)
2023-07-30 19:27:48 -07:00
sabaimran
ca2c942b65
Add typing to compiled_references and inferred_queries
2023-07-30 19:10:30 -07:00
sabaimran
dbb54cfcfa
Merge branch 'master' of github.com:khoj-ai/khoj
2023-07-30 18:52:17 -07:00
sabaimran
3646fd1449
Add a warning to indicate that Khoj is not configured to work with personal data sources
2023-07-30 18:52:10 -07:00
sabaimran
996832dc72
Allow user to chat even if content types aren't configured - use empty references
2023-07-30 18:47:45 -07:00
Debanjum
41d36a5ecc
Merge pull request #371 from felixonmars/patch-1
...
Correct typos in setup.md in the Khoj documentation
2023-07-30 18:37:22 -07:00
Felix Yan
f4fdfe8d8c
Correct typos in setup.md
2023-07-31 03:32:56 +03:00
Debanjum Singh Solanky
28df08b907
Fix configure openai processor for khoj docker
...
Store khoj search models and embeddings in default location in docker
container under /root/.khoj
2023-07-30 02:07:33 -07:00
Debanjum Singh Solanky
dffbfee62b
Fix sample khoj docker config to index test data using new schema
2023-07-30 01:48:18 -07:00
Debanjum Singh Solanky
53810a0ff7
Create khoj config dir if non-existent, before writing to khoj env file
2023-07-30 01:35:36 -07:00
Debanjum Singh Solanky
56394d2879
Update demo video to configure offline chat via the web interface
2023-07-29 19:17:40 -07:00
Debanjum Singh Solanky
b32673db8e
Fix link to Docs website in Khoj readme on Github
2023-07-29 12:50:39 -07:00
Debanjum Singh Solanky
a3d1212e79
Align docs landing page with updated github readme
...
- Screenshots of khoj search, chat
- Put quickstart on landing page
- Put miscellaneous pages under separate section
- Move credits to separate page under miscellaneous
2023-07-29 12:42:36 -07:00
Debanjum Singh Solanky
d7205aed36
Update docs with setup instructions for Offline and Online Chat
2023-07-29 11:18:12 -07:00
Debanjum
0404e33437
Add screenshots, style content in README
2023-07-29 01:22:48 -07:00
sabaimran
f65d157244
Release Khoj version 0.10.0
2023-07-28 19:27:47 -07:00
Debanjum Singh Solanky
f76af869f1
Do not log the gpt4all chat response stream in khoj backend
...
Stream floods stdout and does not provide useful info to user
2023-07-28 19:14:04 -07:00
sabaimran
5ccb01343e
Add Offline chat to Obsidian (#359)
...
* Add support for configuring/using offline chat from within Obsidian
* Fix type checking for search type
* If Github is not configured, /update call should fail
* Fix regenerate tests same as the update ones
* Update help text for offline chat in obsidian
* Update relevant description for Khoj settings in Obsidian
* Simplify configuration logic and use smarter defaults
2023-07-28 18:47:56 -07:00
Debanjum
b3c1507708
Merge pull request #361 from khoj-ai/configure-offline-chat-from-emacs
...
- Configure and use Offline Chat from Emacs:
- Enable, Disable Offline Chat from Emacs
- Use: Enable offline chat with `(setq khoj-chat-offline t)' during khoj setup
- Benefits: Offline chat models are better for privacy but not great at answering questions
2023-07-28 18:06:58 -07:00
sabaimran
9f78db0579
Let Offline chat override OpenAI API settings (#362)
...
* Let Offline chat override OpenAI API settings
* Download the offline model whenever offline chat is enabled
* Add progressbar for download for llamav2 model to track progress
* Change ordering of n due to switch of default processor
* Flip ordering of offline/openai checks when extracting questions from query
2023-07-28 17:26:20 -07:00
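The override amounts to an ordering of checks: when both processors are configured, the offline branch is tested first. A hypothetical helper illustrating the precedence:

```python
from typing import Optional


def select_chat_processor(offline_chat_enabled: bool, openai_api_key: Optional[str]) -> Optional[str]:
    if offline_chat_enabled:
        return "offline"  # checked first, so offline chat overrides OpenAI settings
    if openai_api_key:
        return "openai"
    return None
```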