Commit graph

1550 commits

Author SHA1 Message Date
Debanjum Singh Solanky
cc36b87345 Render the web interface directly within the desktop app as a webview 2023-08-07 13:41:12 -07:00
Debanjum
cc951450fb
Build Khoj Debian package on Ubuntu 20.04 to work with Glibc 2.31 (#424)
Build the Debian package using Ubuntu 20.04 instead of 22.04, as Ubuntu 20.04 ships with glibc 2.31 while Ubuntu 22.04 uses glibc 2.35
2023-08-06 23:21:51 -07:00
Debanjum Singh Solanky
75c16432a4 Loosen dateparser dependency to get python3.11 wheel for regex package
This should reduce the chance of installation errors due to the regex
package being built from source on python3.11

Previously, the regex dependency of dateparser = 1.1.1 didn't have a
wheel for python 3.11. This would trigger building the regex package
from scratch, which would fail for many folks
2023-08-06 22:48:40 -07:00
Debanjum Singh Solanky
8b41eb9f14 Create Pypi package on Ubuntu 20.04 LTS as well 2023-08-06 21:34:38 -07:00
Debanjum Singh Solanky
1cbacf20dc Build Khoj Debian package on Ubuntu 20.04 to work with glibc 2.31 2023-08-06 20:02:42 -07:00
Jason Qin
0bb5c808e5
Update manifest.json (#422)
Mark plugin as desktop-only in Obsidian to stop 'fails to load' messages in Obsidian Mobile
2023-08-06 14:21:32 -07:00
Muftawo
c8ef619090
fixed reference link to landing page (#417)
* Fixed zsh 'no matches found' error
* Fixed home page 404 error
2023-08-04 10:38:14 -07:00
Debanjum
952bd39536
Merge pull request #413 from Muftawo/update-docs
updated the setup file path to fix the 404 error
2023-08-03 10:44:17 -07:00
Muftawo
18e94d9e60 updated the setup file path to fix the 404 error 2023-08-03 13:35:16 +00:00
sabaimran
78012b8111 Avoid null ref issue when setting model state for web UI. Closes #410 2023-08-03 00:39:06 -07:00
sabaimran
0baed742e4
Add checksums to verify the correct model is downloaded as expected (#405)
* Add checksums to verify the correct model is downloaded as expected
- This should help debug issues related to corrupted model download
- If download fails, let the application continue
* If the model is not downloaded as expected, add some indicators in the settings UI
* Add exc_info to error log if/when download fails for llamav2 model
* Simplify checksum checking logic, update key name in model state for web client
2023-08-02 23:26:52 -07:00
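The checksum flow this commit describes can be sketched as below: hash the downloaded model file and compare against a pinned value. This is a minimal sketch under assumptions; the placeholder hash, function name, and choice of SHA-256 are illustrative, not necessarily what Khoj ships.

```python
import hashlib
from pathlib import Path

# Placeholder value; the real checksum would be pinned per model release
EXPECTED_SHA256 = "0" * 64

def model_checksum_matches(model_path: Path, expected: str = EXPECTED_SHA256) -> bool:
    """Compare the SHA-256 of the downloaded model file against the pinned value."""
    digest = hashlib.sha256()
    with open(model_path, "rb") as f:
        # Hash in chunks to avoid loading a multi-GB model file into memory
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected
```

Per the commit notes, a mismatch is surfaced as an indicator in the settings UI rather than a hard failure, so the application can continue running.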
sabaimran
6aa998e047 Add note about system requirements for Linux - debian installation. Closes #378. 2023-08-02 10:57:36 -07:00
sabaimran
d00f51b531 Fix the minimum version of the transformers library required to address #404 2023-08-02 09:10:14 -07:00
Debanjum Singh Solanky
e6e3acdbe4 Release Khoj version 0.10.1 2023-08-01 23:55:13 -07:00
Debanjum Singh Solanky
7c1d70aa17 Bump GPT4All response generation batch size to 512 from 256
A batch size of 512 performs ~20% better on an XPS with no GPU and 16GB
RAM. Seems worth the tradeoff for now
2023-08-01 23:34:02 -07:00
Debanjum Singh Solanky
e42fd8ae91 Make desktop app workflow run apt update before installing Linux packages
- See if this fixes the issue with the workflows failing to install
system packages

- Make the build desktop app workflow run on changes to the workflow file as well
2023-08-01 23:15:13 -07:00
Debanjum
16c6bfce8e
Improve Quality and Reliability of Offline Chat (#393)
# Incoming
## Major
### Fix Prompt Size Exceeded Issue
- Fix issues related to prompt size, Closes #386. Use the correct tokenizer to calculate whether the input needs to be truncated or not.

### Improve Llama 2 Model Download
- Use the correct download link for LlamaV2 -- should have been using the small model, but was using the medium
- Add better downloading logic to retry download if it failed, Closes #379 

### Fix Segmentation Fault due to Race
- Add a lock around generating chat responses from the offline model to avoid segmentation faults. Closes #367.
- Add a loading symbol to the web chat UI when the model is thinking. Closes #392

### Improve Chat Response Latency
- Improve performance of offline chat by increasing batch size (via `n_batch`) to automatically engage more cores/GPU, using smaller model and fixing prompt vs response token generation numbers. Closes #363

### Fix Fake Dialogue Continuation
- Fix formatting of user query with offline chat, this was contributing to #398
- Stop Llama 2 from Creating Fake Dialogue Continuations. Closes #398

## Minor
- Improve default message for Chat window on web when it's not configured. Include hint to use offline chat.
- Add null check in `perform_chat_checks` method
- Add offline chat director unit tests

## Performance Analysis (Time to First Token)
| Query | v0.10.0 | this branch |
|-|-|-|
| Query 1 | 52s | 28s |
| Query 2 | 33s | 42s |
| Query 3 | 67s | 38s |
2023-08-01 22:07:27 -07:00
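The table above reports time to first token. A minimal sketch of how such a measurement could be wrapped around a streaming response follows; `token_stream` and the log wording are assumptions for illustration, not Khoj's actual code.

```python
import logging
import time

logger = logging.getLogger(__name__)

def stream_with_ttft(token_stream):
    """Relay a streamed chat response, logging the time to first token."""
    start = time.perf_counter()
    first_token_seen = False
    for token in token_stream:
        if not first_token_seen:
            logger.info(f"Time to first token: {time.perf_counter() - start:.1f}s")
            first_token_seen = True
        yield token
```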
Debanjum Singh Solanky
44292afff2 Put offline model response generation behind the chat lock as well
Not just the chat response streaming
2023-08-01 21:53:52 -07:00
Debanjum Singh Solanky
1812473d27 Extract new schema version for each migration script into a variable
This should improve readability and indicate which schema version the
migration script updates the config to once applied
2023-08-01 21:41:08 -07:00
Debanjum Singh Solanky
b9937549aa Simplify migration script management. Make them use a static version
- Only update the config when the script's run conditions are satisfied
- Use a static schema version to simplify reasoning about run conditions
2023-08-01 21:28:20 -07:00
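The static-version pattern from the two migration commits above could look roughly like this. A sketch under assumptions: the function name, config shape, and version value are illustrative, and `packaging` is used here for robust version comparison.

```python
from packaging import version

# The schema version this script migrates the config *to*. Keeping it static
# makes the run condition easy to reason about.
MIGRATION_VERSION = "0.10.1"  # illustrative value

def migrate_config(raw_config: dict) -> dict:
    """Apply this migration only when the config schema predates MIGRATION_VERSION."""
    current = raw_config.get("version", "0.0.0")
    if version.parse(current) >= version.parse(MIGRATION_VERSION):
        return raw_config  # run condition not satisfied; leave config untouched
    # ... transform config fields to the new schema here ...
    raw_config["version"] = MIGRATION_VERSION
    return raw_config
```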
Debanjum Singh Solanky
185a1fbed7 Remove old chat setup timer. It was mislabelled and is irrelevant since streaming was added 2023-08-01 20:52:00 -07:00
Debanjum Singh Solanky
95acb1583d Update local Chat Actor and Director tests expected to fail 2023-08-01 20:52:00 -07:00
Debanjum Singh Solanky
c2b7a14ed5 Fix context, response size for Llama 2 to stay within max token limits
Create regression test to ensure it does not throw the 'prompt size
exceeded context window' error
2023-08-01 20:52:00 -07:00
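The invariant this commit enforces is that prompt tokens plus response tokens fit within the model's context window. The numbers below are illustrative only; Khoj's actual limits may differ.

```python
n_ctx = 2048                # Llama 2's context window, in tokens
max_response_tokens = 512   # budget reserved for the generated reply
# Tokens left for the system prompt, chat history, and current query
max_prompt_size = n_ctx - max_response_tokens  # = 1536

assert max_prompt_size + max_response_tokens <= n_ctx
```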
Debanjum Singh Solanky
6e4050fa81 Make Llama 2 stop generating response on hitting specified stop words
It would previously sometimes start generating fake dialogue with its
internal prompt patterns of <s>[INST] in responses.

This is a jarring experience. Stop generating the response when <s> is hit

Resolves #398
2023-08-01 20:52:00 -07:00
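A backend-agnostic sketch of the stop-word check follows. The actual fix may instead pass stop sequences straight to the model backend; the token stream and function shape here are assumptions.

```python
STOP_WORDS = ["<s>"]  # Llama 2's internal prompt marker; seeing it signals fake dialogue

def generate_until_stop(token_stream, stop_words=STOP_WORDS) -> str:
    """Accumulate streamed tokens, halting at the first stop word."""
    response = ""
    for token in token_stream:
        response += token
        if any(stop in response for stop in stop_words):
            # Trim from the earliest stop word onward and stop generating
            cut = min(response.find(stop) for stop in stop_words if stop in response)
            return response[:cut]
    return response
```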
Debanjum Singh Solanky
aa6846395d Fix offline model migration script to run for version < 0.10.1
- Use same batch_size in extract question actor as the chat actor
- Log the final location the chat model is to be stored in, instead of
  its temp filename while it is being downloaded
2023-08-01 20:51:53 -07:00
Ikko Eltociear Ashimine
49abb9df9c
Fix typo in orgnode.py (#397)
Fix spelling of Ouput in org parser property drawer comment to Output.
2023-08-01 19:54:57 -07:00
sabaimran
d8fa967b43 Update chat actor unit tests for greater accuracy and benchmarking 2023-08-01 12:24:43 -07:00
sabaimran
f409e16137 Update some of the extract question prompts for llamav2 2023-08-01 12:23:36 -07:00
sabaimran
b11b00a9ff Add log line for time to first response 2023-08-01 10:57:38 -07:00
sabaimran
778df6be71 Add a logline when the offline model migration script runs 2023-08-01 09:27:42 -07:00
sabaimran
48363ec861 Add additional check for chat_messages length in UT 2023-08-01 09:25:52 -07:00
sabaimran
3a5d93d673 Add migration script for getting the new offline model 2023-08-01 09:25:05 -07:00
sabaimran
90efc2ea7a Update comments and add explanations 2023-08-01 09:24:03 -07:00
sabaimran
f7e03f6d63 Switch spinner snake case -> camel case 2023-08-01 08:52:25 -07:00
sabaimran
1c52a6993f Add a lock around chat operations to prevent the offline model from being bombarded with concurrent requests and hogging compute resources
- This also solves #367
2023-08-01 00:23:17 -07:00
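A minimal sketch of the locking approach, assuming a single shared offline model instance; `model.generate` is a stand-in for the real generation call.

```python
import threading

# One lock per offline model: concurrent calls into the underlying
# llama.cpp model can race and segfault (#367)
chat_lock = threading.Lock()

def chat_offline(model, prompt: str) -> str:
    """Serialize generation so only one request drives the model at a time."""
    with chat_lock:  # other chat requests block here until this one finishes
        return model.generate(prompt)  # stand-in for the actual API
```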
sabaimran
6c3074061b Disable the input bar when chat response is in flight 2023-08-01 00:21:39 -07:00
sabaimran
c14cbe926a Add a loading symbol to web chat. Closes #392 2023-07-31 23:35:48 -07:00
sabaimran
8054bdc896 Use n_batch parameter to increase resource consumption on host machine (and implicitly engage GPU) 2023-07-31 23:25:08 -07:00
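In the gpt4all Python bindings of that era, batch size was exposed as an `n_batch` argument to `generate`. A sketch under assumptions: the model filename is a guess at the bundled Llama 2 build, and the exact signature may differ across binding versions.

```python
from gpt4all import GPT4All

model = GPT4All("llama-2-7b-chat.ggmlv3.q4_K_S.bin")  # assumed model filename

# A larger n_batch feeds the prompt to llama.cpp in bigger chunks, engaging
# more CPU cores (or the GPU); a later commit bumps this to 512 for CPU-only hosts
response = model.generate("What is Khoj?", max_tokens=256, n_batch=256)
```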
sabaimran
e55e9a7b67 Fix unit tests and truncation logic 2023-07-31 21:37:59 -07:00
sabaimran
2335f11b00 Add better error handling for download processes in case of failure 2023-07-31 21:07:38 -07:00
sabaimran
95c7b07c20 Make the fake message longer 2023-07-31 20:55:19 -07:00
sabaimran
8dd5756ce9 Add new director tests for the offline chat model with llama v2 2023-07-31 20:24:52 -07:00
sabaimran
209975e065 Resolve merge conflicts: let Khoj fail if the model tokenizer is not found 2023-07-31 19:12:26 -07:00
sabaimran
2d6c3cd4fa Misc. quality improvements for Llama V2
- Fix download URL: it was mapping to q3_K_M; fixed to use q4_K_S
- Use a proper Llama Tokenizer for counting tokens for truncation with Llama
- Add additional null checks when running
2023-07-31 19:11:20 -07:00
sabaimran
ca195097d7 Update chat hint message at first run 2023-07-31 17:46:09 -07:00
Debanjum Singh Solanky
ded606c7cb Fix format of user query during general conversation with Llama 2 2023-07-31 17:21:14 -07:00
Debanjum Singh Solanky
48e5ac0169 Do not drop system message when truncating context to max prompt size
Previously the system message was getting dropped when the context
size with chat history exceeded the max prompt size supported by the
chat model

Now only the previous chat messages are dropped, or the current
message is truncated, but the system message is kept to provide
guidance to the chat model
2023-07-31 17:21:14 -07:00
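A sketch of truncation that never drops the system message, counting tokens with a Llama tokenizer as these commits describe. The tokenizer id and function shape are assumptions; an ungated tokenizer mirror is used for illustration.

```python
from transformers import AutoTokenizer

# Ungated Llama tokenizer mirror, used here only for illustration
tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/llama-tokenizer")

def truncate_messages(system_msg: str, history: list[str], query: str,
                      max_prompt_size: int) -> list[str]:
    """Drop oldest chat history first; never drop the system message."""
    def n_tokens(text: str) -> int:
        return len(tokenizer.encode(text))

    kept = list(history)
    # Drop the oldest history messages until everything fits in the prompt budget
    while kept and n_tokens(system_msg) + sum(map(n_tokens, kept)) + n_tokens(query) > max_prompt_size:
        kept.pop(0)
    # If the system message + query alone still overflow, truncate the query instead
    if n_tokens(system_msg) + n_tokens(query) > max_prompt_size:
        budget = max(0, max_prompt_size - n_tokens(system_msg))
        query = tokenizer.decode(tokenizer.encode(query)[:budget])
    return [system_msg, *kept, query]
```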
Saba
02e216c135 Clarify usage in telemetry.md 2023-07-30 22:37:20 -07:00
Saba
7eabf8ab0f Add instructions for installing the desktop app and opting out of telemetry 2023-07-30 22:26:52 -07:00
sabaimran
88ef86ad5c
Fix typing issues for mypy (#372) 2023-07-30 19:27:48 -07:00