Commit graph

3471 commits

Debanjum Singh Solanky
1842017393 Skip trying to index deleted files, folders from Desktop app
Previously the app would crash on startup if the desktop app was told to
index a file that had since been deleted
2024-04-25 15:23:05 +05:30
Debanjum
17a06f152c
Support Llama 3 and Improve Offline Chat Actors (#724)
- Add support for Llama 3 in Khoj offline mode
- Make chat actors generate valid json with more local models
- Fix offline chat actor tests
2024-04-25 14:00:56 +05:30
Debanjum
220e5516ab
Make Search Models More Configurable. Upgrade Default Cross-Encoder (#722)
- Upgrade default cross-encoder to mixedbread ai's mxbai-rerank-xsmall
- Support more embedding models by making query, docs encoding configurable
2024-04-25 13:55:49 +05:30
Debanjum Singh Solanky
cf08eaf786 Add comments explaining each field in the search model config in DB 2024-04-25 13:54:13 +05:30
Debanjum
4ee5ac7c20
Fix Chat UI and Indexing on Desktop App (#723)
- Make valid file extension checking case insensitive on Desktop app
- Skip indexing non-existent folders on Desktop app
- Pass auth headers to fix lazy load of chat messages on Desktop app
- Set chat-message height to height of content in web, desktop
2024-04-24 18:49:03 +05:30
Debanjum Singh Solanky
89ef23de50 Upgrade gunicorn and make it only a production dependency 2024-04-24 11:28:55 +05:30
Debanjum Singh Solanky
799efb5974 Create DB migration to add new fields and change default cross-encoder 2024-04-24 09:50:34 +05:30
Debanjum Singh Solanky
ec41482324 Upgrade default cross-encoder to mixedbread ai's mxbai-rerank-xsmall
The previous cross-encoder model was a few years old; newer models should
have improved in quality. Model size increases by 50% compared to the
previous model for better performance, at least on benchmarks
2024-04-24 09:50:09 +05:30
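
For reference, a minimal sketch of loading and using a cross-encoder reranker of this family with sentence-transformers; the model id, query, and passages below are assumptions for illustration, not Khoj's code:

```python
# Illustrative cross-encoder reranking; check Hugging Face for the exact model id.
from sentence_transformers import CrossEncoder

reranker = CrossEncoder("mixedbread-ai/mxbai-rerank-xsmall-v1")  # assumed model id

query = "How do I configure search models in Khoj?"
passages = [
    "Khoj search models are configured via the admin panel.",
    "Khoj supports chat with offline models.",
]

# Score each (query, passage) pair; higher scores mean more relevant.
scores = reranker.predict([(query, passage) for passage in passages])
reranked = [passage for _, passage in sorted(zip(scores, passages), reverse=True)]
```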
Debanjum Singh Solanky
7eaf9367fe Support more embedding models by making query, docs encoding configurable
Most newer, better embedding models add a query or docs prefix when
encoding. Previously Khoj admins couldn't configure these, so it
wasn't possible to use these newer models.

This change allows configuring the kwargs passed to the query, docs
encoders by updating the search config in the database.
2024-04-24 09:49:17 +05:30
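
A minimal sketch of the idea, assuming a sentence-transformers bi-encoder; the model id, prompt string, and kwarg names are illustrative, not Khoj's actual search config schema:

```python
# Illustrative only: pass separately configured kwargs to query vs docs encoding.
# The `prompt` argument to encode() requires a recent sentence-transformers release.
from sentence_transformers import SentenceTransformer

bi_encoder = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1")  # assumed model id

# Newer models often expect a prefix/prompt on queries but not on documents.
query_encode_kwargs = {"prompt": "Represent this sentence for searching relevant passages: "}
docs_encode_kwargs = {}

query_embedding = bi_encoder.encode("wifi password note", **query_encode_kwargs)
doc_embeddings = bi_encoder.encode(["The wifi password is hunter2."], **docs_encode_kwargs)
```

Storing these kwargs in the search model config lets admins adopt new embedding models without code changes.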
Debanjum Singh Solanky
f2db8d7d99 Fix offline chat actor tests
Do not check for the original q in extracted questions, since this was
removed in a previous commit
2024-04-24 09:40:00 +05:30
Debanjum Singh Solanky
4f7237b158 Make chat actors generate valid json with more local models
Improve the tool, online search, webpage links, and docs search chat actor
prompts. Ensure they work with hermes-2-pro and llama-3.

Be more specific about generating JSON and not saying anything else.
2024-04-24 09:40:00 +05:30
Debanjum Singh Solanky
a2e4e4bede Add support for Llama 3 in Khoj offline mode
- Improve extract question prompts to explicitly request JSON list
- Use llama-3 chat format if HF repo_id mentions llama-3. The
  llama-cpp-python logic for detecting when to use llama-3 chat format
  isn't robust enough currently
2024-04-24 09:40:00 +05:30
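
A rough sketch of the repo_id based chat-format selection described above, using llama-cpp-python; the repo id and filename pattern are illustrative, not Khoj's exact code:

```python
# Illustrative: force the llama-3 chat format when the HF repo id mentions llama-3,
# instead of relying on llama-cpp-python's auto-detection.
from llama_cpp import Llama

repo_id = "bartowski/Meta-Llama-3-8B-Instruct-GGUF"  # example repo id
chat_format = "llama-3" if "llama-3" in repo_id.lower() else None  # None -> auto-detect

model = Llama.from_pretrained(
    repo_id=repo_id,
    filename="*Q4_K_M.gguf",  # quantization file pattern, illustrative
    chat_format=chat_format,
    n_ctx=0,  # 0 = use the model's trained context window
)
```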
Debanjum Singh Solanky
8e77b3dc82 Fix infer_max_tokens func when configured_max_tokens is set to None 2024-04-24 09:36:29 +05:30
Debanjum Singh Solanky
8196ab62f9 Make valid file extension checking case insensitive on Desktop app 2024-04-24 09:35:20 +05:30
Debanjum Singh Solanky
5def14e3bb Skip indexing non-existent folders on Desktop app 2024-04-24 09:35:20 +05:30
Debanjum Singh Solanky
cd05f262a6 Pass auth headers to fix lazy load of chat messages on Desktop app 2024-04-24 09:35:20 +05:30
Debanjum Singh Solanky
4d5d3e6433 Set chat-message height to height of content in web, desktop
In some cases, especially with image generation requests, this was
causing the chat messages to overlap in the chat UI
2024-04-24 09:35:20 +05:30
sabaimran
60658a8037 Get rid of enable flag for the offline chat processor config
- By default, assume that offline chat is enabled if an offline chat model option is configured
2024-04-23 23:08:29 +05:30
sabaimran
ac474fce38 Ensure that the tokenizer and max prompt size are used in the wrapper method 2024-04-23 21:22:23 +05:30
Olatoyan George
ad59180fb8
Added indication in the desktop UI for back-end connectivity (#711)
* Changed the styling of the link that takes a user to the settings page into a button
* Added an indicator that shows if a user is connected to the server or not
* Made a class name more descriptive and also made the text in the first run message more intuitive
* Changed the command to install dependencies in the README.md
* Changed the class name of the first run message text to be more descriptive
* Added icons in the desktop UI that show if a file is synced successfully or not
* Made the link class name in the homepage more descriptive
* Fixed the hover issue on the status box in the chat header pane
* Fixed the hovering issue on the status box on macOS
2024-04-23 16:43:48 +05:30
Debanjum
419b044ac5
Use set, inferred max token limits wherever chat models are used (#713)
- User configured max tokens limits weren't being passed to
  `send_message_to_model_wrapper`
- One of the load offline model code paths wasn't reachable. Remove it
  to simplify code
- When max prompt size isn't set, infer max tokens based on free VRAM
  on the machine
- Use min of app configured max tokens, VRAM-based max tokens, and
  model context window
2024-04-23 16:42:35 +05:30
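
A simplified sketch of the token-limit rule described in the PR above; the function signature and numbers are illustrative, not the actual `infer_max_tokens` implementation:

```python
def infer_max_tokens(configured_max_tokens, vram_based_max_tokens, model_context_window):
    """Pick the smallest applicable limit; ignore the user setting when it is None."""
    candidates = [vram_based_max_tokens, model_context_window]
    if configured_max_tokens is not None:
        candidates.append(configured_max_tokens)
    return min(candidates)

# e.g. infer_max_tokens(None, 8192, 4096) -> 4096
# e.g. infer_max_tokens(2000, 8192, 4096) -> 2000
```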
AjaySDwivedi1
abf6f963ea
Replaced reinitialize and save all button to a sync button in config.… (#701)
Replaced the reinitialize and save all buttons with a sync button in config
2024-04-23 16:42:11 +05:30
Debanjum Singh Solanky
c39c4e4ec4 Improve prompt for online search query generation chat actor
- Allow searching GitHub, PyPI for information about Khoj
- Enable creating multiple search queries by rewording the prompt
2024-04-22 01:32:11 +05:30
Debanjum Singh Solanky
175169c156 Use set, inferred max token limits wherever chat models are used
- User configured max tokens limits weren't being passed to
  `send_message_to_model_wrapper`
- One of the load offline model code paths wasn't reachable. Remove it
  to simplify code
- When max prompt size isn't set, infer max tokens based on free VRAM
  on the machine
- Use min of app configured max tokens, VRAM-based max tokens, and
  model context window
2024-04-20 11:23:28 +05:30
Debanjum Singh Solanky
002cd14a65 Only let agent use online search tool if connected to it 2024-04-20 11:19:48 +05:30
Debanjum Singh Solanky
75c9ebbc54 Only show uvicorn debug logs at higher verbosity levels
Don't automatically show the uvicorn logs when in_debug_mode; only show
them at verbosity >= 2, i.e. when khoj is started with the -vv flag
2024-04-20 11:18:01 +05:30
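
A small sketch of how such a verbosity gate might translate to uvicorn's log level; the threshold mapping, port, and function name are assumptions for illustration:

```python
# Illustrative: only surface uvicorn's own debug logs when khoj runs with -vv or more.
import uvicorn

def start_server(app, verbosity: int = 0):
    log_level = "debug" if verbosity >= 2 else "warning"
    uvicorn.run(app, host="127.0.0.1", port=42110, log_level=log_level)
```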
sabaimran
c6d668bacf Bump gunicorn workers per server up to 2 2024-04-18 11:32:51 +05:30
sabaimran
c9a8abafa4
Merge pull request #710 from khoj-ai/add-run-with-process-lock-and-fix-edge-cases
Extract run with process lock logic into func. Use it to re-index content
2024-04-17 01:29:02 -07:00
sabaimran
6de4a4873a Fix image-related client unit test 2024-04-17 13:28:48 +05:30
sabaimran
3132430737 Add tests for the db lock 2024-04-17 13:22:41 +05:30
sabaimran
d11354f9c8 Remove additional references to image content config 2024-04-17 13:00:50 +05:30
sabaimran
105dbf49e4 Fix max_duration_in_seconds for the update_embeddings job 2024-04-17 13:00:18 +05:30
Debanjum Singh Solanky
8e0bae894d Extract run with process lock logic into func. Use for content reindexing 2024-04-17 12:31:19 +05:30
Debanjum Singh Solanky
e9f608174b Fix access to Khoj admin panel from non HTTPS custom domains
To access the Khoj admin panel from a non-HTTPS custom domain, the
`KHOJ_NO_SSL` and `KHOJ_DOMAIN` env vars need to be explicitly set.

See the updated setup docs for details.

Resolves #662
2024-04-17 03:20:05 +05:30
sabaimran
46210695b6 Pin version of huggingface hub explicitly to ensure relevant constants are present. Closes #708 2024-04-17 01:09:36 +05:30
sabaimran
b0059654c9 Do not raise an import error if the resend module is not available 2024-04-17 01:00:22 +05:30
sabaimran
f04ead7c37 Remove setting up log line for configuring image search 2024-04-17 00:45:39 +05:30
sabaimran
0208688801 Increase factor for n_ctx reduction to 2e6 2024-04-17 00:41:36 +05:30
Debanjum Singh Solanky
1f2ffce85b Copy chat message with its markdown formatting in Web, Desktop apps 2024-04-16 22:10:34 +05:30
sabaimran
91c8b137f1
Add a database lock for jobs that shouldn't be run by multiple workers (#706)
* Add a database lock for jobs that shouldn't be run by multiple workers

* Import relevant functions from utils.helpers
2024-04-16 21:29:27 +05:30
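
A minimal sketch of the database-lock pattern described in this PR, using the Django ORM; the `ProcessLock` model, its fields, and the timeout handling are hypothetical, not Khoj's actual schema, and the snippet assumes it lives inside a configured Django app:

```python
# Hypothetical sketch: run a job under a database-backed lock so that only one
# worker (e.g. one gunicorn worker) executes it at a time.
from datetime import timedelta

from django.db import models, transaction
from django.utils import timezone

class ProcessLock(models.Model):
    name = models.CharField(max_length=255, unique=True)
    in_use = models.BooleanField(default=False)
    started_at = models.DateTimeField(auto_now=True)

def run_with_process_lock(job, lock_name, max_duration_in_seconds=3600):
    with transaction.atomic():
        # SELECT ... FOR UPDATE stops other workers claiming this row concurrently.
        lock, _ = ProcessLock.objects.select_for_update().get_or_create(name=lock_name)
        lock_is_stale = timezone.now() - lock.started_at > timedelta(seconds=max_duration_in_seconds)
        if lock.in_use and not lock_is_stale:
            return  # Another worker holds a fresh lock; skip this run.
        lock.in_use = True
        lock.save()  # auto_now refreshes started_at when the lock is claimed
    try:
        job()
    finally:
        ProcessLock.objects.filter(name=lock_name).update(in_use=False)
```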
sabaimran
adb2e8cc5f Check if n is populated before making a comparison 2024-04-16 02:05:58 +05:30
Debanjum Singh Solanky
6707ccc463 Check before updating "chat" key in meta_log in chat history API endpoint 2024-04-15 21:06:47 +05:30
Debanjum Singh Solanky
4e7812fe55 Use Django management cmd to update inline images in DB to/from WebP/PNG
This gives Khoj server admins more control over migrating their S3
images from PNG to WebP format
2024-04-15 20:19:49 +05:30
Debanjum Singh Solanky
7fab8d6586 Only use chat messages count in history API endpoint when set by client 2024-04-15 19:12:57 +05:30
Debanjum
6b3ef61dd2
Improve Chat Page Load Perf, Offline Chat Perf and Miscellaneous Fixes (#703)
### Store Generated Images as WebP 
- 78bac4ae Add migration script to convert PNG to WebP references in database
- c6e84436 Update clients to support rendering webp images inline
- d21f22ff Store Khoj generated images as webp instead of png for faster loading

### Lazy Fetch Chat Messages to Improve Time, Data to First Render
This is especially helpful for long conversations with lots of images
- 128829c4 Render latest msgs on chat session load. Fetch, render rest as they near viewport
- 9e558577 Support getting latest N chat messages via chat history API

### Intelligently set Context Window of Offline Chat to Improve Performance
- 4977b551 Use offline chat prompt config to set context window of loaded chat model

### Fixes
- 148923c1 Fix to raise error on hitting rate limit during Github indexing
- b8bc6bee Always remove loading animation on Desktop app if it can't log in to server
- 38250705 Fix `get_user_photo` to only return photo, not user name from DB

### Miscellaneous Improvements
- 689202e0 Update recommended CMAKE flag to enable using CUDA on linux in Docs
- b820daf3 Makes logs less noisy
2024-04-15 18:34:29 +05:30
Debanjum Singh Solanky
a352940dfd Use Django management command to update images URL in DB to WebP
This gives Khoj server admins more control over migrating their S3
images from PNG to WebP format
2024-04-15 17:53:41 +05:30
Debanjum Singh Solanky
7d8e8eb0cf Use Enum to type text-to-image intent of Khoj chat response 2024-04-15 17:53:40 +05:30
Debanjum Singh Solanky
128829c477 Show latest msgs on chat session load. Fetch rest as they near viewport
- Reduces time to first render when loading long chat sessions
- Limits size of first page load, when loading long chat sessions

These performance improvements are most noticeable for large chat
sessions with lots of images generated by Khoj

Updated web and desktop app to support these changes for now
2024-04-15 16:10:56 +05:30
Debanjum Singh Solanky
9e5585776c Support getting latest N chat messages via chat history API
Return the latest N messages if N > 0, else return all messages except
the latest N from the conversation
2024-04-15 15:32:32 +05:30
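
One plausible reading of that rule in plain list slicing; a sketch, not the actual chat history endpoint code:

```python
def latest_messages(messages, n):
    # n > 0: the latest n messages; n < 0: all but the latest |n|; n == 0: everything.
    if n > 0:
        return messages[-n:]
    if n < 0:
        return messages[:n]
    return messages
```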
Debanjum Singh Solanky
e5ff85f6fb Start fetching khoj css before icons to reduce time with no styling
This should reduce frequency of page load jitter when icons are loaded
before style is applied
2024-04-15 15:32:32 +05:30