Commit graph

2645 commits

Author SHA1 Message Date
Debanjum Singh Solanky
a1e5195c8b Save separate user message time from Khoj response time in chat logs
Previously the user message time was stored as the same value as the
Khoj response time in conversation logs.
2024-05-01 08:28:59 +05:30
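
A minimal sketch of what storing the user message time separately can look like in a conversation log entry; the field names below are illustrative, not Khoj's actual schema.

    from datetime import datetime, timezone

    def make_chat_log_entry(user_message: str, khoj_response: str,
                            user_message_time: datetime) -> dict:
        # Record the user's message time and Khoj's response time as two
        # separate timestamps instead of reusing one value for both.
        return {
            "by": "you",
            "message": user_message,
            "created": user_message_time.isoformat(),
            "response": {
                "by": "khoj",
                "message": khoj_response,
                "created": datetime.now(timezone.utc).isoformat(),
            },
        }
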
Debanjum Singh Solanky
5133b6e73b Minor improvements to styling the config page 2024-05-01 08:28:59 +05:30
Debanjum Singh Solanky
648f1a5c71 Suffix chat response element vars with "El" in chat.html of web, desktop apps 2024-05-01 08:28:59 +05:30
Debanjum Singh Solanky
98d0ffecf1 Add section in settings page to view, delete your scheduled tasks 2024-05-01 08:28:59 +05:30
Debanjum Singh Solanky
423d61796d Add API endpoints to get and delete user scheduled tasks 2024-05-01 08:28:59 +05:30
Debanjum Singh Solanky
af0972c539 Make scheduled jobs persistent and work in multiple worker setups
- Store scheduled job state in Postgres so job schedules persist
  across app restarts
- Use Process Locks to only allow single worker to process a given job
  type. This prevents duplicating job runs across all workers
2024-05-01 08:28:59 +05:30
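
A rough sketch of the approach described in the commit above, assuming APScheduler with a Postgres-backed job store; the lock helpers are illustrative stand-ins for a database-backed ProcessLock, not Khoj's actual implementation.

    from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore
    from apscheduler.schedulers.background import BackgroundScheduler

    # Keep job state in Postgres so schedules persist across app restarts
    scheduler = BackgroundScheduler(
        jobstores={"default": SQLAlchemyJobStore(url="postgresql://localhost/khoj")}
    )

    def try_acquire_lock(operation: str) -> bool:
        """Stand-in for atomically acquiring a ProcessLock row in the database."""
        return True

    def release_lock(operation: str) -> None:
        """Stand-in for releasing the ProcessLock row."""

    def run_with_lock(job_fn):
        """Let only the worker holding the lock process a given job type."""
        if not try_acquire_lock("SCHEDULED_JOB"):
            return
        try:
            job_fn()
        finally:
            release_lock("SCHEDULED_JOB")
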
Debanjum Singh Solanky
fcf878e1f3 Add new operation Scheduled Job to Operation enum of ProcessLock 2024-05-01 08:28:59 +05:30
Debanjum Singh Solanky
c28d7d3414 Add basic chat actor test to infer scheduled queries 2024-05-01 08:28:59 +05:30
Debanjum Singh Solanky
c11742f443 Add chat actor to schedule run query for user at specified times
- Detect when user intends to schedule a task, aka reminder
  Add new output mode: reminder. Add example of selecting the reminder
  output mode
- Extract schedule time (as cron timestring) and inferred query to run
  from user message
- Use APScheduler to call chat with inferred query at scheduled time
- Handle reminder scheduling from both websocket and http chat requests

- Support constructing scheduled task using chat history as context
  Pass chat history to scheduled query generator for improved context
  for scheduled task generation
2024-05-01 08:28:59 +05:30
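
A minimal sketch of the scheduling step the commit above describes: the chat actor infers a cron timestring and a query, and APScheduler runs the query at those times. Function and variable names here are assumptions, not Khoj's actual identifiers.

    from apscheduler.schedulers.background import BackgroundScheduler
    from apscheduler.triggers.cron import CronTrigger

    scheduler = BackgroundScheduler()
    scheduler.start()

    def run_inferred_query(user_id: str, query: str):
        """Placeholder for calling Khoj chat with the inferred query."""
        print(f"Running scheduled query for {user_id}: {query}")

    # e.g. user asked: "Remind me every weekday at 9am to review my notes"
    crontime = "0 9 * * 1-5"          # cron timestring inferred by the chat actor
    inferred_query = "Summarize my notes from yesterday"

    scheduler.add_job(
        run_inferred_query,
        trigger=CronTrigger.from_crontab(crontime),
        args=["user-123", inferred_query],
    )
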
Debanjum Singh Solanky
9e068fad4f Handle null ref when refreshing conversation from db in websocket chat 2024-04-30 14:19:07 +05:30
sabaimran
37879a7850 Release Khoj version 1.11.2 2024-04-30 13:31:06 +05:30
sabaimran
93b41170d1 Refresh the conversation log from the db before addressing the next query 2024-04-30 13:27:51 +05:30
Debanjum Singh Solanky
f1545d2b2f Add, fix help link, improve title style in web ui config pages
- Align title text with icon better in all config cards
- Fix help link to github setup docs
- Fix help link to notion setup docs
2024-04-30 05:50:08 +05:30
Debanjum Singh Solanky
e6da0f9a8c Fix response type of delete client tokens API endpoint
Previously the delete API response failed after deleting the token,
requiring a page refresh to see that the API token was actually gone.

This was happening because the response type of the delete token API
endpoint wasn't a string, so it failed FastAPI's response validation
checks.
2024-04-30 02:46:52 +05:30
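
A minimal sketch of the failure mode and fix, using assumed endpoint and token names: the endpoint declares a string response, so returning a non-string object fails FastAPI's response validation even though the token was already deleted.

    from fastapi import FastAPI

    app = FastAPI()

    FAKE_TOKEN_DB = {"abc123": "my-token"}

    @app.delete("/api/config/token/{token_id}")
    async def delete_token(token_id: str) -> str:
        FAKE_TOKEN_DB.pop(token_id, None)
        # Fix: return a value that matches the declared response type,
        # instead of an object FastAPI cannot validate as a string.
        return "deleted"
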
sabaimran
0f4c3518d3 Allow session cookies to be stored with a lax policy for some localhost scenarios 2024-04-29 15:48:45 +05:30
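
A hedged sketch of a lax session cookie policy with Starlette's SessionMiddleware, which FastAPI apps commonly use; the secret key and flags are placeholders and may not match Khoj's actual middleware setup.

    from fastapi import FastAPI
    from starlette.middleware.sessions import SessionMiddleware

    app = FastAPI()
    app.add_middleware(
        SessionMiddleware,
        secret_key="change-me",
        same_site="lax",    # allow cookies on top-level navigations from localhost
        https_only=False,   # don't require HTTPS for local development
    )
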
sabaimran
5beedc9734 Use Secure proxy ssl header only if no https 2024-04-29 15:33:21 +05:30
sabaimran
408f4780ce Add and update documentation for setting up khoj with an openai proxy server or offline llm 2024-04-27 20:16:32 +05:30
sabaimran
12258f02d7 Release Khoj version 1.11.1 2024-04-27 18:42:24 +05:30
sabaimran
2047b0c973
Support customization of the OpenAI base url in admin settings (#725)
- Allow self-hosted users to customize their OpenAI base url. This allows you to easily use a proxy service and extend support for other models.
- This also includes a migration that associates any existing openai chat model configuration with an openai processor configuration
- Make changing model a paid/subscriber feature
- Removes usage of langchain's OpenAI wrapper for better control over parsing input/output
2024-04-27 18:24:35 +05:30
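
An illustrative use of a custom OpenAI-compatible base URL, as this PR makes configurable from admin settings; the proxy URL, API key, and model name below are placeholders.

    from openai import OpenAI

    client = OpenAI(
        api_key="sk-...",
        base_url="https://my-openai-proxy.example.com/v1",  # admin-configured base url
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello from behind a proxy"}],
    )
    print(response.choices[0].message.content)
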
sabaimran
49834e3b00 Add a hero image for the og:image meta tag 2024-04-27 17:07:21 +05:30
sabaimran
138f12f957 Fix indentation and revert first run message link styling to all links 2024-04-27 09:56:58 +05:30
Debanjum Singh Solanky
4395ed8065 Improve extract_questions func. Set message role to user, not assistant
Previous behavior of passing the message with role = "assistant" was
reducing the instruction following quality of the model
2024-04-26 11:55:22 +05:30
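
A minimal before/after sketch of the role change; the prompt text is made up.

    extract_questions_prompt = (
        "Extract search questions from: 'What did I write about gardening last week?'"
    )

    # Before (hurt instruction following):
    messages = [{"role": "assistant", "content": extract_questions_prompt}]

    # After:
    messages = [{"role": "user", "content": extract_questions_prompt}]
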
Debanjum Singh Solanky
346499f12c Fix, improve args being passed to chat_completion args
- Allow passing completion args through completion_with_backoff
- Pass model_kwargs in a separate arg to simplify this
- Pass model in `model_name' kwarg from the send_message_to_model func.
  The `model_name' kwarg is used by langchain, not the `model' kwarg
2024-04-26 11:55:22 +05:30
sabaimran
d8f2eac6e0 Release Khoj version 1.11.0 2024-04-25 17:24:59 +05:30
Debanjum Singh Solanky
1842017393 Skip trying to index deleted files, folders from Desktop app
Previously the app would crash on startup if the desktop app was told to
index a file that had since been deleted
2024-04-25 15:23:05 +05:30
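
The Desktop app itself is Electron/JavaScript, but a Python sketch of the same guard looks roughly like this: drop paths that no longer exist before sending them for indexing.

    from pathlib import Path

    def filter_existing_files(file_paths: list[str]) -> list[str]:
        existing, missing = [], []
        for path in file_paths:
            (existing if Path(path).exists() else missing).append(path)
        if missing:
            print(f"Skipping {len(missing)} deleted or missing files")
        return existing
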
Debanjum
17a06f152c
Support Llama 3 and Improve Offline Chat Actors (#724)
- Add support for Llama 3 in Khoj offline mode
- Make chat actors generate valid json with more local models
- Fix offline chat actor tests
2024-04-25 14:00:56 +05:30
Debanjum
220e5516ab
Make Search Models More Configurable. Upgrade Default Cross-Encoder (#722)
- Upgrade default cross-encoder to mixedbread ai's mxbai-rerank-xsmall
- Support more embedding models by making query, docs encoding configurable
2024-04-25 13:55:49 +05:30
Debanjum Singh Solanky
cf08eaf786 Add comments explaining each field in the search model config in DB 2024-04-25 13:54:13 +05:30
Debanjum
4ee5ac7c20
Fix Chat UI and Indexing on Desktop App (#723)
- Make valid file extension checking case insensitive on Desktop app
- Skip indexing non-existent folders on Desktop app
- Pass auth headers to fix lazy load of chat messages on Desktop app
- Set chat-message height to height of content in web, desktop
2024-04-24 18:49:03 +05:30
Debanjum Singh Solanky
89ef23de50 Upgrade gunicorn and make it only a production dependency 2024-04-24 11:28:55 +05:30
Debanjum Singh Solanky
799efb5974 Create DB migration to add new fields and change default cross-encoder 2024-04-24 09:50:34 +05:30
Debanjum Singh Solanky
ec41482324 Upgrade default cross-encoder to mixedbread ai's mxbai-rerank-xsmall
The previous cross-encoder model was a few years old; newer models should
have improved in quality. Model size increases by 50% compared to the
previous one for better performance, at least on benchmarks
2024-04-24 09:50:09 +05:30
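
A minimal sketch of reranking with the newer cross-encoder via sentence-transformers; the exact Hugging Face repo id and the sample query/passages are placeholders.

    from sentence_transformers import CrossEncoder

    reranker = CrossEncoder("mixedbread-ai/mxbai-rerank-xsmall-v1")  # assumed repo id
    scores = reranker.predict([
        ("what did I note about khoj?", "Khoj is an open source AI copilot."),
        ("what did I note about khoj?", "Grocery list: eggs, milk, bread."),
    ])
    print(scores)
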
Debanjum Singh Solanky
7eaf9367fe Support more embedding models by making query, docs encoding configurable
Most newer, better embedding models add a query or docs prefix when
encoding. Previously Khoj admins couldn't configure these prefixes, so it
wasn't possible to use these newer models.

This change allows configuring the kwargs passed to the query, docs
encoders by updating the search config in the database.
2024-04-24 09:49:17 +05:30
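
A hedged sketch of admin-configurable query and document encoding, in the spirit of this change; the kwargs shown (such as a query prompt prefix), the model id, and the sample texts are placeholders, and the supported arguments depend on the embedding model and sentence-transformers version in use.

    from sentence_transformers import SentenceTransformer

    # These would come from the search model config stored in the database
    query_encode_kwargs = {"prompt": "Represent this query for retrieving documents: "}
    docs_encode_kwargs = {}

    model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1")  # placeholder model id

    query_embeddings = model.encode(["what did I note about khoj?"], **query_encode_kwargs)
    doc_embeddings = model.encode(["Khoj is an open source AI copilot."], **docs_encode_kwargs)
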
Debanjum Singh Solanky
f2db8d7d99 Fix offline chat actor tests
Do not check for the original query in extracted questions, since this
was removed in a previous commit
2024-04-24 09:40:00 +05:30
Debanjum Singh Solanky
4f7237b158 Make chat actors generate valid json with more local models
Improve the tool, online search, webpage link and docs search chat actor
prompts. Ensure they work with hermes-2-pro and llama-3.

Be more specific about generating JSON and not saying anything else.
2024-04-24 09:40:00 +05:30
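
A rough sketch of the "JSON only" tactic: instruct the model explicitly and parse defensively, since smaller local models often wrap JSON in extra text. The prompt suffix and helper below are illustrative.

    import json

    PROMPT_SUFFIX = "Respond with a valid JSON list of strings only. Do not say anything else."

    def parse_json_list(raw: str) -> list:
        """Extract and parse the outermost JSON list from a model response."""
        start, end = raw.find("["), raw.rfind("]")
        try:
            return json.loads(raw[start:end + 1]) if start != -1 and end != -1 else []
        except ValueError:
            return []
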
Debanjum Singh Solanky
a2e4e4bede Add support for Llama 3 in Khoj offline mode
- Improve extract question prompts to explicitly request JSON list
- Use llama-3 chat format if HF repo_id mentions llama-3. The
  llama-cpp-python logic for detecting when to use llama-3 chat format
  isn't robust enough currently
2024-04-24 09:40:00 +05:30
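
A hedged sketch of forcing the llama-3 chat format when the Hugging Face repo id mentions llama-3, since llama-cpp-python's auto-detection was not reliable at the time; the repo id and model path are placeholders.

    from llama_cpp import Llama

    repo_id = "bartowski/Meta-Llama-3-8B-Instruct-GGUF"  # example repo id
    chat_format = "llama-3" if "llama-3" in repo_id.lower() else None

    model = Llama(
        model_path="Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",
        chat_format=chat_format,  # None lets llama-cpp-python auto-detect
        n_ctx=4096,
    )
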
Debanjum Singh Solanky
8e77b3dc82 Fix infer_max_tokens func when configured_max_tokens is set to None 2024-04-24 09:36:29 +05:30
Debanjum Singh Solanky
8196ab62f9 Make valid file extension checking case insensitive on Desktop app 2024-04-24 09:35:20 +05:30
Debanjum Singh Solanky
5def14e3bb Skip indexing non-existent folders on Desktop app 2024-04-24 09:35:20 +05:30
Debanjum Singh Solanky
cd05f262a6 Pass auth headers to fix lazy load of chat messages on Desktop app 2024-04-24 09:35:20 +05:30
Debanjum Singh Solanky
4d5d3e6433 Set chat-message height to height of content in web, desktop
In some cases, especially with image generation requests, this was
causing the chat messages to overlap in the chat UI
2024-04-24 09:35:20 +05:30
sabaimran
60658a8037 Get rid of enable flag for the offline chat processor config
- By default, assume that offline chat is enabled if an offline chat model option is configured
2024-04-23 23:08:29 +05:30
sabaimran
ac474fce38 Ensure that the tokenizer and max prompt size are used in the wrapper method 2024-04-23 21:22:23 +05:30
Olatoyan George
ad59180fb8
Added indication in the desktop UI for back-end connectivity (#711)
* Changed the link that takes a user to the settings page into a button
* Added an indicator that shows whether a user is connected to the server
* Made a class name more descriptive and made the text in the first run message more intuitive
* Changed the command to install dependencies in the README.md
* Changed the class name of the first run message text to be more descriptive
* Added icons in the desktop UI that show whether a file synced successfully
* Made the link class name in the homepage more descriptive
* Fixed the hover issue on the status box in the chat header pane
* Fixed the hovering issue on the status box on macOS
2024-04-23 16:43:48 +05:30
Debanjum
419b044ac5
Use set, inferred max token limits wherever chat models are used (#713)
- User configured max tokens limits weren't being passed to
  `send_message_to_model_wrapper'
- One of the load offline model code paths wasn't reachable. Remove it
  to simplify code
- When max prompt size isn't set, infer max tokens based on free VRAM
  on the machine
- Use min of app configured max tokens, vram based max tokens and
  model context window
2024-04-23 16:42:35 +05:30
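
An illustrative version of the max-token inference this PR describes: take the minimum of the app-configured limit, a free-VRAM-based estimate, and the model's context window, treating unset values as "no limit". The function name and signature are assumptions, not Khoj's actual code.

    def infer_max_tokens(configured_max_tokens, vram_based_max_tokens, model_context_window):
        # Collect only the limits that are actually set, then take their minimum
        candidates = [
            limit
            for limit in (configured_max_tokens, vram_based_max_tokens, model_context_window)
            if limit is not None
        ]
        return min(candidates) if candidates else None

    # e.g. no configured limit, 8192 tokens fit in free VRAM, 4096 context window
    assert infer_max_tokens(None, 8192, 4096) == 4096
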
AjaySDwivedi1
abf6f963ea
Replaced reinitialize and save all button to a sync button in config.… (#701)
Replaced the reinitialize and save all buttons with a sync button in config
2024-04-23 16:42:11 +05:30
Debanjum Singh Solanky
c39c4e4ec4 Improve prompt for online search query generation chat actor
- Allow searching github, pypi for information about Khoj
- Enable creating multiple search queries by rewording prompt
2024-04-22 01:32:11 +05:30
Debanjum Singh Solanky
175169c156 Use set, inferred max token limits wherever chat models are used
- User configured max tokens limits weren't being passed to
  `send_message_to_model_wrapper'
- One of the load offline model code paths wasn't reachable. Remove it
  to simplify code
- When max prompt size isn't set, infer max tokens based on free VRAM
  on the machine
- Use min of app configured max tokens, vram based max tokens and
  model context window
2024-04-20 11:23:28 +05:30
Debanjum Singh Solanky
002cd14a65 Only let agent use online search tool if connected to it 2024-04-20 11:19:48 +05:30
Debanjum Singh Solanky
75c9ebbc54 Only show uvicorn debug logs at higher verbosity levels
Don't automatically show the uvicorn logs when in_debug_mode; only show
them at verbosity >= 2, i.e. when khoj is started with the -vv flag
2024-04-20 11:18:01 +05:30
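
A rough sketch of gating uvicorn's logs behind a verbosity level, so they only surface when khoj is started with -vv; the function itself is illustrative, not Khoj's actual startup code.

    import logging

    def configure_uvicorn_logging(verbosity: int):
        # Show uvicorn's chatty logs only at verbosity >= 2 (the -vv flag)
        level = logging.DEBUG if verbosity >= 2 else logging.WARNING
        for name in ("uvicorn", "uvicorn.error", "uvicorn.access"):
            logging.getLogger(name).setLevel(level)
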