Commit graph

4142 commits

Author SHA1 Message Date
Debanjum
aad7528d1b Render slash commands popup below chat input text area on home page 2024-10-28 02:06:04 -07:00
Debanjum
3e17ab438a
Separate notes, online context from user message sent to chat models (#950)
Overview
---
- Put context into a separate user message before sending it to the chat model.
  This should improve model response quality and simplify the truncation logic in code
- Pass online context from chat history to the chat model for response.
  This should improve response speed when previous online context can be reused
- Improve the format of notes and online context passed to chat models in the prompt.
  This should improve model response quality

Details
---
The document and online search context are now passed as separate user
messages to the chat model, instead of being added to the final user message.

This will:
- Improve the model's ability to differentiate data from the user query.
  That should improve response quality and reduce the probability of
  prompt injection
- Make the truncation logic simpler and more robust.
  When the context window is hit, we can simply pop messages to auto
  truncate context in the order context, user, assistant message for each
  conversation turn in history until we reach the current user query (see
  the sketch below).

  The complex, brittle logic to extract the user query from the context in
  the last user message is no longer required.
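
For illustration, a minimal sketch of the message layout and the pop-based
truncation described above. The ChatMessage type, field names, and the
max_messages limit are assumptions for the example, not Khoj's actual API.

```python
from dataclasses import dataclass

@dataclass
class ChatMessage:
    role: str      # "user" or "assistant"
    content: str

def build_turn(context: str, user_query: str) -> list[ChatMessage]:
    """Send retrieved notes/online context as its own user message,
    followed by the actual user query."""
    messages = []
    if context:
        messages.append(ChatMessage(role="user", content=f"Context:\n{context}"))
    messages.append(ChatMessage(role="user", content=user_query))
    return messages

def truncate(history: list[ChatMessage], max_messages: int) -> list[ChatMessage]:
    """Pop whole messages from the oldest turns (context, then user, then
    assistant) until the conversation fits, keeping the latest user query."""
    while len(history) > max_messages and len(history) > 1:
        history.pop(0)  # oldest message first; the current query stays at the end
    return history
```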
2024-10-28 02:03:18 -07:00
Debanjum
8ddd70f3a9 Put context into separate message before sending to offline chat model
Align context passed to offline chat model with other chat models

- Pass context in separate message for better separation between user
  query and the shared context
- Pass filename in context
- Add online results for webpage conversation command
2024-10-28 00:22:21 -07:00
Debanjum
ee0789eb3d Mark context messages with user role as context role isn't being used
The context role was added to allow changing the message truncation order
based on the context role as well.

Revert it for now since this is not currently being done.
2024-10-28 00:04:14 -07:00
Debanjum
4e39088f5b Make agent name in home page carousel not text wrap on mobile 2024-10-27 23:03:53 -07:00
Debanjum
94074b7007 Focus chat input on toggle research mode. v-align it with send button 2024-10-27 22:54:55 -07:00
sabaimran
a691ce4aa6 Batch entries into smaller groups to process 2024-10-27 20:43:41 -07:00
sabaimran
2924909692 Add a research mode toggle to the chat input area 2024-10-27 16:37:40 -07:00
sabaimran
68499e253b Auto-collapse train of thought, show after chat response in history 2024-10-27 15:48:13 -07:00
sabaimran
101ea6efb1 Add research mode as a slash command, remove from default path 2024-10-27 15:47:44 -07:00
sabaimran
0bd78791ca Let user exit from command mode with esc, click out, etc. 2024-10-27 15:01:49 -07:00
sabaimran
a121d67b10 Persist the train of thought in the conversation history 2024-10-26 23:46:15 -07:00
sabaimran
9e8ac7f89e Fix input/output mismatches in the /summarize command 2024-10-26 16:37:58 -07:00
sabaimran
e4285941d1 Use the advanced chat model if the user is subscribed 2024-10-26 16:00:54 -07:00
sabaimran
33e48aa27e Merge branch 'add-prompt-tracer-for-observability' of github.com:khoj-ai/khoj into features/advanced-reasoning 2024-10-26 14:09:00 -07:00
sabaimran
fd71a4b086 Add better exception handling in the prompt trace logic, use default value from parameters 2024-10-26 14:08:00 -07:00
Debanjum
3e5b5ec122 Encourage model to read webpages more often after online search
Previously the model would rarely read webpages after a webpage search. We
need the model to read webpages more regularly for deeper research and to
stop it getting stuck in repetitive online search loops
2024-10-26 10:49:09 -07:00
Debanjum
bf96d81943 Format online results as YAML to pass it in more readable form to model
Previously, online results were passed as a JSON dump in prompts, which was
less readable for humans, and presumably less readable for models (trained
on human data) as well
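
For comparison, a small sketch of the two serializations using PyYAML; the
online_results structure shown is a made-up example, not Khoj's exact schema.

```python
import json

import yaml  # PyYAML

online_results = {
    "what is khoj": {
        "answerBox": {"title": "Khoj", "answer": "An open-source AI copilot."},
        "webpages": [{"link": "https://khoj.dev", "snippet": "Khoj documentation..."}],
    }
}

# Previous format: a dense, single-line JSON dump
print(json.dumps(online_results))

# New format: indented YAML, easier for humans (and likely models) to scan
print(yaml.dump(online_results, allow_unicode=True, sort_keys=False))
```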
2024-10-26 10:49:09 -07:00
Debanjum
3e97ebf0c7 Unescape special characters in prompt traces for better readability 2024-10-26 10:49:09 -07:00
Debanjum
8af9dc3ee1 Unescape special characters in prompt traces for better readability 2024-10-26 10:45:42 -07:00
Debanjum Singh Solanky
0f3927e810 Send gathered references to client after code results calculated 2024-10-26 05:59:10 -07:00
Debanjum Singh Solanky
f04f871a72 Merge branch 'add-prompt-tracer-for-observability' of github.com:khoj-ai/khoj into features/advanced-reasoning
- Start from this branch's src/khoj/routers/api_chat.py.
  Add the tracer to all old and new chat actors that don't have it set
  when they are called.
- Update the new chat actors, like pick next tool etc., to use the tracer too
2024-10-26 05:56:13 -07:00
Debanjum Singh Solanky
ddc6ccde2d Merge branch 'master' into features/advanced-reasoning
- Conflicts:
  Combine both sides of the conflict in all 3 files below
  - src/khoj/processor/conversation/utils.py
  - src/khoj/routers/helpers.py
  - src/khoj/utils/helpers.py
2024-10-26 05:15:51 -07:00
Debanjum Singh Solanky
ea0712424b Commit conversation traces using user, chat, message branch hierarchy
- Message train of thought forks and merges from its conversation branch
- Conversation branches from user branch
- User branches from root commit on the main branch

- Weave chat tracer metadata from the API endpoint through all chat actors
  and commit it to the prompt trace (see the sketch below)
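
A minimal sketch of how such a branch hierarchy could be maintained with
plain git commands, assuming a local repo with a "main" branch and a
configured git identity; the branch naming and file layout are illustrative,
not the actual Khoj implementation.

```python
import subprocess
from pathlib import Path

def git(repo: Path, *args: str) -> subprocess.CompletedProcess:
    return subprocess.run(["git", "-C", str(repo), *args],
                          capture_output=True, text=True)

def ensure_branch(repo: Path, branch: str, start_point: str) -> None:
    # Create the branch from start_point only if it does not exist yet
    if git(repo, "rev-parse", "--verify", branch).returncode != 0:
        git(repo, "branch", branch, start_point)

def commit_trace(repo: Path, user: str, conversation: str, trace: str) -> None:
    user_branch = f"user-{user}"                           # branches off main
    convo_branch = f"{user_branch}--conv-{conversation}"   # branches off the user branch

    ensure_branch(repo, user_branch, "main")
    ensure_branch(repo, convo_branch, user_branch)

    # Commit the message's prompt trace on its conversation branch
    git(repo, "checkout", convo_branch)
    (repo / "trace.md").write_text(trace)
    git(repo, "add", "trace.md")
    git(repo, "commit", "-m", f"Prompt trace for {user}/{conversation}")
```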
2024-10-26 05:08:47 -07:00
Debanjum Singh Solanky
a3022b7556 Allow Offline Chat model calling functions to save conversation traces 2024-10-26 05:08:47 -07:00
Debanjum Singh Solanky
eb6424f14d Allow Anthropic API calling functions to save conversation traces 2024-10-26 05:08:47 -07:00
Debanjum Singh Solanky
6fcd6a5659 Allow Gemini API calling functions to save conversation traces 2024-10-26 05:08:47 -07:00
Debanjum Singh Solanky
384f394336 Allow OpenAI API calling functions to save conversation traces 2024-10-26 04:59:21 -07:00
Debanjum Singh Solanky
10c8fd3b2a Save conversation traces to git for visualization 2024-10-26 04:59:19 -07:00
sabaimran
7e0a692d16 Release Khoj version 1.27.1 2024-10-25 15:23:07 -07:00
sabaimran
b257fa1884 Add a None check before doing a DT comparison when getting subscription type 2024-10-25 15:22:48 -07:00
sabaimran
0f6f282c30 Release Khoj version 1.27.0 2024-10-25 14:11:14 -07:00
sabaimran
479e156168 Add to the ConversationCommand.Image description to LLM 2024-10-25 09:14:32 -07:00
sabaimran
a11b5293fb Add uploaded images to research mode, code slash command, include code references 2024-10-24 23:56:24 -07:00
sabaimran
5acf40c440 Clean up summarization code paths
Assume the summarization response is a str
2024-10-24 23:56:24 -07:00
sabaimran
12b32a3d04 Resolve merge conflicts 2024-10-24 23:43:55 -07:00
Debanjum
adee5a3e20
Give Vision to Anthropic models in Khoj (#948)
### Major
- Give Vision to Anthropic models in Khoj

### Minor
- Reuse logic to format messages for chat with anthropic models
- Make the get image from url function more versatile and reusable
- Encourage output mode chat actor to output only json and nothing else
2024-10-24 18:02:38 -07:00
Debanjum Singh Solanky
01d740debd Return typed image from image_with_url function for readability 2024-10-24 17:58:46 -07:00
Debanjum Singh Solanky
37317e321d Dedupe user location passed in image, diagram generation prompts 2024-10-24 01:03:29 -07:00
Debanjum Singh Solanky
2a32836d1a Log more descriptive error when image gen fails with Replicate 2024-10-24 01:03:29 -07:00
sabaimran
30f9225021 Merge branch 'master' of github.com:khoj-ai/khoj into features/advanced-reasoning 2024-10-23 19:15:51 -07:00
sabaimran
5120597d4e
Remove user customized search model (#946)
- Use a single standard search model across the server. There are diminishing benefits to having multiple user-customizable search models.
- We may want to add server-level customization for specific tasks
- Store the search model used to generate a given entry on the `Entry` object
- Remove user-facing APIs and view
- Add a management command for migrating the default search model on the server (sketched below)

In a future PR (after running the migration), we'll also remove the `UserSearchModelConfig`
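
A hypothetical sketch of what the migration management command could look
like; the SearchModelConfig model, its default field, and the import path
are assumptions for illustration, not the actual Khoj schema.

```python
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Set the default search model used across the server"

    def add_arguments(self, parser):
        parser.add_argument("--id", type=int, required=True,
                            help="Primary key of the search model config to make the default")

    def handle(self, *args, **options):
        # Assumed app and model names for the sketch
        from database.models import SearchModelConfig

        SearchModelConfig.objects.update(default=False)
        updated = SearchModelConfig.objects.filter(pk=options["id"]).update(default=True)
        if updated:
            self.stdout.write(self.style.SUCCESS("Default search model updated"))
        else:
            self.stderr.write("No search model config found with that id")
```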
2024-10-23 17:38:37 -07:00
Debanjum Singh Solanky
8d588e0765 Encourage output mode chat actor to output only json and nothing else
The latest Claude model wanted to say more than just give the JSON output.
The updated prompt encourages the model to output just JSON. This is
similar to what is already being done for other prompts
2024-10-23 17:19:21 -07:00
Debanjum Singh Solanky
abad5348a0 Give Vision to Anthropic models in Khoj 2024-10-23 17:19:21 -07:00
Debanjum Singh Solanky
6fd50a5956 Reuse logic to format messages for chat with anthropic models 2024-10-23 17:19:21 -07:00
Debanjum Singh Solanky
82eac5a043 Make the get image from url function more versatile and reusable
It was previously added under the google utils. Now it can be used by
other conversation processors as well.

The updated function (sketched below)
- can get both base64 encoded and PIL formatted images from a url
- will return the media type of the image in the response as well
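
A rough sketch of what such a helper could look like; the function signature
and parameter names here are illustrative, not the exact Khoj implementation.

```python
import base64
from io import BytesIO

import requests
from PIL import Image

def get_image_from_url(url: str, as_base64: bool = True):
    """Fetch an image and return (image, media_type).

    The image is returned either as a base64-encoded string or as a PIL
    Image, along with the response's content type (e.g. "image/png").
    """
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    media_type = response.headers.get("Content-Type", "image/jpeg")

    if as_base64:
        image = base64.b64encode(response.content).decode("utf-8")
    else:
        image = Image.open(BytesIO(response.content))
    return image, media_type
```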
2024-10-23 17:19:20 -07:00
sabaimran
f3ce47b445
Create explicit flow to enable the free trial (#944)
* Create explicit flow to enable the free trial

The current design is confusing. It obfuscates the fact that the user is on a free trial. This design will make the opt-in explicit and more intuitive.

* Use the Subscription Type enum instead of hardcoded strings everywhere (see the sketch below)

* Use length of free trial in the frontend code as well
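
An illustrative sketch of the enum-over-hardcoded-strings approach; the
member names and values are assumptions, not the exact ones used by Khoj.

```python
from enum import Enum

class SubscriptionType(str, Enum):
    TRIAL = "trial"
    STANDARD = "standard"
    SUBSCRIBED = "subscribed"

def is_on_free_trial(subscription_type: str) -> bool:
    # Comparing against the enum member avoids typo-prone string literals
    return subscription_type == SubscriptionType.TRIAL
```

Because the enum inherits from str, comparisons against raw stored values
keep working while call sites get a single source of truth for the labels.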
2024-10-23 15:29:23 -07:00
Debanjum Singh Solanky
bc059eeb0b Merge branch 'master' into put-retrieved-context-in-separate-chatml-message 2024-10-23 12:55:18 -07:00
Debanjum Singh Solanky
3b978b9b67 Fix chat history construction when generating chatml msgs with context 2024-10-23 12:55:12 -07:00
sabaimran
c5e91c346a Fix Docker desktop link for Linux 2024-10-23 11:24:54 -07:00