Commit graph

3782 commits

Author SHA1 Message Date
sabaimran
559601dd0a Do not exit if/else loop in research loop when notes not found 2024-10-31 13:51:10 -07:00
sabaimran
a13760640c Only show trash can when turnId is present 2024-10-31 13:19:16 -07:00
sabaimran
8d1ecb9bd8 Add optional brew steps for docker install 2024-10-31 12:41:53 -07:00
Debanjum
adca6cbe9d Merge branch 'master' into add-prompt-tracer-for-observability 2024-10-31 02:28:34 -07:00
Debanjum
e17dc9f7b5 Put train of thought ui before Khoj response on web app 2024-10-31 02:24:53 -07:00
Debanjum
e8e6ead39f Fix deleting new messages generated after conversation load 2024-10-30 20:56:38 -07:00
Debanjum
cb90abc660 Resolve train of thought component needs unique key id error on web app 2024-10-30 14:00:21 -07:00
Debanjum
ca5a6831b6 Add ability to delete messages from the web app 2024-10-30 14:00:21 -07:00
Debanjum
ba15686682 Store turn id with each chat message. Expose API to delete chat turn
Each chat turn is a user query and Khoj response message pair
2024-10-30 14:00:21 -07:00
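A minimal sketch of the idea in this commit: the user query and the Khoj response of one turn share a turn id, so deleting a turn removes the pair together. The class and function names here are hypothetical illustrations, not Khoj's actual schema or API.

```python
import uuid
from dataclasses import dataclass


@dataclass
class ChatMessage:
    by: str        # "you" for the user, "khoj" for the assistant
    message: str
    turn_id: str   # shared by the user query and the Khoj response of one turn


def new_turn(query: str, response: str) -> list[ChatMessage]:
    # Both messages of a turn share the same turn id
    turn_id = str(uuid.uuid4())
    return [
        ChatMessage(by="you", message=query, turn_id=turn_id),
        ChatMessage(by="khoj", message=response, turn_id=turn_id),
    ]


def delete_turn(history: list[ChatMessage], turn_id: str) -> list[ChatMessage]:
    # Dropping a turn removes its query/response pair in one go
    return [msg for msg in history if msg.turn_id != turn_id]
```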
Debanjum
f64f5b3b6e Handle add/delete file filter operation on non-existent conversation 2024-10-30 14:00:21 -07:00
Debanjum
b3a63017b5 Support setting seed for reproducible LLM response generation
Anthropic models do not support a seed, but offline, Gemini and OpenAI
models do. Use the KHOJ_LLM_SEED env var with these models to debug and test Khoj
2024-10-30 14:00:21 -07:00
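A rough sketch of how a seed read from KHOJ_LLM_SEED might be threaded into an OpenAI chat completion for reproducible responses. Only the env var name comes from the commit; the wiring and model name below are illustrative.

```python
import os

from openai import OpenAI

# Only some providers (OpenAI, Gemini, offline models) accept a seed;
# Anthropic models do not, so the seed is passed conditionally.
seed = int(os.environ["KHOJ_LLM_SEED"]) if "KHOJ_LLM_SEED" in os.environ else None

client = OpenAI()
kwargs = {"seed": seed} if seed is not None else {}
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Say hello"}],
    **kwargs,
)
print(response.choices[0].message.content)
```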
Debanjum
d44e68ba01 Improve handling embedding model config from admin interface
- Allow server to start if loading embedding model fails with an error.
  This allows fixing the embedding model config via admin panel.

  Previously server failed to start if embedding model was configured
  incorrectly. This prevented fixing the model config via admin panel.

- Convert boolean strings in the config json to actual booleans when passed
  as json via the admin panel, before passing them to the model and query
  configs (see the sketch below)

- Only create a default model if no search model is configured by the admin.
  Return the first created search model if one has been configured by the admin.
2024-10-30 14:00:21 -07:00
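A small sketch of the kind of boolean coercion described above: JSON submitted through an admin form can carry "true"/"false" as strings, so they are normalized to real booleans before being handed to the model and query configs. The function and example keys are illustrative.

```python
def coerce_booleans(config: dict) -> dict:
    """Convert 'true'/'false' strings in a config dict to real booleans."""
    coerced = {}
    for key, value in config.items():
        if isinstance(value, str) and value.strip().lower() in ("true", "false"):
            coerced[key] = value.strip().lower() == "true"
        elif isinstance(value, dict):
            # Recurse into nested config sections
            coerced[key] = coerce_booleans(value)
        else:
            coerced[key] = value
    return coerced


# e.g. {"trust_remote_code": "True"} -> {"trust_remote_code": True}
print(coerce_booleans({"trust_remote_code": "True", "batch_size": 32}))
```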
Debanjum
358a6ce95d Defer turning cursor color to selected agent's color for later
Capability exists but idea needs to be investigated further
2024-10-30 14:00:21 -07:00
Debanjum
2ac840e3f2 Make cursor in chat input take on selected agent color 2024-10-30 14:00:21 -07:00
Debanjum
1448b8b3fc Use 3rd person for user in research prompt to reduce person confusion
Models were getting a bit confused about who is searching for whose
information. Using third person to explicitly call out on whose behalf
these searches are running seems to perform better across
models (Gemini, GPT etc.), even if the role of the message is user.
2024-10-30 13:49:48 -07:00
Debanjum
b8c6989677 Separate example from actual question in extract question prompt 2024-10-30 13:49:48 -07:00
Debanjum
86ffd7a7a2 Handle \n, dedupe json cleaning into single function for reusability
Use a placeholder for newlines in json object values until the json is
parsed and the values extracted. This is useful when research mode models
output multi-line codeblocks in queries etc.
2024-10-30 13:49:48 -07:00
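A sketch of the placeholder trick described above: literal newlines inside JSON string values break parsing, so they can be swapped for a placeholder token, parsed, and then restored in the extracted values. The token and function name are illustrative, and this simplified version assumes the object itself sits on one line.

```python
import json

NEWLINE_PLACEHOLDER = "<<newline>>"  # illustrative token


def clean_json(raw: str) -> dict:
    # Literal newlines inside string values are invalid JSON, so replace
    # them with a placeholder, parse, then restore them in the values.
    # Assumes the object's structural whitespace is not newlines.
    escaped = raw.replace("\n", NEWLINE_PLACEHOLDER)
    parsed = json.loads(escaped)
    return {
        key: value.replace(NEWLINE_PLACEHOLDER, "\n") if isinstance(value, str) else value
        for key, value in parsed.items()
    }


raw = '{"query": "print(1)\nprint(2)"}'  # multi-line code block in a value
print(clean_json(raw))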
Debanjum
83ca820abe Encourage Anthropic models to output json object using { prefill
Anthropic API doesn't have the ability to enforce a response with a valid
json object, unlike all the other model types.

While the model will usually adhere to json output instructions, this
step is meant to more strongly encourage it to just output a json
object when a response_type of json_object is requested.
2024-10-30 13:49:48 -07:00
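A sketch of the prefill technique referenced above, using the Anthropic Messages API: seeding the assistant turn with `{` nudges the model to continue with a bare JSON object, which is then reassembled by prepending the prefill to the completion. The model name and prompt are illustrative.

```python
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # illustrative model name
    max_tokens=512,
    messages=[
        {"role": "user", "content": "List three colors as a JSON object with a 'colors' key."},
        # Prefill the assistant turn with "{" so the model continues the JSON object
        {"role": "assistant", "content": "{"},
    ],
)

# Re-attach the prefilled "{" since the completion continues after it
raw_json = "{" + response.content[0].text
print(raw_json)
```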
Debanjum
dc8e89b5de Pass tool AIs iteration history as chat history for better context
Separate the conversation history with the user from the conversation
history between the tool AIs and the researcher AI.

Tool AIs don't need the top level conversation history; that context is
meant for the researcher AI.

The invoked tool AIs need previous attempts at using the tool in this
research run's iteration history to better tune their next run.

Or at least that is the hypothesis for breaking the models' looping.
2024-10-30 13:49:48 -07:00
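A loose sketch of the message construction this describes: the researcher AI keeps the top-level conversation history, while each tool AI only sees its prior attempts from the current research run's iteration history. Names and message roles here are illustrative assumptions.

```python
def build_tool_messages(query: str, iteration_history: list[dict]) -> list[dict]:
    """Give a tool AI its own prior attempts this run, not the user's full chat history."""
    messages = []
    for iteration in iteration_history:
        # Each prior attempt becomes an assistant/user exchange the tool AI can learn from
        messages.append({"role": "assistant", "content": iteration["query"]})
        messages.append({"role": "user", "content": iteration["result"]})
    # The fresh request for this iteration goes last
    messages.append({"role": "user", "content": query})
    return messages
```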
Debanjum
d865994062 Rename code tool arg 'previous_iteration_history' to 'context' 2024-10-30 13:49:48 -07:00
Debanjum
06aeca2670 Make researcher, docs search AIs ask more diverse retrieval questions
Models weren't generating a diverse enough set of questions. They'd do
minor variations on the original query. What is required is asking
queries from a bunch of different lenses to retrieve the requisite
information.

This prompt update shows the AIs the breadth of questions to ask, by
example and instruction. Performance seems improved, based on vibes
2024-10-30 13:49:48 -07:00
Debanjum
01881dc7a2 Revert "Make extract question prompt in 1st person wrt user as its a user message"
This reverts commit 6d3602798aa1b95a30c557576fd4f93ddef2ae76.
2024-10-30 13:49:48 -07:00
Debanjum
3e695df198 Make extract question prompt in 1st person wrt user as its a user message
Divide Example from Actual chat history section in prompt
2024-10-30 13:49:48 -07:00
Debanjum
a3751d6a04 Make extract relevant information system prompt work for any document
Previously it was too strongly tuned for extracting information from
webpages only. This specialization shouldn't be necessary
2024-10-30 13:49:48 -07:00
Debanjum
a39e747d07 Improve passing user name in pick next research tool prompt 2024-10-30 13:49:48 -07:00
Debanjum
deff512baa Improve research mode prompts to reduce looping, increase webpage reads 2024-10-30 13:49:48 -07:00
Debanjum
d3184ae39a Simplify storing and displaying document results in research mode
- Mention count of notes and files discovered
- Store query associated with each compiled reference retrieved for
  easier referencing
2024-10-30 13:49:48 -07:00
Debanjum
8bd94bf855 Do not use a message branch if no msg id provided to prompt tracer 2024-10-30 13:49:48 -07:00
sabaimran
b63fbc5345 Add a simple badge to the dropdown menu that shows subscription status 2024-10-30 13:00:16 -07:00
sabaimran
82f3d79064 Merge branch 'master' of github.com:khoj-ai/khoj into features/advanced-reasoning 2024-10-30 11:32:10 -07:00
sabaimran
2b2564257e Handle subscription case where it's set to trial, but renewal_date is not set. Set the renewal_date to LENGTH_OF_FREE_TRIAL days from subscription creation. 2024-10-30 11:05:31 -07:00
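A sketch of the fallback this commit describes: a trial subscription with no renewal_date gets one set LENGTH_OF_FREE_TRIAL days after the subscription was created. The constant value and field names are illustrative assumptions.

```python
from datetime import timedelta

LENGTH_OF_FREE_TRIAL = 30  # days; illustrative value


def ensure_trial_renewal_date(subscription):
    # Trial subscriptions created without a renewal date get one relative to creation
    if subscription.type == "trial" and subscription.renewal_date is None:
        subscription.renewal_date = subscription.created_at + timedelta(days=LENGTH_OF_FREE_TRIAL)
    return subscription
```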
Debanjum
9935d4db0b Do not use a message branch if no msg id provided to prompt tracer 2024-10-28 17:50:27 -07:00
Debanjum
d184498038 Pass context in separate message from user query to research chat actor 2024-10-28 15:37:28 -07:00
Debanjum
d75ce4a9e3 Format online, notes, code context with YAML to be legible for LLM 2024-10-28 15:37:28 -07:00
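A quick sketch of rendering retrieved context as YAML before passing it to the chat model, along the lines of this commit; the shape of the context dict is an illustrative assumption.

```python
import yaml

context = {
    "notes": [{"file": "journal.org", "compiled": "Met Alice to plan the hike."}],
    "online": [{"query": "weather Seattle", "answer": "Light rain expected."}],
}

# YAML tends to read more naturally for LLMs than a raw JSON dump of the same context
print(yaml.dump(context, allow_unicode=True, sort_keys=False))
```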
sabaimran
5bea0c705b Use break-words in the train of thought for better formatting 2024-10-28 15:36:06 -07:00
sabaimran
1f1b182461 Automatically carry over research mode from home page to chat
- Improve mobile friendliness with new research mode toggle, since chat input area is now taking up more space
- Remove clunky title from the suggestion card
- Fix fk lookup error for agent.creator
2024-10-28 15:29:24 -07:00
sabaimran
ebaed53069 Merge branch 'master' of github.com:khoj-ai/khoj into features/advanced-reasoning 2024-10-28 12:39:00 -07:00
sabaimran
889dbd738a Add keyword diagram to diagram output mode description 2024-10-28 12:20:46 -07:00
Debanjum
50ffd7f199 Merge branch 'master' into features/advanced-reasoning 2024-10-28 04:10:59 -07:00
Debanjum
a5d0ca6e1c Use selected agent color to theme the chat input area on home page 2024-10-28 03:47:40 -07:00
Debanjum
aad7528d1b Render slash commands popup below chat input text area on home page 2024-10-28 02:06:04 -07:00
Debanjum
3e17ab438a
Separate notes, online context from user message sent to chat models (#950)
Overview
---
- Put context into separate user message before sending to chat model.
  This should improve model response quality and truncation logic in code
- Pass online context from chat history to chat model for response.
  This should improve response speed when previous online context can be reused
- Improve format of notes, online context passed to chat models in prompt.
  This should improve model response quality

Details
---
The document and online search context are now passed as separate user
messages to the chat model, instead of being added to the final user message.

This will improve
- The model's ability to differentiate data from the user query.
  That should improve response quality and reduce prompt injection
  probability
- Truncation logic, which becomes simpler and more robust.
  When the context window is hit, messages can simply be popped to auto
  truncate context in order of context, user, assistant message for each
  conversation turn in history until the current user query is reached
  (see the sketch after this entry)

The complex, brittle logic to extract the user query from the context in
the last user message is no longer required.
2024-10-28 02:03:18 -07:00
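A sketch of the simpler truncation the PR describes: once context lives in its own message, older messages can be dropped front-to-back (context, then user, then assistant per turn) until the prompt fits, leaving the current user query intact. The message shape and token counting below are simplified illustrations.

```python
def truncate_messages(messages: list[dict], max_tokens: int, count_tokens) -> list[dict]:
    """Drop the oldest messages until the conversation fits, keeping the final user query."""
    truncated = list(messages)
    # Never drop the current user query at the end of the list
    while len(truncated) > 1 and sum(count_tokens(m["content"]) for m in truncated) > max_tokens:
        truncated.pop(0)
    return truncated


# Usage with a crude whitespace tokenizer
history = [
    {"role": "user", "content": "Context: notes about last week's hike"},
    {"role": "user", "content": "Where did I hike last week?"},
    {"role": "assistant", "content": "You hiked Mount Si."},
    {"role": "user", "content": "How long was the trail?"},
]
print(truncate_messages(history, max_tokens=12, count_tokens=lambda text: len(text.split())))
```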
Debanjum
8ddd70f3a9 Put context into separate message before sending to offline chat model
Align context passed to offline chat model with other chat models

- Pass context in separate message for better separation between user
  query and the shared context
- Pass filename in context
- Add online results for webpage conversation command
2024-10-28 00:22:21 -07:00
Debanjum
ee0789eb3d Mark context messages with user role as context role isn't being used
Context role was added to allow changing the message truncation order
based on context role as well.

Revert it for now since this is not currently being done.
2024-10-28 00:04:14 -07:00
Debanjum
4e39088f5b Make agent name in home page carousel not text wrap on mobile 2024-10-27 23:03:53 -07:00
Debanjum
94074b7007 Focus chat input on toggle research mode. v-align it with send button 2024-10-27 22:54:55 -07:00
sabaimran
a691ce4aa6 Batch entries into smaller groups to process 2024-10-27 20:43:41 -07:00
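A minimal sketch of batching entries into smaller groups for processing, as this commit describes; the batch size is an arbitrary illustrative value.

```python
from itertools import islice


def batched(entries, batch_size=100):
    """Yield successive fixed-size groups of entries."""
    iterator = iter(entries)
    while batch := list(islice(iterator, batch_size)):
        yield batch


for group in batched(range(10), batch_size=4):
    print(group)  # [0, 1, 2, 3], [4, 5, 6, 7], [8, 9]
```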
sabaimran
2924909692 Add a research mode toggle to the chat input area 2024-10-27 16:37:40 -07:00
sabaimran
68499e253b Auto-collapse train of thought, show after chat response in history 2024-10-27 15:48:13 -07:00
sabaimran
101ea6efb1 Add research mode as a slash command, remove from default path 2024-10-27 15:47:44 -07:00