Commit graph

3142 commits

Author SHA1 Message Date
Debanjum
50ffd7f199 Merge branch 'master' into features/advanced-reasoning 2024-10-28 04:10:59 -07:00
Debanjum
a5d0ca6e1c Use selected agent color to theme the chat input area on home page 2024-10-28 03:47:40 -07:00
Debanjum
aad7528d1b Render slash commands popup below chat input text area on home page 2024-10-28 02:06:04 -07:00
Debanjum
3e17ab438a
Separate notes, online context from user message sent to chat models (#950)
Overview
---
- Put context into separate user message before sending to chat model.
  This should improve model response quality and truncation logic in code
- Pass online context from chat history to chat model for response.
  This should improve response speed when previous online context can be reused
- Improve format of notes, online context passed to chat models in prompt.
  This should improve model response quality

Details
---
The document and online search context are now passed as separate user
messages to the chat model, instead of being added to the final user message.

This will
- Improve the model's ability to differentiate data from the user query.
  That should improve response quality and reduce prompt injection
  probability
- Make truncation logic simpler and more robust.
  When the context window is hit, messages can simply be popped to auto
  truncate context in order of context, user, assistant message for each
  conversation turn in history until the current user query is reached.

  The complex, brittle logic to extract the user query from the context in
  the last user message isn't required anymore.
2024-10-28 02:03:18 -07:00
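As a rough illustration of the pop-based truncation described above, here is a minimal Python sketch (the message layout and the token counter are assumptions, not Khoj's actual implementation):

```python
# Minimal sketch of the pop-based truncation described above.
# Assumes messages are ordered oldest to newest as
# [context, user, assistant] per conversation turn, ending with the
# current user query. count_tokens is a caller-supplied token counter.

def truncate_messages(messages: list[dict], max_tokens: int, count_tokens) -> list[dict]:
    # Pop the oldest messages first until the conversation fits in the
    # context window, always keeping the final (current) user query.
    while len(messages) > 1 and sum(count_tokens(m["content"]) for m in messages) > max_tokens:
        messages.pop(0)
    return messages
```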
Debanjum
8ddd70f3a9 Put context into separate message before sending to offline chat model
Align context passed to offline chat model with other chat models

- Pass context in separate message for better separation between user
  query and the shared context
- Pass filename in context
- Add online results for webpage conversation command
2024-10-28 00:22:21 -07:00
Debanjum
ee0789eb3d Mark context messages with user role as context role isn't being used
Context role was added to allow changing message truncation order based
on context role as well.

Revert it for now since this is not currently being done.
2024-10-28 00:04:14 -07:00
Debanjum
4e39088f5b Make agent name in home page carousel not text wrap on mobile 2024-10-27 23:03:53 -07:00
Debanjum
94074b7007 Focus chat input on toggle research mode. v-align it with send button 2024-10-27 22:54:55 -07:00
sabaimran
a691ce4aa6 Batch entries into smaller groups to process 2024-10-27 20:43:41 -07:00
sabaimran
2924909692 Add a research mode toggle to the chat input area 2024-10-27 16:37:40 -07:00
sabaimran
68499e253b Auto-collapse train of thought, show after chat response in history 2024-10-27 15:48:13 -07:00
sabaimran
101ea6efb1 Add research mode as a slash command, remove from default path 2024-10-27 15:47:44 -07:00
sabaimran
0bd78791ca Let user exit from command mode with esc, click out, etc. 2024-10-27 15:01:49 -07:00
sabaimran
a121d67b10 Persist the train of thought in the conversation history 2024-10-26 23:46:15 -07:00
sabaimran
9e8ac7f89e Fix input/output mismatches in the /summarize command 2024-10-26 16:37:58 -07:00
sabaimran
e4285941d1 Use the advanced chat model if the user is subscribed 2024-10-26 16:00:54 -07:00
sabaimran
33e48aa27e Merge branch 'add-prompt-tracer-for-observability' of github.com:khoj-ai/khoj into features/advanced-reasoning 2024-10-26 14:09:00 -07:00
sabaimran
fd71a4b086 Add better exception handling in the prompt trace logic, use default value from parameters 2024-10-26 14:08:00 -07:00
Debanjum
3e5b5ec122 Encourage model to read webpages more often after online search
Previously the model would rarely read webpages after webpage search. Need
the model to read webpages more regularly for deeper research and to stop
getting stuck in repetitive online search loops
2024-10-26 10:49:09 -07:00
Debanjum
bf96d81943 Format online results as YAML to pass it in more readable form to model
Previously, passing online results as a JSON dump in prompts was less
readable for humans, and likely less readable for
models (trained on human data) as well
2024-10-26 10:49:09 -07:00
Debanjum
3e97ebf0c7 Unescape special characters in prompt traces for better readability 2024-10-26 10:49:09 -07:00
Debanjum
8af9dc3ee1 Unescape special characters in prompt traces for better readability 2024-10-26 10:45:42 -07:00
Debanjum Singh Solanky
0f3927e810 Send gathered references to client after code results calculated 2024-10-26 05:59:10 -07:00
Debanjum Singh Solanky
f04f871a72 Merge branch 'add-prompt-tracer-for-observability' of github.com:khoj-ai/khoj into features/advanced-reasoning
- Start from this branch's src/khoj/routers/api_chat.py
    Add tracer to all old and new chat actors that don't have it set
    when they are called.
  - Update the new chat actors like pick next tool etc. to use tracer too
2024-10-26 05:56:13 -07:00
Debanjum Singh Solanky
ddc6ccde2d Merge branch 'master' into features/advanced-reasoning
- Conflicts:
  Combine both sides of the conflict in all 3 files below
  - src/khoj/processor/conversation/utils.py
  - src/khoj/routers/helpers.py
  - src/khoj/utils/helpers.py
2024-10-26 05:15:51 -07:00
Debanjum Singh Solanky
ea0712424b Commit conversation traces using user, chat, message branch hierarchy
- Message train of thought forks and merges from its conversation branch
- Conversation branches from user branch
- User branches from root commit on the main branch

- Weave chat tracer metadata from api endpoint through all chat actors
  and commit it to the prompt trace
2024-10-26 05:08:47 -07:00
Debanjum Singh Solanky
a3022b7556 Allow Offline Chat model calling functions to save conversation traces 2024-10-26 05:08:47 -07:00
Debanjum Singh Solanky
eb6424f14d Allow Anthropic API calling functions to save conversation traces 2024-10-26 05:08:47 -07:00
Debanjum Singh Solanky
6fcd6a5659 Allow Gemini API calling functions to save conversation traces 2024-10-26 05:08:47 -07:00
Debanjum Singh Solanky
384f394336 Allow OpenAI API calling functions to save conversation traces 2024-10-26 04:59:21 -07:00
Debanjum Singh Solanky
10c8fd3b2a Save conversation traces to git for visualization 2024-10-26 04:59:19 -07:00
sabaimran
7e0a692d16 Release Khoj version 1.27.1 2024-10-25 15:23:07 -07:00
sabaimran
b257fa1884 Add a None check before doing a DT comparison when getting subscription type 2024-10-25 15:22:48 -07:00
sabaimran
0f6f282c30 Release Khoj version 1.27.0 2024-10-25 14:11:14 -07:00
sabaimran
479e156168 Add to the ConversationCommand.Image description to LLM 2024-10-25 09:14:32 -07:00
sabaimran
a11b5293fb Add uploaded images to research mode, code slash command, include code references 2024-10-24 23:56:24 -07:00
sabaimran
5acf40c440 Clean up summarization code paths
Use assumption of summarization response being a str
2024-10-24 23:56:24 -07:00
sabaimran
12b32a3d04 Resolve merge conflicts 2024-10-24 23:43:55 -07:00
Debanjum
adee5a3e20
Give Vision to Anthropic models in Khoj (#948)
### Major
- Give Vision to Anthropic models in Khoj

### Minor
- Reuse logic to format messages for chat with anthropic models
- Make the get image from url function more versatile and reusable
- Encourage output mode chat actor to output only json and nothing else
2024-10-24 18:02:38 -07:00
Debanjum Singh Solanky
01d740debd Return typed image from image_with_url function for readability 2024-10-24 17:58:46 -07:00
Debanjum Singh Solanky
37317e321d Dedupe user location passed in image, diagram generation prompts 2024-10-24 01:03:29 -07:00
Debanjum Singh Solanky
2a32836d1a Log more descriptive error when image gen fails with Replicate 2024-10-24 01:03:29 -07:00
sabaimran
30f9225021 Merge branch 'master' of github.com:khoj-ai/khoj into features/advanced-reasoning 2024-10-23 19:15:51 -07:00
sabaimran
5120597d4e
Remove user customized search model (#946)
- Use a single standard search model across the server. There are diminishing benefits to having multiple user-customizable search models.
- We may want to add server-level customization for specific tasks
- Store the search model used to generate a given entry on the `Entry` object
- Remove user-facing APIs and view
- Add a management command for migrating the default search model on the server

In a future PR (after running the migration), we'll also remove the `UserSearchModelConfig`
2024-10-23 17:38:37 -07:00
Debanjum Singh Solanky
8d588e0765 Encourage output mode chat actor to output only json and nothing else
The latest Claude model wanted to say more than just give the json output.
The updated prompt encourages the model to output just json. This is
similar to what is already being done for other prompts
2024-10-23 17:19:21 -07:00
Debanjum Singh Solanky
abad5348a0 Give Vision to Anthropic models in Khoj 2024-10-23 17:19:21 -07:00
Debanjum Singh Solanky
6fd50a5956 Reuse logic to format messages for chat with anthropic models 2024-10-23 17:19:21 -07:00
Debanjum Singh Solanky
82eac5a043 Make the get image from url function more versatile and reusable
It was previously added under the google utils. Now it can be used by
other conversation processors as well.

The updated function
- can get both base64 encoded and PIL formatted images from url
- will return the media type of the image as well in response
2024-10-23 17:19:20 -07:00
sabaimran
f3ce47b445
Create explicit flow to enable the free trial (#944)
* Create explicit flow to enable the free trial

The current design is confusing. It obfuscates the fact that the user is on a free trial. This design will make the opt-in explicit and more intuitive.

* Use the Subscription Type enum instead of hardcoded strings everywhere

* Use length of free trial in the frontend code as well
2024-10-23 15:29:23 -07:00
Debanjum Singh Solanky
bc059eeb0b Merge branch 'master' into put-retrieved-context-in-separate-chatml-message 2024-10-23 12:55:18 -07:00
Debanjum Singh Solanky
3b978b9b67 Fix chat history construction when generating chatml msgs with context 2024-10-23 12:55:12 -07:00
Debanjum Singh Solanky
9f2c02d9f7 Chat with the default agent by default from web app home
Had temporarily updated the default selected agent to the last used one.
Revert for now as
1. The previous logic was buggy. It didn't select the default agent
   even when the last used agent was the default agent. Fixing this
   would require more work.
2. It may be too early anyway to set the default agent to the last used one.
2024-10-23 03:43:57 -07:00
Debanjum Singh Solanky
218946edda Fix copying message with user images on web app
Adding div elements to the message to render degraded the text copied to
the clipboard for messages with user uploaded images.

This change fixes that by separating the message to render from the message
for the clipboard. It ensures differently formatted forms of the user
images are added to the two to allow proper rendering while still
having decently formatted text copied to the clipboard
2024-10-23 03:41:25 -07:00
Debanjum Singh Solanky
7d9a06c8ab Merge branch 'master' into put-retrieved-context-in-separate-chatml-message 2024-10-23 00:13:38 -07:00
Debanjum Singh Solanky
2a50694089 Allow typing multi-line queries from a phone with Enter key
Add a newline instead of sending the message when the Enter key is hit on
mobile displays, as the shift key doesn't exist on phones and the send
button is easily clickable.

Limit using the Enter key to send messages to computers, i.e. larger
displays expected to have full fledged keyboards.
2024-10-22 21:20:22 -07:00
Debanjum Singh Solanky
a134cd835c Focus on chat input area to enter text after file uploads on web app 2024-10-22 21:19:17 -07:00
Debanjum Singh Solanky
750fbce0c2 Merge branch 'master' into improve-agent-pane-on-home-screen 2024-10-22 20:05:29 -07:00
Debanjum Singh Solanky
3be505db48 Only show type of error when image generation fails to clients
Rather than showing the raw error message from the underlying service, as it
could contain sensitive information
2024-10-22 20:03:20 -07:00
Debanjum Singh Solanky
b3fff43542 Sanitize user attached images. Constrain chat input width on home page
Set max combined image size to 20MB to allow multiple photos to be shared
2024-10-22 19:42:40 -07:00
Debanjum Singh Solanky
6c393800cc Merge branch 'master' into multi-image-chat-and-vision-for-gemini 2024-10-22 18:38:49 -07:00
Debanjum Singh Solanky
91bbd19333 Close the agent detail hover card when scroll on agent pane 2024-10-22 18:03:17 -07:00
Debanjum Singh Solanky
110c67f083 Improve agent pill, detail card styling. Handle null chatInputRef
- Remove border from agent detail hover card on home page
- Do not wrap long agent names in agent pills on home page
- Handle scenario where chatInputRef is null
2024-10-22 18:03:17 -07:00
Debanjum Singh Solanky
aca8bef024 Only use recent chat sessions for agent MRU. Handle null agent chats 2024-10-22 17:46:45 -07:00
sabaimran
0dad4212fa
Generate dynamic diagrams (via Excalidraw) (#940)
Add support for generating dynamic diagrams in flow with Excalidraw (https://github.com/excalidraw/excalidraw). This happens in three steps:
1. Default information collection & intent determination step.
2. Improving the overall guidance of the prompt for generating a JSON, Excalidraw-compatible declaration.
3. Generation of the diagram to output to the final UI.

Add support in the web UI.
2024-10-22 16:13:46 -07:00
sabaimran
1e993d561b Release Khoj version 1.26.4 2024-10-22 13:50:08 -07:00
Debanjum Singh Solanky
e8fb79a369 Rate limit the count and total size of images shared via API 2024-10-22 04:37:54 -07:00
Debanjum Singh Solanky
0847fb0102 Pass online context from chat history to chat model for response
Previously only notes context from chat history was included.
This change includes online context from chat history for the model to use
for response generation.

This can reduce the need for online lookups by reusing previous online
context for faster responses. But it will increase overall response time
when past online context is not reused, as context builds up faster per
conversation.

Unsure if inclusion of this context is preferable. If not, both notes and
online context should be removed.
2024-10-22 03:09:36 -07:00
Debanjum Singh Solanky
0c52a1169a Put context into separate user message before sending to chat model
The document and online search context are now passed as separate user
messages to the chat model, instead of being added to the final user message.

This will

- Improve the model's ability to differentiate data from the user query.
  That should improve response quality and reduce prompt injection
  probability

- Make truncation logic simpler and more robust.
  When the context window is hit, messages can simply be popped to auto
  truncate context in order of context, user, assistant message for each
  conversation turn in history until the current user query is reached.

  The complex, brittle logic to extract the user query from the context in
  the last user message isn't required anymore.

Marking the context message with the assistant role doesn't translate well
across chat models. E.g.
- Gemini can't handle consecutive messages with role = model well
- Claude will merge consecutive messages by the same role. With the current
  message ordering, the context message would get merged into the
  previous assistant response. And if the context message is moved after
  the user query, the truncation logic would have to hop and skip while
  doing deletions
- GPT seems to handle consecutive roles of any type fine

Using context role = user generalizes better across chat models for
now and aligns with previous behavior.
2024-10-22 03:09:36 -07:00
Debanjum Singh Solanky
7ac241b766 Improve format of notes, online context passed to chat models in prompt
Improve separation of note snippets and show their origin files in the notes
prompt to have more readable, contextualized text shared with the model.

Previously the references dict was being directly passed as a string.
The documents didn't look well formatted and were less intelligible.

- Passing the file path along with note snippets will help contextualize
  the notes better.
- Better formatting should help make the notes more readable for the
  chat model.
2024-10-22 03:09:36 -07:00
sabaimran
892040972f Replace user_id with server_id in telemetry 2024-10-21 20:47:52 -07:00
sabaimran
21e69b506d Release Khoj version 1.26.3 2024-10-21 08:19:05 -07:00
Debanjum Singh Solanky
9b554feb91 Show agent details card on hover on agent pill on web app home page
- Double click on agent to open edit agent card
- Focus on chat input pane when agent selected/clicked
  for quick, smooth agent switch and message flow
- Hover on agent to see agent detail card on non-mobile displays
  - Use debounce to only show when hover on card for a bit
2024-10-21 00:08:01 -07:00
Debanjum Singh Solanky
220ff1df62 Set chatInputArea forward ref from parent components for control 2024-10-21 00:02:48 -07:00
Debanjum Singh Solanky
54b92eaf73 Extract isUserSubscribed check from Agents page to make it resusable 2024-10-20 23:31:48 -07:00
Debanjum Singh Solanky
bdbe8f003e Move agent details and edit card out into reusable components on web app 2024-10-20 23:31:47 -07:00
sabaimran
59fec37943 Improve agents management, and limit agents view to private and official agents
- Default to None for the input_tools and output_modes so that they can be managed in the admin panel
- Hold off on showing off all Public Agents until we have a better experience for user profiles etc.
2024-10-20 22:24:51 -07:00
sabaimran
a979457442 Add unit tests for agents
- Add permutations of testing for with, without knowledge base. Private, public, different users.
2024-10-20 20:04:50 -07:00
sabaimran
fc70f25583 Release Khoj version 1.26.2 2024-10-20 18:03:36 -07:00
sabaimran
046de57571 Improve error handling when documents not searched with stack trace
- Stop extracting OCR content from PDFs
- Only use agent knowledge base when user not provided
2024-10-20 18:03:14 -07:00
sabaimran
2b68d61fef Release Khoj version 1.26.1 2024-10-20 16:21:51 -07:00
Debanjum Singh Solanky
5fca41cc29 Show agents sorted by mru, Select mru agent by default on web app
Have get agents API return agents ordered intelligently
- Put the default agent first
- Sort used agents by most recently chatted with agent for ease of access
- Randomly shuffle the remaining unused agents for discoverability
2024-10-20 15:21:25 -07:00
Debanjum Singh Solanky
a6bfdbdbfe Show all agents in carousel on home screen agent pane of web app
This change wraps the agent pane in a scroll area with all agents shown.
It allows selecting an agent to chat with directly from the home
screen without breaking flow and having to jump to the agents page.

The previous flow was not convenient to quickly and consistently start
chat with one of your standard agents.

This was because a random subset of agents was shown on the home page.
To start a chat with an agent not shown on home screen load, you had to
open the agents page and initiate the conversation from there.
2024-10-20 15:21:25 -07:00
Debanjum Singh Solanky
9ffd726799 Allow making sync api requests with body from khoj.el 2024-10-20 15:16:40 -07:00
Debanjum Singh Solanky
ac51920859 Start conversation with Agents from within Emacs
Exposes a transient switch with available agents as selectable options
in the Khoj chat sub-menu.

Currently shows agent slugs instead of agent names as options. This
isn't the cleanest but gets the job done for now.

Only new conversations with a different agent can be started. Existing
conversations will continue with the original agent they were created with.
The ability to switch a conversation's agent doesn't exist on the
server yet.
2024-10-20 15:16:40 -07:00
Debanjum Singh Solanky
7646ac6779 Style user attached images as carousel on chat input area of web app 2024-10-20 00:40:08 -07:00
sabaimran
5d5bea6a5f Ensure images are reset after messages processed 2024-10-19 22:02:06 -07:00
sabaimran
1ad6e1749f Move window redirect to after relevant data is dropped in localStorage on the home page
One limitation of this approach is that localStorage has a limit on how much data it can hold. Should add more graceful error handling here as well.
2024-10-19 20:36:13 -07:00
sabaimran
cb6b3ec1e9 Improve mode description given to LLM when determining how to respond.
Currently experiencing difficulty with instruction following when an image is shared. The model is more likely to try and output an image. Update to make a clearer distinction.
2024-10-19 20:35:32 -07:00
sabaimran
545259e308 Remove unused icons in chatInputArea 2024-10-19 16:54:21 -07:00
Debanjum Singh Solanky
3cc1426edf Style user attached images with fixed height, in a single row on web app 2024-10-19 16:48:36 -07:00
Debanjum Singh Solanky
58a331227d Display the attached images inside the chat input area on the web app
- Put the attached images display div inside the same parent div as
  the text area
- Keep the attachment, microphone/send message buttons aligned with
  the text area. So the attached images just show up at the top of the
  text area but everything else stays at the same horizontal height as
  before.

- This improves the UX by
  - Ensuring that the attached images do not obscure the agents pane
    above the chat input area
  - The attached images visually look like they are inside the actual
    input area, rather than floating above it. So the visual aligns
    with the semantics
2024-10-19 16:29:45 -07:00
Debanjum Singh Solanky
3e39fac455 Add vision support for Gemini models in Khoj 2024-10-19 15:47:03 -07:00
Debanjum Singh Solanky
0d6a54c10f Allow sharing multiple images as part of user query from the web app
Previously the web app only expected a single image to be shared by
the user as part of their query.

This change allows sharing multiple images from the web app.

Closes #921
2024-10-19 15:47:03 -07:00
Debanjum Singh Solanky
e2abc1a257 Handle multiple images shared in query to chat API
Previously Khoj could respond to a single shared image at a time.

This change updates the chat API to accept multiple images shared by
the user and send them to the appropriate chat actors, including the
OpenAI response generation chat actor, for getting an image aware
response
2024-10-19 14:53:33 -07:00
Debanjum Singh Solanky
d55cba8627 Pass user query for chat response when document lookup fails
Recent changes made Khoj try to respond even when document lookup fails.
That change missed handling the downstream effects of a failed document
lookup, as the defiltered_query was null and so the text response
didn't have the user query to respond to.

This code initializes defiltered_query to the original user query to
handle that.

Also, response_type wasn't being passed via
send_message_to_model_wrapper_sync, unlike in the async scenario
2024-10-19 14:32:19 -07:00
Debanjum Singh Solanky
a4e6e1d5e8 Share webp images from web, desktop, obsidian app to chat with 2024-10-19 14:32:17 -07:00
sabaimran
dbd9a945b0 Re-evaluate agent private/public filtering after authenticated data is retrieved. Update selectedAgent check logic to reflect this. 2024-10-18 09:31:56 -07:00
Debanjum Singh Solanky
35015e720e Release Khoj version 1.26.0 2024-10-17 18:25:53 -07:00
Debanjum Singh Solanky
f0dcfe4777 Explicitly ask Gemini models to format their response with markdown
Otherwise it can get confused by the format of the passed context (e.g
respond in org-mode if context contains org-mode notes)
2024-10-17 18:12:47 -07:00
Debanjum Singh Solanky
2c20f49bc5 Return enabled scrapers as WebScraper objects for more ergonomic code 2024-10-17 17:44:09 -07:00
Debanjum Singh Solanky
0db52786ed Make web scraper priority configurable via admin panel
- Simplifies changing the order in which web scrapers are invoked to read
  a web page by just changing their priority number on the admin panel.
  Previously you'd have to delete and re-add the scrapers to change
  their priority.

- Add help text for each scraper field to ease admin setup experience

- Friendlier env var to use Firecrawl's LLM to extract content

- Remove use of separate friendly name for scraper types.
  Reuse actual name and just make actual name better
2024-10-17 17:42:42 -07:00
Debanjum Singh Solanky
20b6f0c2f4 Access internal links directly via a simple get request
The other webpage scrapers will not work for internal webpages. Try
accessing those urls directly if they are visible to the Khoj server over
the network.

Only enable this by default for self-hosted, single user setups.
Otherwise the ability to scan the internal network would be a liability!

For use-cases where it makes sense, the Khoj server admin can
explicitly add the direct webpage scraper via the admin panel
2024-10-17 17:40:49 -07:00
Debanjum Singh Solanky
d94abba2dc Fallback through enabled scrapers to reduce web page read failures
- Set up scrapers via API keys, explicitly adding them via admin panel
  or enabling only a single scraper to use via server chat settings.

- Use validation to ensure only valid scrapers are added via the admin panel.
  For example, an API key is present for scrapers that require it, etc.

- Modularize the read webpage functions to take api key, url as args
  Removes dependence on constants loaded in online_search. Functions
  are now mostly self contained

- Improve ability to read webpages by using the speed, success rate of
  different scrapers. Optimal configuration needs to be discovered
2024-10-17 17:40:49 -07:00
Debanjum Singh Solanky
11c64791aa Allow changing perf timer log level. Info log time for webpage read 2024-10-17 17:40:49 -07:00
Debanjum Singh Solanky
c841abe13f Change webpage scraper to use via server admin panel 2024-10-17 17:40:49 -07:00
Debanjum Singh Solanky
e47922e53a Aggregate webpage extract queries to run once for each distinct webpage
This should reduce webpage read and response generation time.

Previously, we'd run separate webpage read and extract relevant
content pipes for each distinct (query, url) pair.

Now we aggregate all queries for each url to extract information from
and run the webpage read and extract relevant content pipes once for
each distinct url.

Even though the webpage content extraction pipes were previously being
run in parallel, they increased response time by
1. adding more context for the response generation chat actor to
   respond from
2. and by being more susceptible to the page read and extract latencies of
   the parallel jobs

The aggregated retrieval of context for all queries for a given
webpage could result in some hit to context quality. But it should
improve and reduce variability in response time, quality and costs.
2024-10-17 17:40:49 -07:00
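As a sketch of the aggregation described above (illustrative only, not the actual Khoj code), grouping queries by URL before reading could look like:

```python
from collections import defaultdict

# Group all queries targeting the same webpage so each distinct url is
# read and summarized once, instead of once per (query, url) pair.
def group_queries_by_url(query_url_pairs: list[tuple[str, str]]) -> dict[str, list[str]]:
    queries_by_url: dict[str, list[str]] = defaultdict(list)
    for query, url in query_url_pairs:
        queries_by_url[url].append(query)
    return dict(queries_by_url)
```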
Debanjum Singh Solanky
98f99fa6f8 Allow using Firecrawl to extract web page content
Set the FIRECRAWL_TO_EXTRACT environment variable to true to have
Firecrawl scrape and extract content from webpage using their LLM

This could be faster, not sure about quality as LLM used is obfuscated
2024-10-17 17:40:49 -07:00
Debanjum Singh Solanky
993fd7cd2b Support using Firecrawl to read webpages
Firecrawl is open-source, self-hostable with a default hosted service
provided, similar to Jina.ai. So it can be
1. Self-hosted as part of a private Khoj cloud deployment
2. Used directly by getting an API key from the Firecrawl.dev service

This is an alternative to Olostep and Jina.ai for reading webpages.
2024-10-17 17:40:49 -07:00
Debanjum Singh Solanky
731ea3779e Return data sources to use if exception in data source chat actor
Previously no value was returned if an exception got triggered when
collecting information sources to search.
2024-10-17 17:40:49 -07:00
Debanjum Singh Solanky
a932564169 Try respond even if web search, webpage read fails during chat
Khoj shouldn't refuse to respond to the user if web lookups fail.
It should transparently mention that online search etc. failed,
but try to respond as best as it can without those references.

This change ensures a response to the user's query is attempted even
when web info retrieval fails.
2024-10-17 17:40:49 -07:00
Debanjum Singh Solanky
1b04b801c6 Try respond even if document search via inference endpoint fails
The huggingface endpoint can be flaky. Khoj shouldn't refuse to
respond to the user if document search fails.
It should transparently mention that document lookup failed,
but try to respond as best as it can without the document references.

This change provides graceful failover when inference endpoint
requests fail either when encoding the query or reranking retrieved docs
2024-10-17 17:40:49 -07:00
Debanjum Singh Solanky
9affeb9e85 Fix to log the client app calling the chat API
- Remove unused subscribed variable from the chat API
- Client app logging was unexpectedly dropped when the chat API was
  migrated to do advanced streaming in July
2024-10-17 15:24:43 -07:00
Debanjum Singh Solanky
c6c48cfc18 Fix arg to generate_summary_from_file and type of this_iteration 2024-10-17 13:38:48 -07:00
Debanjum Singh Solanky
884fe42602 Allow automation as an output mode supported by custom agents 2024-10-17 11:58:52 -07:00
Debanjum Singh Solanky
c5e19b37ef Use Khoj icons. Add automation & improve agent text on web login page 2024-10-17 11:58:52 -07:00
Debanjum Singh Solanky
42acc324dc Handle correctly setting file filters as array when API call fails
- Only set addedFiles to selectedFiles when selectedFiles is an array
- Only set selectedFiles, addedFiles to the API response json when the
  response succeeded. Previously we set them to the response json
  on errors as well. This made the variables into json objects instead
  of arrays on API call failure
- Check if selectedFiles, addedFiles are arrays before running
  operations on them. Previously the addedFiles.includes was where the
  code would fail
2024-10-17 11:58:52 -07:00
sabaimran
07ab8ab931 Update handling of gemini response with new API changes. Per documentation:
finish_reason (google.ai.generativelanguage_v1beta.types.Candidate.FinishReason):
            Optional. Output only. The reason why the
            model stopped generating tokens.
            If empty, the model has not stopped generating
            the tokens.
2024-10-17 09:00:01 -07:00
Debanjum Singh Solanky
19c65fb82b Show user uuid field in django admin panel 2024-10-15 17:59:12 -07:00
Debanjum Singh Solanky
6c5b362551 Remove deprecated GET chat API endpoint 2024-10-15 15:13:09 -07:00
Debanjum Singh Solanky
931c56182e Fix default chat model to use user model if no server chat model set
- Advanced chat model should also fall back to user chat model if set
- Get conversation config should fall back to user chat model if set

These assume no server chat model settings are configured
2024-10-15 15:13:09 -07:00
Debanjum Singh Solanky
feb6d65ef8 Merge branch 'master' into features/advanced-reasoning 2024-10-15 09:37:56 -07:00
Debanjum Singh Solanky
336c6c3689 Show tool to use decision for next iteration in train of thought 2024-10-15 01:12:18 -07:00
Debanjum Singh Solanky
81fb65fa0a Return data sources to use if exception in data source chat actor
Previously no value was returned if an exception got triggered when
collecting information sources to search.
2024-10-14 18:20:20 -07:00
Debanjum Singh Solanky
3c93f07b3f Try respond even if web search, webpage read fails during chat
Khoj shouldn't refuse to respond to the user if web lookups fail.
It should transparently mention that online search etc. failed,
but try to respond as best as it can without those references.

This change ensures a response to the user's query is attempted even
when web info retrieval fails.
2024-10-14 18:13:26 -07:00
Debanjum Singh Solanky
07ab7ebf07 Try respond even if document search via inference endpoint fails
The huggingface endpoint can be flaky. Khoj shouldn't refuse to
respond to the user if document search fails.
It should transparently mention that document lookup failed,
but try to respond as best as it can without the document references.

This change provides graceful failover when inference endpoint
requests fail either when encoding the query or reranking retrieved docs
2024-10-14 18:13:26 -07:00
Debanjum Singh Solanky
d6206aa80c Remove deprecated GET chat API endpoint 2024-10-14 18:13:26 -07:00
Debanjum Singh Solanky
263eee4351 Fix default chat model to use user model if no server chat model set
- Advanced chat model should also fall back to user chat model if set
- Get conversation config should fall back to user chat model if set

These assume no server chat model settings are configured
2024-10-14 18:13:26 -07:00
sabaimran
81aa1b5589 Update some edge cases and usability of create agent flow
- Use the slug to determine which agent to PATCH
- Make the agent creation form multi-step to streamline the process
2024-10-14 14:07:31 -07:00
Debanjum Singh Solanky
abcd11cfc0 Merge branch 'master' into features/advanced-reasoning 2024-10-13 03:06:23 -07:00
Debanjum Singh Solanky
9356e66b94 Fix default chat model to use user model if no server chat model set
- Advanced chat model should also fall back to user chat model if set
- Get conversation config should fall back to user chat model if set

These assume no server chat model settings are configured
2024-10-13 03:02:29 -07:00
Debanjum Singh Solanky
9314f0a398 Fix default chat configs to use user model if no server chat model set
Post merge cleanup in advanced reasoning to fall back to the user chat
model if no server chat model is defined for advanced and default
2024-10-13 02:59:10 -07:00
Debanjum Singh Solanky
a2200466b7 Merge branch 'master' into features/advanced-reasoning 2024-10-12 21:01:22 -07:00
Debanjum
c66c571396
Simplify switching chat model when self-hosting (#934)
# Overview
- Default to use user chat models for train of thought when no server chat settings created by admins
- Default to not create server chat settings on first run

# Details
This change simplifies switching chat models for self-hosted setups 
by just changing the chat model on the user settings page.

It falls back to use the user chat model for train of thought 
if server chat settings have not been created on the admin panel.

Server chat settings, when set, controls the chat model used 
for Khoj's train of thought and the default user chat model.

Previously a self-hosted user had to update
1. the server chat settings in the admin panel and
2. their own user chat model in the user settings panel

to completely switch to a different chat model 
for both train of thought & response generation respectively

You can still set server chat settings via the admin panel 
to use a different chat model for train of thought vs response generation. 
But this is only useful for advanced, multi-user setups.
2024-10-12 19:58:05 -07:00
Debanjum Singh Solanky
90888a1099 Log when new user created via magic link or whatsapp as well 2024-10-12 19:56:01 -07:00
Debanjum Singh Solanky
8222c6629d Remove unused subscribed argument to read_webpage function 2024-10-12 10:45:39 -07:00
Debanjum Singh Solanky
9daaae0fdb Render inline any image files output by code in message
Update regex to also include any links to code generated images that
aren't explicitly meant to be displayed inline. This allows folks to
download the image (unlike the fake, non-working link created by the
model)
2024-10-12 10:34:57 -07:00
Debanjum Singh Solanky
20d495c43a Update the iterative chat director prompt to generalize across chat models
These prompts work across o1 and standard OpenAI models. They work with
Anthropic and Google models as well
2024-10-12 10:34:57 -07:00
sabaimran
eb4d598d0f Eliminate the drawer component from the Agents view 2024-10-10 20:40:59 -07:00
sabaimran
0a1c3e4f41 Release Khoj version 1.25.0 2024-10-10 18:07:30 -07:00
sabaimran
01a58b71a5 Skip image, code generation if in research mode 2024-10-10 18:06:29 -07:00
Debanjum Singh Solanky
1b13d069f5 Pass data collected from various sources to code tool in normal flow too 2024-10-10 05:19:27 -07:00
Debanjum Singh Solanky
f462d34547 Render images files output by code interpreter in message on web app 2024-10-10 05:17:53 -07:00
Debanjum Singh Solanky
564491e164 Extract date filters quoted with non-ascii quotes in query 2024-10-10 04:45:00 -07:00
Debanjum Singh Solanky
6a8fd9bf33 Reorder embeddings search arguments based on argument importance 2024-10-10 04:45:00 -07:00
Debanjum Singh Solanky
0eacc0b2b0 Use consistent name for user, planner to not miss current user query
Previously Khoj would start answering the previous query. This may be
because the prompt uses "User" for prompts in chat history but was using
"Q" for the current user prompt.
2024-10-10 04:45:00 -07:00
Debanjum Singh Solanky
284c8c331b Increase default max iterations for research chat director to 5 2024-10-10 04:45:00 -07:00
Debanjum Singh Solanky
1e390325d2 Let research chat director decide which webpage to read, if any
Make the number of webpages to read automatically on search_online
configurable via an argument.

Set it to default to 1, so other callers of the function
are unaffected.

But iterative chat director can still decide which, if
any, webpages to read based on the online search it performs
2024-10-10 04:45:00 -07:00
Debanjum Singh Solanky
5a699a52d2 Improve webpage summarization prompt to better extract links, excerpts
This change allows the iterative director to dive deeper into its
research, as the data extracted contains relevant links from the webpage.

The previous summarization prompt didn't extract relevant links from the
webpage, which limited further exploration from webpages
2024-10-10 04:45:00 -07:00
Debanjum Singh Solanky
61df1d5db8 Pass previous iteration results to code interpreter chat actors
This improves the code interpreter chat actor's ability to generate
code with data collected during the previous iterations
2024-10-10 04:45:00 -07:00
Debanjum Singh Solanky
9e7025b330 Set python interpreter sandbox url via environment variable 2024-10-10 04:45:00 -07:00
Debanjum Singh Solanky
2dc5804571 Extract defilter query into conversation utils for reuse 2024-10-10 04:45:00 -07:00
sabaimran
e69a8382f2 Add a code icon for code-related train of thought 2024-10-09 23:56:57 -07:00
sabaimran
536422a40c Include code snippets in the reference panel 2024-10-09 23:54:11 -07:00
Debanjum Singh Solanky
8d33c764b7 Allow iterative chat director to use python interpreter as a tool 2024-10-09 23:38:20 -07:00
Debanjum Singh Solanky
b373073f47 Show executed code in web app chat message references 2024-10-09 22:13:18 -07:00
Debanjum Singh Solanky
a98f97ed5e Refactor Run Code tool into separate module and modularize code functions
Move construct_chat_history and ChatEvent enum into conversation.utils
and move send_message_to_model_wrapper to conversation.helper to
modularize code. And start thinning out the bloated routers.helper

- conversation.util components are shared functions that conversation
  child packages can use.
- conversation.helper components can't be imported by conversation
  packages but they can use these child packages

This division allows better modularity while avoiding circular
import dependencies
2024-10-09 22:13:17 -07:00
Debanjum Singh Solanky
8044733201 Give Khoj ability to run python code as a tool triggered via chat API
Create a python code executing chat actor
- The chat actor generates python code within sandbox constraints
- Run the generated python code in the cohere terrarium, pyodide
  based sandbox accessible at the sandbox url
2024-10-09 21:37:22 -07:00
Debanjum Singh Solanky
4d33239af6 Improve prompts for the iterative chat director 2024-10-09 21:23:18 -07:00
Debanjum Singh Solanky
6ad85e2275 Fix to continue showing retrieved documents in train of thought 2024-10-09 21:20:22 -07:00
sabaimran
a6f6e4f418 Fix notes references and passage of user query in the chat flow 2024-10-09 20:34:20 -07:00
Debanjum Singh Solanky
ec248efd31 Allow iterative chat director to do notes search 2024-10-09 19:04:59 -07:00
Debanjum Singh Solanky
a6905a9f0c Pass background context to iterating chat director 2024-10-09 19:04:59 -07:00
sabaimran
028b6e6379 Fix yield for scraping direct web page 2024-10-09 18:14:08 -07:00
sabaimran
717d9da8d8 Handle when summarize result is not present, rename variable in for loop from query 2024-10-09 17:57:08 -07:00
sabaimran
03544efde2 Ignore typing of the result dict for online, web page scrape 2024-10-09 17:48:24 -07:00
sabaimran
ab81b01fcb Fix typing of direct_web_pages and remove the deprecated chat API 2024-10-09 17:46:28 -07:00
sabaimran
5b8d663cf1 Add intermediate summarization of results when planning with o1 2024-10-09 17:40:56 -07:00
sabaimran
7b288a1179 Clean up the function planning prompt a little bit 2024-10-09 16:59:20 -07:00
sabaimran
f71e4969d3 Skip summarize while it's broken, and snip some other parts of the workflow while under construction 2024-10-09 16:40:06 -07:00
sabaimran
f7e6f99a32 add typing for extract document references 2024-10-09 16:05:34 -07:00
sabaimran
6960fb097c update types of prev iterations response 2024-10-09 16:04:39 -07:00
sabaimran
4978360852 Fix type of previous_iterations 2024-10-09 16:02:41 -07:00
sabaimran
46ef205a75 Add additional type annotations for compiled_references et al 2024-10-09 16:01:52 -07:00
sabaimran
4fbaef10e9 Correct usage of the summarize function 2024-10-09 15:58:05 -07:00
sabaimran
c91678078d Correct the usage of query passed to summarize function 2024-10-09 15:55:55 -07:00
sabaimran
f867d5ed72 Working prototype of meta-level chain of reasoning and execution
- Create a more dynamic reasoning agent that can evaluate information and understand what it doesn't know, making moves to get that information
- Lots of hacks and code that needs to be reversed later on before submission
2024-10-09 15:54:25 -07:00
Debanjum Singh Solanky
05fb0f14d3 Use user chat models for train of thought when no server chat settings
Update chat actors to use the user's chat model for train of thought. This
requires passing the user info as an argument to all the chat actors.

Whether the user is subscribed or not can be inferred from the user
info being passed, so it doesn't need to be passed as a separate
argument to chat actor functions

Let send_message_to_model function infer chat model instead of passing
it as an argument from some chat actors. Better if this logic can be
done in a single place.
2024-10-09 00:07:08 -07:00
Debanjum Singh Solanky
ec0c79217f Do not set server chat settings on first run
Server chat settings can be set for advanced self-hosted or multi-user
cloud setups. They are not necessary anymore as we fall back to using the
user's chat model for train of thought now
2024-10-09 00:07:08 -07:00
Debanjum Singh Solanky
a9009ea774 Default to use user chat model if server chat settings not defined
Fall back to using the user chat model for train of thought if server chat
settings are not defined.

This simplifies switching chat models for single-user, self-hosted
setups by just changing the chat model on the user settings page.

Server chat settings, when set, controls the default user chat model
and the chat model that is used for Khoj's train of thought.

Previously a self-hosted user had to update both the server chat
settings in the admin panel and their own user chat model in the user
settings panel to explicitly switch to a different chat model (i.e to
switch to a new model for both train of thought & response generation)

You can still set server chat settings to use a different chat
model for train of thought and response generation. But this is only
necessary for advanced self-hosted or cloud hosted setups of Khoj.
2024-10-09 00:07:08 -07:00
Debanjum Singh Solanky
9a056383e0 Reduce size of start chat and edit buttons on agent card in web app 2024-10-09 00:00:32 -07:00
Debanjum Singh Solanky
dc7f22f76c Mention no. of docs in agents knowledge base in its badge hover text 2024-10-08 23:51:00 -07:00
Debanjum Singh Solanky
13fb22f7e7 Update agent form data shown in edit card after save operation on web app
Previously you had to refresh the page to see the updated data on
reopening the agents edit card after a save operation.

Now you see the latest saved agent data on reopening the agents edit
card. This should avoid confusion on whether the data was saved
correctly
2024-10-08 23:26:04 -07:00
Debanjum Singh Solanky
dd770cf1b9 Start chat with public and protected agents when shared via link 2024-10-08 22:10:07 -07:00
Debanjum Singh Solanky
80212c50fd Use default agent in others chats with an agent if agent made private
If a public or protected agent is made private, other users who were
having a conversation with that agent will have to carry on their
conversation using the default agent instead
2024-10-08 22:08:38 -07:00
Debanjum Singh Solanky
d628f89ce9 Prefetch agents related database models 2024-10-08 21:59:15 -07:00
Debanjum Singh Solanky
8de67c5d4d Fallback to use general command if no tool selected by agent 2024-10-08 19:48:02 -07:00
Debanjum Singh Solanky
b80c4bcfdd Improve agent command descriptions 2024-10-08 19:47:51 -07:00
Debanjum Singh Solanky
67d0e59eac Pass chat history to the summarize chat actor 2024-10-08 18:44:52 -07:00
Debanjum Singh Solanky
7e3090060b Encourage Gemini to output more verbose responses 2024-10-08 18:41:43 -07:00
Debanjum Singh Solanky
bbbdba3093 Time embedding model load for better visibility into app startup time
Loading the embeddings model, even locally, seems to be taking much
longer. Use a timer to get visibility into embedding and cross-encoder
model load times
2024-10-08 18:41:43 -07:00
Debanjum Singh Solanky
516472a8d5 Switch default tokenizer to tiktoken as more widely used
The tiktoken BPE based tokenizers seem more widely used these days.

Fall back to the gpt-4o tiktoken tokenizer to count tokens for context
stuffing
2024-10-08 18:41:43 -07:00
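For reference, counting tokens with the gpt-4o tiktoken tokenizer could look like the sketch below (assuming a tiktoken version that knows the gpt-4o model name; not necessarily how Khoj wires it up):

```python
import tiktoken

# Load the BPE encoding registered for gpt-4o and use it to count tokens,
# e.g. to decide how much context fits before truncation kicks in.
encoding = tiktoken.encoding_for_model("gpt-4o")

def count_tokens(text: str) -> int:
    return len(encoding.encode(text))
```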
Debanjum Singh Solanky
2b8f7f3efb Reuse a single func to format conversation for Gemini
This deduplicates code and prevents logic from deviating across gemini
chat actors
2024-10-08 18:41:42 -07:00
Debanjum Singh Solanky
452e360175 Do not use max prompt size to limit Gemini max output tokens
We should start disambiguating the max input from the max output size. Max
prompt size should only be used for the max input context to an LLM.

If required, max_output_tokens should be set as a separate new field
2024-10-08 15:30:08 -07:00
Debanjum Singh Solanky
bdc36fec5d Remove unnecessary whitespace indent from personality context 2024-10-08 15:30:08 -07:00
sabaimran
3daa3c003d When tool selection is not done successfully with an agent, return all agent tools as options 2024-10-08 15:03:58 -07:00
sabaimran
ad716ca58d Delete associated entries with an agent when it is deleted 2024-10-08 15:00:21 -07:00
sabaimran
f7fc6dbdc8 Limit agent creation and modification to subscribed users 2024-10-08 14:59:57 -07:00
sabaimran
c7638a783e Dynamically update added files when uploading in agent creation 2024-10-07 21:54:11 -07:00
sabaimran
e10a0571ff Only check the prompt safety if the agent is not private 2024-10-07 21:42:14 -07:00
sabaimran
f700d5bddb Add summarization capability with agent knowledge base 2024-10-07 21:20:23 -07:00
sabaimran
df3dc33e96 Show reference icon and domain side by side 2024-10-07 20:28:48 -07:00
sabaimran
59e55f981f Reset agent to default when continuing with deceased agent 2024-10-07 20:28:33 -07:00
sabaimran
874776024a Handle chat history rendering when agent is deceased 2024-10-07 20:28:10 -07:00
sabaimran
f232c2b059 Allow user to chat with agent knowledge base if general mode 2024-10-07 19:55:33 -07:00
sabaimran
c00654ae58 Update default agent settings 2024-10-07 18:11:24 -07:00
sabaimran
3d0e183bea Add more log lines when encountering rate limiting 2024-10-07 14:36:12 -07:00
sabaimran
e4a8a69bc8 Add a subtle check mark when the copy button is selected 2024-10-07 09:41:03 -07:00
sabaimran
405c047c0c
Include agent personality through subtasks and support custom agents (#916)
Currently, the personality of the agent is only included in the final response that it returns to the user. Historically, this was because models were quite bad at navigating the additional context of personality, and there was a bias towards having more control over certain operations (e.g., tool selection, question extraction).

Going forward, it should be more approachable to have prompts included in the sub tasks that Khoj runs in order to respond to a given query. Make this possible in this PR. This also sets us up for agent creation becoming available soon.

Create custom agents in #928

Agents are useful insofar as you can personalize them to fulfill specific subtasks you need to accomplish. In this PR, we add support for using custom agents that can be configured with a custom system prompt (aka persona) and knowledge base (from your own indexed documents). Once created, private agents can be accessible only to the creator, and protected agents can be accessible via a direct link.

Custom tool selection for agents in #930

Expose the functionality to select which tools a given agent has access to. By default, they have all. Can limit both information sources and output modes.
Add new tools to the agent modification form
2024-10-07 00:21:55 -07:00
sabaimran
d4ffeca90a Fix notion indexing with manually set token 2024-10-05 09:13:16 -07:00
sabaimran
29a422b6bc Remove the single dollar sign delimiters from katex rendering 2024-10-04 12:24:19 -07:00
Debanjum Singh Solanky
e217cb5840 Suggest notification type automation on Automation page of web app 2024-10-03 16:36:23 -07:00
sabaimran
27c7e54695 Release Khoj version 1.24.1 2024-10-03 13:21:10 -07:00
Debanjum
4a1cb50da3
Make Online Search Location Aware (#929)
## Overview
Add user country code as context for doing online search with serper.dev API.
This should find more user relevant results from online searches by Khoj

## Details
### Major
- Default to using system clock to infer user timezone on js clients
- Infer country from timezone when only timezone received by chat API
- Localize online search results to user country when location available

### Minor
- Add `__str__` func to `LocationData` class to deduplicate location string generation
2024-10-03 12:33:47 -07:00
sabaimran
cb4052e333 Bump up rate limit for subscribed users and add an option to create new conversation in the POST request 2024-10-03 12:31:58 -07:00
sabaimran
7a5cd06162
Improve the login page (#931)
* Init version of improved login page
* Use split screen view, add a gradient
2024-10-02 14:26:46 -07:00
Debanjum Singh Solanky
852662f946 Use requestAnimationFrame for synced scroll on chat in web app
Make all the scroll actions just use requestAnimationFrame instead of
setTimeout. It aligns better with the browser rendering loop, so it's better
for UX changes than setTimeout
2024-09-30 23:21:10 -07:00
sabaimran
57b4f844b7 Fail app start if initialization fails 2024-09-30 17:30:06 -07:00
Debanjum Singh Solanky
04aef362e2 Default to using system clock to infer user timezone on js clients
Using the system clock to infer the user timezone on clients makes Khoj
more robust at providing location aware responses.

Previously only ip based location was used to infer timezone via API.
This didn't provide any decent fallback when calls to ipapi failed or
Khoj was being run in offline mode
2024-09-30 07:08:12 -07:00
Debanjum Singh Solanky
344f3c60ba Infer country from timezone when only tz received by chat API
Timezone is easier to infer using clients system clock. This can be
used to infer user country name, country code, even if ip based
location cannot be inferred.

This makes using location data to contextualize Khoj's responses more
robust. For example, online search results are retrieved for user's
country, even if call to ipapi.co for ip based location fails
2024-09-30 07:08:11 -07:00
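An illustrative sketch of deriving a country code from a timezone name (using pytz's country to timezones mapping; not necessarily Khoj's actual implementation):

```python
import pytz

# Reverse lookup: find the ISO country code whose timezone list
# contains the given IANA timezone name.
def country_code_from_timezone(tz_name: str) -> str | None:
    for country_code, timezones in pytz.country_timezones.items():
        if tz_name in timezones:
            return country_code
    return None

# e.g. country_code_from_timezone("Asia/Kolkata") -> "IN"
```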
Debanjum Singh Solanky
1fed842fcc Localize online search results to user country when location available
Get country code to server chat api from i.p location check on clients.
Use country code to get country specific online search results via Serper.dev API
2024-09-30 07:08:11 -07:00
Debanjum Singh Solanky
eb86f6fc42 Add __str__ func to LocationData class to dedupe location string gen
Previously the location string from location data was being generated
wherever it was being used.

By adding a __str__ representation to LocationData class, we can
dedupe and simplify the code to get the location string
2024-09-30 07:08:11 -07:00
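A minimal sketch of the __str__ approach described above (the field names are assumptions, not necessarily the actual LocationData definition):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LocationData:
    city: Optional[str] = None
    region: Optional[str] = None
    country: Optional[str] = None

    def __str__(self) -> str:
        # Build a "city, region, country" string from whichever parts are set
        return ", ".join(part for part in (self.city, self.region, self.country) if part)
```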
sabaimran
1dfc89e79f Store conversation ID for new conversations as a string, not UUID 2024-09-29 18:07:08 -07:00
sabaimran
d92a349292 Improve image generation tool description 2024-09-29 16:20:25 -07:00
Debanjum Singh Solanky
dd44933515 Release Khoj version 1.24.0 2024-09-29 04:56:11 -07:00
Debanjum Singh Solanky
e767b6eba3 Update Documentation with flags to enable GPU on Khoj pip install
- Use tabs for the GPU/CPU type Khoj is being installed on
- Update the CMAKE flags used to install Khoj with correct GPU support
  Previous flags used DLLAMA; this has been updated to use DGGML now
  in llama.cpp
2024-09-29 04:06:35 -07:00
Debanjum Singh Solanky
936bc64b82 Render images to take full width of chat message div
Remove the unnecessary "Inferred Query" heading prefix from the image generation prompt
used by Khoj. The inferred query in the chat message has a heading of its
own, so avoid two headings for the image prompt
2024-09-28 23:45:56 -07:00
Debanjum Singh Solanky
4efa7d4464 Upgrade the Next.js web app package dependency 2024-09-28 23:45:56 -07:00
Debanjum Singh Solanky
b3cb417796 Fix spelling of Manage Context in Side Panel of Web App 2024-09-28 23:45:56 -07:00
sabaimran
676ff5fa69 Fix setting title on new conversations, add the action menu 2024-09-28 23:43:27 -07:00
Shantanu Sakpal
65d5e03f7f
Reduce tooltip popup delay duration for Create Agent button on Web app (#926)
The problem was that the tooltip was visible on hover, but it was slow, so before the tooltip popped up, the user would click on the button and this stopped the tooltip from popping up.

So I reduced the popup delay to 10ms. Now as soon as the user hovers over the button, they will see that it's a feature coming soon!
2024-09-28 23:01:40 -07:00
Shantanu Sakpal
be8de1a1bd
Only Auto Scroll when at Page Bottom and Add Button to Scroll to Page Bottom on Web App (#923)
Improve Scrolling on Chat page of Web app

- Details
  1. Only auto scroll Khoj's streamed response when scroll is near bottom of page
      Allows scrolling to other messages in conversation while Khoj is formulating and streaming its response
  2. Add button to scroll to bottom of the chat page
  3. Scroll to most recent conversation turn on conversation first load
      It's a better default to anchor to most recent conversation turn (i.e most recent user message)
  4. Smooth scroll when Khoj's chat response is streamed
      Previously the scroll would jitter during response streaming
  5. Anchor scroll position when fetch and render older messages in conversation
      Allow users to keep their scroll position when older messages are fetched from server and rendered

Resolves #758
2024-09-28 22:54:34 -07:00
sabaimran
06777e1660
Convert the default conversation id to a uuid, plus other fixes (#918)
* Update the conversation_id primary key field to be a uuid

- update associated API endpoints
- this is to improve the overall application health, by obfuscating some information about the internal database
- conversation_id type is now implicitly a string, rather than an int
- ensure automations are also migrated in place, such that the conversation_ids they're pointing to are now mapped to the new IDs

* Update client-side API calls to correctly query with a string field

* Allow modifying of conversation properties from the chat title

* Improve drag and drop file experience for chat input area

* Use a phosphor icon for the copy to clipboard experience for code snippets

* Update conversation_id parameter to be a str type

* If django_apscheduler is not in the environment, skip the migration script

* Fix create automation flow by storing conversation id as string

The new UUID used for conversation id can't be directly serialized.
Convert to string for serializing it for later execution

---------

Co-authored-by: Debanjum Singh Solanky <debanjum@gmail.com>
2024-09-24 14:12:50 -07:00
Debanjum Singh Solanky
0c936cecc0 Release Khoj version 1.23.3 2024-09-24 12:44:09 -07:00
Debanjum Singh Solanky
61c6e742d5 Truncate chat context to max tokens for offline, openai chat actors too 2024-09-24 12:42:32 -07:00
sabaimran
e306e6ca94 Fix file paths used for pypi wheel building 2024-09-22 12:42:08 -07:00
Debanjum Singh Solanky
2033f5168e Modularize chat models initialization with a reusable function
The chat model initialization interaction flow is fairly similar across
the chat model providers.

This should simplify adding new chat model providers and reduce
chances of bugs in the interactive chat model initialization flow.
2024-09-21 14:06:40 -07:00
Debanjum Singh Solanky
91c76d4152 Intelligently initialize a decent default set of chat model options
Given the LLM landscape is rapidly changing, providing a good default
set of options should help reduce decision fatigue to get started

Improve initialization flow during first run
- Set Google, Anthropic Chat models too
  Previously only Offline, Openai chat models could be set during init

- Add multiple chat models for each LLM provider
  Interactively set a comma separated list of models for each provider

- Auto add default chat models for each provider in non-interactive
  mode if the {OPENAI,GEMINI,ANTHROPIC}_API_KEY env var is set

- Do not ask for max_tokens, tokenizer for offline models during
  initialization. Use better defaults inferred in code instead

- Explicitly set default chat model to use
  If unset, it implicitly defaults to using the first chat model.
  Make it explicit to reduce this confusion

Resolves #882
2024-09-19 20:32:08 -07:00
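A minimal sketch of the non-interactive default selection described above; the provider-to-model mapping below is an illustrative assumption, not Khoj's actual configuration, only the env var names come from the commit:

```python
import os

# Illustrative defaults; the actual model lists live in Khoj's init logic.
DEFAULT_CHAT_MODELS = {
    "openai": ["gpt-4o-mini", "gpt-4o"],
    "google": ["gemini-1.5-flash", "gemini-1.5-pro"],
    "anthropic": ["claude-3-5-sonnet-20240620", "claude-3-haiku-20240307"],
}

API_KEY_ENV_VAR = {
    "openai": "OPENAI_API_KEY",
    "google": "GEMINI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
}


def default_models_for_configured_providers() -> dict:
    """Pick default chat models for every provider whose API key env var is set."""
    return {
        provider: models
        for provider, models in DEFAULT_CHAT_MODELS.items()
        if os.getenv(API_KEY_ENV_VAR[provider])
    }
```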
Debanjum Singh Solanky
f177723711 Add default server configuration on first run in non-interactive mode
This should configure Khoj with decent default configurations via
Docker and avoid needing to configure Khoj via admin page to start
using dockerized Khoj

Update default max prompt size set during Khoj initialization
as online chat models are cheaper and offline chat models have larger
context now
2024-09-19 15:12:55 -07:00
Debanjum Singh Solanky
020167c7cf Set default openai text to image model correctly during initialization
The speech to text model was previously being set to the text to image
model!
2024-09-19 15:11:34 -07:00
Debanjum Singh Solanky
077b88bafa Make RapidOCR dependency optional as flaky requirements
RapidOCR depends on OpenCV which by default requires a bunch of GUI
parameters. This system package dependency set (like libgl1) is flaky

Making the RapidOCR dependency optional should allow khoj to be more
resilient to setup/dependency failures

Trade-off is that OCR for documents may not always be available and
it'll require looking at server logs to find out when this happens
2024-09-19 15:10:31 -07:00
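A sketch of the optional-dependency pattern this commit describes, assuming the `rapidocr_onnxruntime` package provides the OCR engine (the helper function is illustrative):

```python
import logging

logger = logging.getLogger(__name__)

try:
    from rapidocr_onnxruntime import RapidOCR

    ocr = RapidOCR()
except ImportError:
    # OCR stays disabled if RapidOCR (or its OpenCV system deps) can't be loaded;
    # the server keeps running and only logs the degradation.
    ocr = None
    logger.warning("RapidOCR unavailable. OCR for scanned documents is disabled.")


def extract_text_from_image(image_bytes: bytes) -> str:
    if ocr is None:
        return ""
    result, _ = ocr(image_bytes)
    return " ".join(line[1] for line in result) if result else ""
```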
sabaimran
0a568244fd Revert "Convert conversationId int to string before making api request to bulk update file filters"
This reverts commit c9665fb20b.

Revert "Fix handling for new conversation in agents page"

This reverts commit 3466f04992.

Revert "Add a unique_id field for identifiying conversations (#914)"

This reverts commit ece2ec2d90.
2024-09-18 20:36:57 -07:00
Debanjum Singh Solanky
bb2bd77a64 Send chat message to Khoj web app via url query param
- This allows triggering Khoj chat from the browser address bar
- So now if you add Khoj to your browser bookmark with
  - URL: https://app.khoj.dev/?q=%s
  - Keyword: khoj

- Then you can type "khoj what is the news today" to trigger Khoj to
  quickly respond to your query. This avoids having to open the Khoj web
  app before asking your question
2024-09-17 21:50:47 -07:00
Debanjum Singh Solanky
ecdbcd815e Simplify code to remove json codeblock from AI response string 2024-09-17 21:50:47 -07:00
sabaimran
e457720e8a Improve the email templates and better align with new branding 2024-09-17 11:18:25 -07:00
sabaimran
c9665fb20b Convert conversationId int to string before making api request to bulk update file filters 2024-09-16 15:45:23 -07:00
sabaimran
3466f04992 Fix handling for new conversation in agents page 2024-09-16 15:04:49 -07:00
sabaimran
ece2ec2d90
Add a unique_id field for identifying conversations (#914)
* Add a unique_id field to the conversation object

- This helps us keep track of the unique identity of the conversation without exposing the internal id
- Create three staged migrations in order to first add the field, then add unique values to pre-fill, and then set the unique constraint. Without this, it tries to initialize all the existing conversations with the same ID.

* Parse and utilize the unique_id field in the query parameters of the front-end view

- Handle the unique_id field when creating a new conversation from the home page
- Parse the id field with a lightweight parameter called v in the chat page
- Share page should not be affected, as it uses the public slug

* Fix suggested card category
2024-09-16 12:19:16 -07:00
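A compressed sketch of the three-stage pattern described above; the real change uses three separate migration files, and the app label, model name, and dependency wiring here are illustrative:

```python
import uuid

from django.db import migrations, models


def backfill_unique_ids(apps, schema_editor):
    Conversation = apps.get_model("database", "Conversation")  # app label assumed
    for conversation in Conversation.objects.all():
        conversation.unique_id = uuid.uuid4()
        conversation.save(update_fields=["unique_id"])


class Migration(migrations.Migration):
    dependencies = [("database", "0001_initial")]  # illustrative dependency

    operations = [
        # Stage 1: add the field as nullable so existing rows can coexist
        migrations.AddField("conversation", "unique_id", models.UUIDField(null=True)),
        # Stage 2: pre-fill every existing row with its own unique value
        migrations.RunPython(backfill_unique_ids, migrations.RunPython.noop),
        # Stage 3: enforce the unique constraint once all rows are distinct
        migrations.AlterField(
            "conversation", "unique_id", models.UUIDField(default=uuid.uuid4, unique=True)
        ),
    ]
```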
sabaimran
e6bc7a2ba2 Fix links to log in email templates 2024-09-15 19:14:19 -07:00
Debanjum Singh Solanky
79980feb7b Release Khoj version 1.23.2 2024-09-15 03:07:26 -07:00
Debanjum Singh Solanky
575ff103cf Frame chat response error on web app in a more conversational form
Also indicate hitting dislike on the message should be enough to
convey the issue to the developers.
2024-09-15 03:00:49 -07:00
Debanjum Singh Solanky
893ae60a6a Improve handling of harmful categorized responses by Gemini
Previously Khoj would stop in the middle of response generation when
the safety filters got triggered at default thresholds. This was
confusing as it felt like a service error, not expected behavior.

Going forward Khoj will
- Only block responding to high confidence harmful content detected by
  Gemini's safety filters instead of using the default safety settings
- Show an explanatory, conversational response (w/ harm category)
  when response is terminated due to Gemini's safety filters
2024-09-15 02:17:54 -07:00
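A minimal sketch of only blocking high-confidence harmful content with the google-generativeai SDK; the model name, prompt, and fallback message are illustrative:

```python
import os

import google.generativeai as genai
from google.generativeai.types import HarmBlockThreshold, HarmCategory

genai.configure(api_key=os.environ["GEMINI_API_KEY"])

# Only block content Gemini flags with high confidence, instead of the stricter defaults
SAFETY_SETTINGS = {
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
}

model = genai.GenerativeModel("gemini-1.5-flash", safety_settings=SAFETY_SETTINGS)
response = model.generate_content("Summarize my notes on rocket stoves")

if not response.candidates:
    # Respond conversationally with the triggering harm category instead of erroring out
    print(f"I couldn't respond as my safety filter flagged this: {response.prompt_feedback.block_reason}")
else:
    print(response.text)
```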
sabaimran
ec1f87a896 Release Khoj version 1.23.1 2024-09-12 22:46:39 -07:00
sabaimran
2a4416d223 Use prefetch_related for the openai_config when retrieving all chatmodeloptions async 2024-09-12 22:45:43 -07:00
sabaimran
253ca92203 Release Khoj version 1.23.0 2024-09-12 20:25:29 -07:00
Debanjum Singh Solanky
178b78f87b Show debug log, not warning, when using default tokenizer for context stuffing 2024-09-12 20:21:01 -07:00
Debanjum Singh Solanky
75d3b34452 Extract image generation code into new image processor for modularity 2024-09-12 20:01:32 -07:00
Debanjum Singh Solanky
84051d7d89 Make the generate better image prompt chat actor add composition details 2024-09-12 19:58:57 -07:00
Debanjum Singh Solanky
ed12f45a26 Generate vivid images with DALLE-3
It's apparently the default setting in the ChatGPT app, according to the
OpenAI cookbook at https://cookbook.openai.com/articles/what_is_new_with_dalle_3#examples-and-prompts
2024-09-12 19:58:57 -07:00
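For reference, a small sketch of requesting the vivid style from DALL-E 3 with the OpenAI Python SDK (prompt and size are illustrative):

```python
from openai import OpenAI

client = OpenAI()
image = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor painting of a well-tended digital garden of notes",
    style="vivid",  # "vivid" is the livelier style; "natural" is more subdued
    size="1024x1024",
    n=1,
)
print(image.data[0].url)
```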
Debanjum Singh Solanky
1b82aea753 Support using image generation models like Flux via Replicate
Enables using any image generation model on Replicate's Predictions
API endpoints.

The server admin just needs to add text-to-image model on the
server/admin panel in organization/model_name format and input their
Replicate API key with it

Create db migration (including merge)
2024-09-12 19:58:56 -07:00
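A rough sketch of calling a text-to-image model on Replicate's Predictions API with just an organization/model_name id and an API key; the model id, prompt, and polling loop here are illustrative, not Khoj's actual integration:

```python
import os
import time

import requests

REPLICATE_API_KEY = os.environ["REPLICATE_API_KEY"]
model = "black-forest-labs/flux-schnell"  # illustrative organization/model_name

# Kick off a prediction against the model's Predictions API endpoint
prediction = requests.post(
    f"https://api.replicate.com/v1/models/{model}/predictions",
    headers={"Authorization": f"Bearer {REPLICATE_API_KEY}"},
    json={"input": {"prompt": "an oil painting of a fox reading org-mode notes"}},
).json()

# Poll until the prediction finishes, then print the generated image URL(s)
while prediction["status"] not in ("succeeded", "failed", "canceled"):
    time.sleep(1)
    prediction = requests.get(
        prediction["urls"]["get"],
        headers={"Authorization": f"Bearer {REPLICATE_API_KEY}"},
    ).json()

print(prediction.get("output"))
```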
Brian Kanya
1d512b4986
Use environment variable to set sender email of auth link emails (#907)
Set sender email using `RESEND_EMAIL` environment variable for magic link sent via Resend API for authentication. It was previously hard-coded, which prevented hosting Khoj on other domains.

Resolves #908
2024-09-12 18:48:11 -07:00
Debanjum Singh Solanky
0685a79748 Remove any markdown json codeblock in chat actors expecting json responses
Strip any json md codeblock wrapper, if it exists, before processing
response by output mode, extract questions chat actor. This is similar
to what is already being done by other chat actors

Useful for successfully interpreting json output in chat actors when
using non (json) schema enforceable models like o1 and gemma-2

Use conversation helper function to centralize the json md codeblock
removal code
2024-09-12 18:26:15 -07:00
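A minimal sketch of the kind of markdown codeblock stripping helper described above; the actual helper lives in Khoj's conversation utilities, so the regex here is an assumption:

```python
import json
import re


def clean_json_codeblock(response: str) -> str:
    """Strip a wrapping ```json ... ``` markdown fence from a model response, if present."""
    return re.sub(r"^```(?:json)?\s*|\s*```$", "", response.strip())


raw = '```json\n{"output": "image"}\n```'
print(json.loads(clean_json_codeblock(raw)))  # {'output': 'image'}
```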
Debanjum Singh Solanky
6e660d11c9 Override block display styling of links by Katex in chat messages
This happens sometimes when the LLM response contains [\[1\]] style links
as reference. Both markdown-it and katex apply styling.

Katex's span uses display: block which makes the rendering of these
references take up a whole line by themselves.

Override block styling of spans within an `a' element to prevent such
chat message styling issues
2024-09-12 18:22:46 -07:00
Debanjum Singh Solanky
272eae5d66 Add support for the newly released OpenAI O1 model series for preview
The O1 series doesn't currently seem to support streaming, stop words,
temperature or response_format.
2024-09-12 18:22:46 -07:00
Alexander Matyasko
9570933506
Support Google's Gemini model series (#902)
* Add functions to chat with Google's gemini model series
  * Gracefully close thread when there's an exception in the gemini llm thread
* Use enums for verifying the chat model option type
* Add a migration to add the gemini chat model type to the db model
* Fix chat model selection verification and math prompt tuning
* Fix extract questions method with gemini. Enforce json response in extract questions.
* Add standard stop sequence for Gemini chat response generation

---------

Co-authored-by: sabaimran <narmiabas@gmail.com>
Co-authored-by: Debanjum Singh Solanky <debanjum@gmail.com>
2024-09-12 18:17:55 -07:00
Debanjum Singh Solanky
42b727e926 Revert additional logging enabled to debug automation failures in prod
Additional logging was enabled to debug automation failures in
production after migrating the chat API to use the POST request method
(from the earlier GET).

The redirect from http to https was defaulting to GET instead of POST
when calling /api/chat on redirect. This has now been resolved
2024-09-12 17:56:54 -07:00
sabaimran
14a495cbb5 Release Khoj version 1.22.3 2024-09-12 12:39:04 -07:00
sabaimran
91cee2eaa8 Handle redirects when scheduling chats from automations 2024-09-12 11:36:47 -07:00
sabaimran
4555969d38 Add additional log lines 2024-09-12 10:50:36 -07:00
Debanjum Singh Solanky
2cc4a0769e Release Khoj version 1.22.2 2024-09-11 18:39:24 -07:00
Debanjum Singh Solanky
7f186be742 Fix json payload passed by automations to the new POST chat API 2024-09-11 18:35:31 -07:00
sabaimran
5038d15574 Route to config_page, not to deprecated notion_config_page, on notion callback API 2024-09-11 18:30:23 -07:00
Debanjum Singh Solanky
b61d825cbc Sanitize user attached image in chat message input pane of web app 2024-09-11 18:02:33 -07:00
Debanjum Singh Solanky
de60ad7da6 Update automations to call new POST chat API endpoint 2024-09-11 17:28:40 -07:00
Debanjum Singh Solanky
055ead550c Update desktop shortcut, web app factchecker to use new POST chat API 2024-09-11 17:28:32 -07:00
Debanjum Singh Solanky
3f51af9a96 Keep the GET chat API endpoint for a bit before deprecating it
This is to avoid breaking non-updated Khoj clients
2024-09-11 16:50:10 -07:00
Debanjum Singh Solanky
03befc9b12 Use consistent user attached image placeholder text for chat actors
Get information sources and get output mode don't actually see the
images. They just get placeholder text to indicate that the user
attached an image to their message for context
2024-09-11 16:16:55 -07:00
Debanjum Singh Solanky
04363a504c Prompt Whisper to know "Khoj" term for speech to text transcription 2024-09-11 16:16:55 -07:00
Debanjum Singh Solanky
3dcc8695b2 Improve vertical alignment of lists in chat messages on web app
- Make train of thought icons top aligned, next to their
  intermediate step heading
- Add margin bottom to ordered, unordered lists in chat message,
  similar to how it is already added for paragraphs
2024-09-11 16:16:55 -07:00
Debanjum Singh Solanky
179357b28a Default to gpt-4o-mini as online chat model 2024-09-11 16:16:55 -07:00
sabaimran
ae74c6ca55 Release Khoj version 1.22.1 2024-09-11 13:03:53 -07:00
sabaimran
cd5db277f3 Fix sync to async issue when getting all valid vision configs 2024-09-11 12:57:54 -07:00
sabaimran
9b12290c17 Release Khoj version 1.22.0 2024-09-11 11:21:02 -07:00
sabaimran
2932d305b0 Simplify redundant logic for constructing structured messages with image url 2024-09-10 21:09:43 -07:00
sabaimran
07e2c49a7a Set default temperature to 0.7 in the extract_questions method 2024-09-10 21:09:21 -07:00
sabaimran
8d40fc0aef Limit vision_enabled image formatting to OpenAI APIs and send vision to extract_questions query 2024-09-10 20:08:14 -07:00
Debanjum Singh Solanky
aa31d041f3 Style list html elements by default on web app to improve readability
Previously list styling was turned off for some reason in Next.js
2024-09-10 17:45:04 -07:00
Debanjum Singh Solanky
596db603e0 Pass query params to chat API in POST body instead of URL query string
Closes #899, #678
2024-09-10 13:57:03 -07:00
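A sketch of the shape such a POST chat endpoint could take with FastAPI; the body field names are illustrative, not Khoj's actual API contract:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class ChatRequestBody(BaseModel):
    # Field names are illustrative, not Khoj's actual API contract
    q: str
    conversation_id: str | None = None
    stream: bool = False


@app.post("/api/chat")
async def chat(body: ChatRequestBody):
    # Long queries and special characters now live in the request body,
    # instead of the URL query string (and server access logs)
    return {"response": f"echo: {body.q}"}
```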
Debanjum Singh Solanky
fc6345e246 Simplify setImagePath for upload from chat input area of web app 2024-09-10 09:18:54 -07:00
Raghav Tirumale
549686a7a4
Add Vision Support (#889)
# Summary of Changes
* New UI to show preview of image uploads
* ChatML message changes to support gpt-4o vision based responses on images
* AWS S3 image uploads for persistent image context in conversations
* Database changes to have `vision_enabled` option in server admin panel while configuring models
* Render previously uploaded images in the chat history, show uploaded images for pending msgs
* Pass the uploaded_image_url through to subqueries
* Allow image to render upon first message from the homepage
* Add rendering support for images to shared chat as well
* Fix some UI/functionality bugs in the share page
* Convert user attached images for chat to webp format before upload
* Use placeholder for attached image for data source, response mode actors
* Update all clients to call /api/chat as a POST instead of GET request
* Fix copying chat messages with images to clipboard

TLDR; Add vision support for openai models on Khoj via the web UI!

---------

Co-authored-by: sabaimran <narmiabas@gmail.com>
Co-authored-by: Debanjum Singh Solanky <debanjum@gmail.com>
2024-09-09 15:22:18 -07:00
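As a reference for the ChatML message changes mentioned above, attaching an uploaded image in the OpenAI vision message format looks roughly like this (the S3 URL is a placeholder):

```python
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this screenshot of my notes?"},
                {
                    "type": "image_url",
                    # Placeholder for the persistent S3 URL of the uploaded (webp) image
                    "image_url": {"url": "https://example-bucket.s3.amazonaws.com/uploads/note.webp"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```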
Debanjum Singh Solanky
b553bba1d8 Release Khoj version 1.21.6 2024-09-09 14:55:36 -07:00
sabaimran
223d310ea2 CTA in welcome email 2024-09-09 14:33:27 -07:00
Debanjum Singh Solanky
7941b12d50 Toggle speak, send buttons based on chat input text entered on Desktop 2024-09-09 10:40:53 -07:00
Debanjum Singh Solanky
b5f6550de2 Move link to source code from Nav pane to About page on Desktop app 2024-09-09 10:40:53 -07:00
Debanjum Singh Solanky
77b44f6db0 Update Desktop app dependencies 2024-09-09 10:40:53 -07:00
Debanjum Singh Solanky
303d8ed64e Update Obsidian plugin package dependencies 2024-09-09 10:40:53 -07:00
sabaimran
8e6b9afeb7 Add an automation for research paper summaries 2024-09-08 11:50:49 -07:00
Debanjum
05c169bb37
Set File Types to Sync from Obsidian via Khoj Plugin Settings Page (#904)
Limit file types to sync with Khoj from Obsidian to:
- Avoid hitting per user index-able data limits, especially for folks on the Khoj cloud free tier, e.g. by excluding images in the Obsidian vault from being synced
- Improve context used by Khoj to generate responses
2024-09-05 22:40:30 -07:00
Husain007
4e8ead66a8
Fix URL to web, desktop settings pages on Desktop application (#903)
Update web and desktop settings URLs on desktop application from previous 'config' path to new 'settings' path
2024-09-05 14:47:43 -07:00
Debanjum Singh Solanky
bc26cf8b2f Only show updated index notice on success in Obsidian plugin
Previously it'd show indexing success notice on error and success
2024-09-04 17:52:32 -07:00
Debanjum Singh Solanky
cb425a073d Use rich text error to better guide when exceed data sync limits in Obsidian
When user exceeds data sync limits, show error notice with
- Link to web app settings page to upgrade subscription
- Link to Khoj plugin settings in Obsidian to configure file types to
  sync from vault to Khoj
2024-09-04 17:52:32 -07:00
Debanjum Singh Solanky
19efc83455 Set File Types to Sync from Obsidian via Khoj Plugin Settings Page
Useful to limit file types to sync with Khoj. Avoids hitting indexed
data limits, especially for users on the Khoj cloud free tier

Closes #893
2024-09-04 16:09:56 -07:00
sabaimran
7216a06f5f Release Khoj version 1.21.5 2024-09-03 21:58:00 -07:00
sabaimran
895f1c8e9e Gracefully close thread when there's an exception in the anthropic llm thread. Include full stack traces. 2024-09-03 13:16:51 -07:00
sabaimran
17901406aa Gracefully close thread when there's an exception in the openai llm thread. Closes #894. 2024-09-03 13:16:51 -07:00
sabaimran
912cc0074a Use nonlocal for conversation_id when running the event_generator 2024-09-03 11:55:06 -07:00
sabaimran
591f5a522c Release Khoj version 1.21.4 2024-09-02 17:45:39 -07:00
sabaimran
9306a0bb2c Prefetch the settings and openai_config of a texttoimagemodelconfig 2024-09-02 17:35:21 -07:00
sabaimran
6eb06e8626 Downgrade rate limit to 200MB 2024-08-25 15:26:27 -07:00
sabaimran
439a2680fd Increase rate limits for data indexing 2024-08-25 15:09:30 -07:00
sabaimran
4b77325f63 Default to infinite distance when using the search API 2024-08-24 19:57:49 -07:00
sabaimran
e919d28f1c Add support for custom search model-specific thresholds 2024-08-24 19:28:26 -07:00
sabaimran
fa4d808a5f Encode uri components when sending automations data to the server 2024-08-24 18:45:50 -07:00
sabaimran
387b7c7887 Release Khoj version 1.21.3 2024-08-23 11:15:15 -07:00
Debanjum Singh Solanky
5927ca8032 Properly close chat stream iterator even if response generation fails
Previously the chat stream iterator wasn't closed when response streaming
for the offline chat model threw an exception.

This would require restarting the application. Now the application doesn't
hang even if the current response generation fails with an exception
2024-08-23 02:06:26 -07:00
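A sketch of the close-the-iterator-on-failure pattern this fix describes; the wrapper below is illustrative, not the actual Khoj streaming code:

```python
def stream_chat_response(response_iterator):
    """Illustrative wrapper: always release the response iterator, even on failure."""
    try:
        for chunk in response_iterator:
            yield chunk
    except Exception as e:
        # Surface the failure to the client instead of silently stalling
        yield f"Error generating response: {e}"
    finally:
        # Close the underlying generator so the next request doesn't hang on it
        close = getattr(response_iterator, "close", None)
        if callable(close):
            close()
```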
Debanjum Singh Solanky
238bc11a50 Fix, improve openai chat actor, director tests & online search prompt 2024-08-22 19:09:33 -07:00
Debanjum Singh Solanky
9986c183ea Default to gpt-4o-mini instead of gpt-3.5-turbo in tests, func args
GPT-4o-mini is cheaper, smarter and can hold more context than
GPT-3.5-turbo. In production, we also default to gpt-4o-mini, so it makes
sense to upgrade defaults and tests to work with it
2024-08-22 19:04:49 -07:00
Debanjum Singh Solanky
8a4c20d59a Enforce json response by offline models when requested by chat actors
- Background
  Llama.cpp allows enforcing response as json object similar to OpenAI
  API. Pass expected response format to offline chat models as well.

- Overview
  Enforce json output to improve intermediate step performance by
  offline chat models. This is especially helpful when working with
  smaller models like Phi-3.5-mini and Gemma-2 2B, that do not
  consistently respond with structured output, even when requested

- Details
  Enforce json response by extract questions, infer output offline
  chat actors
  - Convert prompts to output json objects when offline chat models
    extract document search questions or infer output mode
  - Make llama.cpp enforce response as json object

- Result
  - Improve all intermediate steps by offline chat actors via json
    response enforcement
  - Avoid the manual, ad-hoc and flaky output schema enforcement and
    simplify the code
2024-08-22 18:07:44 -07:00
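A minimal sketch of enforcing a json object response with llama-cpp-python, as the offline chat actors above now do; the model choice and prompt are illustrative:

```python
import json

from llama_cpp import Llama

# Model choice and prompt are illustrative
llm = Llama.from_pretrained(
    repo_id="bartowski/gemma-2-2b-it-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)
completion = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Extract document search questions for: what did I do last summer?"}],
    response_format={"type": "json_object"},  # llama.cpp constrains output to a json object
    temperature=0,
)
print(json.loads(completion["choices"][0]["message"]["content"]))
```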
Debanjum Singh Solanky
ab7fb5117c Release Khoj version 1.21.2 2024-08-20 12:38:54 -07:00
Debanjum Singh Solanky
de24ffcf0d Upgrade Axios, a desktop app dependency, to version 1.7.4 2024-08-20 12:32:36 -07:00
sabaimran
1ac8de6c3a Release Khoj version 1.21.1 2024-08-20 11:55:34 -07:00
sabaimran
f6ce2fd432 Handle end of chunk logic in openai stream processor 2024-08-20 10:50:09 -07:00
sabaimran
029775420c Release Khoj version 1.21.0 2024-08-20 10:01:56 -07:00
sabaimran
4808ce778a
Merge pull request #892 from khoj-ai/upgrade-offline-chat-models-support
Upgrade offline chat model support. Default to Llama 3.1
2024-08-20 11:51:20 -05:00
Debanjum Singh Solanky
58c8068079 Upgrade default offline chat model to llama 3.1 2024-08-20 09:28:56 -07:00
sabaimran
2d9dd81e76 Re-add authenticated decorator to api_chat.py /chat endpoint 2024-08-19 05:37:18 -05:00
sabaimran
2c5350329a Remove the hashes from titles in found relevant notes 2024-08-18 22:31:15 -05:00
Debanjum Singh Solanky
acdc3f9470 Unwrap any json in md code block, when parsing chat actor responses
This is a more robust way to extract json output requested from
gemma-2 (2B, 9B) models which tend to return json in md codeblocks.

Other models should remain unaffected by this change.

Also removed the request to not wrap json in codeblocks from prompts, as
the code now does the unwrapping automatically when present
2024-08-16 14:16:29 -05:00
Debanjum Singh Solanky
ca45fce8ac Break long links in train of thought to stay within chat page width 2024-08-16 14:16:29 -05:00
sabaimran
c0316a6b5d
Enable free tier users to have unlimited chats with the default chat model (#886)
- Allow free tier users to have unlimited chats with the default chat model. It'll only be rate-limited, at the same rate as subscribed users
- In the server chat settings, replace the concept of default/summarizer models with default/advanced chat models. Use the advanced models as a default for subscribed users.
- For each `ChatModelOption' configuration, allow the admin to specify a separate value of `max_tokens' for subscribed users. This allows server admins to configure different max token limits for unsubscribed and subscribed users
- Show error message in web app when hit rate limit or other server errors
2024-08-16 12:14:44 -07:00
Debanjum
8dad9362e7
Improve search model config display for admin (#887) from aam-at/feature/improve_search_model_config_admin
Currently, the search model config display for admins only shows the id of the search model config, which is not very informative. 

The changes enhance the admin console by displaying the name of the search model config (name), as well as the bi-encoder model (bi_encoder) and cross-encoder model (cross_encoder) alongside the id.
2024-08-16 07:33:55 -07:00
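The admin console change amounts to roughly the following Django admin sketch; the import path and registration are assumed, not copied from Khoj:

```python
from django.contrib import admin

from khoj.database.models import SearchModelConfig  # import path assumed


@admin.register(SearchModelConfig)
class SearchModelConfigAdmin(admin.ModelAdmin):
    # Show the config name and encoder models in the list view, not just the id
    list_display = ("id", "name", "bi_encoder", "cross_encoder")
```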
Debanjum
2b1482d2b4
Fix indexing content from Emacs #883 from aam-at/bugfix/fix_emacs_if
Previously `force' was passed as a query param to the single indexing API. After the recent API updates, it is meant to select the API method to use (PUT vs PATCH). Converting the `force' argument to a bool fixes implementing this new behavior
2024-08-16 07:32:46 -07:00
Debanjum
0b568e204e
Add model_config for cross-encoder model (#885) from aam-at/feature/crossencoder_model_config
Add `model_config' for the cross-encoder model, so the server admin can use models which require the `trust_remote_code' argument to run locally
2024-08-16 07:32:19 -07:00
Debanjum
39e566ba91
Improve Document, Online Search to Answer Vague or Meta Questions (#870)
- Major
  - Improve doc search actor performance on vague, random or meta questions
  - Pass user's name to document and online search actors prompts

- Minor
  - Fix and improve openai chat actor tests
  - Remove unused max tokens arg to extract qs func of doc search actor
2024-08-16 06:46:13 -07:00
Debanjum Singh Solanky
27ad9b1302 Remove unused max tokens arg to extract qs func of doc search actor 2024-08-13 12:53:39 +05:30
Debanjum Singh Solanky
f75606d7f5 Improve doc search actor performance on vague, random or meta questions
- Issue
  Previously the doc search actor wouldn't extract good search queries
  to run on user's documents for broad, vague questions.

- Fix
  The updated extract questions prompt shows and tells the doc search
  actor how to deal with such questions

  The doc search actor's temperature was also increased to support more
  creative/random questions. The previous temp of 0 was meant to
  encourage structured json output. But now with json mode, a low temp is
  not necessary to get json output
2024-08-13 12:53:39 +05:30
Debanjum Singh Solanky
3675938df6 Support passing temperature to offline chat model chat actors
- Use temperature of 0 by default for extract questions offline chat
  actor
- Use temperature of 0.2 for send_message_to_model_offline (this is
  the default temperature set by llama.cpp)
2024-08-13 12:53:00 +05:30
Shantanu Sakpal
b5bcce7f85
Cycle through chat history in chat input on Obsidian (#861)
* Add ability to cycle through the chat history in the chat input on Obsidian (similar to terminal history navigation)
* Add mod key shortcut to cycle through chat history in chat input
* Add shortcut help text in chat input placeholder

---------

Co-authored-by: Debanjum Singh Solanky <debanjum@gmail.com>
2024-08-12 23:55:25 -07:00
srikary12
05c0aa3882
Support exclusion file filters (#826)
### Overview
Support exclude file filter in user search queries

### Details
- All of the exclude file filter terms need to be satisfied
- Any one of the include file filter terms should be satisfied

### Example
- **Search Query**: *what happened yesterday? -file:"tasks.org" -file:"work.md" file:"diary.org" file:"journal.org"*
- **Behavior**: Query will try to find relevant notes in any of `journal.org` or `diary.org` and not in `tasks.org` and not in `work.md`

### Details
* Add support for exclusion file filters
* Translate file filter to valid Django DB entry filter regex
* Exclude all files when multiple exclude file filters in query

Previously we were applying an "Or" filter, which would exclude any
file mentioned in a query with multiple exclude file filters.

This is not what we naturally mean when we ask to exclude a file in a query

* Rename, rearrange, deduplicate and add file filter tests

Closes #728
---------

Co-authored-by: Debanjum Singh Solanky <debanjum@gmail.com>
2024-08-12 05:41:54 -07:00
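A sketch of the include/exclude semantics described above using Django Q objects; the field name and lookup are illustrative stand-ins for the actual regex-based entry filter:

```python
from django.db.models import Q


def build_file_filter(include_files: list[str], exclude_files: list[str]) -> Q:
    include_q = Q()
    for file in include_files:
        include_q |= Q(file_path__iexact=file)   # any one include filter may match

    exclude_q = Q()
    for file in exclude_files:
        exclude_q &= ~Q(file_path__iexact=file)  # every exclude filter must hold

    return include_q & exclude_q


# Entry.objects.filter(build_file_filter(["diary.org", "journal.org"], ["tasks.org", "work.md"]))
```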
Alexander Matyasko
2d9bf14ecb Improve search model config display for admin 2024-08-11 19:13:25 +08:00
Debanjum Singh Solanky
7815e02dd4 Release Khoj version 1.20.4 2024-08-11 16:00:13 +05:30
Debanjum Singh Solanky
d951e36945 Update khoj.el package description, it had gone stale 2024-08-11 15:52:46 +05:30
Debanjum Singh Solanky
16b31c3e35 Refresh automation data shown by edit automation card after update
Previously, the automation page had to be refreshed to see updates
to the automation in the edit automation card. This would be seen when
the user tried to edit an automation multiple times (without a page refresh)
2024-08-11 15:52:46 +05:30
Debanjum Singh Solanky
f2f37ae444 Fix creating, editing automations that start weekly on Sunday 2024-08-11 15:52:46 +05:30
Debanjum Singh Solanky
ec9add9a51 Fix automation edit cards height. Scroll when card longer than screen 2024-08-11 15:52:46 +05:30
sabaimran
d99f03e4f3 If the list of choices in a chunk is empty, continue in openai response 2024-08-11 15:30:09 +05:30
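A minimal sketch of the guard this commit adds when consuming an OpenAI streaming response (model and prompt are illustrative):

```python
from openai import OpenAI

client = OpenAI()
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello Khoj"}],
    stream=True,
)
for chunk in stream:
    if not chunk.choices:
        continue  # some chunks (e.g. trailing metadata) arrive without choices
    print(chunk.choices[0].delta.content or "", end="", flush=True)
```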
Alexander Matyasko
f16b0f628b Fix true/false evaluation in Emacs to prevent unintended forced re-indexing
Previously, the code incorrectly treated all non-nil values as true, leading to
the index being re-indexed with the force flag whenever the user selected to
update the index.
2024-08-11 17:24:11 +08:00
Alexander Matyasko
0e9e9648e6 Fix emacs if syntax 2024-08-11 17:24:11 +08:00
sabaimran
6f94a076f7 Add conversation_id parameter to the create_automation method 2024-08-11 10:45:13 +05:30
sabaimran
acb825f4f5 Bug fixes for automations
- Pass the new conversation id as kwarg for the scheduled_chat function
- For edit automations, re-use the original conversation id
- Parse images correctly for image automations
2024-08-11 10:41:43 +05:30
Debanjum Singh Solanky
5075d13902 Give visual feedback when interact with chat message feedback buttons
- Use color to provide visual feedback when hovering over or clicking
  the feedback buttons
- Use color to provide visual feedback when hovering over or clicking
  the speech and copy buttons
- Add cooldown period before being able to send feedback on that message again.
  Avoids inadvertent multiple consecutive clicks on feedback buttons
2024-08-10 20:09:52 +05:30