The get information sources and get output mode actors don't actually
see the images. They just get placeholder text to indicate that the
user attached an image to their message for context
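As an illustration only (the helper name and placeholder text below are assumptions, not the actual Khoj implementation), the substitution could look roughly like this:

```python
# Hypothetical sketch: actors that can't see images receive a text placeholder
# in place of the image, so they still know an image was attached for context.
def query_for_text_only_actor(user_query: str, uploaded_image_url: str | None) -> str:
    if uploaded_image_url:
        return f"{user_query}\n\n[placeholder: user attached an image to this message]"
    return user_query
```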
- Make train of thought icons top aligned, next to their intermediate
step heading
- Add margin bottom to ordered and unordered lists in chat messages,
similar to how it is already added for paragraphs
# Summary of Changes
* New UI to show preview of image uploads
* ChatML message changes to support gpt-4o vision based responses on images (message format sketched below)
* AWS S3 image uploads for persistent image context in conversations
* Database changes to have `vision_enabled` option in server admin panel while configuring models
* Render previously uploaded images in the chat history, show uploaded images for pending msgs
* Pass the uploaded_image_url through to subqueries
* Allow image to render upon first message from the homepage
* Add rendering support for images to shared chat as well
* Fix some UI/functionality bugs in the share page
* Convert user attached images for chat to webp format before upload
* Use placeholder text for attached images with the data source and response mode actors
* Update all clients to call /api/chat as a POST instead of GET request
* Fix copying chat messages with images to clipboard
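For reference, a rough sketch of the OpenAI vision chat message format that the ChatML changes target (variable names are illustrative, not Khoj's code):

```python
# Sketch of an OpenAI gpt-4o vision message: text and image parts are passed
# together in the user message's content list.
uploaded_image_url = "https://example-bucket.s3.amazonaws.com/chat/uploaded.webp"  # placeholder
vision_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe the attached image"},
        {"type": "image_url", "image_url": {"url": uploaded_image_url}},
    ],
}
```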
TLDR: Add vision support for OpenAI models on Khoj via the web UI!
---------
Co-authored-by: sabaimran <narmiabas@gmail.com>
Co-authored-by: Debanjum Singh Solanky <debanjum@gmail.com>
Limit file types to sync with Khoj from Obsidian to:
- Avoid hitting per user index-able data limits, especially for folks on the Khoj cloud free tier. E.g. by excluding images in the Obsidian vault from being synced
- Improve context used by Khoj to generate responses
When a user exceeds the data sync limits, show an error notice with:
- Link to web app settings page to upgrade subscription
- Link to Khoj plugin settings in Obsidian to configure file types to
sync from vault to Khoj
Previously the chat stream iterator wasn't closed when response
streaming for the offline chat model threw an exception.
This would require restarting the application. Now the application
doesn't hang even if the current response generation fails with an
exception
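A minimal sketch of the fix, assuming a llama-cpp-python style streaming API (not the actual Khoj code):

```python
def stream_offline_response(model, prompt: str):
    """Yield response chunks, always closing the underlying stream iterator."""
    response_iterator = model.create_completion(prompt=prompt, stream=True)
    try:
        for chunk in response_iterator:
            yield chunk["choices"][0]["text"]
    finally:
        # Close the stream even if generation raised, so the app doesn't hang
        # and require a restart.
        if hasattr(response_iterator, "close"):
            response_iterator.close()
```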
GPT-4o-mini is cheaper, smarter and can hold more context than
GPT-3.5-turbo. In production, we also default to gpt-4o-mini, so it
makes sense to upgrade defaults and tests to work with it
- Background
Llama.cpp allows enforcing the response as a json object, similar to
the OpenAI API. Pass the expected response format to offline chat
models as well.
- Overview
Enforce json output to improve intermediate step performance by
offline chat models. This is especially helpful when working with
smaller models like Phi-3.5-mini and Gemma-2 2B, that do not
consistently respond with structured output, even when requested
- Details
Enforce json response from the extract questions and infer output
offline chat actors
- Convert prompts to output json objects when offline chat models
extract document search questions or infer output mode
- Make llama.cpp enforce the response as a json object (see the
sketch after this list)
- Result
- Improve all intermediate steps by offline chat actors via json
response enforcement
- Avoid the manual, ad-hoc and flaky output schema enforcement and
simplify the code
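The llama.cpp enforcement mentioned above can be sketched with llama-cpp-python's `response_format` option (the model path and prompts are placeholders, not Khoj's configuration):

```python
from llama_cpp import Llama

llm = Llama(model_path="gemma-2-2b-it.Q4_K_M.gguf")  # placeholder model path
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "Extract document search questions. Respond as a json object."},
        {"role": "user", "content": "What did I work on last week?"},
    ],
    # Constrains generation with a json grammar so the output is always valid json
    response_format={"type": "json_object"},
)
extracted = response["choices"][0]["message"]["content"]
```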
This is a more robust way to extract json output requested from
gemma-2 (2B, 9B) models which tend to return json in md codeblocks.
Other models should remain unaffected by this change.
Also removed the request to not wrap json in codeblocks from prompts,
as the code now unwraps codeblocks automatically when present
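A sketch of the codeblock-aware parsing described above (a simplification, not the exact Khoj helper):

```python
import json
import re

def parse_json_response(raw: str) -> dict:
    # gemma-2 (2B, 9B) tends to wrap json in markdown codeblocks; strip the
    # ``` or ```json fence if present before parsing, otherwise parse as-is.
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", raw, re.DOTALL)
    return json.loads(match.group(1) if match else raw.strip())
```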
- Allow free tier users to have unlimited chats with the default chat model. It'll only be rate-limited, at the same rate as subscribed users
- In the server chat settings, replace the concept of default/summarizer models with default/advanced chat models. Use the advanced models as a default for subscribed users.
- For each `ChatModelOption` configuration, allow the admin to specify a separate `max_tokens` value for subscribed users. This allows server admins to configure different max token limits for unsubscribed and subscribed users (see the sketch after this list)
- Show error message in web app when hit rate limit or other server errors
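A rough sketch of how the per-tier token limit could look in the chat model config (field and class names are illustrative, not necessarily Khoj's schema):

```python
from django.db import models

class ChatModelOptions(models.Model):
    chat_model = models.CharField(max_length=200)
    # Token limit applied to free tier users
    max_prompt_size = models.IntegerField(default=None, null=True, blank=True)
    # Separate, typically higher, token limit applied to subscribed users
    subscribed_max_prompt_size = models.IntegerField(default=None, null=True, blank=True)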
Currently, the search model config display for admins only shows the id of the search model config, which is not very informative.
This change enhances the admin console by displaying the name of the search model config (name), as well as the bi-encoder model (bi_encoder) and cross-encoder model (cross_encoder) alongside the id.
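A minimal sketch of the admin change in Django terms (the admin class name is assumed):

```python
from django.contrib import admin

class SearchModelConfigAdmin(admin.ModelAdmin):
    # Show the config name and encoder models alongside the id in the list view
    list_display = ("id", "name", "bi_encoder", "cross_encoder")
```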
Previously `force` was passed as a query param to the single indexing API. After the recent API updates, it is meant to select the API method to use (PUT vs PATCH). Converting the `force` argument to a bool fixes the implementation of this new behavior
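A sketch of the conversion, assuming the query param arrives as a string (the helper name is hypothetical):

```python
def parse_force_param(force) -> bool:
    # Query params arrive as strings, so the string "false" is still truthy.
    # Normalize to a real bool before choosing between PUT (force full
    # re-index) and PATCH (incremental update).
    if isinstance(force, bool):
        return force
    return str(force).strip().lower() in ("true", "1", "yes")
```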
- Major
- Improve doc search actor performance on vague, random or meta questions
- Pass user's name to document and online search actors prompts
- Minor
- Fix and improve openai chat actor tests
- Remove unused max tokens arg to extract questions func of doc search actor
- Issue
Previously the doc search actor wouldn't extract good search queries
to run on the user's documents for broad, vague questions.
- Fix
The updated extract questions prompt shows and tells the doc search
actor how to deal with such questions
The doc search actor's temperature was also increased to support more
creative/random questions. The previous temp of 0 was meant to
encourage structured json output. But now with json mode, a low temp is
not necessary to get json output
- Use temperature of 0 by default for extract questions offline chat
actor
- Use temperature of 0.2 for send_message_to_model_offline (this is
the default temperature set by llama.cpp)
* Add ability to cycle through the chat history in the chat input on Obsidian (similar to terminal history navigation)
* Add mod key shortcut to cycle through chat history in chat input
* Add shortcut help text in chat input placeholder
---------
Co-authored-by: Debanjum Singh Solanky <debanjum@gmail.com>
### Overview
Support exclude file filter in user search queries
### Details
- All of the exclude file filter terms need to be satisfied
- Any one of the include file filter terms should be satisfied
### Example
- **Search Query**: *what happened yesterday? -file:"tasks.org" -file:"work.md" file:"diary.org" file:"journal.org"*
- **Behavior**: Query will try to find relevant notes in any of `journal.org` or `diary.org`, and not in `tasks.org` and not in `work.md`
### Details
* Add support for exclusion file filters
* Translate file filter to valid Django DB entry filter regex (see the sketch after this list)
* Exclude all files when multiple exclude file filters in query
Previously we were applying an "Or" filter, which would exclude any
file mentioned in a query with multiple exclude file filters.
This is not what we naturally mean when we ask to exclude a file in a query
* Rename, rearrange, deduplicate and add file filter tests
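An illustrative sketch of the Django filter translation (the field name and regex handling are simplified, not the exact Khoj query):

```python
from django.db.models import Q

def build_file_filter(included_files: list[str], excluded_files: list[str]) -> Q:
    query = Q()
    if included_files:
        include_q = Q()
        for file in included_files:
            # Any one include filter may match
            include_q |= Q(file_path__iregex=file.replace("*", ".*"))
        query &= include_q
    for file in excluded_files:
        # Every exclude filter must be satisfied ("And" semantics)
        query &= ~Q(file_path__iregex=file.replace("*", ".*"))
    return query
```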
Closes #728
---------
Co-authored-by: Debanjum Singh Solanky <debanjum@gmail.com>
Previously the automation page had to be refreshed to see updates to
the automation in the edit automation card. This would be seen when the
user tried to edit an automation multiple times (without a page refresh)
Previously, the code incorrectly treated all non-nil values as true, leading to
the index being re-indexed with the force flag whenever the user selected to
update the index.
- Pass the new conversation id as kwarg for the scheduled_chat function
- For edit automations, re-use the original conversation id
- Parse images correctly for image automations
- Use color to provide visual feedback on hover and click of the
feedback buttons
- Use color to provide visual feedback on hover and click of the
speech and copy buttons
- Add cooldown period before being able to send feedback on that message again.
Avoids inadvertent multiple consecutive clicks on feedback buttons