The latest Claude model wanted to say more than just give the json output.
The updated prompt encourages the model to output just json. This is
similar to what is already being done for other prompts.
It was previously added under the Google utils. Now it can be used by
other conversation processors as well.
The updated function
- can fetch both base64-encoded and PIL-formatted images from a URL
- also returns the media type of the image in the response
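A minimal sketch of such a helper, assuming a requests + PIL based fetch. The function name and signature are illustrative, not the actual Khoj API:

```python
# Hypothetical sketch of the updated image retrieval helper
import base64
from io import BytesIO

import requests
from PIL import Image


def get_image_from_url(image_url: str, type: str = "pil"):
    """Fetch an image from a URL as base64 or PIL, along with its media type."""
    response = requests.get(image_url)
    response.raise_for_status()

    # Infer the media type from response headers, e.g. image/png
    media_type = response.headers.get("Content-Type", "image/jpeg")

    if type == "base64":
        image_data = base64.b64encode(response.content).decode("utf-8")
    else:
        image_data = Image.open(BytesIO(response.content))

    return image_data, media_type
```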
* Create explicit flow to enable the free trial
The current design is confusing. It obfuscates the fact that the user is on a free trial. This design will make the opt-in explicit and more intuitive.
* Use the Subscription Type enum instead of hardcoded strings everywhere
* Use length of free trial in the frontend code as well
Had temporarily updated the default selected agent to the last used one.
Revert for now because:
1. The previous logic was buggy. It didn't select the default agent
even when the last used agent was the default agent. Fixing this would
require more work.
2. It may be too early anyway to set the default agent to the last used one.
Adding div elements to the rendered message degraded the text copied to
the clipboard for messages with user uploaded images.
This change fixes that by separating the message to render from the
message for the clipboard. It ensures differently formatted forms of
the user images are added to the two, allowing proper rendering while
still copying decently formatted text to the clipboard.
Add a newline instead of sending the message when the Enter key is hit
on mobile displays, as phones have no Shift key and the send button is
easily clickable.
Limit Enter-to-send to computers, i.e. larger displays that are
expected to have full-fledged keyboards.
- Remove border from agent detail hover card on home page
- Do not wrap long agent names in agent pills on home page
- Handle scenario where chatInputRef is null
Add support for generating dynamic diagrams in the chat flow with Excalidraw (https://github.com/excalidraw/excalidraw). This happens in three steps:
1. The default information collection & intent determination step.
2. Improving the overall guidance of the prompt for generating a JSON, Excalidraw-compatible declaration.
3. Generating the diagram to output to the final UI.
Add support in the web UI.
Previously only notes context from chat history was included.
This change also includes online context from chat history for the
model to use for response generation.
This can reduce the need for online lookups by reusing previous online
context for faster responses. But it will increase overall response
time when past online context isn't reused, as context builds up
faster per conversation.
Unsure if inclusion of this context is preferable. If not, both notes
and online context should be removed.
The document and online search context are now passed as separate user
messages to the chat model, instead of being added to the final user message.
This will
- Improve the model's ability to differentiate data from the user query.
  That should improve response quality and reduce prompt injection
  probability
- Make the truncation logic simpler and more robust.
  When the context window is hit, we can simply pop messages to auto
  truncate context in order of context, user, assistant message for
  each conversation turn in history until we reach the current user
  query (see the sketch below).
  The complex, brittle logic to extract the user query from the
  context in the last user message isn't required.
Marking the context message with the assistant role doesn't translate well
across chat models. E.g.
- Gemini can't handle consecutive messages by role = model well
- Claude will merge consecutive messages by the same role. With the
  current message ordering, the context message would get merged into
  the previous assistant response. And if the context message is moved
  after the user query, the truncation logic would have to hop and
  skip while doing deletions
- GPT seems to handle consecutive roles of any type fine
Using role = user for context messages generalizes better across chat
models for now and aligns with previous behavior.
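A minimal sketch of the simplified truncation, assuming an OpenAI-style message list where each turn is ordered [context..., user, assistant] and the current user query is the last message. Names are illustrative, not the actual Khoj implementation:

```python
def truncate_messages(messages: list[dict], max_tokens: int, count_tokens) -> list[dict]:
    """Pop the oldest messages until the conversation fits in the context window."""
    while len(messages) > 1 and sum(count_tokens(m["content"]) for m in messages) > max_tokens:
        # Drops the context, then user, then assistant message of the
        # oldest turn first, until only the current user query remains
        messages.pop(0)
    return messages
```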
Improve the separation of note snippets and show each snippet's origin
file in the notes prompt, to share more readable, contextualized text
with the model.
Previously the references dict was being passed directly as a string.
The documents didn't look well formatted and were less intelligible.
- Passing the file path along with note snippets will help contextualize
  the notes better.
- Better formatting should help make the notes more readable to the
  chat model.
- Double click on agent to open edit agent card
- Focus on chat input pane when agent selected/clicked
for quick, smooth agent switch and message flow
- Hover on agent to see agent detail card on non-mobile displays
  - Use debounce to only show the detail card after hovering over it for a bit
- Default to None for the input_tools and output_modes so that they can be managed in the admin panel
- Hold off on showing off all Public Agents until we have a better experience for user profiles etc.
Have get agents API return agents ordered intelligently
- Put the default agent first
- Sort used agents by most recently chatted with agent for ease of access
- Randomly shuffle the remaining unused agents for discoverability
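A rough sketch of this ordering. Field names and the recency mapping are assumptions, not the actual Khoj schema:

```python
import random
from datetime import datetime


def order_agents(agents: list, default_agent, last_chatted: dict[str, datetime]) -> list:
    """Default agent first, used agents by recency, unused agents shuffled."""
    others = [a for a in agents if a != default_agent]
    used = sorted(
        (a for a in others if a.slug in last_chatted),
        key=lambda a: last_chatted[a.slug],
        reverse=True,  # most recently chatted-with agent first
    )
    unused = [a for a in others if a.slug not in last_chatted]
    random.shuffle(unused)  # shuffle unused agents for discoverability
    return [default_agent] + used + unused
```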
This change wraps the agent pane in a scroll area with all agents shown.
It allows selecting an agent to chat with directly from the home
screen without breaking flow and having to jump to the agents page.
The previous flow was not convenient for quickly and consistently
starting a chat with one of your standard agents.
This was because a random subset of agents was shown on home page load.
To start a chat with an agent not shown on the home screen, you had to
open the agents page and initiate the conversation from there.
Exposes a transient switch with available agents as selectable options
in the Khoj chat sub-menu.
Currently shows agent slugs instead of agent names as options. This
isn't the cleanest but gets the job done for now.
Only new conversations with a different agent can be started. Existing
conversations will continue with the original agent it was created with.
The ability to switch the conversation's agent doesn't exist on the
server yet.
One limitation of this methodology is that localStorage has a limit on how much data it can hold. Should add more graceful error handling here as well.
The model currently has difficulty following instructions when an image is shared. It's more likely to try to output an image. Update the prompt to make a clearer distinction.
- Put the attached images display div inside the same parent div as
the text area
- Keep the attachment, microphone/send message buttons aligned with
the text area. So the attached images just show up at the top of the
text area but everything else stays at the same horizontal height as
before.
- This improves the UX by
- Ensuring that the attached images do not obscure the agents pane
above the chat input area
- The attached images visually look like they are inside the actual
input area, rather than floating above it. So the visual aligns
with the semantics
Previously the web app only expected a single image to be shared by
the user as part of their query.
This change allows sharing multiple images from the web app.
Closes #921
Previously Khoj could respond to a single shared image at a time.
This change updates the chat API to accept multiple images shared by
the user and send them to the appropriate chat actors, including the
OpenAI response generation chat actor, for an image-aware response.
Recent changes made Khoj try to respond even when document lookup fails.
This change missed handling downstream effects of a failed document
lookup, as the defiltered_query was null and so the text response
didn't have the user query to respond to.
This code initializes defiltered_query to the original user query to
handle that.
Also response_type wasn't being passed via
send_message_to_model_wrapper_sync, unlike in the async scenario.
- Simplifies changing the order in which web scrapers are invoked to
  read web pages by just changing their priority number on the admin
  panel. Previously you'd have to delete and re-add the scrapers to
  change their priority.
- Add help text for each scraper field to ease the admin setup experience
- Friendlier env var to use Firecrawl's LLM to extract content
- Remove use of a separate friendly name for scraper types.
  Reuse the actual name and just make the actual name better
The other webpage scrapers will not work for internal webpages. Try
accessing those urls directly if they are visible to the Khoj server
over the network.
Only enable this by default for self-hosted, single user setups.
Otherwise the ability to scan the internal network would be a liability!
For use-cases where it makes sense, the Khoj server admin can
explicitly add the direct webpage scraper via the admin panel.
- Set up scrapers via API keys, explicitly adding them via the admin
  panel or enabling only a single scraper to use via server chat settings.
- Use validation to ensure only valid scrapers are added via the admin
  panel. For example, an API key must be present for scrapers that
  require it.
- Modularize the read webpage functions to take api key and url as args.
  This removes their dependence on constants loaded in online_search;
  the functions are now mostly self contained.
- Improve the ability to read webpages by using the speed and success
  rate of the different scrapers. The optimal configuration still needs
  to be discovered.
This should reduce webpage read and response generation time.
Previously, we'd run separate webpage read and extract relevant
content pipes for each distinct (query, url) pair.
Now we aggregate all queries for each url and run the webpage read and
extract relevant content pipes once per distinct url (see the sketch below).
Even though the webpage content extraction pipes were previously run
in parallel, they increased response time by
1. adding more context for the response generation chat actor to
respond from
2. being more susceptible to the page read and extract latencies of
the parallel jobs
The aggregated retrieval of context for all queries for a given
webpage could result in some hit to context quality. But it should
improve, and reduce variability in, response time, quality and costs.
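A minimal sketch of the aggregation, with read_webpage_and_extract as an illustrative stand-in for the actual read + extract relevant content pipe:

```python
import asyncio
from collections import defaultdict


async def read_webpage_and_extract(url: str, queries: set[str]) -> str:
    ...  # placeholder for the actual webpage read + extract pipe


async def read_webpages(query_url_pairs: list[tuple[str, str]]):
    """Read and process each distinct webpage once for all its queries."""
    queries_by_url: dict[str, set[str]] = defaultdict(set)
    for query, url in query_url_pairs:
        queries_by_url[url].add(query)

    # One read + extract task per distinct url, still run in parallel
    tasks = [read_webpage_and_extract(url, queries) for url, queries in queries_by_url.items()]
    return await asyncio.gather(*tasks)
```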
Set the FIRECRAWL_TO_EXTRACT environment variable to true to have
Firecrawl scrape and extract content from the webpage using their LLM.
This could be faster; not sure about quality, as the LLM used is obfuscated.
Firecrawl is open-source and self-hostable, with a default hosted
service provided, similar to Jina.ai. So it can be
1. Self-hosted as part of a private Khoj cloud deployment
2. Used directly by getting an API key from the Firecrawl.dev service
This is an alternative to Olostep and Jina.ai for reading webpages.
Khoj shouldn't refuse to respond to the user if web lookups fail.
It should transparently mention that online search etc. failed,
but try to respond as best it can without those references.
This change ensures a response to the user's query is attempted even
when web info retrieval fails.
The Hugging Face inference endpoint can be flaky. Khoj shouldn't refuse to
respond to the user if document search fails.
It should transparently mention that document lookup failed,
but try to respond as best it can without the document references.
This change provides graceful failover when inference endpoint
requests fail, either when encoding the query or when reranking retrieved docs.
- Remove unused subscribed variable from the chat API
- We unexpectedly dropped client app logging when the chat API was
  migrated to advanced streaming in July
- Only set addedFiles to selectedFiles when selectedFiles is an array
- Only set selectedFiles, addedFiles to the API response json when the
  response succeeded. Previously we set them to the response json
  on errors as well. This made the variables json objects instead
  of arrays on API call failure
- Check if selectedFiles, addedFiles are arrays before running
  operations on them. Previously addedFiles.includes was where the
  code would fail
finish_reason (google.ai.generativelanguage_v1beta.types.Candidate.FinishReason):
Optional. Output only. The reason why the
model stopped generating tokens.
If empty, the model has not stopped generating
the tokens.
- Advanced chat model should also fall back to the user chat model if set
- Get conversation config should fall back to the user chat model if set
These assume no server chat model settings are configured
# Overview
- Default to using user chat models for train of thought when no server chat settings have been created by admins
- Default to not create server chat settings on first run
# Details
This change simplifies switching chat models for self-hosted setups
by just changing the chat model on the user settings page.
It falls back to use the user chat model for train of thought
if server chat settings have not been created on the admin panel.
Server chat settings, when set, controls the chat model used
for Khoj's train of thought and the default user chat model.
Previously a self-hosted user had to update
1. the server chat settings in the admin panel and
2. their own user chat model in the user settings panel
to completely switch to a different chat model
for both train of thought & response generation respectively
You can still set server chat settings via the admin panel
to use a different chat model for train of thought vs response generation.
But this is only useful for advanced, multi-user setups.
Update regex to also include any links to code generated images that
aren't explicitly meant to be displayed inline. This allows folks to
download the image (unlike the fake, non-working link created by the
model).
Previously Khoj would start answering the previous query. This may be
because the prompt used "User" for prompts in chat history but "Q"
for the current user prompt.
Make the webpages to read automatically on search_online configurable via
an argument.
Set it to default to 1, so other callers of the function
are unaffected.
But the iterative chat director can still decide which, if
any, webpages to read based on the online search it performs.
This change allows the iterative director to dive deeper into its
research, as the extracted data contains relevant links from the webpage.
The previous summarization prompt didn't extract relevant links from the
webpage, which limited further exploration from webpages.
Move construct_chat_history and the ChatEvent enum into conversation.utils
and move send_message_to_model_wrapper to conversation.helper to
modularize code. And start thinning out the bloated routers.helper.
- conversation.utils components are shared functions that conversation
  child packages can use.
- conversation.helper components can't be imported by conversation
  packages but can use these child packages.
This division allows better modularity while avoiding circular
import dependencies.
Create python code executing chat actor
- The chat actor generates python code within sandbox constraints
- Run the generated python code in the Cohere Terrarium, Pyodide
  based sandbox accessible at the sandbox url (see the sketch below)
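A rough sketch of calling such a sandbox. The endpoint shape ({"code": ...} POSTed to the sandbox url) and the default address are assumptions, not the documented Terrarium API:

```python
import requests

SANDBOX_URL = "http://localhost:8080"  # assumed local sandbox address


def run_code_in_sandbox(code: str, timeout: int = 30) -> dict:
    """Execute generated python code in the sandbox and return its output."""
    response = requests.post(SANDBOX_URL, json={"code": code}, timeout=timeout)
    response.raise_for_status()
    return response.json()
```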
- Create a more dynamic reasoning agent that can evaluate information and understand what it doesn't know, making moves to get that information
- Lots of hacks and code that needs to be reverted later on before submission
Update chat actors to use user's chat model for train of thought. This
requires passing the user info as argument to all the chat actors.
Whether the user is subscribed or not can be inferred from the user
info being passed, so it doesn't need to be passed as a separate
argument to chat actor functions
Let send_message_to_model function infer chat model instead of passing
it as an argument from some chat actors. Better if this logic can be
done in a single place.
Server chat settings can be set for advanced self-hosted or multi-user
cloud setups. They are not necessary anymore, as we now fall back to
using the user's chat model for train of thought.
Fall back to using the user chat model for train of thought if server
chat settings are not defined.
This simplifies switching chat models for single-user, self-hosted
setups by just changing the chat model on the user settings page.
Server chat settings, when set, controls the default user chat model
and the chat model that is used for Khoj's train of thought.
Previously a self-hosted user had to update both the server chat
settings in the admin panel and their own user chat model in the user
settings panel to explicitly switch to a different chat model (i.e. to
switch to a new model for both train of thought & response generation)
You can still set server chat settings to use a different chat
model for train of thought and response generation. But this is only
necessary for advanced self-hosted or cloud hosted setups of Khoj.
Previously you had to refresh the page to see the updated data when
reopening the agent edit card after a save operation.
Now you see the latest saved agent data on reopening the agent edit
card. This should avoid confusion about whether the data was saved
correctly.
If a public or protected agent is made private, other users who were
having conversations with that agent will have to carry on their
conversations using the default agent instead.
Loading the embeddings model, even locally, seems to be taking much
longer. Use a timer to get visibility into embedding and cross-encoder
model load times.
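A minimal sketch of a timer context manager for this, as a hypothetical helper rather than the actual Khoj timer utility:

```python
import logging
import time
from contextlib import contextmanager

logger = logging.getLogger(__name__)


@contextmanager
def timer(message: str):
    """Log how long the wrapped block took to run."""
    start = time.perf_counter()
    yield
    logger.info(f"{message}: {time.perf_counter() - start:.2f}s")


# Usage:
# with timer("Loaded embedding model in"):
#     model = load_embedding_model()
```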
We should start disambiguating the max input size from the max output
size. Max prompt size should only be used for the max input context to
an LLM. If required, max_output_tokens should be set as a separate new field.
Currently, the personality of the agent is only included in the final response that it returns to the user. Historically, this was because models were quite bad at navigating the additional context of personality, and there was a bias towards having more control over certain operations (e.g., tool selection, question extraction).
Going forward, it should be more approachable to have prompts included in the sub-tasks that Khoj runs in order to respond to a given query. Make this possible in this PR. This also sets us up for agent creation becoming available soon.
Create custom agents in #928
Agents are useful insofar as you can personalize them to fulfill specific subtasks you need to accomplish. In this PR, we add support for using custom agents that can be configured with a custom system prompt (aka persona) and knowledge base (from your own indexed documents). Once created, private agents can be accessible only to the creator, and protected agents can be accessible via a direct link.
Custom tool selection for agents in #930
Expose the functionality to select which tools a given agent has access to. By default, they have all. Can limit both information sources and output modes.
Add new tools to the agent modification form
## Overview
Add user country code as context for doing online search with the serper.dev API.
This should find more user-relevant results from online searches by Khoj.
## Details
### Major
- Default to using system clock to infer user timezone on js clients
- Infer country from timezone when only timezone received by chat API
- Localize online search results to user country when location available
### Minor
- Add `__str__` func to `LocationData` class to deduplicate location string generation
Make all the scroll actions just use requestAnimationFrame instead of
setTimeout. It aligns better with the browser rendering loop, so it is
better suited for UX changes than setTimeout.
Using the system clock to infer the user's timezone on clients makes Khoj
more robust at providing location aware responses.
Previously only IP-based location was used to infer the timezone via API.
This didn't provide any decent fallback when calls to ipapi failed or
Khoj was being run in offline mode.
The timezone is easier to infer using the client's system clock. It can be
used to infer the user's country name and country code, even if IP-based
location cannot be inferred (see the sketch below).
This makes using location data to contextualize Khoj's responses more
robust. For example, online search results are retrieved for the user's
country, even if the call to ipapi.co for IP-based location fails.
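A rough sketch of inferring country from an IANA timezone name using pytz's country-to-timezones table; a simplification of whatever mapping the server actually uses:

```python
import pytz

# Invert pytz.country_timezones to map timezone -> ISO country code
timezone_to_country = {
    tz: code for code, tzs in pytz.country_timezones.items() for tz in tzs
}


def infer_country_code(timezone: str) -> str | None:
    """Return the ISO country code for a timezone like 'America/New_York'."""
    return timezone_to_country.get(timezone)
```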
Send the country code to the server chat API from the IP location check on clients.
Use the country code to get country-specific online search results via the Serper.dev API.
Previously the location string from location data was being generated
wherever it was used.
By adding a __str__ representation to the LocationData class, we can
dedupe and simplify the code to get the location string (see the sketch below).
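A minimal sketch of the idea. The field names are assumptions, not the actual LocationData definition:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class LocationData:
    city: Optional[str] = None
    region: Optional[str] = None
    country: Optional[str] = None

    def __str__(self) -> str:
        # Join whichever location parts are available into one string
        return ", ".join(part for part in (self.city, self.region, self.country) if part)


# str(LocationData(city="Berlin", country="Germany")) -> "Berlin, Germany"
```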
- Use tabs for the GPU/CPU type Khoj is being installed on
- Update the CMAKE flags used to install Khoj with correct GPU support.
  The previous flags used DLLAMA; llama.cpp has since updated these to
  DGGML.
Remove the unnecessary "Inferred Query" heading prefix to the image
generation prompt used by Khoj. The inferred query in the chat message
has a heading of its own, so avoid two headings for the image prompt.
The problem was that the tooltip was visible on hover, but it was slow. So before the tooltip popped up, the user would click on the button, and this stopped the tooltip from popping up.
So I reduced the popup delay to 10ms. Now, as soon as the user hovers over the button, they will see that it's a feature coming soon!
Improve Scrolling on Chat page of Web app
- Details
1. Only auto scroll Khoj's streamed response when scroll is near bottom of page
Allows scrolling to other messages in conversation while Khoj is formulating and streaming its response
2. Add button to scroll to bottom of the chat page
3. Scroll to most recent conversation turn on conversation first load
It's a better default to anchor to the most recent conversation turn (i.e. the most recent user message)
4. Smooth scroll when Khoj's chat response is streamed
Previously the scroll would jitter during response streaming
5. Anchor scroll position when fetching and rendering older messages in a conversation
Allow users to keep their scroll position when older messages are fetched from server and rendered
Resolves #758
* Update the conversation_id primary key field to be a uuid
- update associated API endpoints
- this is to improve the overall application health, by obfuscating some information about the internal database
- conversation_id type is now implicitly a string, rather than an int
- ensure automations are also migrated in place, such that the conversation_ids they're pointing to are now mapped to the new IDs
* Update client-side API calls to correctly query with a string field
* Allow modifying of conversation properties from the chat title
* Improve drag and drop file experience for chat input area
* Use a phosphor icon for the copy to clipboard experience for code snippets
* Update conversation_id parameter to be a str type
* If django_apscheduler is not in the environment, skip the migration script
* Fix create automation flow by storing conversation id as string
The new UUID used for conversation id can't be directly serialized.
Convert to string for serializing it for later execution
---------
Co-authored-by: Debanjum Singh Solanky <debanjum@gmail.com>
The interactive chat model initialization flow is fairly similar across
the chat model providers.
This should simplify adding new chat model providers and reduce
chances of bugs in the interactive chat model initialization flow.
Given the LLM landscape is rapidly changing, providing a good default
set of options should help reduce decision fatigue to get started
Improve initialization flow during first run
- Set Google, Anthropic Chat models too
Previously only Offline and OpenAI chat models could be set during init
- Add multiple chat models for each LLM provider
Interactively set a comma separated list of models for each provider
- Auto add default chat models for each provider in non-interactive
  mode if the {OPENAI,GEMINI,ANTHROPIC}_API_KEY env var is set
- Do not ask for max_tokens, tokenizer for offline models during
initialization. Use better defaults inferred in code instead
- Explicitly set default chat model to use
If unset, it implicitly defaults to using the first chat model.
Make it explicit to reduce this confusion
Resolves #882
This should configure Khoj with decent default configurations via
Docker and avoid needing to configure Khoj via the admin page to start
using dockerized Khoj.
Update the default max prompt size set during Khoj initialization,
as online chat models are cheaper and offline chat models have larger
context windows now.
RapidOCR depends on OpenCV, which by default requires a bunch of GUI
system packages. This system package dependency set (like libgl1) is flaky.
Making the RapidOCR dependency optional should allow Khoj to be more
resilient to setup/dependency failures.
The trade-off is that OCR for documents may not always be available, and
it'll require looking at server logs to find out when this happens.
This reverts commit c9665fb20b.
Revert "Fix handling for new conversation in agents page"
This reverts commit 3466f04992.
Revert "Add a unique_id field for identifiying conversations (#914)"
This reverts commit ece2ec2d90.
- This allows triggering Khoj chat from the browser address bar
- So now if you add Khoj to your browser bookmark with
- URL: https://app.khoj.dev/?q=%s
- Keyword: khoj
- Then you can type "khoj what is the news today" to trigger Khoj to
quickly respond to your query. This avoids having to open the Khoj web
app before asking your question
* Add a unique_id field to the conversation object
- This helps us keep track of the unique identity of the conversation without exposing the internal id
- Create three staged migrations in order to first add the field, then add unique values to pre-fill, and then set the unique constraint. Without this, it tries to initialize all the existing conversations with the same ID.
* Parse and utilize the unique_id field in the query parameters of the front-end view
- Handle the unique_id field when creating a new conversation from the home page
- Parse the id field with a lightweight parameter called v in the chat page
- Share page should not be affected, as it uses the public slug
* Fix suggested card category
Previously Khoj would stop in the middle of response generation when
the safety filters got triggered at default thresholds. This was
confusing as it felt like a service error, not expected behavior.
Going forward Khoj will
- Only block responding to high confidence harmful content detected by
Gemini's safety filters instead of using the default safety settings
- Show an explanatory, conversational response (w/ harm category)
when response is terminated due to Gemini's safety filters
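A hedged sketch of relaxing the default thresholds with the google-generativeai SDK, blocking only high-confidence harmful content:

```python
import google.generativeai as genai
from google.generativeai.types import HarmBlockThreshold, HarmCategory

# Only block content Gemini flags as harmful with high confidence
safety_settings = {
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
}

model = genai.GenerativeModel("gemini-1.5-flash", safety_settings=safety_settings)
```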
Enables using any image generation model on Replicate's Predictions
API endpoints.
The server admin just needs to add a text-to-image model on the
server admin panel in organization/model_name format and input their
Replicate API key with it.
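A hedged sketch of calling Replicate for a text-to-image model in organization/model_name format. The model name and prompt are just examples; the REPLICATE_API_TOKEN env var must be set:

```python
import replicate

output = replicate.run(
    "black-forest-labs/flux-schnell",  # example organization/model_name
    input={"prompt": "a watercolor painting of a lighthouse at dusk"},
)
print(output)
```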
Create db migration (including merge)
Set the sender email using the `RESEND_EMAIL` environment variable for the magic link sent via the Resend API for authentication. It was previously hard-coded, which prevented hosting Khoj on other domains.
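A minimal sketch of the configurable sender with the resend SDK. The fallback address and message contents are illustrative:

```python
import os

import resend

resend.api_key = os.environ["RESEND_API_KEY"]
# Sender is now configurable instead of hard-coded
sender_email = os.getenv("RESEND_EMAIL", "noreply@example.com")

resend.Emails.send({
    "from": sender_email,
    "to": "user@example.com",
    "subject": "Your Khoj sign-in link",
    "html": "<a href='https://app.khoj.dev/auth/magic'>Sign in to Khoj</a>",
})
```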
Resolves #908
Strip any json md codeblock wrapper, if it exists, before processing the
response by the output mode and extract questions chat actors. This is
similar to what is already being done by other chat actors.
Useful for successfully interpreting json output in chat actors when
using models that can't enforce a (json) schema, like o1 and gemma-2.
Use a conversation helper function to centralize the json md codeblock
removal code (see the sketch below).
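A minimal sketch of the centralized helper. The name and exact trimming behavior are illustrative:

```python
def strip_json_codeblock(response: str) -> str:
    """Remove a markdown json codeblock wrapper around a model response, if present."""
    return (
        response.strip()
        .removeprefix("```json")
        .removeprefix("```")
        .removesuffix("```")
        .strip()
    )
```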
This happens sometimes when the LLM response contains [\[1\]] kind of links
as references. Both markdown-it and KaTeX apply styling.
KaTeX's span uses display: block, which makes the rendering of these
references take up a whole line by themselves.
Override the block styling of spans within an `a' element to prevent such
chat message styling issues.
* Add functions to chat with Google's gemini model series
* Gracefully close thread when there's an exception in the gemini llm thread
* Use enums for verifying the chat model option type
* Add a migration to add the gemini chat model type to the db model
* Fix chat model selection verification and math prompt tuning
* Fix extract questions method with gemini. Enforce json response in extract questions.
* Add standard stop sequence for Gemini chat response generation
---------
Co-authored-by: sabaimran <narmiabas@gmail.com>
Co-authored-by: Debanjum Singh Solanky <debanjum@gmail.com>
Additional logging was enabled to debug automation failures in
production since migrating the chat API to use the POST request method
(from the earlier GET).
The redirect from http to https defaulted to using GET instead of POST
to call /api/chat on redirect. This has been resolved now.
The get information sources and get output mode actors don't actually
see the images. They just get placeholder text indicating that the user
attached an image to their message for context.
- Make train of thought icons top aligned, next to their
  intermediate step heading
- Add margin bottom to ordered and unordered lists in chat messages,
  similar to how it is already added for paragraphs
# Summary of Changes
* New UI to show preview of image uploads
* ChatML message changes to support gpt-4o vision based responses on images
* AWS S3 image uploads for persistent image context in conversations
* Database changes to have `vision_enabled` option in server admin panel while configuring models
* Render previously uploaded images in the chat history, show uploaded images for pending msgs
* Pass the uploaded_image_url through to subqueries
* Allow image to render upon first message from the homepage
* Add rendering support for images to shared chat as well
* Fix some UI/functionality bugs in the share page
* Convert user attached images for chat to webp format before upload
* Use placeholder to attached image for data source, response mode actors
* Update all clients to call /api/chat as a POST instead of GET request
* Fix copying chat messages with images to clipboard
TLDR; Add vision support for openai models on Khoj via the web UI!
---------
Co-authored-by: sabaimran <narmiabas@gmail.com>
Co-authored-by: Debanjum Singh Solanky <debanjum@gmail.com>
Limit file types to sync with Khoj from Obsidian to:
- Avoid hitting per-user indexable data limits, especially for folks on the Khoj cloud free tier, e.g. by excluding images in the Obsidian vault from being synced
- Improve the context used by Khoj to generate responses
When the user exceeds data sync limits, show an error notice with:
- A link to the web app settings page to upgrade their subscription
- A link to the Khoj plugin settings in Obsidian to configure file types to
  sync from the vault to Khoj
Previously the chat stream iterator wasn't closed when response streaming
for an offline chat model threw an exception.
This would require restarting the application. Now the application doesn't
hang even if the current response generation fails with an exception.
GPT-4o-mini is cheaper and smarter than GPT-3.5-turbo and can hold more
context. In production we also default to gpt-4o-mini, so it makes
sense to upgrade defaults and tests to work with it.
- Background
  Llama.cpp allows enforcing the response as a json object, similar to
  the OpenAI API. Pass the expected response format to offline chat
  models as well.
- Overview
  Enforce json output to improve intermediate step performance by
  offline chat models. This is especially helpful when working with
  smaller models like Phi-3.5-mini and Gemma-2 2B, which do not
  consistently respond with structured output, even when requested.
- Details
  Enforce json responses from the extract questions and infer output
  offline chat actors
  - Convert prompts to output json objects when offline chat models
    extract document search questions or infer the output mode
  - Make llama.cpp enforce the response as a json object (see the
    sketch below)
- Result
  - Improve all intermediate steps by offline chat actors via json
    response enforcement
  - Avoid the manual, ad-hoc and flaky output schema enforcement and
    simplify the code
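A hedged sketch of the json enforcement with llama-cpp-python, which supports an OpenAI-style response_format parameter. The model file name is just an example:

```python
from llama_cpp import Llama

llm = Llama(model_path="gemma-2-2b-it.Q4_K_M.gguf")

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Extract document search questions as JSON."}],
    response_format={"type": "json_object"},  # have llama.cpp enforce json output
)
```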
This is a more robust way to extract the json output requested from
gemma-2 (2B, 9B) models, which tend to return json in md codeblocks.
Other models should remain unaffected by this change.
Also removed the request to not wrap json in codeblocks from prompts,
as the code now does the unwrapping automatically when present.
- Allow free tier users to have unlimited chats with default chat model. It'll only be rate-limited and at the same rate as subscribed users
- In the server chat settings, replace the concept of default/summarizer models with default/advanced chat models. Use the advanced models as a default for subscribed users.
- For each `ChatModelOption' configuration, allow the admin to specify a separate value of `max_tokens' for subscribed users. This allows server admins to configure different max token limits for unsubscribed and subscribed users
- Show error message in web app when hit rate limit or other server errors
Currently, the search model config display for admins only shows the id of the search model config, which is not very informative.
This change enhances the admin console by displaying the search model config's name, bi-encoder model (bi_encoder) and cross-encoder model (cross_encoder) alongside the id.
Previously `force' was passed as a query param to the single indexing API. After the recent API updates, it is meant to select the API method to use (PUT vs PATCH). Converting the `force' argument to a bool fixes implementing this new behavior.
- Major
- Improve doc search actor performance on vague, random or meta questions
- Pass user's name to document and online search actors prompts
- Minor
- Fix and improve openai chat actor tests
  - Remove unused max tokens arg to extract qs func of doc search actor
- Issue
Previously the doc search actor wouldn't extract good search queries
to run on user's documents for broad, vague questions.
- Fix
  The updated extract questions prompt shows and tells the doc search
  actor how to deal with such questions.
  The doc search actor's temperature was also increased to support more
  creative/random questions. The previous temp of 0 was meant to
  encourage structured json output. But now with json mode, a low temp is
  not necessary to get json output.
- Use temperature of 0 by default for extract questions offline chat
actor
- Use temperature of 0.2 for send_message_to_model_offline (this is
the default temperature set by llama.cpp)
* Add ability to cycle through the chat history in the chat input on Obsidian (similar to terminal history navigation)
* Add mod key shortcut to cycle through chat history in chat input
* Add shortcut help text in chat input placeholder
---------
Co-authored-by: Debanjum Singh Solanky <debanjum@gmail.com>
### Overview
Support exclude file filter in user search queries
### Details
- All of the exclude file filter terms need to be satisfied
- Any one of the include file filter terms should be satisfied
### Example
- **Search Query**: *what happened yesterday? -file:"tasks.org" -file:"work.md" file:"diary.org" file:"journal.org"*
- **Behavior**: The query will try to find relevant notes in either `journal.org` or `diary.org`, and not in `tasks.org` or `work.md`
### Details
* Add support for exclusion file filters
* Translate file filters to valid Django DB entry filter regexes (see the sketch after this list)
* Exclude all files when there are multiple exclude file filters in a query
Previously we were applying an "Or" filter, which did not exclude every
file mentioned when a query had multiple exclude file filters.
This is not what we naturally mean when we ask to exclude files in a query
* Rename, rearrange, deduplicate and add file filter tests
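A hedged sketch of translating the file filters into a Django queryset, AND-ing the exclusions so every excluded file is dropped. Model and field names are illustrative:

```python
from django.db.models import Q


def apply_file_filters(queryset, include_files: list[str], exclude_files: list[str]):
    # Any one of the include file filters may match
    if include_files:
        include_q = Q()
        for pattern in include_files:
            include_q |= Q(file_path__iregex=pattern)
        queryset = queryset.filter(include_q)

    # Chained excludes AND together: every excluded file is dropped
    for pattern in exclude_files:
        queryset = queryset.exclude(file_path__iregex=pattern)

    return queryset
```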
Closes #728
---------
Co-authored-by: Debanjum Singh Solanky <debanjum@gmail.com>
Previously the automation page had to be refreshed to see updates to
the automation in the edit automation card. This would be seen when a
user tried to edit an automation multiple times (without a page refresh).
Previously, the code incorrectly treated all non-nil values as true, leading to
the index being re-indexed with the force flag whenever the user selected to
update the index.
- Pass the new conversation id as kwarg for the scheduled_chat function
- For edit automations, re-use the original conversation id
- Parse images correctly for image automations
- Use color to provide visual feedback on hover and click of the
  feedback buttons
- Use color to provide visual feedback on hover and click of the
  speech and copy buttons
- Add a cooldown period before feedback can be sent on that message again.
  Avoids inadvertent multiple consecutive clicks on the feedback buttons
- Since the .gitignore will ignore any of the assets in the src/ folder when building the package wheel, we need to output the static assets to another folder just for the python pypi package. Use /compiled for this.
- Auto focus on email input on login screen for smoother login experience
- Use file icon associated with search page results. Improve search bar
- Show logged in user's email in nav menu for context
- Use previous icons with eyes for search, agents and automations items in nav menu
Hierarchical documents like org-mode and markdown have their ancestry
shown in the first line. Remove it to show cleaner, deduplicated
reference text from org-mode and markdown files.
Utilize chat footer space more efficiently. This is especially useful
on small screens
- Send button is anyway only enabled when there is text in chat input
- Otherwise voice message button is better to show by default
- Remove invalid call to styles.main
- Remove unnecessary top padding above side pane to keep side pane at
consistent position across web app
- Use same pageLayout styles and styling structure on agent like
automation
- Vertically center automation section and page title on its row
- Fix applying flex vs grid with tailwind
- Remove x axis footer padding on small screens to preserve space,
keep equal spacing between footer items
- Add 1rem margin to buttons to not have overlap in boundary
- Add 1rem y-axis padding to chat footer to not have focus boundary
leave the footer boundary on smaller screens
Installing Khoj as PWA was supported in previous web UX as well. This
just adds link to the existing webmanifest to continue support for
installing Khoj as PWA with new web UX
Previously the rename wasn't updating the chat session title. We'd
have to refresh the page or side pane to get the latest chat session
names after a rename action.
Previously the footer's right border wasn't visible on small screens
due to usage of w-full
Use mr-1 on the send button instead of px-1 on the chat input parent to
equalize chat footer button spacing.
- Show informative toast messages on copy and delete of API keys
- Only show the API keys card in non-anonymous mode. API keys aren't
  required (and are disabled on the server side) in anon mode. Not showing
  the card at all in anon mode reduces the chance of unnecessary confusion
Style the profile picture button on the nav menu
- Use primary colored ring around subscribed user profile on nav menu
- Use gray colored ring around non-subscribed user profile on nav menu
- Use upper case initial as profile pic for user with no profile pic
- Click anywhere on nav menu item to trigger action
Previously the actual clickable area was smaller than the width of
the nav menu item
- Move the nav menu into the chat history side panel component, so that they both show up on one line
- Update all pages to use it with the new formatting
- On mobile, present the sidebar button, home button, and profile button evenly centered in the middle
- Pass userConfig from Home as prop to chatBodyData component with
loading state
- Pass loading state of userConfig to allow components to handle
rendering dependent elements once it is loaded
Use updated format for HTTP streamed responses from the Khoj server in the new chat UX
Remove references to the websocket connected field, as websocket use has been deprecated
Otherwise Khoj's chat response fills in between the streamed message
and the already rendered references section at the bottom of the
message.
Define OnlineContext type to simplify typing online context param
across other interfaces and functions
- There were some state mismatches in configuring a WhatsApp number. This commit fixes those issues and uses an external library for phone number validation
- Note that SSR for Next doesn't support rendering on the client side, so it'll only update in one big chunk
- Fix unique key error in the chat message history for incoming messages
- Remove websocket value usage in the chat history side panel
- Remove other websocket code from the chat page