Improve separation of note snippets and show each snippet's origin file
in the notes prompt, to share more readable, contextualized text with
the model.
Previously the references dict was passed directly as a string, so the
documents were poorly formatted and less intelligible.
- Passing the file path along with the note snippets will help
contextualize the notes better.
- Better formatting should make the notes more readable for the chat
model.
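A minimal sketch of the formatting, assuming each reference dict
carries the compiled snippet text and its source file path (the field
names here are illustrative, not necessarily Khoj's actual ones):

```python
def format_references(references: list[dict]) -> str:
    formatted_notes = []
    for note in references:
        # Assumed fields: "compiled" snippet text and source "file" path
        snippet = note.get("compiled", "").strip()
        origin = note.get("file", "unknown file")
        formatted_notes.append(f"## File: {origin}\n\n{snippet}")
    # Clear separators help the model tell the snippets apart
    return "\n\n---\n\n".join(formatted_notes)
```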
- Double-click an agent to open its edit agent card
- Focus the chat input pane when an agent is selected or clicked,
for a quick, smooth agent switch and message flow
- Hover over an agent to see its detail card on non-mobile displays
- Debounce the hover so the card only shows after hovering for a bit
- Default to None for the input_tools and output_modes so that they can be managed in the admin panel
- Hold off on showing all Public Agents until we have a better experience for user profiles etc.
Have the get agents API return agents ordered intelligently
- Put the default agent first
- Sort previously used agents by most recent chat for ease of access
- Randomly shuffle the remaining unused agents for discoverability
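A sketch of this ordering, assuming each agent exposes a slug and we
have a map from agent slug to the user's last chat time with it (the
exact fields in Khoj's models may differ):

```python
import random

def order_agents(agents, default_agent, last_chatted_at):
    used = [a for a in agents if a.slug in last_chatted_at and a != default_agent]
    unused = [a for a in agents if a.slug not in last_chatted_at and a != default_agent]

    # Most recently chatted-with agents first for ease of access
    used.sort(key=lambda a: last_chatted_at[a.slug], reverse=True)
    # Shuffle unused agents for discoverability
    random.shuffle(unused)

    # Default agent always leads the list
    return [default_agent] + used + unused
```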
This change wraps the agent pane in a scroll area with all agents
shown. It allows selecting an agent to chat with directly from the home
screen, without breaking flow by having to jump to the agents page.
The previous flow made it inconvenient to quickly and consistently
start a chat with one of your standard agents. This was because only a
random subset of agents was shown on the home page. To start a chat
with an agent not shown on home screen load, you had to open the agents
page and initiate the conversation from there.
Exposes a transient switch with available agents as selectable options
in the Khoj chat sub-menu.
Currently shows agent slugs instead of agent names as options. This
isn't the cleanest but gets the job done for now.
Only new conversations with a different agent can be started. Existing
conversations will continue with the agent they were created with.
The ability to switch the conversation's agent doesn't exist on the
server yet.
One limitation of this approach is that localStorage caps how much
data it can store. We should add more graceful error handling here as
well.
The model currently has difficulty following instructions when an
image is shared; it is more likely to try to output an image itself.
Update to make a clearer distinction.
- Put the attached images display div inside the same parent div as
  the text area
- Keep the attachment and microphone/send message buttons aligned with
  the text area. The attached images just show up at the top of the
  text area, while everything else stays at the same height as before.
- This improves the UX by
  - Ensuring that the attached images do not obscure the agents pane
    above the chat input area
  - Making the attached images look like they are inside the actual
    input area, rather than floating above it, so the visual aligns
    with the semantics
Previously the web app only expected a single image to be shared by
the user as part of their query.
This change allows sharing multiple images from the web app.
Closes #921
Previously Khoj could respond to a single shared image at a time.
This change updates the chat API to accept multiple images shared by
the user and send them to the appropriate chat actors, including the
OpenAI response generation chat actor, to get an image-aware
response.
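The message structure below follows the OpenAI chat completions API
for multimodal input; the helper itself is an illustrative sketch, not
Khoj's actual code:

```python
def construct_multimodal_message(query: str, images: list[str]) -> dict:
    # One text part for the query, one image_url part per shared image
    content = [{"type": "text", "text": query}]
    for image in images:
        # Each image is e.g. a base64 data URI or web URL from the user
        content.append({"type": "image_url", "image_url": {"url": image}})
    return {"role": "user", "content": content}
```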
Recent changes made Khoj try to respond even when document lookup
fails. That change missed handling downstream effects of a failed
document lookup: the defiltered_query was null, so the text response
didn't have the user query to respond to.
This code initializes defiltered_query to the original user query to
handle that.
Also, response_type wasn't being passed via
send_message_to_model_wrapper_sync, unlike in the async scenario.
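A sketch of the defiltered_query fix; the variable name comes from this
commit, while search_documents stands in for Khoj's actual document
lookup pipeline:

```python
import logging

logger = logging.getLogger(__name__)

async def gather_document_references(user_query: str):
    # Initialize to the raw user query so a failed document lookup
    # still leaves a query for the text response to answer
    defiltered_query = user_query
    compiled_references = []
    try:
        # search_documents is a stand-in for the actual lookup pipeline
        compiled_references, defiltered_query = await search_documents(user_query)
    except Exception as e:
        logger.error(f"Document lookup failed: {e}")
    return compiled_references, defiltered_query
```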
## Overview
### New
- Support using Firecrawl (https://firecrawl.dev) to read web pages
- Add, switch and re-prioritize web page reader(s) to use via the admin panel
### Speed
- Improve response speed by aggregating web page read and extract queries to run only once for each web page
### Response Resilience
- Fall back through enabled web page readers until the web page is read
- Enable reading web pages on the internal network for self-hosted Khoj running in anonymous mode
- Attempt a response even if web search or web page read fails during chat
- Attempt a response even if document search via the inference endpoint fails
### Fix
- Return data sources to use even if the data source chat actor raises an exception
## Details
### Configure web page readers to use
- Use only the web scraper set in Server Chat Settings via the Django admin panel, if set
- Otherwise use the web scrapers added via the Django admin panel (in order of priority), if any
- Otherwise use all the web scrapers enabled by setting API keys via environment variables (e.g. the `FIRECRAWL_API_KEY`, `JINA_API_KEY` env vars), if any
- Otherwise, use Jina to web scrape if no scrapers are explicitly defined
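An illustrative sketch of this resolution order (the function, field,
and scraper names are assumptions, not Khoj's actual API):

```python
def get_enabled_scrapers(chat_settings, admin_scrapers, env):
    # 1. A scraper pinned in Server Chat Settings wins, if set
    if chat_settings and chat_settings.web_scraper:
        return [chat_settings.web_scraper]
    # 2. Else scrapers added via the admin panel, in priority order
    if admin_scrapers:
        return sorted(admin_scrapers, key=lambda s: s.priority)
    # 3. Else scrapers enabled by API key environment variables
    scrapers = [s for s, key in [("firecrawl", "FIRECRAWL_API_KEY"),
                                 ("olostep", "OLOSTEP_API_KEY"),
                                 ("jina", "JINA_API_KEY")] if env.get(key)]
    # 4. Fall back to Jina if no scraper was explicitly defined
    return scrapers or ["jina"]
```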
For self-hosted setups running in anonymous-mode, the ability to directly read webpages is also enabled by default. This is especially useful for reading webpages in your internal network that the other web page readers will not be able to access.
### Aggregate webpage extract queries to run once for each distinct web page
Previously, we'd run separate webpage read and extract relevant
content pipes for each distinct (query, url) pair.
Now we aggregate all queries for each url to extract information from
and run the webpage read and extract relevant content pipes once for
each distinct URL.
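A sketch of the aggregation, assuming (query, url) pairs as input and
a read-and-extract coroutine run once per distinct URL (names are
illustrative):

```python
import asyncio
from collections import defaultdict

async def read_webpages(query_url_pairs, read_and_extract):
    # Group all queries interested in the same URL together
    queries_by_url = defaultdict(set)
    for query, url in query_url_pairs:
        queries_by_url[url].add(query)
    # One read + extract task per distinct URL, not per (query, url) pair
    tasks = [read_and_extract(url, queries) for url, queries in queries_by_url.items()]
    return await asyncio.gather(*tasks)
```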
Even though the webpage content extraction pipes were previously run
in parallel, they increased the response time by
1. adding more near-duplicate context for the response generation step to read
2. being more susceptible to variability in the web page read latency of the parallel jobs

The aggregated retrieval of context for all queries on a given webpage
could result in some hit to context quality. But it should improve
response time, quality, and cost, and reduce their variability.
This should especially help with the speed and quality of online
search for offline or low-context chat models.
- Simplify changing the order in which web scrapers are invoked to
  read web pages by just changing their priority number on the admin
  panel (see the sketch after this list). Previously you'd have to
  delete and re-add the scrapers to change their priority.
- Add help text for each scraper field to ease the admin setup
  experience
- Friendlier env var to use Firecrawl's LLM to extract content
- Remove use of a separate friendly name for scraper types.
  Reuse the actual name and just make the actual name better
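A hedged sketch of what such an admin-editable scraper model could
look like; the actual Khoj model's fields and names will differ:

```python
from django.db import models

class WebScraper(models.Model):
    class ScraperType(models.TextChoices):
        FIRECRAWL = "firecrawl"
        OLOSTEP = "olostep"
        JINA = "jina"
        DIRECT = "direct"

    type = models.CharField(
        max_length=20,
        choices=ScraperType.choices,
        help_text="Web scraper to use to read web pages.",
    )
    api_key = models.CharField(
        max_length=200,
        blank=True,
        help_text="API key for the scraper, if it requires one.",
    )
    priority = models.IntegerField(
        unique=True,
        help_text="Scraper with the lowest priority number is tried first.",
    )

    class Meta:
        ordering = ["priority"]
```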
The other webpage scrapers will not work for internal webpages. Try
accessing those URLs directly if they are visible to the Khoj server
over the network.
Only enable this by default for self-hosted, single-user setups.
Otherwise the ability to scan the internal network would be a
liability! For use-cases where it makes sense, the Khoj server admin
can explicitly add the direct webpage scraper via the admin panel.
- Set up scrapers via API keys, by explicitly adding them via the
  admin panel, or by enabling only a single scraper to use via server
  chat settings
- Use validation to ensure only valid scrapers are added via the admin
  panel. For example, an API key must be present for scrapers that
  require one.
- Modularize the read webpage functions to take the API key and URL as
  arguments (see the sketch after this list). This removes their
  dependence on constants loaded in online_search; the functions are
  now mostly self-contained.
- Improve the ability to read webpages by using the speed and success
  rate of the different scrapers. The optimal configuration still needs
  to be discovered. This should reduce webpage read and response
  generation time.
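For instance, a self-contained reader might look like the sketch
below; the endpoint is Jina's public reader, but the function name and
shape are illustrative rather than Khoj's actual code:

```python
import aiohttp

async def read_webpage_with_jina(url: str, api_key: str | None = None) -> str:
    # API key and URL come in as arguments instead of module constants
    headers = {"Authorization": f"Bearer {api_key}"} if api_key else {}
    async with aiohttp.ClientSession() as session:
        async with session.get(f"https://r.jina.ai/{url}", headers=headers) as response:
            response.raise_for_status()
            return await response.text()
```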
Previously, we'd run separate webpage read and extract relevant
content pipes for each distinct (query, url) pair.
Now we aggregate all queries for each url to extract information from
and run the webpage read and extract relevant content pipes once for
each distinct url.
Even though the webpage content extraction pipes were previously run
in parallel, they increased response time by
1. adding more context for the response generation chat actor to
respond from
2. being more susceptible to the page read and extract latencies of
the parallel jobs

The aggregated retrieval of context for all queries on a given webpage
could result in some hit to context quality. But it should improve
response time, quality, and cost, and reduce their variability.
Set the FIRECRAWL_TO_EXTRACT environment variable to true to have
Firecrawl scrape and extract content from the webpage using their LLM.
This could be faster; quality is unclear since the LLM used is not
disclosed.
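Only the env var name comes from this change; the parsing below is an
assumed sketch of how the flag might be read:

```python
import os

# Treat only an explicit "true" (any casing) as enabling LLM extraction
FIRECRAWL_TO_EXTRACT = os.getenv("FIRECRAWL_TO_EXTRACT", "false").lower() == "true"

def firecrawl_mode() -> str:
    # Use Firecrawl's LLM-powered extraction when enabled, else plain scraping
    return "extract" if FIRECRAWL_TO_EXTRACT else "scrape"
```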
Firecrawl is open-source and self-hostable, with a default hosted
service provided, similar to Jina.ai. So it can be
1. Self-hosted as part of a private Khoj cloud deployment
2. Used directly by getting an API key from the Firecrawl.dev service

This is an alternative to Olostep and Jina.ai for reading webpages.
Khoj shouldn't refuse to respond to the user if web lookups fail.
It should transparently mention that online search etc. failed,
but try to respond as best it can without those references.
This change ensures a response to the user's query is attempted even
when web info retrieval fails.
The Hugging Face inference endpoint can be flaky. Khoj shouldn't
refuse to respond to the user if document search fails.
It should transparently mention that document lookup failed,
but try to respond as best it can without the document references.
This change provides graceful failover when inference endpoint
requests fail, either when encoding the query or when reranking
retrieved docs.
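An illustrative failover pattern: degrade to an answer without
references and surface the failure, instead of aborting the chat
(function names here are assumptions):

```python
import logging

logger = logging.getLogger(__name__)

async def search_with_failover(query: str, search_documents):
    # search_documents stands in for the inference endpoint backed search
    try:
        return await search_documents(query), None
    except Exception as e:
        logger.error(f"Document search failed: {e}", exc_info=True)
        # Pass a note downstream so the chat model can transparently
        # mention the failure while still attempting a response
        return [], "Note: Document lookup failed. Answer from general knowledge."
```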
- Remove the unused subscribed variable from the chat API
- Fix client app logging, which was unexpectedly dropped when the chat
API was migrated to advanced streaming in July