- Expose ability to modify search model via Django admin interface
- Previously the bi_encoder and cross_encoder models to use were set in code
- Now they are user configurable, though a default config is still generated automatically
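
A minimal sketch of what such an admin-editable config could look like; the model and field names below (and the default encoder choices) are illustrative assumptions, not Khoj's exact schema:

```python
# Illustrative sketch only: stores the bi/cross encoder choices in the database
# so they can be edited from the Django admin. Names are assumptions.
from django.contrib import admin
from django.db import models


class SearchModelConfig(models.Model):
    """User-editable choice of sentence-transformer models for search."""

    name = models.CharField(max_length=200, default="default")
    bi_encoder = models.CharField(
        max_length=200,
        default="sentence-transformers/multi-qa-MiniLM-L6-cos-v1",
    )
    cross_encoder = models.CharField(
        max_length=200,
        default="cross-encoder/ms-marco-MiniLM-L-6-v2",
    )

    def __str__(self):
        return self.name


# Registering the model makes it editable via the Django admin interface.
admin.site.register(SearchModelConfig)
```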
### Overview
Prepare Khoj with multi-user, db support for Khoj Cloud release
### Details
- Add first run experience to configure Khoj via khoj CLI
- Improve Web app settings page: Move files data into content section card. Move content index update button(s) to content section
- Improve OpenAI chat prompts
- Push more general information for OpenAI models into system prompt
- Make it more aware of its current capabilities
- Reduce how strongly it is prompted to ask follow-up questions
- Rate-limit calls to the chat API (a sliding-window sketch follows after this list)
- Add back search results quality threshold
- Normalize quality score definitions across the cross_encoder and bi_encoder to a distance metric
- Remove reference to deprecated button
- Await result of the search query
- Fix Langchain issue by allowing the Docker image to rebuild with a later package version
- During the migration, the confidence score stopped being used: it was passed down from the API but went unused further down the pipeline
- Remove score thresholding for images, as the image search confidence score differs from the text search model's distance score
- The default score threshold of 0.15 was determined experimentally by manually inspecting search results versus distance for a few queries
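
The rate limiting called out above could be as simple as a per-user sliding window. A minimal sketch, assuming an in-memory limiter with illustrative class names and limits rather than the actual implementation:

```python
# Illustrative sketch of per-user rate limiting for the chat API.
# Names and limits here are assumptions, not the shipped implementation.
import time
from collections import defaultdict, deque


class SlidingWindowRateLimiter:
    def __init__(self, max_requests: int = 10, window_seconds: int = 60):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.requests = defaultdict(deque)  # user_id -> request timestamps

    def allow(self, user_id: str) -> bool:
        now = time.time()
        window = self.requests[user_id]
        # Drop timestamps that have fallen outside the window
        while window and now - window[0] > self.window_seconds:
            window.popleft()
        if len(window) >= self.max_requests:
            return False
        window.append(now)
        return True


limiter = SlidingWindowRateLimiter(max_requests=10, window_seconds=60)
if not limiter.allow("user-123"):
    raise RuntimeError("Too many chat requests, please slow down")
```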
- Use distance instead of confidence as the metric for search result quality.
Previously we'd moved text search from a confidence score to a distance metric.
Now cross encoder and image search scores are also converted to a distance metric for consistent results sorting.
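
A minimal sketch of that normalization, assuming similarity/confidence scores already lie in [0, 1] and reusing the experimentally chosen 0.15 distance cutoff from above; the helper names are illustrative, not the actual code:

```python
# Illustrative sketch: normalize different scores to a single "distance"
# metric (lower is better) so results sort consistently across models.

def similarity_to_distance(similarity: float) -> float:
    """Convert a [0, 1] similarity/confidence score into a distance."""
    return 1.0 - similarity


def filter_and_sort(hits: list[dict], max_distance: float = 0.15) -> list[dict]:
    """Keep hits under the distance threshold, best (smallest distance) first."""
    kept = [hit for hit in hits if hit["distance"] <= max_distance]
    return sorted(kept, key=lambda hit: hit["distance"])


hits = [
    {"entry": "note A", "distance": similarity_to_distance(0.92)},  # 0.08, kept
    {"entry": "note B", "distance": similarity_to_distance(0.70)},  # 0.30, filtered out
]
print(filter_and_sort(hits))  # -> only "note A"
```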
Remove the Results Count button from the web app. It hangs awkwardly on the page with little context for its purpose.
Reintroduce it in the Search card once that is created under the Features section.
Reduce user confusion by combining the config update with the index update for each content type.
Now only a single click is required to configure any content type, instead of two clicks on two separate pages.
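
A rough sketch of that combined flow, with hypothetical function names and in-memory storage standing in for the real config persistence and indexing pipeline:

```python
# Hypothetical sketch: a single handler that saves the content config and
# immediately re-indexes that content type, so one click does both steps.
# All names and the in-memory storage here are illustrative assumptions.

CONTENT_CONFIGS: dict[str, dict] = {}


def save_config_and_update_index(content_type: str, config: dict) -> dict:
    CONTENT_CONFIGS[content_type] = config                 # 1. persist settings
    entries_indexed = update_content_index(content_type)   # 2. rebuild the index
    return {"content_type": content_type, "entries_indexed": entries_indexed}


def update_content_index(content_type: str) -> int:
    # Placeholder for the real indexing pipeline: pretend we indexed the
    # files referenced by the stored config for this content type.
    config = CONTENT_CONFIGS.get(content_type, {})
    return len(config.get("input_files", []))


print(save_config_and_update_index("org", {"input_files": ["~/notes/todo.org"]}))
```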
- The notes prompt doesn't need to be so tuned to question answering. The user may just want to talk about life; the notes should be used to respond to those conversations too, not only to retrieve answers
- The system and notes prompts were pushing the model to ask follow-up questions a little too often. Reduce how strongly follow-up questions are encouraged
The chat models sometimes output reference notes directly in the chat body in unformatted form, specifically as Notes:\n['. Prevent that, since reference notes are already shown separately in a clean, formatted form.
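
One way to guard against such a leak on the output side, sketched with an assumed regex and function name (the actual change may simply be prompt-side):

```python
# Illustrative sketch: drop a raw "Notes:\n['..." dump if the model leaks it
# into the chat body. The pattern and function name are assumptions.
import re

RAW_NOTES_PATTERN = re.compile(r"\n?Notes:\n\[.*", re.DOTALL)


def remove_raw_reference_notes(chat_response: str) -> str:
    """Strip an unformatted notes dump; references are rendered separately."""
    return RAW_NOTES_PATTERN.sub("", chat_response).rstrip()


reply = "Here's your plan for the week.\nNotes:\n['* Buy groceries', '* Call mom']"
print(remove_raw_reference_notes(reply))  # -> "Here's your plan for the week."
```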