No longer need the response generator to stuff compiled references into the
chat stream using a "### compiled references:" separator.
References are now sent to clients as structured JSON while streaming.
Previously the loading animation would be at the top of the message. Moving it
to the bottom is more intuitive and easier to track.
Remove white-space: pre from list elements. It was adding too much y-axis
padding to chat messages (and train of thought).
- Details
Only return notes references, online references, inferred queries and the
generated response in non-streaming mode. Do not return train of thought
and other status messages.
Incorporate missing logic from the old chat API router into the new one.
- Motivation
So we can halve the chat API code by getting rid of the duplicate logic
for the websocket router.
The deduplicated code:
- Avoids inadvertent logic drift between the 2 routers
- Improves dev velocity
- Overview
Use a simpler HTTP streaming response to send status messages, alongside the
response and references, from server to clients via the API.
Update the web client to use the streamed response to show train of thought,
stream the response and render references.
- Motivation
This should allow other Khoj clients to pass auth headers and receive
Khoj's train of thought messages from the server over a simple HTTP
streaming API.
It'll also eventually deduplicate chat logic across the /websocket and
/chat API endpoints and help maintainability and dev velocity.
- Details
- Pass references as a separate streaming message type for simpler
  parsing. Remove passing "### compiled references" altogether once
  the original /api/chat API is deprecated/merged with the new one
  and clients have been updated to consume references via this new
  mechanism
- Save the message to the conversation even if the client disconnects.
  This is done by not breaking out of the async iterator that is sending
  the LLM response, as the conversation save is called at the end of the
  iteration
- Handle parsing chunked JSON responses into valid JSON on the client.
  This requires additional logic on the client side but makes the client
  more robust to the server chunking the JSON response such that each
  chunk isn't itself necessarily valid JSON (see the sketch after this
  list)
- Convert functions in the SSE API path into async generators using yield
- Validate that the image generation, online, notes lookup and general
  paths of a chat request are handled fine by the web client and server API
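
To make the streaming mechanics above concrete, here is a minimal, hypothetical sketch in Python. It assumes the server frames each message as newline-delimited JSON emitted from an async generator and that chunk boundaries can fall mid-message; the message shape, delimiter and function names are illustrative assumptions, not Khoj's actual wire format.

```python
import json


# Assumed server side: an async generator that yields each status update,
# reference list or response fragment as its own JSON line
async def stream_chat_events(events):
    for event_type, data in events:
        yield json.dumps({"type": event_type, "data": data}) + "\n"


# Assumed client side: buffer raw chunks and only parse complete lines,
# since a chunk boundary may split a JSON message in two
def iter_json_messages(chunks):
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        while "\n" in buffer:
            line, buffer = buffer.split("\n", 1)
            if line.strip():
                yield json.loads(line)
```

The newline delimiter here stands in for whatever framing the server actually uses; the key point is that the client reassembles chunks before parsing instead of assuming each chunk is valid JSON on its own.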
- Deprecate khoj-assistant pypi package. Use more accurate and
succinct pypi project name, khoj
- Update references to use the khoj pypi package in docs and code instead
  of the legacy khoj-assistant pypi package
- Update the pypi workflow to publish to both khoj and khoj-assistant for now
- Update the stale python 3.9 support mentioned in our pyproject. We can't
  support python 3.9 as we depend on the latest Django, which supports >=3.10
- Major
- Ask for prompt in prose
- Remove seed from SD3 image generation to improve diversity of outputs
  for a given prompt.
  Otherwise, for conversations with similar sounding prompts, the images
  would be almost exactly the same. This may be another indicator of
  SD3's inability to capture detailed instructions
- Consistently use "prompt" wording instead of "query" in the improved
  image generation prompts.
  Previously a mix of those terms was being used, which could confuse
  the chat model
- Minor
- Add day of week to prompt
- Remove the 2-5 sentence limit on instructions to SD3. It seems able to
  follow longer instructions, just with less fidelity than DALLE. And
  the 2-5 sentence instruction limit wasn't being adhered to anyway
- Improve the ability to edit and refine the image based on follow-up
  instructions from the user
- Align prompts for DALLE and SD3. The only difference is wrapping text
  to be rendered in quotes for SD3. This improves its ability to render
  requested text. DALLE cannot render text as well or as consistently
- Because we're using the FastAPI web framework with the Django ORM, we run into some interesting conditions around connection pooling and clean-up. Open, stale connections to the DB pile up recurringly once the server has been running for a while. To mitigate this, given that Starlette and Django run in different Python threads, add a middleware that calls the connection clean-up method in each of those threads.
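
A minimal sketch of such a middleware, assuming Starlette's `BaseHTTPMiddleware` and Django's `close_old_connections()`. Since Django DB connections are thread-local, the sketch runs the clean-up both on the event loop thread and on a worker thread from the pool that serves sync code; the exact thread coverage in Khoj may differ.

```python
from django.db import close_old_connections
from starlette.concurrency import run_in_threadpool
from starlette.middleware.base import BaseHTTPMiddleware


class CloseDatabaseConnectionsMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request, call_next):
        # Django DB connections are thread-local, so close stale ones in
        # the event loop thread and in a worker thread before the request
        close_old_connections()
        await run_in_threadpool(close_old_connections)
        try:
            return await call_next(request)
        finally:
            close_old_connections()
```

It would be registered on the FastAPI app with `app.add_middleware(CloseDatabaseConnectionsMiddleware)`.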
- Issue
The Khoj docker build would fail with `ImportError: libGL.so.1: cannot open shared object file: No such file or directory`. The missing library is required by Khoj's RapidOCR python package dependency.
- Fix
A minimal set of system packages has been added to the Docker image to resolve this issue.
- Transcribe on holding the Ctrl+s keyboard shortcut
- Transcribe on holding down the transcribe button via mouse too
- Make the transcribe button robust to inadvertent touches by using a timeout
- Do not transcribe or trigger auto-send on silence. Silence detection
  is super rudimentary; it just blocks the standard phrases Whisper
  emits when there is no speech
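
A hypothetical sketch of that rudimentary silence filter; the artifact strings below are assumed examples of what Whisper tends to emit on silent audio, not a confirmed or exhaustive list.

```python
# Assumed examples of text Whisper commonly emits for silent audio
SILENCE_ARTIFACTS = {"", ".", "you", "Thank you.", "Thanks for watching!"}


def is_probably_silence(transcript: str) -> bool:
    # Treat the transcript as silence if it is only a known artifact phrase
    return transcript.strip() in SILENCE_ARTIFACTS
```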
### Fix
- Fix degrade in speed when indexing large files
- Resolve org-mode indexing bug by splitting current section only once by heading
- Improve summarization by fixing formatting of text in indexed files
### Improve
- Improve scaling of user and admin flows to delete all entries for a user
- Split once by heading (=first_non_empty) to extract the current section body.
  Otherwise child headings with the same prefix as the current heading
  would cause the section split to go into an infinite loop
- Also add a check to prevent getting into a recursive loop while trying
  to split an entry into sub-sections
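
A minimal sketch of the split-once fix, using a hypothetical org-mode section. With maxsplit=1, a child heading that shares the parent heading's prefix stays inside the section body instead of re-triggering the split:

```python
section = "* Tasks\nbody text\n** Tasks for today\nchild body"
current_heading = "* Tasks"

# maxsplit=1: only the first occurrence of the heading splits the section,
# so the child heading "** Tasks for today" (which contains "* Tasks" as a
# substring) stays intact inside the body
_, section_body = section.split(current_heading, 1)
```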
Adding files to the DB for summarization was slow and buggy in two ways:
- We were updating the same text of modified files in the DB once per
  chunk, i.e. number-of-chunks-per-file times
- The `" ".join(file_content)` call was separating each character in the
  file content with a space, since `file_content` is a string rather than
  a list of strings. This formatted the original file content incorrectly
  before storing it in the DB

Because this code ran in the main file indexing path, it was slowing down
file indexing. Knowledge bases with larger files were impacted more strongly
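
To illustrate the join bug: `str.join` iterates over its argument, so passing a single string instead of a list of strings inserts the separator between every character.

```python
file_content = "Hello world"

# Buggy: joining a string iterates over its characters
" ".join(file_content)        # -> 'H e l l o   w o r l d'

# Intended: joining a list of chunks preserves the content
" ".join(["Hello", "world"])  # -> 'Hello world'
```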
- Delete entries in batches to improve the efficiency of the query at
  scale (see the sketch below)
- Share code to delete all user entries between its async and sync methods
- Add an indicator to show when files are being deleted on the web config page
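
A hypothetical sketch of batched deletion with the Django ORM; the `Entry` model, `user` filter and batch size are illustrative assumptions, not Khoj's actual schema.

```python
def delete_user_entries(user, batch_size=1000):
    # Delete in fixed-size batches so no single DELETE statement has to
    # touch an unbounded number of rows at once
    while True:
        ids = list(
            Entry.objects.filter(user=user)
            .values_list("id", flat=True)[:batch_size]
        )
        if not ids:
            break
        Entry.objects.filter(id__in=ids).delete()
```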