- Deduplicate code to collect chat telemetry by relying on the
end_llm_response event
- Log time to first token and total chat response time to enable latency
analysis of Khoj as an agent, not just the latency of the LLM (see the
sketch after this list)
- Remove duplicate timer in the image generation path
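A minimal sketch of such latency instrumentation, wrapping a streamed response in an async generator. The names here are illustrative, not Khoj's actual helpers:

```python
import logging
import time
from typing import AsyncIterator

logger = logging.getLogger(__name__)

async def timed_llm_response(chunks: AsyncIterator[str]) -> AsyncIterator[str]:
    """Wrap a streamed LLM response to log latency telemetry (hypothetical)."""
    start = time.perf_counter()
    seen_first_token = False
    async for chunk in chunks:
        if not seen_first_token:
            logger.info(f"Time to first token: {time.perf_counter() - start:.3f}s")
            seen_first_token = True
        yield chunk
    # Total time covers Khoj as an agent (search, tools, render), not just the LLM call
    logger.info(f"Total chat response time: {time.perf_counter() - start:.3f}s")
```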
The response generator no longer needs to stuff compiled references into
the chat stream using the "### compiled references:" separator.
References are now sent to clients as structured JSON while streaming
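As an illustration, each set of references can travel as its own structured message instead of being spliced into the response text. The field names below are hypothetical, not Khoj's actual schema:

```python
# Before: references appended to the raw text stream
# "...final response text### compiled references:[{...}]"

# After: references streamed as a self-describing JSON message
reference_message = {
    "type": "references",
    "notes": ["First matching note snippet", "Second matching note snippet"],
    "online": {"inferred_query": "example query", "results": []},
}
```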
## Overview
- Gemma 2 is a new open model family by Google. They've released 9B and 27B param models. A 2B model is also expected.
- It performs really well on the Chatbot Arena and shows good performance in testing within Khoj as well.
- Llama.cpp support for Gemma 2 architecture seems to have stabilized
- If Gemma 2 performs well in further testing, it can be made the default offline chat model for Khoj
- Once the 2B param model is released, the model size to download can be automatically chosen based on (V)RAM available
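A sketch of how that automatic choice could work, assuming illustrative size thresholds and model identifiers (not Khoj's actual logic):

```python
import psutil

def pick_gemma2_variant(vram_gb: float | None = None) -> str:
    """Pick the largest Gemma 2 variant that plausibly fits in memory."""
    # Fall back to free system RAM when no GPU VRAM is reported
    available_gb = vram_gb or psutil.virtual_memory().available / 1024**3
    if available_gb >= 24:
        return "gemma-2-27b"  # hypothetical model identifiers
    if available_gb >= 10:
        return "gemma-2-9b"
    return "gemma-2-2b"
```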
## Major
- Support Gemma 2 for Offline Chat
- Improve and fix chat model prompts for better, consistent context
## Minor
- Fix and improve offline chat actor, director tests
- Improve offline chat truncation to consider chat message delimiter tokens
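For illustration, truncation that budgets for delimiter tokens might look like this sketch. The per-message overhead value is an assumption that depends on the chat template:

```python
def truncate_history(messages: list[str], count_tokens, max_tokens: int,
                     delimiter_overhead: int = 4) -> list[str]:
    """Drop oldest messages until history fits, counting the tokens the
    chat template adds around each message, not just the message text."""
    kept, budget = [], max_tokens
    for message in reversed(messages):  # keep the most recent messages
        cost = count_tokens(message) + delimiter_overhead
        if cost > budget:
            break
        kept.append(message)
        budget -= cost
    return list(reversed(kept))
```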
Previously the loading animation would be at the top of the message.
Moving it to the bottom is more intuitive and easier to track.
Remove white-space: pre from list elements. It was adding too much
y-axis padding to chat messages (and train of thought)
- Details
Only return notes refs, online refs, inferred queries and the generated
response in non-streaming mode. Do not return train of thought and
other status messages
Incorporate missing logic from old chat API router into new one.
- Motivation
So we can halve the chat API code by getting rid of the duplicate logic
for the websocket router
The deduplicated code:
- Avoids inadvertent logic drift between the 2 routers
- Improves dev velocity
- Overview
Use a simpler HTTP streaming response to send status messages, alongside
the response and references, from server to clients via the API.
Update the web client to use the streamed response to show train of thought,
stream the response and render references.
- Motivation
This should allow other Khoj clients to pass auth headers and receive
Khoj's train of thought messages from the server over a simple HTTP
streaming API.
It'll also eventually let us deduplicate chat logic across the /websocket
and /chat API endpoints, improving maintainability and dev velocity
- Details
- Pass references as a separate streaming message type for simpler
  parsing (sketched after this list). Remove passing "### compiled
  references" altogether once the original /api/chat API is
  deprecated/merged with the new one and clients have been updated to
  consume references via this new mechanism
- Save the message to the conversation even if the client disconnects.
  This is done by not breaking out of the async iterator that sends the
  llm response, as the conversation is saved at the end of the iteration
- Handle parsing chunked json responses as valid json on the client.
  This requires additional logic on the client side but makes the client
  more robust to the server chunking the json response such that each
  chunk isn't itself necessarily valid json
- Convert functions in SSE API path into async generators using yields
- Validate that the image generation, online, notes lookup and general
  chat request paths are handled fine by the web client and server API
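A condensed sketch of the streaming pattern above: the server yields typed, newline-delimited JSON messages from one async generator, and the client buffers chunks until it has complete messages to parse. The framing, message types and helper names are illustrative assumptions, not Khoj's actual wire format:

```python
import json
from typing import AsyncIterator

def save_conversation(response: str) -> None:
    """Placeholder for persisting the chat turn to the conversation log."""

async def stream_chat_response(
    llm_chunks: AsyncIterator[str], references: list[str]
) -> AsyncIterator[str]:
    """Server: one async generator yields status, references and response
    as separate typed messages over the HTTP streaming response."""
    yield json.dumps({"type": "status", "data": "Searching your notes..."}) + "\n"
    yield json.dumps({"type": "references", "data": references}) + "\n"
    response = ""
    async for chunk in llm_chunks:
        response += chunk
        yield json.dumps({"type": "response", "data": chunk}) + "\n"
    # Reached without breaking out of the iteration, so the turn gets
    # saved even when the client stops reading mid-stream
    save_conversation(response)

def parse_chunks(buffer: str, chunk: str) -> tuple[str, list[dict]]:
    """Client: accumulate raw chunks and only json-parse complete lines,
    since a single chunk isn't necessarily valid JSON by itself."""
    buffer += chunk
    *complete, buffer = buffer.split("\n")
    return buffer, [json.loads(line) for line in complete if line]
```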
* Update the automations UI to use a more suitable color distribution based on new designs
* Use accented colors for the metadata, update dark mode colors
* Update the form to use icons as well and render prettier inline form labels
Pull out /api/configure/phone API endpoints into /api/phone for a
more concise and sufficiently explanatory API path
Refactor Flow
1. Rename /api/configure/phone -> /api/phone
Now the API to configure all the AI models is under /api/models.
This provides better organization and API hierarchy. The /configure
url segment was redundant.
- Rename POST /api/phone to PATCH /api/phone
- Rename GET /api/configure to GET /api/settings
Refactor Flow
1. Move out POST /user/name to main api.py
2. Rename /api/configure/<type>/model -> /api/model/<type>
3. Rename @api_configure to @api_model
4. Rename file api_config.py to api_model.py
Pull out /api/configure/content API endpoints into /api/content to
allow for more logical organization of API path hierarchy
This should make the url more succinct and API request intent more
understandable by using existing HTTP method semantics along with the
path.
The /configure URL path segment was either
- redundant (e.g. POST /configure/notion) or
- incorrect (e.g. GET /configure/files)
Some examples of naming improvements:
- GET /configure/types -> GET /content/types
- GET /configure/files -> GET /content/files
- DELETE /configure/files -> DELETE /content/files
This should also align and merge better with the content indexing API
triggered via PUT, PATCH /content
Refactor Flow
1. Rename /api/configure/types -> /api/content/types
2. Rename /api/configure -> /api
3. Move /api/content to api_content from under api_config
- Remove unused full_corpus boolean. The full_corpus=False code path
  wasn't being used (except in a test)
- The full_corpus=True code path in use was ignoring file deletion
  requests sent by clients during sync. It's unclear why this was done
- Added a unit test to prevent regression and show that file deletions
  by clients during sync are no longer ignored
- This utilizes PUT, PATCH HTTP method semantics to remove the need for
  the "regenerate" query param and "/update" url suffix
- This should make the url more succinct and the API request intent more
  understandable by using existing HTTP method semantics (sketched below)
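A hedged FastAPI-style sketch of what these method semantics imply. The handler names, signatures and return values are illustrative, not Khoj's actual code:

```python
from fastapi import APIRouter, UploadFile

api_content = APIRouter()

@api_content.put("/api/content")
async def replace_content(files: list[UploadFile]) -> dict:
    # PUT = regenerate the index from scratch, expressing what the old
    # "regenerate=true" query param did
    return {"status": "ok"}

@api_content.patch("/api/content")
async def update_content(files: list[UploadFile]) -> dict:
    # PATCH = incrementally update the existing index, replacing the
    # old "/update" url suffix
    return {"status": "ok"}

@api_content.get("/api/content/files")
async def get_files() -> list[str]:
    # GET reads state; the old GET /configure/files name implied a write
    return []

@api_content.delete("/api/content/files")
async def delete_files() -> dict:
    return {"status": "ok"}
```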
- Add day of week to system prompt of openai, anthropic, offline chat models
- Pass more context to the offline chat system prompt so it can
  - ask follow-up questions
  - know where to find information about Khoj (itself)
- Fix the output mode selection prompt. Log an error if the model does
  not select a valid option from the list of output modes provided
- Use consistent names for the questions and answers passed to the
  extract_questions_offline prompt
- Log which model extracts questions and what the offline chat model sees
  as context, similar to the debug log shown for openai models
- Pass the system message as the first user chat message since Gemma 2
  doesn't support system messages (see the sketch after this list)
- Use the gemma-2 chat format
- Pass the chat model name to the generic and extract questions chat actors
  It is used to figure out which chat template to use for the model
  For the generic chat actor this argument was already available but not
  being passed, which was confusing
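A sketch of the Gemma 2 workaround above: fold the system message into the first user turn and render the `<start_of_turn>` chat format Gemma 2 expects. The merge strategy here is an illustrative assumption, and any BOS token is assumed to be added by the tokenizer:

```python
def format_gemma2_prompt(messages: list[dict[str, str]]) -> str:
    """Render chat messages in the Gemma 2 chat format. Gemma 2 has no
    system role, so any system message is folded into the first user turn."""
    system, turns = "", []
    for message in messages:
        if message["role"] == "system":
            system = message["content"]
        else:
            turns.append(dict(message))
    if system and turns and turns[0]["role"] == "user":
        turns[0]["content"] = f"{system}\n\n{turns[0]['content']}"
    prompt = ""
    for turn in turns:
        # Gemma 2 uses "user" and "model" roles in its chat template
        role = "model" if turn["role"] == "assistant" else "user"
        prompt += f"<start_of_turn>{role}\n{turn['content']}<end_of_turn>\n"
    return prompt + "<start_of_turn>model\n"
```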
### Major
- Reuse get config data logic across config pages on web client
- Make config api endpoint urls and response fields consistent
- Rename API path /api/config to /api/configure
- Move Web, Desktop client settings page to be under `/settings` from the previous `/config` url path
### Minor
- Pass isMobileWidth prop to SidePanel via chat share interface
- Turn prettier off instead of throwing error for now
- Do not explicitly add the line-clamp plugin as it's in Tailwind by default
- Update references to the settings page to use new url across docs
and code
- Rename desktop and web settings page to settings.html instead of
  config[ure].html
- Use name, id for every [search|chat|voice|paint]_model_option
- Rename current_model_state field to more intuitive enabled_content_source
- Update references to the updated fields in config.html
- Deprecate the khoj-assistant pypi package. Use the more accurate and
  succinct pypi project name, khoj
- Update references to use the khoj pypi package in docs and code instead
  of the legacy khoj-assistant pypi package
- Update pypi workflow to publish to both khoj, khoj-assistant for now
- Update the stale python 3.9 support mentioned in our pyproject. We can't
  support python 3.9 as we depend on the latest Django, which supports
  python >=3.10