Overview
- The default Django admin panel UI looked pretty dated and had no
Khoj-specific branding
- Used the Unfold Django admin panel theme for a modern look
- Used the Khoj logo and name in the admin panel title, headings, and favicons
Details:
All models shown on the admin panel need their ModelAdmin to inherit from
unfold's ModelAdmin for the styling to apply (see the sketch after this list). So:
- Made all model admins on the admin panel inherit from unfold's ModelAdmin
- Subclassed UserAdmin to inherit from unfold's ModelAdmin
- Deregistered the unused auth Group model from the admin panel.
We can add it back when it's actually used; this avoids confusion for now
- Explicitly registered DjangoJobExecution on the admin panel and made its
admin inherit from unfold.admin.ModelAdmin as well
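
A minimal sketch of the pattern, assuming django-unfold and django-apscheduler; the `khoj.database.models` import paths are illustrative, not verbatim from the codebase:

```python
from django.contrib import admin
from django.contrib.auth.admin import UserAdmin as BaseUserAdmin
from django.contrib.auth.models import Group
from django_apscheduler.models import DjangoJobExecution
from unfold.admin import ModelAdmin

from khoj.database.models import ChatModelOptions, KhojUser  # illustrative paths


@admin.register(ChatModelOptions)
class ChatModelOptionsAdmin(ModelAdmin):
    """Inherit from unfold's ModelAdmin so the theme styles this model."""


class KhojUserAdmin(BaseUserAdmin, ModelAdmin):
    """Keep stock UserAdmin behaviour, with unfold's styling mixed in."""


admin.site.register(KhojUser, KhojUserAdmin)

# Hide the unused auth Group model to avoid confusion for now.
admin.site.unregister(Group)

# If django-apscheduler already registered its stock admin, replace it
# with an unfold-styled one.
admin.site.unregister(DjangoJobExecution)


@admin.register(DjangoJobExecution)
class DjangoJobExecutionAdmin(ModelAdmin):
    pass
```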
- Make the desktop app mainly a file-syncing client for users who have lots of documents to share with Khoj. The web app already provides a fairly robust chat client which anyone can use on their computer.
- The chat client in the desktop app had significantly drifted from our current brand/theme and didn't provide enough value-add to justify updating. Later, we will make it easier to install the existing web app as a desktop PWA.
- Use the Bubblewrap-generated splash screen and notification icons, created
from the 1200x1200 high-res Khoj icon on assets.khoj.dev.
- Discard the Bubblewrap-generated launcher icons.
- Fix orientation to respect the phone's orientation settings; the previous
"any" value did not.
- Add 512x512 and 192x192 Khoj maskable icons to the web app manifest for
Android rendering (see the manifest sketch below)
- Add the id, categories, etc. suggested by PWABuilder
- Use higher quality icon images for the splash screen than what
Bubblewrap creates by default
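
For illustration, the manifest additions look roughly like the sketch below; the icon paths, id, and category values are assumptions, not copied from the actual manifest:

```python
import json

# Sketch of the web app manifest additions. Icon paths, id and categories
# are placeholders, not Khoj's actual values.
manifest_additions = {
    "id": "/",
    "categories": ["productivity"],
    "icons": [
        {
            "src": "/static/khoj_maskable_512.png",
            "sizes": "512x512",
            "type": "image/png",
            "purpose": "maskable",
        },
        {
            "src": "/static/khoj_maskable_192.png",
            "sizes": "192x192",
            "type": "image/png",
            "purpose": "maskable",
        },
    ],
}
print(json.dumps(manifest_additions, indent=2))
```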
Encode the API key in the header, the POST request body, and the GET query
param for search as UTF-8 to avoid the multibyte character issue when making
API calls from khoj.el to the Khoj server.
Resolves #935
- Added it to the Django migrations so that it auto-triggers when someone updates their server and starts it up again for the first time. This will require that they update their clients as well in order to view/consume image content. A sketch of such a data migration follows this list.
- Remove the server-side code paths that parse the text-to-image intent, as they will no longer be necessary once the chat logs are migrated
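
A sketch of what such a data migration can look like; the app label, model, and field names are assumptions for illustration, not the actual khoj schema:

```python
from django.db import migrations


def migrate_image_chat_logs(apps, schema_editor):
    # The Conversation model and conversation_log layout are illustrative.
    Conversation = apps.get_model("database", "Conversation")
    for conversation in Conversation.objects.iterator():
        chat_log = conversation.conversation_log or {}
        changed = False
        for message in chat_log.get("chat", []):
            if message.get("intent", {}).get("type", "").startswith("text-to-image"):
                # Tag the message as image content so clients render it
                # directly, instead of the server re-parsing the intent.
                message["message-type"] = "image"
                changed = True
        if changed:
            conversation.conversation_log = chat_log
            conversation.save(update_fields=["conversation_log"])


class Migration(migrations.Migration):
    dependencies = [("database", "XXXX_previous_migration")]  # placeholder
    operations = [
        migrations.RunPython(migrate_image_chat_logs, migrations.RunPython.noop),
    ]
```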
## Objective
Improve build speed and size of khoj docker images
## Changes
### Improve docker image build speeds
- Decouple web app and server build steps
- Build the web app and server in parallel
- Cache docker layers for reuse across dockerize github workflow runs
- Split Docker build layers for improved cacheability (e.g. separate `yarn install` and `yarn build` steps)
### Reduce size of khoj docker images
- Use an up-to-date `.dockerignore` to exclude unnecessary directories
- Do not install CUDA Python packages for CPU builds
### Improve web app builds
- Use consistent mechanism to get fonts for web app
- Make Tailwind extensions production dependencies instead of dev dependencies
- Make next.js create production builds for the web app (via `NODE_ENV=production` env var)
- Track and return cost and usage metrics in the chat API response
Track input and output token usage and cost of interactions with
OpenAI, Anthropic and Google chat models for each call to the Khoj chat API
- Collect, display and store the cost & accuracy of the eval run currently in progress
This provides more insight into eval runs during execution
instead of having to wait until the eval run completes.
- Previously, when the settings list became long, the dropdown would
overflow the screen height. Now its max height is clamped and it scrolls
vertically
- Previously the dropdown would take the width of its content. This
meant the menu could sometimes be narrower than the button, which felt
strange. Now the dropdown content is at least as wide as the parent button
- Track input and output token usage and cost for interactions
via the chat API with OpenAI, Anthropic and Google chat models
- Get usage metadata from OpenAI using stream_options (see the sketch
after this list)
- Handle OpenAI proxies that do not support passing usage in the response
- Add new usage and end-response events returned by the chat API
- These can be optionally consumed by clients at a later point
- Update streaming clients to mark the message as completed after the new
end-response event, not after the end-LLM-response event
- Ensure usage data from the final response generation step is included
- Pass usage data after the LLM response completes. This allows gathering
token usage and cost for the final response generation step across
streaming and non-streaming modes
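
For the OpenAI path, a minimal sketch of pulling usage out of a streamed response, assuming the openai v1 Python SDK (the model name is a placeholder):

```python
from openai import OpenAI

client = OpenAI()

# Ask OpenAI to append a final, usage-bearing chunk to the stream.
stream = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
    stream_options={"include_usage": True},
)

usage = None
for chunk in stream:
    if chunk.choices:
        print(chunk.choices[0].delta.content or "", end="")
    # The usage chunk has no choices. Proxies that ignore stream_options
    # never send it, so usage simply stays None for them.
    if chunk.usage:
        usage = chunk.usage

if usage:
    print(f"\nprompt={usage.prompt_tokens} completion={usage.completion_tokens}")
```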
- JSON extraction from LLMs is pretty decent now, so get the input tools and output modes all in one go (sketch below). It'll help the model think through the full cycle of what it wants to do to handle the request holistically.
- Make slight improvements to tool selection indicators
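
A sketch of the combined extraction, with a hypothetical prompt and plan schema (not Khoj's actual prompt):

```python
import json
from typing import Callable

# Hypothetical planning prompt; Khoj's actual prompt and schema differ.
PLANNING_PROMPT = (
    "Given the user query, reply with JSON like "
    '{"tools": ["notes", "online"], "output": "text"}.\n'
    "Query: "
)


def pick_tools_and_output(query: str, complete: Callable[[str], str]):
    """One LLM call selects both the input tools and the output mode."""
    plan = json.loads(complete(PLANNING_PROMPT + query))
    return plan.get("tools", ["notes"]), plan.get("output", "text")
```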