Commit graph

3963 commits

Author SHA1 Message Date
Debanjum
8f966b11ec Release Khoj version 1.30.5 2024-11-23 20:49:05 -08:00
Debanjum
498895a47d Fix libmusl error using pre-built llama-cpp-python wheel in prod Docker 2024-11-23 20:47:41 -08:00
Debanjum
e5b211a743 Release Khoj version 1.30.4 2024-11-23 19:48:21 -08:00
Debanjum
9848d89d03 Try build docker images with github high cpu, ram runner 2024-11-23 19:09:36 -08:00
Debanjum
04bb3d6f15 Fix libmusl error using pre-built llama-cpp-python wheel via Docker
Seems like llama-cpp-python pre-built wheels need libmusl. Otherwise
you run into runtime errors on Khoj startup via Docker.
2024-11-23 18:46:44 -08:00
Debanjum
8dd2122817 Set sample size to 200 for automated eval runs as well 2024-11-23 14:48:38 -08:00
Debanjum
c4ef31d86f Release Khoj version 1.30.3
2024-11-23 14:40:06 -08:00
Debanjum
15ae22bdcf Use pre-built llama-cpp-python wheel in Khoj docker images
Reduces build time and resolves FileNotFoundError 'ninja' during
llama-cpp-python local build.
2024-11-23 14:38:07 -08:00
sabaimran
4ac49ca90f Release Khoj version 1.30.2 2024-11-23 12:00:28 -08:00
Debanjum
5aa5cb1941 Add "New" section with latest updates to Readme
2024-11-23 01:36:50 -08:00
sabaimran
7f5bf35806 Disambiguate renewal_date type. Previously it was being used as None, False, and datetime in different places.
2024-11-22 12:06:20 -08:00
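The disambiguation described above can be sketched as a small normalizer. This is an illustrative guess at the shape of the fix, not Khoj's actual code; the function name is hypothetical:

```python
from datetime import datetime
from typing import Optional

def normalize_renewal_date(value) -> Optional[datetime]:
    """Normalize a field that was historically None, False, or datetime
    so callers only ever see Optional[datetime]."""
    if isinstance(value, datetime):
        return value
    # Treat legacy False/None/empty values uniformly as "no renewal date"
    return None
```

Callers can then branch on a single `is None` check instead of handling three possible types.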
sabaimran
5e8c824ecc Improve the experience for finding past conversations
- Add a conversation title search filter, and an agents filter, for finding conversations
- In the chat sessions API, return relevant agent style data
2024-11-22 12:03:01 -08:00
sabaimran
a761865724 Fix handling of customer.subscription.updated event to process new renewal end date 2024-11-22 12:03:01 -08:00
sabaimran
6a054d884b Add quicker/easier filtering on auth 2024-11-22 12:03:01 -08:00
Debanjum
b9a889ab69 Fix Khoj responses when code generated charts in response context
This fix should improve Khoj responses when charts appear in the
response context. It truncates code context before sharing it with the
response chat actors.

Previously, in default mode, Khoj would respond that it could not
create a chart but then include a generated chart in its response.

The truncate code context logic had been added to the research chat
actor for decision making, but it wasn't added to the conversation
response generation chat actors.

When Khoj generated charts with code for its response, the images in
the context would exceed context window limits.

So the truncation logic was made to drop all past context, including
chat history and the context gathered for the current response.

This would result in the chat response generator 'forgetting'
everything for the current response whenever code generated images or
charts in the response context.
2024-11-21 14:43:52 -08:00
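The truncation described above can be sketched as dropping bulky image payloads from code results before they reach the response generation actors. This is an illustrative sketch, not Khoj's actual implementation; the function name and result shape are assumptions:

```python
def truncate_code_context(code_results: dict) -> dict:
    """Keep code and text results, but replace heavy generated-image
    payloads with a short placeholder so the response context stays
    within the model's context window."""
    truncated = {}
    for query, result in code_results.items():
        slim = dict(result)  # shallow copy; leave the original intact
        if "image" in slim:
            slim["image"] = "[generated chart omitted from context]"
        truncated[query] = slim
    return truncated
```

Truncating only the image payloads, rather than dropping all past context, lets the response generator keep the chat history and gathered references.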
Debanjum
5475a262d4 Move truncate code context func for reusability across modules
The function needs to be used across routers and processors. Keeping
it in the run_code tool made it hard to use in other chat provider
contexts due to circular dependency issues created by the
send_message_to_model_wrapper func
2024-11-21 14:27:39 -08:00
Debanjum
f434c3fab2 Fix toggling prompt tracer on/off in Khoj via PROMPTRACE_DIR env var
The previous change to depend on just the PROMPTRACE_DIR env var
instead of KHOJ_DEBUG or the verbosity flag was partial/incomplete.

This fix adds all the changes required to only depend on the
PROMPTRACE_DIR env var to enable/disable prompt tracing in Khoj.
2024-11-21 14:06:00 -08:00
Debanjum
4a40cf79c3 Add docs on how to access self-hosted Khoj across devices using Tailscale 2024-11-21 11:07:18 -08:00
Debanjum
1f96c13f72 Enable starting khoj uvicorn server with ssl cert file, key for https
Pass your domain cert files via the --sslcert, --sslkey cli args.
For example, to start khoj at https://example.com, you'd run:

KHOJ_DOMAIN=example.com khoj --sslcert example.com.crt --sslkey example.com.key --host example.com

This sets up ssl certs directly with khoj without requiring a
reverse proxy like nginx to serve khoj behind https endpoint for
simple setups. More complex setups should, of course, still use a
reverse proxy for efficient request processing
2024-11-21 11:07:18 -08:00
sabaimran
9fea02f20f In telemetry, differentiate create_user google and email 2024-11-21 11:01:37 -08:00
sabaimran
9db885b5f7 Limit access to chat models to futurist users 2024-11-21 07:53:24 -08:00
sabaimran
7a00a07398 Add trailing slash to Ollama url in docs 2024-11-21 07:48:18 -08:00
sabaimran
3519dd76f0 Fix type of excalidraw image response 2024-11-20 19:01:13 -08:00
sabaimran
467de76fc1 Improve the image diagramming prompts and response parsing 2024-11-20 18:59:40 -08:00
Debanjum
50d8405981 Enable khoj to use terrarium code sandbox as tool in eval workflow 2024-11-20 14:19:27 -08:00
Debanjum
2203236e4c Update desktop app dependencies 2024-11-20 13:05:55 -08:00
Debanjum
409204917e Update documentation website dependencies 2024-11-20 13:05:32 -08:00
Debanjum
6f1adcfe67
Track Usage Metrics in Chat API. Track Running Cost, Accuracy in Evals (#985)
- Track, return cost and usage metrics in chat api response
  Track input, output token usage and cost of interactions with 
  openai, anthropic and google chat models for each call to the khoj chat api
- Collect, display and store costs & accuracy of eval run currently in progress
  This provides more insight into eval runs during execution 
  instead of having to wait until the eval run completes.
2024-11-20 12:59:44 -08:00
Debanjum
ffbd0ae3a5 Fix eval github workflow to run on releases, i.e on tags push 2024-11-20 12:57:42 -08:00
Debanjum
ed364fa90e Track running costs & accuracy of eval runs in progress
Collect, display and store running costs & accuracy of eval run.

This provides more insight into eval runs during execution instead of
having to wait until the eval run completes.
2024-11-20 12:40:51 -08:00
Debanjum
bbd24f1e98 Improve dropdown menus on web app setting page with scroll & min-width
- Previously when the settings list became long, the dropdown height
  would overflow the screen height. Now its max height is clamped and
  it scrolls vertically
- Previously the dropdown content would take the width of its content.
  This meant the menu could sometimes be narrower than the button,
  which felt strange. Now the dropdown content is at least as wide as
  the parent button
2024-11-20 12:27:13 -08:00
Debanjum
c53c3db96b Track, return cost and usage metrics in chat api response
- Track input, output token usage and cost for interactions
  via chat api with openai, anthropic and google chat models

- Get usage metadata from OpenAI using stream_options
- Handle openai proxies that do not support passing usage in response

- Add new usage, end response events returned by chat api.
  - This can be optionally consumed by clients at a later point
  - Update streaming clients to mark message as completed after new
    end response event, not after end llm response event
- Ensure usage data from final response generation step is included
  - Pass usage data after llm response complete. This allows gathering
    token usage and cost for the final response generation step across
    streaming and non-streaming modes
2024-11-20 12:17:58 -08:00
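The running cost and token tracking described above can be sketched as a small accumulator that is updated after each model call. This is a hedged sketch, not Khoj's actual code; the per-1K-token prices are made-up placeholders, not real provider rates:

```python
# Placeholder prices per 1K tokens; real rates vary by provider and model.
PRICE_PER_1K = {"input": 0.005, "output": 0.015}

def accumulate_usage(total: dict, input_tokens: int, output_tokens: int) -> dict:
    """Add one call's token counts to the running totals and
    recompute the running cost."""
    total["input_tokens"] = total.get("input_tokens", 0) + input_tokens
    total["output_tokens"] = total.get("output_tokens", 0) + output_tokens
    total["cost"] = (
        total["input_tokens"] / 1000 * PRICE_PER_1K["input"]
        + total["output_tokens"] / 1000 * PRICE_PER_1K["output"]
    )
    return total
```

Updating the same dict after the final response generation step, as the commit notes, ensures that step's tokens are counted in both streaming and non-streaming modes.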
Debanjum
80df3bb8c4 Enable prompt tracing only when PROMPTRACE_DIR env var set
Better to decouple prompt tracing from debug mode or verbosity level
and require explicit, independent config to enable prompt tracing
2024-11-20 11:54:02 -08:00
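The gating logic this commit describes amounts to one check on the environment. A minimal sketch, assuming the env var simply needs to be set to a non-empty value (the function name is illustrative):

```python
def prompt_tracing_enabled(env: dict) -> bool:
    """Prompt tracing turns on only when PROMPTRACE_DIR is set,
    independent of debug mode or verbosity level."""
    return bool(env.get("PROMPTRACE_DIR"))
```

In practice the check would read `os.environ`; a dict parameter keeps the sketch testable.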
Debanjum
9ab76ccaf1 Skip adding agent to chat metadata when chat is unset to avoid null ref 2024-11-19 21:10:23 -08:00
Debanjum
4da0499cd7 Stream responses by openai's o1 model series, as api now supports it
Previously o1 models did not support streaming responses via API. Now
they seem to.
2024-11-19 21:10:23 -08:00
sabaimran
e5347dac8c Fix base image used for prod in docs 2024-11-19 15:51:27 -08:00
sabaimran
b943069577 Fix button text, and login url in self-hosted auth docs 2024-11-19 15:50:13 -08:00
sabaimran
3b5e6a9f4d Update authentication documentation 2024-11-19 15:45:47 -08:00
Debanjum
7bdc9590dd Fix handling sources, output in chat actor when run as automated task
Remove unnecessary ```python prefix removal. It isn't being triggered
in json deserialize path.
2024-11-19 13:49:27 -08:00
Debanjum
0e7d611a80 Remove ```python codeblock prefix from raw json before deserialize 2024-11-19 12:53:52 -08:00
Debanjum
001c13ef43 Upgrade web app package dependencies 2024-11-19 12:53:52 -08:00
sabaimran
4f5c1eeded Update some of the open graph data for the documentation website 2024-11-19 11:14:46 -08:00
sabaimran
5134d49d71 Release Khoj version 1.30.1 2024-11-18 17:30:33 -08:00
sabaimran
8bdd0b26d3 Add a connections cleanup decorator to all scheduled tasks 2024-11-18 17:19:36 -08:00
Debanjum
817601872f Update default offline models enabled 2024-11-18 16:38:17 -08:00
Debanjum
45c623f95c Dedupe, organize chat actor, director tests
- Move Chat actor tests that were previously in chat director tests file
- Dedupe online, offline io selector chat actor tests
2024-11-18 16:10:50 -08:00
Debanjum
2a76c69d0d Run online, offline chat actor, director tests for any supported provider
- Previously online chat actors, director tests only worked with openai.
  This change allows running them for any supported online provider,
  including Google, Anthropic and OpenAI.

- Enable online/offline chat actor, director in two ways:
  1. Explicitly setting KHOJ_TEST_CHAT_PROVIDER environment variable to
     google, anthropic, openai, offline
  2. Implicitly by the first API key found from openai, google or anthropic.

- Default offline chat provider to use Llama 3.1 3B for faster, lower
  compute test runs
2024-11-18 15:11:37 -08:00
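The two-step provider selection above can be sketched as follows. This is an illustrative guess at the logic, not the actual test fixture, and the API key env var names are assumptions:

```python
def select_chat_provider(env: dict) -> str:
    """Pick the test chat provider: explicit KHOJ_TEST_CHAT_PROVIDER wins,
    else infer from the first provider API key found, else run offline."""
    explicit = env.get("KHOJ_TEST_CHAT_PROVIDER")
    if explicit in ("google", "anthropic", "openai", "offline"):
        return explicit
    # Hypothetical env var names for each provider's API key
    for key, provider in [
        ("OPENAI_API_KEY", "openai"),
        ("GEMINI_API_KEY", "google"),
        ("ANTHROPIC_API_KEY", "anthropic"),
    ]:
        if env.get(key):
            return provider
    return "offline"
```

Falling back to offline keeps the test suite runnable on machines with no API keys at all.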
Debanjum
653127bf1d Improve data source, output mode selection
- Set output mode to single string. Specify output schema in prompt
  - Both of these should encourage the model to select only 1 output mode
    instead of encouraging it in the prompt too many times
  - Output schema should also improve schema following in general
- Standardize variable, func name of io selector for readability
- Fix chat actors to test the io selector chat actor
- Make chat actor return sources, output separately for better
  disambiguation, at least during tests, for now
2024-11-18 15:11:37 -08:00
Debanjum
e3fd51d14b Pass user arg to create title from query in new automation flow 2024-11-18 12:58:10 -08:00
Debanjum
9e74de9b4f Improve serializing conversation JSON to print messages on console
- Handle chatml message.content with non-json serializable data like
  WebP image binary data used by Gemini models
2024-11-18 12:57:05 -08:00