Commit graph

3792 commits

Author SHA1 Message Date
Debanjum
80ee35b9b1 Wrap messages in web, obsidian UI to stay within screen when long links
Wrap long links etc. in chat messages and train of thought lists on
web app and obsidian plugin by breaking them into new lines by word
2024-11-10 14:49:51 -08:00
Debanjum
f967bdf702 Show correct example index being currently processed in frames eval
Previously, the batch start index wasn't being passed, so all batches
started in parallel showed the same processing example index

This change doesn't impact the evaluation itself, just the index shown
for the example currently being evaluated
2024-11-10 14:49:51 -08:00
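A minimal sketch of the fix described in this commit, assuming the eval script fans batches out with `concurrent.futures`; `evaluate_batch` and `run_eval` are illustrative names, not Khoj's actual code:

```python
# Illustrative only: pass each batch's start index to the worker so parallel
# batches report the right global example number instead of all starting at 0.
from concurrent.futures import ThreadPoolExecutor

def evaluate_batch(batch, batch_start_index):
    for i, example in enumerate(batch):
        # Without the offset, every parallel batch would log indices 0, 1, 2, ...
        print(f"Processing example {batch_start_index + i}: {example}")

def run_eval(dataset, batch_size=10):
    with ThreadPoolExecutor() as executor:
        for start in range(0, len(dataset), batch_size):
            executor.submit(evaluate_batch, dataset[start:start + batch_size], start)
```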
Debanjum
84a8088c2b Only evaluate non-empty responses to reduce eval script latency, cost
Empty responses from Khoj will always be incorrect, so there's no
need to call an evaluator agent to check them
2024-11-10 14:49:51 -08:00
sabaimran
ceb29eae74 Add phone number verification and remove the telemetry update call from a place where the authentication middleware isn't yet installed (in the middleware itself). 2024-11-09 12:25:36 -08:00
sabaimran
78630603f4 Delete the fact checker application 2024-11-08 17:27:42 -08:00
Debanjum
4cad96ded6
Add Script to Evaluate Khoj on Google's FRAMES benchmark (#955)
- Why
We need better, automated evals to measure performance shifts of Khoj
across prompt, model and capability changes.

Google's FRAMES benchmark evaluates multi-step retrieval and reasoning
capabilities of AI agents. It's a good starter benchmark to evaluate Khoj.

- Details
This PR adds an eval script to evaluate Khoj responses on the FRAMES
benchmark prompts against the ground truth provided by it.

The script allows configuring the sample size, the batch size and the sampling of
queries from the eval dataset.

Gemini is used as an LLM judge to auto-grade Khoj responses against the ground
truth data from the benchmark (a minimal sketch follows this entry).
2024-11-06 17:52:01 -08:00
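A minimal sketch of the eval flow described in this PR, under stated assumptions: `ask_khoj` and `grade_with_gemini` are hypothetical callables standing in for the real API and judge calls, and the dataset shape is illustrative:

```python
# Hypothetical outline: sample queries, collect Khoj's answers, and let Gemini
# (as LLM judge) grade each answer against the benchmark's ground truth.
import random

def evaluate_frames(dataset, ask_khoj, grade_with_gemini, sample_size=100, batch_size=10):
    sample = random.sample(dataset, min(sample_size, len(dataset)))
    correct = 0
    for start in range(0, len(sample), batch_size):
        for example in sample[start:start + batch_size]:
            response = ask_khoj(example["prompt"])
            if response.strip() and grade_with_gemini(
                example["prompt"], response, example["ground_truth"]
            ):
                correct += 1
    return correct / len(sample)
```

The `response.strip()` guard mirrors the earlier commit that skips the judge call for empty responses, since those are always incorrect.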
Debanjum
8679294bed Remove need to set server chat settings from use openai proxies docs
This was previously required, but now it's only useful for more
advanced settings that are not typical for self-hosting users.

With recent updates, the user's selected chat model is used for both
Khoj's train of thought and response. This makes it easy to
switch your preferred chat model directly from the user settings
page and not have to update this in the admin panel as well.

Reflect these code changes in the docs by removing the unnecessary
step for self-hosted users to create a server chat setting when using
an OpenAI proxy service like Ollama, LiteLLM etc.
2024-11-05 17:10:53 -08:00
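For context, a hedged example of what using an OpenAI proxy looks like in practice; the base URL, API key and model name below are illustrative defaults for a local Ollama server, not Khoj configuration:

```python
# Any OpenAI-compatible endpoint (Ollama, LiteLLM, etc.) can be used via the
# standard OpenAI client by overriding the base URL.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed-locally")
response = client.chat.completions.create(
    model="llama3.1",  # whatever model your proxy serves
    messages=[{"role": "user", "content": "Hello from a self-hosted setup"}],
)
print(response.choices[0].message.content)
```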
Debanjum
05a93fcbed v-align attach, send buttons with chat input text area on web app
Otherwise, those buttons look off-center when images are attached to
the chat input area
2024-11-05 17:10:53 -08:00
Debanjum
b51ee644aa Fix escaping filename when normalizing in org node parser 2024-11-04 20:24:57 -08:00
Debanjum
5724d16a6f Fix passing images to anthropic chat models to extract questions 2024-11-04 20:24:57 -08:00
sabaimran
b6145df3be Handle file retrieval when agent is None 2024-11-04 16:55:22 -08:00
sabaimran
e3ca52b7cb Use .get() to get text accompanying image url, instead of subindexing 2024-11-04 16:09:16 -08:00
sabaimran
1e89baca7b Deprecate the UserSearchModelConfig and remove all references
- The server has moved to a standardized embeddings generation workflow. Remove references to support for differentiated models.
- The migration script for a new model needs to be updated to accommodate full regeneration.
2024-11-04 12:24:41 -08:00
Debanjum
1ccbf72752 Use logger instead of print to track eval 2024-11-04 00:40:26 -08:00
sabaimran
99c1d2831a Release Khoj version 1.28.3 2024-11-02 12:23:11 -07:00
sabaimran
075b4ecf15 Call subscription_to_state with sync_to_async wrapper when getting user subscription state
- This is needed in case the renewal_date is not set and we need to reset it for the user
2024-11-02 12:22:35 -07:00
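A minimal sketch of the pattern this commit describes, using the standard `asgiref` helper; the function bodies are placeholders, not Khoj's implementation:

```python
# Wrap a synchronous helper with sync_to_async so it can be awaited safely
# from async request-handling code (e.g. when it touches the ORM).
from asgiref.sync import sync_to_async

def subscription_to_state(subscription):
    # Placeholder: synchronous logic that may reset renewal_date and hit the database
    return "subscribed"

async def get_user_subscription_state(subscription):
    return await sync_to_async(subscription_to_state)(subscription)
```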
sabaimran
ec44cbe1e7 Release Khoj version 1.28.2 2024-11-02 07:53:51 -07:00
Debanjum
791eb205f6 Run prompt batches in parallel for faster eval runs 2024-11-02 04:58:03 -07:00
Debanjum
96904e0769 Add script to evaluate khoj on Google's FRAMES benchmark
Google's FRAMES benchmark evaluates multi-step retrieval and reasoning
capabilities of an agent.

The script uses Gemini as an LLM Judge to evaluate Khoj responses to
the FRAMES benchmark prompts against the ground truth provided by it.
2024-11-02 04:57:42 -07:00
Debanjum
31b5fde163 Only enable prompt tracer if git python is installed 2024-11-02 02:07:02 -07:00
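A minimal sketch of the optional-dependency guard described here; the flag name is illustrative:

```python
# Enable the prompt tracer only when the gitpython package is importable.
try:
    import git  # provided by the gitpython package

    prompt_tracing_available = True
except ImportError:
    prompt_tracing_available = False
```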
sabaimran
5b18dc96e0 Release Khoj version 1.28.1 2024-11-01 22:51:51 -07:00
sabaimran
8d1b1bc78e Move the git python dependency into top level dependencies 2024-11-01 22:51:00 -07:00
Debanjum
e85dd59295 Release Khoj version 1.28.0 2024-11-01 19:06:59 -07:00
Debanjum
1f79a10541 Fix link to code execution feature in docs 2024-11-01 18:22:21 -07:00
Debanjum
cff8e02b60
Research Mode [Part 2]: Improve Prompts, Edit Chat Messages. Set LLM Seed for Reproducibility (#954)
- Improve chat actors and their prompts for research mode.
- Add documentation to enable the code tool when self-hosting Khoj
- Edit Chat Messages
  - Store Turn Id in each chat message. 
  - Expose API to delete chat message.
  - Expose a delete chat message button on the web app to delete a chat message from its conversation turn
- Set LLM Generation Seed for Reproducible Debugging and Testing
  - Setting the seed for LLM generation is supported by Llama.cpp and OpenAI models. 
    This can (somewhat) constrain LLM output (see the sketch after this entry)
  - Getting fixed responses for fixed inputs helps test and debug longer reasoning chains like those used in advanced reasoning
2024-11-01 18:16:42 -07:00
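A minimal sketch of the seed setting mentioned in this PR, using the OpenAI client; the model and prompt are illustrative, and llama.cpp servers accept a similar `seed` parameter:

```python
# Fix the generation seed so repeated runs with identical inputs produce
# (mostly) identical outputs, which helps when debugging long reasoning chains.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize my notes on FRAMES"}],
    seed=42,        # same seed + same inputs -> (somewhat) repeatable output
    temperature=0,  # lowering temperature further constrains variation
)
```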
Debanjum
14e453039d Add prompt tracing, agent personality to infer webpage urls chat actor 2024-11-01 18:12:50 -07:00
Debanjum
ab321dc518 Expect query before tool in response to give think space in research prompt 2024-11-01 17:51:41 -07:00
Debanjum
1a83bbcc94 Clean API chat router. Move FeedbackData response type to router helper 2024-11-01 17:51:41 -07:00
sabaimran
e6eb87bbb5 Merge branch 'improve-debug-reasoning-and-other-misc-fixes' of github.com:khoj-ai/khoj into improve-debug-reasoning-and-other-misc-fixes 2024-11-01 16:48:39 -07:00
sabaimran
a213b593e8 Limit the number of urls the webscraper can extract for scraping 2024-11-01 16:48:36 -07:00
sabaimran
327fcb8f62 Create defiltered query after conversation command is extracted 2024-11-01 16:48:03 -07:00
sabaimran
b79a9ec36d Clarify description of the code evaluation environment: not for document creation 2024-11-01 16:47:27 -07:00
Debanjum
9c7b36dc69 Use standard per minute rate limits across user types 2024-11-01 16:16:06 -07:00
Debanjum
ac21b10dd5 Simplify logic to get default search model. Remove unused import 2024-11-01 15:14:00 -07:00
sabaimran
2b35790165 Merge branch 'master' of github.com:khoj-ai/khoj into improve-debug-reasoning-and-other-misc-fixes 2024-11-01 14:51:26 -07:00
Debanjum
22f3ed3f5d
Research Mode: Give Khoj the ability to perform more advanced reasoning (#952)
## Overview
Khoj can now go into research mode and use a python code interpreter. These are experimental features that are being released early for feedback and testing.

- Research mode allows Khoj to dynamically select the tools it needs to best answer the question. It is also given more iterations to reach a satisfactory answer. Its more dynamic train of thought is shown to improve visibility into its thinking (a generic version of this loop is sketched after this entry).
- Giving Khoj the ability to use a Python code interpreter is an adjacent capability. It can help Khoj do some data analysis and generate charts for you. A sandboxed Python environment to run code is provided using [cohere-terrarium](https://github.com/cohere-ai/cohere-terrarium?tab=readme-ov-file), [pyodide](https://pyodide.org/).

## Analysis
Research mode (significantly?) improves Khoj's information retrieval for more complex queries requiring multi-step lookups, but takes longer to run. It can research for longer, requiring less back-and-forth with the user to find an answer.

Research mode gives the most gains when used with more advanced chat models (like o1, 4o, the new Claude Sonnet and gemini-pro-002). Smaller models improve their response quality but tend to get into repetitive loops more often.

## Next Steps
- Get community feedback on research mode. What works, what fails, what is confusing, what'd be cool to have.
- Tune Khoj's capabilities for longer autonomous runs and to generalize across a larger range of model sizes

## Miscellaneous Improvements
- Khoj's train of thought is saved and shown for all messages, not just the latest one
- Render charts generated by Khoj and the code run by the code tool on the web app
- Align chat input color to currently selected agent color
2024-11-01 14:46:29 -07:00
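A generic sketch of the research loop described above, not Khoj's actual implementation; `pick_tool`, `run_tool` and `compose_answer` are hypothetical callables:

```python
# Iterative research: the model picks a tool each round, results are fed back,
# and the loop stops when the model signals it can answer or the budget runs out.
def research(question, pick_tool, run_tool, compose_answer, max_iterations=5):
    gathered = []
    for _ in range(max_iterations):
        tool, tool_query = pick_tool(question, gathered)  # LLM decides the next step
        if tool is None:  # enough information collected
            break
        gathered.append(run_tool(tool, tool_query))
    return compose_answer(question, gathered)
```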
sabaimran
baa939f4ce When running code, strip any code delimiters. Disable the application/json type specification in the Gemini request. 2024-11-01 13:47:39 -07:00
sabaimran
8fd2fe162f Determine if research mode is enabled by checking the conversation commands and 'linting' them in the selection phase 2024-11-01 13:12:34 -07:00
sabaimran
cead1598b9 Don't reset research mode after completing research execution 2024-11-01 13:00:11 -07:00
Debanjum
c1c779a7ef Do not yaml format raw code results in context for LLM. It's confusing 2024-11-01 12:45:26 -07:00
sabaimran
b3dad1f393 Standardize rate limits to 1/6 ratio 2024-11-01 12:21:09 -07:00
sabaimran
23a49b6b95 Add documentation for python code execution capability 2024-11-01 12:14:33 -07:00
Debanjum
cd75151431 Do not allow auto-selecting research mode as a tool for now.
You are required to manually turn it on. It takes longer and
should be a high-intent activity initiated by the user
2024-11-01 12:07:52 -07:00
Debanjum
0b0cfb35e6 Simplify the in-research-mode check in api_chat.
- Dedent code for readability
- Use a better name for the in-research-mode check
- Continue to remove the inferred summarize command when multiple files are in
  the file filter, even when not in research mode
- Continue to show the select information source train of thought.
  It was removed by mistake earlier
2024-11-01 12:07:08 -07:00
sabaimran
ffa7f95559 Add template for a code sandbox to the docker-compose configuration 2024-11-01 11:50:58 -07:00
Debanjum
73750ef286 Merge branch 'master' into features/advanced-reasoning 2024-11-01 11:42:01 -07:00
sabaimran
1fc280db35 Handle case where infer_webpage_url returns no valid urls 2024-11-01 11:41:32 -07:00
Debanjum
1c920273dd
Add Prompt Tracer to Visualize, Analyze and Debug Khoj's Train of Thought (#951)
## Overview
Use git to capture prompt traces of Khoj's train of thought. View, analyze and debug them using your favorite git client (e.g. vscode, magit).

- Each commit captures an interaction with an LLM
  The commit writes the query, response and system message each to a separate file in the repo.
  The commit message captures the chat model, Khoj version and other metadata
- Each conversation turn can have multiple interactions with an LLM (e.g. Khoj's train of thought)
- Each new conversation turn forks from and merges back into its conversation branch
- Each new conversation branches from the user branch
- Each new user branches from root commit on the main branch

## Usage
1. Set `KHOJ_DEBUG=true` or start khoj in very verbose mode with `khoj -vv` to turn on prompt tracing
2. Chat with Khoj as usual 
3. Open the promptrace git repo to view the generated prompt traces using your favorite git porcelain. 
   The Khoj prompt trace git repo is created at `/tmp/khoj_promptrace` by default. You can configure the prompt trace directory by setting the `PROMPTRACE_DIR` environment variable.

## Implementation
- Add utility functions to capture prompt traces using git (via `gitpython`)
- Make each model provider in Khoj commit their LLM interactions with promptrace
- Weave chat metadata from chat API through all chat actors and commit it to the prompt trace
2024-11-01 11:33:54 -07:00
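A minimal sketch of the git-backed trace capture described in this PR, via gitpython; the file names and commit message format here are assumptions, not Khoj's exact scheme:

```python
# Write the query, response and system message to files in the trace repo and
# record the interaction as a commit, one commit per LLM call.
import os
import git

def commit_prompt_trace(query, response, system_message, repo_dir="/tmp/khoj_promptrace"):
    repo = git.Repo.init(repo_dir)
    files = {"query.md": query, "response.md": response, "system_message.md": system_message}
    for name, content in files.items():
        with open(os.path.join(repo_dir, name), "w") as f:
            f.write(content)
    repo.index.add(list(files))
    repo.index.commit("LLM interaction: chat model, Khoj version and other metadata")
```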
sabaimran
33d36ee58c Add experimental notice to research mode tooltip 2024-11-01 11:00:27 -07:00
sabaimran
0145b2a366 Set usage limits on the research mode 2024-11-01 10:29:33 -07:00