Commit graph

3791 commits

Author SHA1 Message Date
sabaimran
ecc81e06a7 Add separate methods for docx and pdf files to convert them to raw text before further processing 2024-11-07 16:01:08 -08:00
sabaimran
394035136d Add an API that takes a document and converts it to plain text 2024-11-07 16:00:10 -08:00
sabaimran
3b1e8462cd Include attached files in calls to extract questions 2024-11-07 15:59:15 -08:00
sabaimran
de73cbc610 Add support for relaying attached files through backend calls to models 2024-11-07 15:58:52 -08:00
sabaimran
a0480d5f6c use fill weight for the toggle right (enabled state) for research mode 2024-11-04 22:01:09 -08:00
sabaimran
dc26da0a12 Add uploaded files in the conversation file filter for a new convo 2024-11-04 22:00:47 -08:00
sabaimran
cf0bcec0e7 Revert SKIP_TESTS flag in offline chat director tests 2024-11-04 19:06:54 -08:00
sabaimran
1f372bf2b1 Update file summarization unit tests now that multiple files are allowed 2024-11-04 17:45:54 -08:00
sabaimran
7543360210 Merge branch 'master' of github.com:khoj-ai/khoj into features/include-full-file-in-convo-with-filter 2024-11-04 16:55:48 -08:00
sabaimran
b6145df3be Handle file retrieval when agent is None 2024-11-04 16:55:22 -08:00
sabaimran
3dc9139cee Add additional handling for when file_object comes back empty 2024-11-04 16:53:07 -08:00
sabaimran
a27b8d3e54 Remove summarize condition for only 1 file filter 2024-11-04 16:51:37 -08:00
sabaimran
362bdebd02 Add methods for reading full files by name and including context
Now that models have much larger context windows, we can reasonably include the full text of certain files in the messages. Do this when an explicit file filter is set in a conversation, and do so in a separate user message to avoid confusing the file contents with the user's query (a minimal sketch follows this entry).

Pipe the relevant attached_files context through all methods calling into models.

We'll want to limit the file sizes for which this is used and provide more helpful UI indicators that this sort of behavior is taking place.
2024-11-04 16:37:13 -08:00
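
A minimal sketch of the pattern described above, using illustrative names (`build_messages`, `attached_files`) rather than Khoj's actual helpers:

```python
# Sketch only, not Khoj's implementation: when a conversation has an explicit
# file filter, append the full file text as its own user message, separate
# from the query itself.
def build_messages(query: str, chat_history: list[dict], attached_files: str | None) -> list[dict]:
    messages = list(chat_history)
    if attached_files:
        # Keep file contents in a separate user message so the model does not
        # conflate them with the user's actual question.
        messages.append({"role": "user", "content": f"Attached file context:\n{attached_files}"})
    messages.append({"role": "user", "content": query})
    return messages
```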
sabaimran
e3ca52b7cb Use .get() to get text accompanying image url, instead of subindexing 2024-11-04 16:09:16 -08:00
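
A tiny illustration of the fix above; the dict shape here is hypothetical:

```python
# Subindexing a missing "text" key raises KeyError, while .get() falls back to a default.
image_chunk = {"type": "image_url", "image_url": {"url": "https://example.com/photo.png"}}

caption = image_chunk.get("text", "")   # safe: returns "" when no text accompanies the image
# caption = image_chunk["text"]         # would raise KeyError for the same input
```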
sabaimran
1e89baca7b Deprecate the UserSearchModelConfig and remove all references
- The server has moved to a standardized embeddings generation workflow. Remove references to support for differentiated models.
- The migration script for a new model needs to be updated to accommodate full regeneration.
2024-11-04 12:24:41 -08:00
sabaimran
99c1d2831a Release Khoj version 1.28.3 2024-11-02 12:23:11 -07:00
sabaimran
075b4ecf15 Call subscription_to_state with sync_to_async wrapper when getting user subscription state
- This is needed in case the renewal_date is not set and we need to reset it for the user
2024-11-02 12:22:35 -07:00
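
A minimal sketch of the wrapper pattern above, assuming `asgiref` and Django-style models; `subscription_to_state` and the attribute names stand in for Khoj's actual code:

```python
from asgiref.sync import sync_to_async


def subscription_to_state(subscription) -> str:
    # May hit the database, e.g. to reset a missing renewal_date,
    # so it cannot be called directly from an async request handler.
    ...


async def get_user_subscription_state(user) -> str:
    # Wrap the sync helper so any ORM access runs safely in a worker thread.
    return await sync_to_async(subscription_to_state)(user.subscription)
```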
sabaimran
ec44cbe1e7 Release Khoj version 1.28.2 2024-11-02 07:53:51 -07:00
Debanjum
31b5fde163 Only enable prompt tracer if gitpython is installed 2024-11-02 02:07:02 -07:00
sabaimran
5b18dc96e0 Release Khoj version 1.28.1 2024-11-01 22:51:51 -07:00
sabaimran
8d1b1bc78e Move the gitpython dependency into top-level dependencies 2024-11-01 22:51:00 -07:00
Debanjum
e85dd59295 Release Khoj version 1.28.0 2024-11-01 19:06:59 -07:00
Debanjum
1f79a10541 Fix link to code execution feature in docs 2024-11-01 18:22:21 -07:00
Debanjum
cff8e02b60
Research Mode [Part 2]: Improve Prompts, Edit Chat Messages. Set LLM Seed for Reproducibility (#954)
- Improve chat actors and their prompts for research mode.
- Add documentation to enable the code tool when self-hosting Khoj
- Edit Chat Messages
  - Store Turn Id in each chat message. 
  - Expose API to delete chat message.
  - Expose a delete chat message button to delete chat message turns from the web app
- Set LLM Generation Seed for Reproducible Debugging and Testing
  - Setting a seed for LLM generation is supported by Llama.cpp and OpenAI models.
    This can (somewhat) constrain variability in LLM output across runs
  - Getting fixed responses for fixed inputs helps test and debug longer reasoning chains like those used in advanced reasoning (see the seed-setting sketch below)
2024-11-01 18:16:42 -07:00
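
A small sketch of the seed setting described above. The `seed` parameter is accepted by the OpenAI chat completions API (and llama-cpp-python exposes an equivalent option); the model name and prompt are illustrative, not Khoj's internals:

```python
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize the research mode changes."}],
    seed=42,        # fixed seed to (somewhat) restrain output variation across runs
    temperature=0,  # low temperature further reduces nondeterminism
)
print(response.choices[0].message.content)
```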
Debanjum
14e453039d Add prompt tracing, agent personality to infer webpage urls chat actor 2024-11-01 18:12:50 -07:00
Debanjum
ab321dc518 Expect query before tool in response to give think space in research prompt 2024-11-01 17:51:41 -07:00
Debanjum
1a83bbcc94 Clean API chat router. Move FeedbackData response type to router helper 2024-11-01 17:51:41 -07:00
sabaimran
e6eb87bbb5 Merge branch 'improve-debug-reasoning-and-other-misc-fixes' of github.com:khoj-ai/khoj into improve-debug-reasoning-and-other-misc-fixes 2024-11-01 16:48:39 -07:00
sabaimran
a213b593e8 Limit the number of urls the webscraper can extract for scraping 2024-11-01 16:48:36 -07:00
sabaimran
327fcb8f62 create defiltered query after conversation command is extracted 2024-11-01 16:48:03 -07:00
sabaimran
b79a9ec36d Clarify description of the code evaluation environment: not for document creation 2024-11-01 16:47:27 -07:00
Debanjum
9c7b36dc69 Use standard per minute rate limits across user types 2024-11-01 16:16:06 -07:00
Debanjum
ac21b10dd5 Simplify logic to get default search model. Remove unused import 2024-11-01 15:14:00 -07:00
sabaimran
2b35790165 Merge branch 'master' of github.com:khoj-ai/khoj into improve-debug-reasoning-and-other-misc-fixes 2024-11-01 14:51:26 -07:00
Debanjum
22f3ed3f5d
Research Mode: Give Khoj the ability to perform more advanced reasoning (#952)
## Overview
Khoj can now go into research mode and use a python code interpreter. These are experimental features that are being released early for feedback and testing.

- Research mode allows Khoj to dynamically select the tools it needs to best answer the question. It is also allowed more iterations to get to a satisfactory answer. Its more dynamic train of thought is displayed, improving visibility into its thinking.
- Giving Khoj the ability to use a Python code interpreter is an adjacent capability. It can help Khoj do some data analysis and generate charts for you. A sandboxed Python environment to run code is provided using [cohere-terrarium](https://github.com/cohere-ai/cohere-terrarium?tab=readme-ov-file) and [pyodide](https://pyodide.org/) (see the sketch after this entry).

## Analysis
Research mode (significantly?) improves Khoj's information retrieval for more complex queries requiring multi-step lookups, but takes longer to run. It can research for longer, requiring less back-and-forth with the user to find an answer.

Research mode gives the most gains when used with more advanced chat models (like o1, 4o, the new Claude Sonnet, and gemini-pro-002). Smaller models improve their response quality but tend to get into repetitive loops more often.

## Next Steps
- Get community feedback on research mode. What works, what fails, what is confusing, what'd be cool to have.
- Tune Khoj's capabilities for longer autonomous runs and to generalize across a larger range of model sizes

## Miscellaneous Improvements
- Khoj's train of thought is saved and shown for all messages, not just the latest one
- Render charts generated by Khoj, and the code run by the code tool, on the web app
- Align chat input color to currently selected agent color
2024-11-01 14:46:29 -07:00
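
A hedged sketch of handing code to a self-hosted sandbox such as cohere-terrarium; the endpoint path, port, and payload shape are assumptions, so check the sandbox README and Khoj's docker-compose template for the actual contract:

```python
import requests

SANDBOX_URL = "http://localhost:8080"  # assumed port for a locally running sandbox


def run_code_in_sandbox(code: str) -> dict:
    # Assumed contract: POST JSON with a "code" field, receive JSON results back.
    response = requests.post(SANDBOX_URL, json={"code": code}, timeout=30)
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    print(run_code_in_sandbox("print(21 * 2)"))
```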
sabaimran
baa939f4ce When running code, strip any code delimiters. Disable the application/json response type specification in Gemini requests. 2024-11-01 13:47:39 -07:00
sabaimran
8fd2fe162f Determine if research mode is enabled by checking the conversation commands and 'linting' them in the selection phase 2024-11-01 13:12:34 -07:00
sabaimran
cead1598b9 Don't reset research mode after completing research execution 2024-11-01 13:00:11 -07:00
Debanjum
c1c779a7ef Do not yaml format raw code results in context for LLM. It's confusing 2024-11-01 12:45:26 -07:00
sabaimran
b3dad1f393 Standardize rate limits to 1/6 ratio 2024-11-01 12:21:09 -07:00
sabaimran
23a49b6b95 Add documentation for python code execution capability 2024-11-01 12:14:33 -07:00
Debanjum
cd75151431 Do not allow auto-selecting research mode as a tool for now.
You are required to manually turn it on. This takes longer and
should be a high-intent activity initiated by the user
2024-11-01 12:07:52 -07:00
Debanjum
0b0cfb35e6 Simplify the 'in research mode' check in api_chat.
- Dedent code for readability
- Use a better name for the 'in research mode' check
- Continue to remove the inferred summarize command when multiple files are in
  the file filter, even when not in research mode
- Continue to show the 'select information source' train of thought.
  It was removed by mistake earlier
2024-11-01 12:07:08 -07:00
sabaimran
ffa7f95559 Add template for a code sandbox to the docker-compose configuration 2024-11-01 11:50:58 -07:00
Debanjum
73750ef286 Merge branch 'master' into features/advanced-reasoning 2024-11-01 11:42:01 -07:00
sabaimran
1fc280db35 Handle case where infer_webpage_url returns no valid urls 2024-11-01 11:41:32 -07:00
Debanjum
1c920273dd
Add Prompt Tracer to Visualize, Analyze and Debug Khoj's Train of Thought (#951)
## Overview
Use git to capture prompt traces of Khoj's train of thought. View, analyze and debug them using your favorite git client (e.g. vscode, magit).

- Each commit captures an interaction with an LLM
  The commit writes the query, response and system message each to a separate file in the repo.
  The commit message captures the chat model, Khoj version and other metadata
- Each conversation turn can have multiple interactions with an LLM (e.g. Khoj's train of thought)
- Each new conversation turn forks from and merges back into its conversation branch
- Each new conversation branches from the user branch
- Each new user branches from root commit on the main branch

## Usage
1. Set `KHOJ_DEBUG=true` or start khoj in very verbose mode with `khoj -vv` to turn on prompt tracing
2. Chat with Khoj as usual 
3. Open the promptrace git repo to view the generated prompt traces using your favorite git porcelain. 
   The Khoj prompt trace git repo is created at `/tmp/khoj_promptrace` by default. You can configure the prompt trace directory by setting the `PROMPTRACE_DIR` environment variable.

## Implementation
- Add utility functions to capture prompt traces using git (via `gitpython`)
- Make each model provider in Khoj commit their LLM interactions with promptrace
- Weave chat metadata from chat API through all chat actors and commit it to the prompt trace
2024-11-01 11:33:54 -07:00
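
A minimal sketch of committing one prompt trace with `gitpython`, loosely following the description above; the file names, repo layout, and metadata format are assumptions:

```python
import os

from git import Repo


def commit_prompt_trace(query: str, response: str, system_message: str, metadata: str) -> None:
    # The repo lives at /tmp/khoj_promptrace by default, overridable via PROMPTRACE_DIR.
    repo_dir = os.getenv("PROMPTRACE_DIR", "/tmp/khoj_promptrace")
    os.makedirs(repo_dir, exist_ok=True)
    repo = Repo.init(repo_dir)

    # Write the query, response and system message each to a separate file in the repo.
    for name, content in [("query.md", query), ("response.md", response), ("system.md", system_message)]:
        with open(os.path.join(repo_dir, name), "w") as f:
            f.write(content)

    repo.index.add(["query.md", "response.md", "system.md"])
    # The commit message carries the chat model, Khoj version and other metadata.
    repo.index.commit(metadata)
```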
sabaimran
33d36ee58c Add experimental notice to research mode tooltip 2024-11-01 11:00:27 -07:00
sabaimran
0145b2a366 Set usage limits on the research mode 2024-11-01 10:29:33 -07:00
sabaimran
3ea94ac972 Only include inferred-queries in chat history when present 2024-10-31 22:01:41 -07:00