Debanjum Singh Solanky
b553bba1d8
Release Khoj version 1.21.6
2024-09-09 14:55:36 -07:00
sabaimran
223d310ea2
CTA in welcome email
2024-09-09 14:33:27 -07:00
Debanjum Singh Solanky
5dea9ef323
Add selected OS tab to url in documentation to ease link sharing
2024-09-09 10:40:53 -07:00
Debanjum Singh Solanky
87c52dfd02
Update Documentation project dependencies
...
Stop wrapping Tabs in explicit mdx-code-blocks as the build throws an
error after the upgrade
2024-09-09 10:40:53 -07:00
Debanjum Singh Solanky
7941b12d50
Toggle speak, send buttons based on chat input text entered on Desktop
2024-09-09 10:40:53 -07:00
Debanjum Singh Solanky
b5f6550de2
Move link to source code from Nav pane to About page on Desktop app
2024-09-09 10:40:53 -07:00
Debanjum Singh Solanky
77b44f6db0
Update Desktop app dependencies
2024-09-09 10:40:53 -07:00
Debanjum Singh Solanky
303d8ed64e
Update Obsidian plugin package dependencies
2024-09-09 10:40:53 -07:00
Debanjum Singh Solanky
72fbbc092c
Upgrade Django, FastAPI, Uvicorn packages
...
- Update Django to 5.0.8
- Update Uvicorn to 0.30.6
- Update FastAPI minimum version to 0.110.0
2024-09-09 10:40:53 -07:00
sabaimran
8e6b9afeb7
Add an automation for research paper summaries
2024-09-08 11:50:49 -07:00
Debanjum
05c169bb37
Set File Types to Sync from Obsidian via Khoj Plugin Settings Page (#904)
...
Limit file types to sync with Khoj from Obsidian to:
- Avoid hitting per user index-able data limits, especially for folks on the Khoj cloud free tier, e.g. by excluding images in the Obsidian vault from being synced
- Improve context used by Khoj to generate responses
2024-09-05 22:40:30 -07:00
Husain007
4e8ead66a8
Fix URL to web, desktop settings pages on Desktop application (#903)
...
Update web and desktop settings URLs on desktop application from previous 'config' path to new 'settings' path
2024-09-05 14:47:43 -07:00
Debanjum Singh Solanky
bc26cf8b2f
Only show updated index notice on success in Obsidian plugin
...
Previously it'd show the indexing success notice on both error and success
2024-09-04 17:52:32 -07:00
Debanjum Singh Solanky
cb425a073d
Use rich text error to better guide when exceed data sync limits in Obsidian
...
When a user exceeds data sync limits, show an error notice with
- Link to web app settings page to upgrade subscription
- Link to Khoj plugin settings in Obsidian to configure file types to
sync from vault to Khoj
2024-09-04 17:52:32 -07:00
Debanjum Singh Solanky
19efc83455
Set File Types to Sync from Obsidian via Khoj Plugin Settings Page
...
Useful to limit file types to sync with Khoj. Avoids hitting indexed
data limits, especially for users on the Khoj cloud free tier
Closes #893
2024-09-04 16:09:56 -07:00
sabaimran
7216a06f5f
Release Khoj version 1.21.5
2024-09-03 21:58:00 -07:00
sabaimran
895f1c8e9e
Gracefully close thread when there's an exception in the anthropic llm thread. Include full stack traces.
2024-09-03 13:16:51 -07:00
sabaimran
17901406aa
Gracefully close thread when there's an exception in the openai llm thread. Closes #894.
2024-09-03 13:16:51 -07:00
sabaimran
6ed68b574b
Merge pull request #898 from lvnilesh/patch-1
...
Handles deprecation of version reference
2024-09-03 12:53:44 -07:00
sabaimran
912cc0074a
Use nonlocal for conversation_id when running the event_generator
2024-09-03 11:55:06 -07:00
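The `nonlocal` fix above can be sketched as follows. This is a minimal, hypothetical reconstruction (function and variable names other than `conversation_id` and `event_generator` are stand-ins): without `nonlocal`, assigning to `conversation_id` inside the nested generator would create a new local variable instead of updating the enclosing scope's binding.

```python
# Hypothetical sketch of the nonlocal pattern used by the event generator.
def chat_handler(conversation_id=None):
    events = []

    def event_generator():
        # Without this declaration, the assignment below would shadow the
        # outer `conversation_id` with a generator-local variable
        nonlocal conversation_id
        if conversation_id is None:
            conversation_id = "new-conversation"  # e.g. created on first message
        yield f"conversation: {conversation_id}"
        yield "response chunk"

    for event in event_generator():
        events.append(event)
    # The enclosing scope now sees the id assigned inside the generator
    return conversation_id, events
```
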
sabaimran
591f5a522c
Release Khoj version 1.21.4
2024-09-02 17:45:39 -07:00
sabaimran
9306a0bb2c
Prefetch the settings and openai_config of a texttoimagemodelconfig
2024-09-02 17:35:21 -07:00
sabaimran
132eac0f51
Merge pull request #897 from khoj-ai/features/increase-rate-limits
...
Increase rate limits for data indexing
2024-08-25 23:39:30 -07:00
LV Nilesh
77cc1cd42f
Update docker-compose.yml
...
Handles deprecation of version reference
2024-08-25 17:05:47 -07:00
sabaimran
977001b801
Reduce the test data packet size
2024-08-25 16:14:32 -07:00
sabaimran
6eb06e8626
Downgrade rate limit to 200MB
2024-08-25 15:26:27 -07:00
sabaimran
439a2680fd
Increase rate limits for data indexing
2024-08-25 15:09:30 -07:00
sabaimran
af4e9988c4
Merge pull request #896 from khoj-ai/features/add-support-for-custom-confidence
...
Add support for custom search model-specific thresholds
2024-08-24 20:32:41 -07:00
sabaimran
4b77325f63
Default to infinite distance when using the search API
2024-08-24 19:57:49 -07:00
sabaimran
e919d28f1c
Add support for custom search model-specific thresholds
2024-08-24 19:28:26 -07:00
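The two threshold commits above can be sketched together. This is a hypothetical illustration (the `filter_hits` helper and the hit dict shape are assumptions, not the actual Khoj code): each search model may carry its own distance threshold, and the search API defaults to infinite distance, i.e. no filtering.

```python
import math

# Hypothetical sketch: filter search results by a per-model distance
# threshold, defaulting to infinite distance (no filtering) when the
# model does not define one.
def filter_hits(hits, max_distance=None):
    threshold = max_distance if max_distance is not None else math.inf
    return [hit for hit in hits if hit["distance"] <= threshold]
```
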
sabaimran
fa4d808a5f
Encode uri components when sending automations data to the server
2024-08-24 18:45:50 -07:00
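The URI-encoding fix above amounts to the following pattern, shown here in Python with `urllib.parse.quote` as a stand-in for JavaScript's `encodeURIComponent` (the function and parameter names are hypothetical, not the actual automations API):

```python
from urllib.parse import quote

# Hedged sketch: percent-encode automation fields before placing them in a
# query string, so spaces and special characters survive the round trip.
def build_automation_query(subject: str, schedule: str) -> str:
    # safe="" encodes every reserved character, like encodeURIComponent
    return f"subject={quote(subject, safe='')}&schedule={quote(schedule, safe='')}"
```
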
sabaimran
387b7c7887
Release Khoj version 1.21.3
2024-08-23 11:15:15 -07:00
sabaimran
7b8b3a66ae
Revert django version to previous patch
2024-08-23 11:12:41 -07:00
Debanjum Singh Solanky
5927ca8032
Properly close chat stream iterator even if response generation fails
...
Previously the chat stream iterator wasn't closed when response streaming
for the offline chat model threw an exception.
This would require restarting the application. Now the application doesn't
hang even if the current response generation fails with an exception
2024-08-23 02:06:26 -07:00
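The shape of the fix above can be sketched as a try/except/finally around stream consumption. This is a minimal hypothetical reconstruction (the generator and helper names are stand-ins): the key point is that `close()` runs in `finally`, so the iterator is released even when response generation raises.

```python
# Hypothetical stand-in for a response stream that fails mid-generation
def flaky_response_stream():
    yield "partial "
    raise RuntimeError("model crashed")

def collect_response(stream):
    chunks = []
    try:
        for chunk in stream:
            chunks.append(chunk)
    except Exception:
        # Surface the failure to the caller instead of hanging
        chunks.append("[error]")
    finally:
        # Always close the iterator so its resources are released,
        # whether generation succeeded or raised
        stream.close()
    return "".join(chunks)
```
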
Debanjum Singh Solanky
bdb81260ac
Update docs to mention using Llama 3.1 and 20K max prompt size for it
...
Update stale credits to better reflect bigger open source dependencies
2024-08-22 20:27:58 -07:00
Debanjum Singh Solanky
238bc11a50
Fix, improve openai chat actor, director tests & online search prompt
2024-08-22 19:09:33 -07:00
Debanjum Singh Solanky
9986c183ea
Default to gpt-4o-mini instead of gpt-3.5-turbo in tests, func args
...
GPT-4o-mini is cheaper, smarter and can hold more context than
GPT-3.5-turbo. We also default to gpt-4o-mini in production, so it makes
sense to upgrade defaults and tests to work with it
2024-08-22 19:04:49 -07:00
Debanjum Singh Solanky
8a4c20d59a
Enforce json response by offline models when requested by chat actors
...
- Background
Llama.cpp allows enforcing response as json object similar to OpenAI
API. Pass expected response format to offline chat models as well.
- Overview
Enforce json output to improve intermediate step performance by
offline chat models. This is especially helpful when working with
smaller models like Phi-3.5-mini and Gemma-2 2B, that do not
consistently respond with structured output, even when requested
- Details
Enforce json responses from the extract questions and infer output
offline chat actors
- Convert prompts to output json objects when offline chat models
extract document search questions or infer output mode
- Make llama.cpp enforce response as json object
- Result
- Improve all intermediate steps by offline chat actors via json
response enforcement
- Avoid the manual, ad-hoc and flaky output schema enforcement and
simplify the code
2024-08-22 18:07:44 -07:00
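The json enforcement described above can be sketched as passing an OpenAI-style `response_format` through to llama.cpp, which llama-cpp-python supports in `create_chat_completion`. The function name and stub model below are hypothetical illustrations of the call shape, not the actual Khoj code:

```python
# Hedged sketch: forward the expected response format to the offline chat
# model so llama.cpp constrains decoding to valid json.
def send_message_to_model(model, messages, response_type="text"):
    kwargs = {"messages": messages}
    if response_type == "json_object":
        # llama-cpp-python accepts an OpenAI-style response_format dict
        kwargs["response_format"] = {"type": "json_object"}
    return model.create_chat_completion(**kwargs)

# Stub standing in for llama_cpp.Llama, just to show the call shape
class StubModel:
    def create_chat_completion(self, **kwargs):
        return kwargs
```
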
Debanjum Singh Solanky
ab7fb5117c
Release Khoj version 1.21.2
2024-08-20 12:38:54 -07:00
Debanjum Singh Solanky
de24ffcf0d
Upgrade Axios, a desktop app dependency, to version 1.7.4
2024-08-20 12:32:36 -07:00
Debanjum Singh Solanky
a60baa55fb
Upgrade Django, a Khoj server dependency, to version 5.0.8
2024-08-20 12:32:00 -07:00
sabaimran
1ac8de6c3a
Release Khoj version 1.21.1
2024-08-20 11:55:34 -07:00
Debanjum Singh Solanky
5d59acd1f4
Stop pushing deprecated khoj-assistant package to pypi
...
- Also skip uploading a package version if it already exists on pypi.
This happens when a new khoj tagged release is created
2024-08-20 11:43:02 -07:00
sabaimran
f6ce2fd432
Handle end of chunk logic in openai stream processor
2024-08-20 10:50:09 -07:00
sabaimran
029775420c
Release Khoj version 1.21.0
2024-08-20 10:01:56 -07:00
sabaimran
4808ce778a
Merge pull request #892 from khoj-ai/upgrade-offline-chat-models-support
...
Upgrade offline chat model support. Default to Llama 3.1
2024-08-20 11:51:20 -05:00
Debanjum Singh Solanky
58c8068079
Upgrade default offline chat model to llama 3.1
2024-08-20 09:28:56 -07:00
sabaimran
2d9dd81e76
Re-add authenticated decorator to api_chat.py /chat endpoint
2024-08-19 05:37:18 -05:00
sabaimran
2c5350329a
Remove the hashes from titles in found relevant notes
2024-08-18 22:31:15 -05:00
Debanjum Singh Solanky
acdc3f9470
Unwrap any json in md code block, when parsing chat actor responses
...
This is a more robust way to extract json output requested from
gemma-2 (2B, 9B) models which tend to return json in md codeblocks.
Other models should remain unaffected by this change.
Also removed the request to not wrap json in codeblocks from prompts,
as the code now unwraps it automatically when present
2024-08-16 14:16:29 -05:00
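The unwrapping described above can be sketched as stripping an optional markdown code fence before parsing. This is a hypothetical reconstruction (the helper name and exact regex are assumptions), showing why other models' plain json responses remain unaffected:

```python
import re

# Hedged sketch: unwrap json that a model returned inside a markdown code
# block (e.g. ```json ... ```), leaving plain responses untouched.
def clean_json(response: str) -> str:
    response = response.strip()
    # Match an optional ```json opening fence and a closing ``` fence
    match = re.match(r"^```(?:json)?\s*(.*?)\s*```$", response, re.DOTALL)
    if match:
        return match.group(1)
    return response
```
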