* Fix license in pyproject.toml. Remove unused utils.state import
* Use single debug mode check function. Disable telemetry in debug mode
- Use a single function to check if khoj is running in debug mode.
Previously there were 3 different variants of the check
- Do not log telemetry if KHOJ_DEBUG is set to true. Previously telemetry
wasn't logged even when KHOJ_DEBUG was set to false
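A minimal sketch of what a single debug mode check with telemetry gating could look like; the function and variable names are illustrative, not Khoj's actual implementation:

```python
import os

def in_debug_mode() -> bool:
    """Single source of truth for whether khoj runs in debug mode."""
    return os.getenv("KHOJ_DEBUG", "false").lower() in ("true", "1", "yes")

def log_telemetry(event: dict, telemetry_queue: list):
    # Skip telemetry entirely when debug mode is enabled
    if in_debug_mode():
        return
    telemetry_queue.append(event)
```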
* Respect line breaks in user, khoj chat messages to improve formatting
* Disable Whatsapp config section on web client if Twilio not configured
Simplify Whatsapp configuration status checking js by standardizing
external input to lower case
* Disable Phone API when Twilio not setup and rate limit calls to it
- Move phone api to separate router and only enable it if Twilio enabled
- Add rate-limiting to OTP and verification calls
* Add slugs for phone rate limiting
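A rough sketch of the Twilio-gated router with slug-based rate limiting described above; the limiter, route names and limits are illustrative assumptions, not Khoj's actual code:

```python
import os
from collections import defaultdict
from time import time

from fastapi import APIRouter, Depends, FastAPI, HTTPException, Request

app = FastAPI()
phone_router = APIRouter()

class RateLimiter:
    """Minimal in-memory limiter keyed by slug + client IP; illustrative only."""

    def __init__(self, slug: str, limit: int, window: int = 60):
        self.slug, self.limit, self.window = slug, limit, window
        self.calls: dict[str, list[float]] = defaultdict(list)

    def __call__(self, request: Request):
        key = f"{self.slug}:{request.client.host}"
        now = time()
        self.calls[key] = [t for t in self.calls[key] if now - t < self.window]
        if len(self.calls[key]) >= self.limit:
            raise HTTPException(status_code=429, detail="Too many requests")
        self.calls[key].append(now)

@phone_router.post("/otp")
async def request_otp(request: Request, _=Depends(RateLimiter(slug="otp", limit=5))):
    ...  # send the one-time password via Twilio

@phone_router.post("/verify")
async def verify_otp(request: Request, code: str, _=Depends(RateLimiter(slug="verify", limit=5))):
    ...  # check the one-time password via Twilio

# Only mount the phone API when Twilio credentials are configured
if all(os.getenv(v) for v in ("TWILIO_ACCOUNT_SID", "TWILIO_AUTH_TOKEN", "TWILIO_VERIFICATION_SID")):
    app.include_router(phone_router, prefix="/api/phone")
```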
---------
Co-authored-by: sabaimran <narmiabas@gmail.com>
* Initialize changes to incorporate web scraping logic after getting SERP results
- Do some minor refactors to pass a system prompt to the openai model when making a query
- Integrate Olostep in order to perform the web scraping
* Fix truncation error with new line, fix typing in olostep code
* Use the authorization header for the token
* Add a small hint/indicator for how to use Khojs other modalities in the welcome prompt
* Add more detailed error message if Olostep query fails
* Add unit tests which invoke Olostep in chat director
* Add test for olostep tool
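A hedged sketch of calling the scraping service with the token passed via the Authorization header, as described above; the endpoint URL and parameters are placeholders rather than Olostep's actual API:

```python
import os
import requests

OLOSTEP_API_TOKEN = os.getenv("OLOSTEP_API_KEY", "")
SCRAPE_ENDPOINT = "https://scraping.example.invalid/scrape"  # placeholder, not Olostep's real URL

def scrape_url(url: str, timeout: int = 30) -> str:
    # Pass the API token via the Authorization header instead of a query parameter
    headers = {"Authorization": f"Bearer {OLOSTEP_API_TOKEN}"}
    response = requests.get(SCRAPE_ENDPOINT, params={"url": url}, headers=headers, timeout=timeout)
    if not response.ok:
        # Surface a detailed error message when the scrape fails
        raise RuntimeError(f"Scrape of {url} failed with {response.status_code}: {response.text}")
    return response.text
```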
* Allow users to configure phone numbers with the Khoj server
* Integration of API endpoint for updating phone number
* Add phone number association and OTP via Twilio for users connecting to WhatsApp
- When verified, store the result as such in the KhojUser object
* Add a Whatsapp.svg for configuring phone number
* Change setup hint depending on whether the user has a number already connected or not
* Add an integrity check for the intl tel js dependency
* Customize the UI based on whether the user has verified their phone number
- Update API routes to make nomenclature for phone addition and verification more straightforward (just /config/phone, etc).
- If user has not verified, prompt them for another verification code (if verification is enabled) in the configuration page
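A minimal sketch of the OTP flow, assuming Twilio's Verify API via the standard twilio Python SDK; the environment variable names and the KhojUser field are illustrative:

```python
import os
from twilio.rest import Client

def _twilio_client() -> Client:
    return Client(os.getenv("TWILIO_ACCOUNT_SID"), os.getenv("TWILIO_AUTH_TOKEN"))

def send_otp(phone_number: str):
    # Send a one-time password over SMS to the number being linked
    _twilio_client().verify.v2.services(os.getenv("TWILIO_VERIFICATION_SID")).verifications.create(
        to=phone_number, channel="sms"
    )

def verify_otp(user, phone_number: str, code: str) -> bool:
    check = _twilio_client().verify.v2.services(os.getenv("TWILIO_VERIFICATION_SID")).verification_checks.create(
        to=phone_number, code=code
    )
    verified = check.status == "approved"
    if verified:
        # Persist verification on the user record (field name is illustrative)
        user.verified_phone_number = True
        user.save()
    return verified
```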
* Use the verified filter only if the user is linked to an account with an email
* Add some basic documentation for using the WhatsApp client with Khoj
* Point help text to the docs, rather than landing page info
* Update messages on various callbacks and add link to docs page to learn more about the integration
* Add support for a first party client app
- Based on a client id and client secret, allow a first party app to call into the Khoj backend with a phone number identifier
- Add migration to add phone numbers to the KhojUser object
* Add plus in front of country code when registering a phone number.
- Decrease free tier limit to 5 (from 10)
- Return a response object when handling stripe webhooks
* Fix telemetry method which references authenticated user's client app
* Add better error handling for null phone numbers, simplify logic of authenticating user
* Pull the client_secret in the API call from the authorization header
* Add a migration merge to resolve phone number and other changes
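A sketch of how a first party client call might be authenticated, pulling the client secret from the Authorization header and normalizing the phone number identifier; the env var name and function shape are assumptions for illustration:

```python
import os
from fastapi import Header, HTTPException

def authenticate_first_party_client(phone_number: str, authorization: str = Header(None)) -> str:
    """Authenticate a trusted first party app and return the normalized phone identifier."""
    # The client secret is pulled from the Authorization header, not the request body
    client_secret = authorization.removeprefix("Bearer ") if authorization else None
    if not client_secret or client_secret != os.getenv("KHOJ_CLIENT_SECRET"):
        raise HTTPException(status_code=403, detail="Invalid client credentials")
    if not phone_number:
        raise HTTPException(status_code=403, detail="Missing phone number identifier")
    # Normalize the number to include a leading plus before looking up the KhojUser
    return phone_number if phone_number.startswith("+") else f"+{phone_number}"
```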
### Major
- Short-circuit API rate limiter for unauthenticated user
Calls by unauthenticated users were failing at the API rate limiter as it
failed to access the user info object. This is a bug.
The API rate limiter should short-circuit for unauthenticated users so a
proper Forbidden response can be returned by the API
Add regression test to verify that unauthenticated users get a 403
response when calling the /chat API endpoint
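A minimal sketch of the short-circuit and the regression test, assuming Starlette's AuthenticationMiddleware populates request.user; attribute names and the test fixture are illustrative:

```python
from fastapi import Request

class ApiUserRateLimiter:
    """Illustrative limiter dependency; assumes AuthenticationMiddleware sets request.user."""

    def __init__(self, requests: int, window: int):
        self.requests = requests
        self.window = window

    def __call__(self, request: Request):
        # Short-circuit for unauthenticated users so the endpoint's own auth check
        # can return a proper 403 instead of the limiter crashing on a missing user object
        if not request.user.is_authenticated:
            return
        # ... otherwise count this user's calls in the window and raise 429 when exceeded ...


# Regression test: unauthenticated callers should get a 403 from the chat API
def test_chat_returns_403_when_not_authenticated(client):
    response = client.get("/api/chat", params={"q": "Hello, Khoj"})
    assert response.status_code == 403
```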
### Minor
- Remove trailing slash to normalize khoj url in obsidian plugin settings
- Move used /api/config API controllers into separate module
- Delete unused /api/beta API endpoint
- Fix error message rendering in khoj.el, khoj obsidian chat
- Handle deprecation warnings for subscribe renew date, langchain, pydantic & logger.warn
- Ensure langchain less than 0.2.0 is used, to prevent breaking
ChatOpenAI, PyMuPDF usage due to their deprecation after 0.2.0
- Set subscription renewal date to a timezone aware datetime
- Use logger.warning instead of logger.warn as latter is deprecated
- Use `model_dump` instead of the deprecated `dict` method to get all configured content_types
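Hedged examples of the kinds of changes these deprecation fixes involve; the object names are illustrative:

```python
import logging
from datetime import datetime, timedelta, timezone

logger = logging.getLogger(__name__)

# Timezone aware renewal date instead of a naive datetime
renewal_date = datetime.now(tz=timezone.utc) + timedelta(days=30)

# logger.warning instead of the deprecated logger.warn
logger.warning("Subscription renewal date missing; defaulting to %s", renewal_date)

# pydantic v2: model_dump() replaces the deprecated dict(), e.g. (config object illustrative)
# enabled_content = [t for t, cfg in content_config.model_dump().items() if cfg]
```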
- Honor this setting across the relevant places where embeddings are used
- Convert the VectorField object to have None for dimensions in order to make the search model easily configurable
- Allow server admin to configure offline speech to text model during
initialization
- Use offline speech to text model to transcribe audio from clients
- Set offline whisper as the default speech to text model as it needs no setup or API key
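A sketch of offline transcription, assuming the open source whisper package is used as the offline speech to text model; the model size is just an example:

```python
import whisper  # openai-whisper; runs fully offline once the model weights are downloaded

# Load the configured model once at startup; "base" is just an example size
model = whisper.load_model("base")

def transcribe(audio_file_path: str) -> str:
    # Transcribe audio received from a client and return the plain text
    result = model.transcribe(audio_file_path)
    return result["text"]
```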
- Upgrade FastAPI to the latest version. The earlier version didn't
support wrapping common query params in a class
- Use per fixture app instead of a global FastAPI app in conftest
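The FastAPI pattern that motivated the upgrade, sketched with illustrative parameter names:

```python
from fastapi import Depends, FastAPI

app = FastAPI()

class CommonQueryParams:
    # Query params shared across endpoints, wrapped in a class (param names illustrative)
    def __init__(self, q: str, n: int = 5, client: str | None = None):
        self.q = q
        self.n = n
        self.client = client

@app.get("/api/search")
async def search(common: CommonQueryParams = Depends(CommonQueryParams)):
    return {"query": common.q, "results_requested": common.n}
```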
- Upgrade minimum required Django version
- Fix no notes chat director test with updated no notes message
No notes message was updated in commit 118f1143
- Add fields to mark users as subscribed to a specific plan and
subscription renewal date in DB
- Add ability to unsubscribe a user using their email address
- Expose webhook for stripe to callback confirming payment
Major
- Ensure search results logic consistent across migration to DB, multi-user
- Manually verified search results for sample queries look the same across migration
- Flatten indexing code for better indexing progress tracking and code readability
Minor
- a4f407f Test memory leak on MPS device when generating vector embeddings
- ef24485 Improve Khoj with DB setup instructions in the Django app readme (for now)
- f212cc7 Arrange remaining text search tests in arrange, act, assert order
- 022017d Fix text search tests to test updated indexing log messages
Fix refactor bugs, CSRF token issues for use in production
* Add flags for samesite settings to enable django admin login
* Include tzdata to dependencies to work around python package issues in linux
* Use DJANGO_DEBUG flag correctly
* Fix naming of entry field when creating EntryDate objects
* Correctly retrieve openai config settings
* Fix date filter to use the correct embeddings field name
- Add a productionized setup for the Khoj server using `gunicorn` with multiple workers for handling requests
- Add a new Dockerfile meant for production config at `ghcr.io/khoj-ai/khoj:prod`; the existing Docker config should remain the same
- Make most routes conditional on authentication *if anonymous mode is not enabled*. If anonymous mode is enabled, it scaffolds a default user and uses that for all application interactions.
- Add a basic login page and add routes for redirecting the user if logged in
- Partition configuration for indexing local data based on user accounts
- Store indexed data in an underlying postgres db using the `pgvector` extension
- Add migrations for all relevant user data and embeddings generation. Very little performance optimization has been done for the lookup time
- Apply filters using SQL queries
- Start removing many server-level configuration settings
- Configure GitHub test actions to run during any PR. Update the test action to run in a containerized environment with a DB.
- Update the Docker image and docker-compose.yml to work with the new application design
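A minimal gunicorn config sketch for the productionized setup; worker counts, bind address and app module path are illustrative, not the exact values shipped in the prod image:

```python
# gunicorn.conf.py -- example production settings
import multiprocessing

# Uvicorn workers let gunicorn serve the ASGI (FastAPI) app with multiple processes
worker_class = "uvicorn.workers.UvicornWorker"
workers = min(4, multiprocessing.cpu_count())
bind = "0.0.0.0:42110"
timeout = 120
keepalive = 60

# Started with something like: gunicorn -c gunicorn.conf.py khoj.main:app
```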
GPT4All now supports gguf llama.cpp chat models. The latest
GPT4All (+ mistral) performs at least 3x faster:
~10s response start time on a Macbook Pro vs 30s-120s earlier.
Mistral is also a better chat model, although it hallucinates more
than llama-2
### Overview
- Add ability to push data to index from the Emacs, Obsidian client
- Switch to standard mechanism of syncing files via HTTP multi-part/form. Previously we were streaming the data as JSON
- Benefits of new mechanism
- No manual parsing of files to send or receive on clients or server is required as most have in-built mechanisms to send multi-part/form requests
- The whole response is not required to be kept in memory to parse content as JSON. As individual files arrive they're automatically pushed to disk to conserve memory if required
- Binary files don't need to be encoded on client and decoded on server
### Code Details
#### Major
- Use multi-part form to receive files to index on server
- Use multi-part form to send files to index on desktop client
- Send files to index on server from the khoj.el emacs client
- Send content for indexing on server at a regular interval from khoj.el
- Send files to index on server from the khoj obsidian client
- Update tests to test multi-part/form method of pushing files to index
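A rough sketch of both sides of the multi-part/form sync described above, with illustrative route and field names:

```python
# Server side: receive files to index as a multi-part form
from fastapi import FastAPI, UploadFile

app = FastAPI()

@app.post("/api/v1/index/update")
async def update_index(files: list[UploadFile]):
    # Large uploads are spooled to disk by the framework instead of held fully in memory
    contents = {f.filename: await f.read() for f in files}
    return {"files_received": len(contents)}

# Client side: push files without manual parsing or base64 encoding
import requests

def push_files(paths: list[str], server: str = "http://localhost:42110"):
    files = [("files", (path, open(path, "rb"))) for path in paths]
    return requests.post(f"{server}/api/v1/index/update", files=files)
```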
#### Minor
- Put indexer API endpoint under /api path segment
- Explicitly make GET request to /config/data from khoj.el:khoj-server-configure method
- Improve emoji, message on content index updated via logger
- Don't call khoj server on khoj.el load, only once khoj invoked explicitly by user
- Improve indexing of binary files
- Let fs_syncer pass PDF files directly as binary before indexing
- Use encoding of each file set in indexer request to read file
- Add CORS policy to khoj server. Allow requests from khoj apps, obsidian & localhost
- Update indexer API endpoint URL to `index/update` from `indexer/batch`
Resolves #471 #243
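A sketch of the CORS policy mentioned above; the allowed origins listed are illustrative:

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Allow requests from the khoj apps, the Obsidian plugin and localhost dev clients
app.add_middleware(
    CORSMiddleware,
    allow_origins=["app://obsidian.md", "http://localhost:42110", "https://app.khoj.dev"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
```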
* Strip the incoming query from the slash conversation command before passing it to the model or for search
* Return q when content index not loaded
* Remove -n 4 from pytest ini configuration to isolate test failures
- Make `bump_version.sh` script set version for the Khoj desktop app too
- Sync Khoj desktop app authors, license, description and version with
the other interfaces and server
- Update description in packages metadata to match project subtitle on Github
- This uses existing HTTP affordances to process files
- Better handling of binary file formats as it removes the need to url encode/decode them
- Less memory utilization than streaming json as files get
automatically written to disk once memory utilization exceeds preset limits
- No manual parsing of raw file streams required
- GPT4All integration had ceased working with 0.1.7 specification. Update to use 1.0.12. At a later date, we should also use first party support for llama v2 via gpt4all
- Update the system prompt for the extract_questions flow to add start and end date to the yesterday date filter example.
- Update all setup data in conftest.py to use new client-server indexing pattern
* Remove GPT4All dependency in pyproject.toml and use multiplatform builds in the dockerization setup in GH actions
* Move configure_search method into indexer
* Add conditional installation for gpt4all
* Add hint to go to localhost:42110 in the docs. Addresses #477
* Remove PySide, gui option from code
* Remove pyside 6 dependency from code
* Remove workflows which build desktop applications
* Update unit tests and update line in documentation
* Remove additional references to pyinstaller, gui
* Add uninstall steps to normal uninstall instructions
* Initial version - setup a file-push architecture for generating embeddings with Khoj
* Use state.host and state.port for configuring the URL for the indexer
* Fix parsing of PDF files
* Read markdown files from streamed data and update unit tests
* On application startup, load in embeddings from configurations files, rather than regenerating the corpus based on file system
* Init: refactor indexer/batch endpoint to support a generic file ingestion format
* Add features to better support indexing from files sent by the desktop client
* Initial commit with Electron application
- Adds electron app
* Add import for pymupdf, remove import for pypdf
* Allow user to configure khoj host URL
* Remove search type configuration from index.html
* Use v1 path for current indexer routes
* Store conversation command options in an Enum
* Move to slash commands instead of using @ to specify general commands
* Calculate conversation command once & pass it as arg to child funcs
* Add /notes command to respond using only knowledge base as context
This prevents the chat model from trying to respond using only its general
world knowledge, without any references pulled from the indexed
knowledge base
* Test general and notes slash commands in openai chat director tests
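A minimal sketch of storing conversation commands in an Enum and stripping the slash command from the incoming query; only the commands mentioned above are shown:

```python
from enum import Enum

class ConversationCommand(str, Enum):
    Default = "default"
    General = "general"
    Notes = "notes"

def get_conversation_command(query: str) -> ConversationCommand:
    if query.startswith("/notes"):
        return ConversationCommand.Notes
    if query.startswith("/general"):
        return ConversationCommand.General
    return ConversationCommand.Default

def strip_command(query: str, command: ConversationCommand) -> str:
    # Remove the slash command before passing the query to the model or to search
    prefix = f"/{command.value}"
    return query[len(prefix):].strip() if query.startswith(prefix) else query
```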
---------
Co-authored-by: Debanjum Singh Solanky <debanjum@gmail.com>
Build the Debian package using Ubuntu 20.04 instead of 22.04, as Ubuntu 20.04 comes pre-installed with glibc_2.31 unlike Ubuntu 22.04 which uses glibc_2.35
This should reduce chances of installation errors due to regex package
being built from source for python3.11
Previously, the regex dependency of dateparser = 1.1.1 didn't have a
wheel for python 3.11. This would trigger building the regex package
from scratch which would fail for a lot of folks
* Working example with LlamaV2 running locally on my machine
- Download from huggingface
- Plug in to GPT4All
- Update prompts to fit the llama format
* Add appropriate prompts for extracting questions based on a query based on llama format
* Rename Falcon to Llama and make some improvements to the extract_questions flow
* Do further tuning to extract question prompts and unit tests
* Disable extracting questions dynamically from Llama, as results are still unreliable
* Add support for gpt4all's falcon model as an additional conversation processor
- Update the UI pages to allow the user to point to the new endpoints for GPT
- Update the internal schemas to support both GPT4All models and OpenAI
- Add unit tests benchmarking some of the Falcon performance
* Add exc_info to include stack trace in error logs for text processors
* Pull shared functions into utils.py to be used across gpt4 and gpt
* Add migration for new processor conversation schema
* Skip GPT4All actor tests due to typing issues
* Fix Obsidian processor configuration in auto-configure flow
* Rename enable_local_llm to enable_offline_chat
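A hedged sketch of loading a local llama-family chat model through the gpt4all package and wrapping the prompt in the llama chat format, as described in the bullets above; the model filename and generation parameters are placeholders:

```python
from gpt4all import GPT4All

# gpt4all downloads the model file on first use; the filename here is a placeholder
model = GPT4All("llama-2-7b-chat.ggmlv3.q4_0.bin")

def chat(system_prompt: str, user_message: str) -> str:
    # Llama 2 chat models expect the [INST] / <<SYS>> prompt format
    prompt = f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_message} [/INST]"
    return model.generate(prompt, max_tokens=512, temp=0.2)
```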
Khoj will soon get a generic text indexing content type. This, along
with a file filter, should suffice for searching through Ledger
transactions, if required.
Having a specific content type for a niche use-case like Ledger isn't
useful. Removing unused content types reduces the khoj code to maintain.
- Previously Khoj could only support Python up to 3.10 due to pytorch.
But lots of folks had python 3.11 installed by default on their machines.
This required installing python 3.10 and dealing with virtual envs.
With Torch >= 2.0.1 now able to support python 3.11, at least one
class of installation troubles for Khoj should drop. See
https://github.com/pytorch/pytorch/issues/86566 for reference
- Preliminary testing indicates using the new torch 2.x may reduce
search time by 25% (from 80ms to 60ms on Mac M1)
- Update Docs to not require mentioning python <=3.10 required
- Update Github test workflow to run khoj tests with python 3.11 too
The Llama_Hub Github plugin is fairly limited.
The Github Rest API is well supported and can easily be extended to
index commit messages, issues, discussions, PRs etc.
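A sketch of pulling commit messages and issues directly from the GitHub REST API, which is what makes extending beyond the Llama_Hub plugin straightforward; the token handling is illustrative:

```python
import os
import requests

GITHUB_API = "https://api.github.com"

def fetch_repo_content(owner: str, repo: str):
    headers = {"Authorization": f"token {os.getenv('GITHUB_PAT_TOKEN', '')}"}
    # Commit messages
    commits = requests.get(f"{GITHUB_API}/repos/{owner}/{repo}/commits", headers=headers).json()
    # Issues (this endpoint also returns pull requests)
    issues = requests.get(
        f"{GITHUB_API}/repos/{owner}/{repo}/issues", params={"state": "all"}, headers=headers
    ).json()
    return [c["commit"]["message"] for c in commits], [i["title"] for i in issues]
```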
- Move completion and chat_completion into helper methods under utils.py
- Add retry with exponential backoff on OpenAI exceptions using
tenacity package. This is officially suggested and used by other
popular GPT based libraries
- Use tiktoken to count tokens for chat models
- Make conversation turns to add to prompt configurable via method
argument to generate_chatml_messages_with_context method
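A sketch of the retry and token counting helpers described above, assuming the tenacity and tiktoken packages with the pre-1.0 openai SDK; decorator parameters are illustrative:

```python
import openai
import tiktoken
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_random_exponential

@retry(
    retry=retry_if_exception_type(openai.error.OpenAIError),
    wait=wait_random_exponential(min=1, max=30),
    stop=stop_after_attempt(3),
    reraise=True,
)
def chat_completion(messages: list[dict], model: str = "gpt-3.5-turbo"):
    # Retried with exponential backoff on transient OpenAI errors
    return openai.ChatCompletion.create(model=model, messages=messages)

def count_tokens(text: str, model: str = "gpt-3.5-turbo") -> int:
    # tiktoken maps the model name to its tokenizer
    encoder = tiktoken.encoding_for_model(model)
    return len(encoder.encode(text))
```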
- Chat directors are broad agents.
- Chat directors orchestrate narrow actor agents to synthesize
final response for the user
- Agents are Prompts + ML Model
- Test Chat Director Capabilities
1. [X] Answer from retrieved notes
2. [X] Answer from chat history
3. [X] Answer general questions
4. [X] Carry out multi-turn conversation
5. [X] Say don't know when answer not in provided context
6. [X] Answers that require current date awareness
This test is expected to fail as the chat is not capable of doing
this without the Search actor. But the test allows assessing chat quality
7. [X] Date-aware aggregation across multiple different notes
This test is expected to fail as the chat is not capable of doing
this without the Search actor. But the test allows assessing chat quality
8. [X] Ask clarification questions if no unambiguous answer in provided context
9. [X] Retrieve answer from chat history beyond lookback window
This test is expected to fail as the chat director is not capable
of searching chat history yet. But the test allows assessing chat quality
10. [X] Retrieve context for answer using multiple independent
searches on knowledge base
This test is expected to fail as the chat is not capable of doing
this without the Search actor. But the test allows assessing chat quality
- Mark chat quality tests, register custom mark for chat quality
- Filter unhelpful deprecation warnings from within dateparser library
- Error if tests use unregistered marks
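A sketch of registering and using the custom chat quality mark via pytest's plugin hooks; the test body and fixture are illustrative:

```python
# conftest.py -- register the custom mark so strict marker checking doesn't error on it
def pytest_configure(config):
    config.addinivalue_line("markers", "chatquality: marks chat quality evaluation tests")

# test_chat_director.py -- tag a chat quality test (fixture name illustrative)
import pytest

@pytest.mark.chatquality
def test_answer_requires_current_date_awareness(chat_client):
    response = chat_client.get("/api/chat", params={"q": "What tasks did I complete yesterday?"})
    assert response.status_code == 200
```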
- Set context by either including last 2 chat messages from active
session or past 2 conversation summaries from conversation logs
- Set personality in system message
- Place personality system message before last completed back & forth
This may stop ChatGPT forgetting its personality as conversation progresses given:
- The conditioning based on system role messages is light
- If system message is too far back in conversation history, the
model may forget its personality conditioning
- If system message is at the end of the conversation, the model can think it's
the start of a new conversation
- Inserting the system message before the last completed back & forth should
prevent ChatGPT from assuming it's the start of a new conversation
while not losing personality conditioning from the system message
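A minimal sketch of the message ordering described above, using plain chatml-style dicts; the one back & forth lookback is illustrative:

```python
def build_messages(personality: str, chat_history: list[dict], new_user_message: str) -> list[dict]:
    system_message = {"role": "system", "content": personality}
    # Last completed back & forth = final (user, assistant) pair in the history
    earlier_context, last_exchange = chat_history[:-2], chat_history[-2:]
    # Place the personality system message just before the last completed exchange:
    # not at the very start (conditioning fades) and not at the very end
    # (the model may read it as the start of a new conversation)
    return earlier_context + [system_message] + last_exchange + [
        {"role": "user", "content": new_user_message}
    ]
```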
- Simplify the Khoj Chat API to, for now, just answer from the user's notes
instead of trying to infer other potential interaction types.
- This is the default expected behavior from the feature anyway
- Use the compiled text of the top 2 search results for context
- Benefits of using ChatGPT
- Better model
- 1/10th the price
- No hand rolled prompt required to make GPT provide more chatty,
assistant type responses
- Remove unneeded type ignore for mps with the latest mypy
- Stop excluding PyQT desktop GUI code from MyPy checks
- Do not warn about unused ignores. Some issue with mypy giving
different errors in different environments (venv, system and pre-commit)
- Why
- pyproject.toml is the python standards compliant config format
- allows collating python tooling configs into single standard file
- hatch(-ling) is a new lightweight build system for python packages
- Detailed Changes
- Replace setup.py, setuptools with pyproject.toml, hatchling for
khoj python config and build
- move pytest into optional development dependencies
- add more links to khoj in the project urls section
- add topic classifiers and keywords to find khoj package
- Delete setup.py, MANIFEST.in as moved to pyproject.toml based setup
- Update pypi workflow to set python package version in pyproject.toml