This is unlike the more general chat API, which combines summarizing
the top search result with conversing with the OpenAI model.
This should give faster summary results, as no intent categorization
API call is required
- Use latest davinci model for tests
- Wrap prompt in triple quotes to improve legibility
- `understand' method returns a dictionary instead of a string. Fix its test
- Fix prompt for new model to pass `chat_with_history' test
- Default to using `text-davinci-003' if the conversation model is not
explicitly configured by the user. Stop using the older `davinci' and
`davinci-instruct' models
- Use `model' instead of `engine' as the parameter.
Usage of the `engine' parameter in the OpenAI API is deprecated
(see the sketch below)
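A minimal sketch of the updated completion call (OpenAI Python SDK < 1.0), assuming the API key is loaded elsewhere; the prompt and sampling parameters are illustrative, not Khoj's actual ones:

```python
import openai

openai.api_key = "sk-..."  # placeholder; normally read from khoj.yml

# Triple-quoted prompt for legibility; the content here is illustrative
prompt = """
You are Khoj, a friendly assistant. Summarize the note below:

Saw the aurora borealis from the cabin last night. Unreal greens.
"""

response = openai.Completion.create(
    model="text-davinci-003",  # pass `model`; the `engine` parameter is deprecated
    prompt=prompt,
    temperature=0.5,
    max_tokens=200,
)
print(response["choices"][0]["text"])
```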
- Init processor before search to instantiate `openai_api_key'
from `khoj.yml'. The key is used to configure search with OpenAI models
- To use OpenAI models for search in Khoj
- Set `encoder' to the name of an OpenAI model. E.g. `text-embedding-ada-002'
- Set `encoder-type' in `khoj.yml' to `src.utils.models.OpenAI'
- Set `model-directory' to `null', as the online model cannot be stored on disk
Long words (>500 characters) provide less useful context to models.
Dropping very long words allows models to create better embeddings by
passing more of the useful context from the entry to the model
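A minimal sketch of the word-dropping step, assuming whitespace splitting and the 500 character threshold above; the function name is hypothetical:

```python
def drop_long_words(entry: str, max_word_length: int = 500) -> str:
    "Drop very long 'words' so the model sees more of the entry's useful context."
    kept_words = [word for word in entry.split() if len(word) <= max_word_length]
    return " ".join(kept_words)
```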
- Previously `model_type' was set in the setup of each `search_type'
- All encoders were of type `SentenceTransformer'
- All cross_encoders were of type `CrossEncoder'
- Now `encoder-type' can be configured via the new `encoder_type' field
in `TextSearchConfig' under `search-type` in `khoj.yml`.
- All the specified `encoder-type' class needs is an `encode' method
that takes entries and returns embedding vectors
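A minimal sketch of the kind of class an `encoder-type' could point to (e.g. `src.utils.models.OpenAI'), using the pre-1.0 OpenAI Python SDK; the implementation details below are assumptions, not Khoj's actual code:

```python
from typing import List

import openai
import torch


class OpenAI:
    "Encoder exposing the one method search needs: encode(entries) -> embeddings."

    def __init__(self, model_name: str = "text-embedding-ada-002", api_key: str = None):
        self.model_name = model_name
        if api_key:
            openai.api_key = api_key

    def encode(self, entries: List[str], **kwargs) -> torch.Tensor:
        # Request one embedding per entry and keep them in input order
        response = openai.Embedding.create(input=entries, model=self.model_name)
        data = sorted(response["data"], key=lambda item: item["index"])
        return torch.tensor([item["embedding"] for item in data])
```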
- Ensure all tensors are on the MPS device before doing operations across them
- Background
- GPU is used by default for Khoj on MacOS now
  - Using the GPU on Macs needs PyTorch > 1.13.0, which we now use
- MPS should speed up search and indexing on MacOS
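A minimal sketch of the device handling, assuming PyTorch with MPS support; the tensor shapes and similarity computation are illustrative:

```python
import torch

# Prefer the Apple GPU when available (requires PyTorch > 1.13.0 on MacOS)
device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")

corpus_embeddings = torch.rand(1000, 384)  # e.g. precomputed entry embeddings
query_embedding = torch.rand(384)          # e.g. embedding of the search query

# Move both tensors to the same device before operating across them
corpus_embeddings = corpus_embeddings.to(device)
query_embedding = query_embedding.to(device)

scores = torch.nn.functional.cosine_similarity(query_embedding.unsqueeze(0), corpus_embeddings)
```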
Fix usage warning for unescaped single quotes in `khoj.el' docstrings.
Convert usage of '<text>' into `<text>' to use the correct quote forms in generated docs
⛔ Warning (comp): khoj.el:119:2: Warning: docstring has wrong usage of unescaped single quotes (use \= or different quoting)
⛔ Warning (comp): khoj.el:120:2: Warning: docstring has wrong usage of unescaped single quotes (use \= or different quoting)
⛔ Warning (comp): khoj.el:121:2: Warning: docstring has wrong usage of unescaped single quotes (use \= or different quoting)
⛔ Warning (comp): khoj.el:168:2: Warning: docstring has wrong usage of unescaped single quotes (use \= or different quoting)
- Features
- Search using Khoj from within the Obsidian app
Allow natural language search on your (markdown) notes in the Obsidian vault
- Show search results as rendered (instead of raw) Markdown
Improve legibility of the results
- Jump to selected note from search result in Khoj search modal
Simplify seeing the result within its original note context
- Automatically configure khoj to index markdown files in current vault
Reduce khoj setup steps for plugin users by using reasonable defaults
- Code updates the markdown config in khoj.yml and triggers index update
- It can be configured by the user in the khoj plugin settings, if required
- Add Demo and detailed Readme for the Obsidian plugin
Ease setup and usage. Give context about capabilities
- Miscellaneous
- Trying to keep a mono repo until the Khoj project is mature enough,
to reduce the maintenance burden
This can ease configuring khoj from the different interfaces
- No need to know all the (default) config used by khoj.
- Just get the default config by calling the above API endpoint.
- Then modify the desired portions and call POST /api/config/data to
configure khoj (see the sketch below).
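A minimal sketch of that flow against a local khoj server; the GET path for the default config and the config keys modified below are assumptions, not the exact API:

```python
import requests

base_url = "http://localhost:8000"

# Fetch the (default) config khoj is currently using
config = requests.get(f"{base_url}/api/config/data").json()

# Modify only the desired portion, e.g. point markdown indexing at a vault
# (the key names here are illustrative)
config["content-type"]["markdown"]["input-filter"] = ["/path/to/vault/**/*.md"]

# Push the updated config back to khoj
requests.post(f"{base_url}/api/config/data", json=config).raise_for_status()
```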
- Start khoj server (in non-GUI mode) without needing the config file
to already exist.
  - But throw a warning to configure khoj in order to use it
- This allows plugins to configure the app via the /config/data APIs
- To be used by the Khoj Obsidian plugin to configure markdown content
in khoj
- Poll the scheduler every minute using threading.Timer (see the sketch below)
- Use a 60 second polling interval to avoid fork bombing
- Schedule the next poll via the same poll scheduler
- Allow clean program interrupt by running the scheduler in daemon mode
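A minimal sketch of the polling pattern, where `run_pending_jobs' is a stand-in for whatever work the scheduler actually runs (e.g. triggering an index update):

```python
import threading

POLL_INTERVAL_SECONDS = 60  # long enough to avoid spawning a flood of timer threads


def run_pending_jobs():
    "Placeholder for the real scheduled work."
    pass


def poll_scheduler():
    run_pending_jobs()
    # Schedule the next poll via the same function
    timer = threading.Timer(POLL_INTERVAL_SECONDS, poll_scheduler)
    timer.daemon = True  # daemon mode lets Ctrl-C interrupt the program cleanly
    timer.start()


poll_scheduler()
```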
- There are 3 paths to updating/setting the index (stored in state.model)
- App start
- API
- Scheduler
- Put all updates to the index behind a lock, as multiple update paths
could (potentially) run at the same time (via the API or the scheduler)
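A minimal sketch of the locking, with illustrative names rather than Khoj's actual module layout:

```python
import threading

index_update_lock = threading.Lock()


def update_index(force: bool = False):
    "Called from app start, the API, and the scheduler; only one update runs at a time."
    with index_update_lock:
        # ... regenerate or incrementally update the index stored in state.model ...
        pass
```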
- Remove property drawer from test entry for max_words splitting test
- Property drawer is not required for the test
- Keep minimal test case to reduce chance for confusion
- Deduplication is required because entries are now split by the max_word
count supported by the ML models
- This could now result in duplicate hits and entries being
returned to the user
- Do deduplication after ranking to get the top ranked, deduplicated
results (see the sketch below)
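A minimal sketch of deduplication after ranking, assuming `hits' are already sorted best-first and each hit records which entry it was split from (the field names are illustrative):

```python
def deduplicate_ranked_hits(hits, max_results=10):
    "Keep only the highest ranked hit per source entry."
    seen_entry_ids = set()
    deduped = []
    for hit in hits:  # hits arrive in rank order, best first
        if hit["entry_id"] in seen_entry_ids:
            continue
        seen_entry_ids.add(hit["entry_id"])
        deduped.append(hit)
        if len(deduped) == max_results:
            break
    return deduped
```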
- The instructions suggest installing khoj-assistant via pip install.
This installs the latest tagged/released version of khoj
- To match that version, the user should install khoj.el from MELPA Stable
instead of MELPA
- Issue
ML models truncate entries exceeding some max token limit.
This lowers the quality of search results
- Fix
Split entries by max tokens before indexing.
This should improve searching for content in longer entries
(see the sketch after this list).
- Miscellaneous
- Test method to split entries by max tokens
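A minimal sketch of the splitting step, approximating tokens by whitespace-separated words; the limit and function name are illustrative:

```python
def split_entry_by_max_tokens(entry: str, max_tokens: int = 256):
    "Split an entry into chunks small enough for the model's max token limit."
    words = entry.split()
    return [
        " ".join(words[i : i + max_tokens])
        for i in range(0, len(words), max_tokens)
    ]
```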
Update the readme to ask users to install khoj.el from MELPA when a
pre-release version of the main khoj app is installed. Otherwise install
khoj.el from MELPA Stable