- Integrate with Ollama or other OpenAI-compatible APIs by simply
  setting the `OPENAI_API_BASE` environment variable in docker-compose
  etc. (see the sketch after this list)
- Update docs on integrating with Ollama, OpenAI proxies on first run
- Auto populate all chat models supported by OpenAI-compatible APIs
- Auto set vision enabled for all commercial models
- Minor
- Add Hugging Face cache to the khoj_models volume. This is where chat
  models and (now) sentence transformer models are stored by default
- Reduce verbosity of the web app's yarn install. Otherwise we hit the
  Docker log size limit & remaining logs stop showing after the web
  app install
- Suggest `ollama pull <model_name>` to start it in background
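For reference, a minimal sketch of wiring Khoj to Ollama via docker-compose. The service name and host address are assumptions that depend on your compose file; Ollama serves its OpenAI-compatible API under `/v1` on port 11434 by default:

```yaml
services:
  server:  # assumed name of the Khoj service in your docker-compose.yml
    environment:
      # Point Khoj at Ollama's OpenAI-compatible endpoint on the host machine
      - OPENAI_API_BASE=http://host.docker.internal:11434/v1
```

Any other OpenAI-compatible proxy works the same way; just swap in its base URL.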
Creating a server chat setting was previously required, but now it's
only useful for more advanced setups, not typical for self-hosting
users.
With recent updates, the user's selected chat model is used for both
Khoj's train of thought and response. This makes it easy to switch
your preferred chat model directly from the user settings page,
without having to update it in the admin panel as well.
Reflect these code changes in the docs by removing the now unnecessary
step for self-hosted users to create a server chat setting when using
an OpenAI proxy service like Ollama, LiteLLM etc.
* Create explicit flow to enable the free trial
The current design is confusing. It obfuscates the fact that the user is on a free trial. This design will make the opt-in explicit and more intuitive.
* Use the Subscription Type enum instead of hardcoded strings everywhere (sketched after this list)
* Use length of free trial in the frontend code as well
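A minimal Python sketch of the enum-over-strings change; the member names, trial length, and helper below are hypothetical stand-ins for the actual Khoj code:

```python
from enum import Enum


class SubscriptionType(Enum):
    # Hypothetical members; mirror whatever the real enum defines
    TRIAL = "trial"
    STANDARD = "standard"


# One shared constant, so frontend and backend agree on the trial length
FREE_TRIAL_DAYS = 7  # assumed value


def subscription_label(subscription: str) -> str:
    # Compare against the enum instead of scattering "trial" string literals
    if subscription == SubscriptionType.TRIAL.value:
        return f"Free trial ({FREE_TRIAL_DAYS} days)"
    return "Standard subscription"


print(subscription_label(SubscriptionType.TRIAL.value))
```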
- Use tabs for the GPU/CPU type Khoj is being installed on
- Update CMake flags used to install Khoj with correct GPU support
  The previous flags used the `DLLAMA` prefix. These have been renamed
  to use the `DGGML` prefix in llama.cpp (see the sketch below)
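For example, the documented install commands now look roughly like this. A sketch: the exact `GGML` feature flags to enable depend on your hardware and the llama.cpp version bundled with llama-cpp-python:

```bash
# NVIDIA GPU (CUDA); previously -DLLAMA_CUBLAS=on
CMAKE_ARGS="-DGGML_CUDA=on" pip install --upgrade --force-reinstall khoj

# Apple Silicon GPU (Metal); previously -DLLAMA_METAL=on
CMAKE_ARGS="-DGGML_METAL=on" pip install --upgrade --force-reinstall khoj
```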
This is an initial pass to add documentation for all the knobs
available on the Khoj Admin panel.
It should shed some light on what each admin setting is for and how
it can be customized when self-hosting.
Resolves #831
- Improve Self Hosting Docker Instructions
- Ask to install Docker Desktop to avoid requiring a separate
  docker-compose install and to unify the instructions across
  operating systems
- To Self Host on Windows, ask to use Docker Desktop with WSL2 backend
- Use nested Tab grouping to split Docker vs Pip Self Host Instructions
- Reduce Self Host Setup Steps in Documentation after code
  simplification (see the sketch after this list)
- First run now avoids the need to configure Khoj via the admin panel
- So move the chat model config steps into an optional post-setup
  config section
- Improve Instructions to Configure chat models on First Run
- Compress configuring chat model providers into a Tab Group
- Add Documentation for Remote Access under Advanced Self Hosting
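The simplified Docker flow now boils down to something like this sketch, assuming the sample compose file still lives at the repo root and Docker Desktop is installed:

```bash
mkdir -p ~/.khoj && cd ~/.khoj
# Fetch the sample compose file from the Khoj repo
wget https://raw.githubusercontent.com/khoj-ai/khoj/master/docker-compose.yml
# Docker Desktop bundles the compose plugin, so no separate docker-compose install
docker compose up
```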
- Update references to the settings page to use the new URL across
  docs and code
- Rename desktop and web settings page to settings.html instead of
  config[ure].html
- Deprecate the khoj-assistant PyPI package. Use the more accurate and
  succinct PyPI project name, khoj (see the sketch after this list)
- Update references to use the khoj PyPI package in docs and code
  instead of the legacy khoj-assistant PyPI package
- Update the PyPI workflow to publish to both khoj and khoj-assistant
  for now
- Update stale Python 3.9 support mentioned in our pyproject. Can't
  support Python 3.9 as we depend on the latest Django, which supports
  Python >= 3.10
Update the offline and OpenAI chat actor, director tests to not
require Serper to run the online command tests
Update documentation for self-hosted online search to mention that no
setup is required by default, but that results can be improved by
using Serper.dev or Olostep
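So online search works out of the box, and a provider only needs an API key to improve results. A sketch, assuming the server reads the key from an environment variable named as below; check the docs for the exact name:

```bash
# Optional: higher quality online search results via Serper.dev
export SERPER_DEV_API_KEY=<your-serper-api-key>
```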
- Add instructions for self-hosted users with info and warning boxes
  to help avoid and fix common issues when setting up a Khoj server
- Create new Advanced Self Hosting section
- Extract Advanced Self-Hosting Sections from the Advanced Page and
move them to separate Pages under Advanced Self Hosting section
- Improve OpenAI Proxy Docs
- Put Ollama setup as a section under OpenAI API Proxy page instead
of a separate page
- Add Section to use Khoj with a chat model from LM Studio (see the
  sketch after this list)
- Update LiteLLM docs to use chat model from LM Studio
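A sketch of pointing Khoj at LM Studio's local server, which exposes an OpenAI-compatible API on port 1234 by default. Whether a placeholder API key is needed depends on your setup:

```bash
export OPENAI_API_BASE=http://localhost:1234/v1
export OPENAI_API_KEY=placeholder  # local proxies typically ignore the key, but one may be required
```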