diff --git a/.github/workflows/pypi.yml b/.github/workflows/pypi.yml
index 1ac735aa..92bf9276 100644
--- a/.github/workflows/pypi.yml
+++ b/.github/workflows/pypi.yml
@@ -21,16 +21,18 @@ on:
jobs:
publish:
name: Publish Python Package to PyPI
- runs-on: ubuntu-20.04
+ runs-on: ubuntu-latest
+ permissions:
+ id-token: write
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- - name: Set up Python 3.10
+ - name: Set up Python 3.11
uses: actions/setup-python@v4
with:
- python-version: '3.10'
+ python-version: '3.11'
- name: ⬇️ Install Application
run: python -m pip install --upgrade pip && pip install --upgrade .
@@ -59,6 +61,4 @@ jobs:
- name: 📦 Publish Python Package to PyPI
if: startsWith(github.ref, 'refs/tags') || github.ref == 'refs/heads/master'
- uses: pypa/gh-action-pypi-publish@v1.6.4
- with:
- password: ${{ secrets.PYPI_API_KEY }}
+ uses: pypa/gh-action-pypi-publish@v1.8.14
diff --git a/documentation/assets/img/agents_demo.gif b/documentation/assets/img/agents_demo.gif
new file mode 100644
index 00000000..2669033e
Binary files /dev/null and b/documentation/assets/img/agents_demo.gif differ
diff --git a/documentation/assets/img/dream_house.png b/documentation/assets/img/dream_house.png
new file mode 100644
index 00000000..adfc9a37
Binary files /dev/null and b/documentation/assets/img/dream_house.png differ
diff --git a/documentation/assets/img/plants_i_got.png b/documentation/assets/img/plants_i_got.png
new file mode 100644
index 00000000..72e0d193
Binary files /dev/null and b/documentation/assets/img/plants_i_got.png differ
diff --git a/documentation/assets/img/using_khoj_for_studying.gif b/documentation/assets/img/using_khoj_for_studying.gif
new file mode 100644
index 00000000..898a3710
Binary files /dev/null and b/documentation/assets/img/using_khoj_for_studying.gif differ
diff --git a/documentation/docs/features/agents.md b/documentation/docs/features/agents.md
new file mode 100644
index 00000000..249f5bde
--- /dev/null
+++ b/documentation/docs/features/agents.md
@@ -0,0 +1,15 @@
+---
+sidebar_position: 4
+---
+
+# Agents
+
+You can use agents to set up custom system prompts with Khoj. The server host can set up their own agents, which are accessible to all users. You can see ours at https://app.khoj.dev/agents.
+
+![Demo](/img/agents_demo.gif)
+
+## Creating an Agent (Self-Hosted)
+
+Go to `server/admin/database/agent` on your server and click `Add Agent` to create a new one. You have to set it to `public` for it to be accessible to all users on your server. To limit access to a specific user, leave the `public` flag unset and add the user in the `Creator` field.
+
+Set your custom prompt in the `personality` field.
diff --git a/documentation/docs/features/all_features.md b/documentation/docs/features/all_features.md
index c482805b..3d3b8941 100644
--- a/documentation/docs/features/all_features.md
+++ b/documentation/docs/features/all_features.md
@@ -2,7 +2,7 @@
sidebar_position: 1
---
-# Features
+# Overview
Khoj supports a variety of features, including search and chat with a wide range of data sources and interfaces.
diff --git a/documentation/docs/features/chat.md b/documentation/docs/features/chat.md
index f6581746..4c9cdcc6 100644
--- a/documentation/docs/features/chat.md
+++ b/documentation/docs/features/chat.md
@@ -14,16 +14,16 @@ You can configure Khoj to chat with you about anything. When relevant, it'll use
### Setup (Self-Hosting)
#### Offline Chat
-Offline chat stays completely private and works without internet using open-source models.
+Offline chat stays completely private and can work without internet using open-source models.
> **System Requirements**:
> - Minimum 8 GB RAM. Recommend **16Gb VRAM**
> - Minimum **5 GB of Disk** available
> - A CPU supporting [AVX or AVX2 instructions](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions) is required
-> - A Mac M1+ or [Vulcan supported GPU](https://vulkan.gpuinfo.org/) should significantly speed up chat response times
+> - An NVIDIA GPU, AMD GPU, or Mac M1+ machine will significantly speed up chat response times
1. Open your [Khoj offline settings](http://localhost:42110/server/admin/database/offlinechatprocessorconversationconfig/) and click *Enable* on the Offline Chat configuration.
-2. Open your [Chat model options](http://localhost:42110/server/admin/database/chatmodeloptions/) and add a new option for the offline chat model you want to use. Make sure to use `Offline` as its type. We currently only support offline models that use the [Llama chat prompt](https://replicate.com/blog/how-to-prompt-llama#wrap-user-input-with-inst-inst-tags) format. We recommend using `mistral-7b-instruct-v0.1.Q4_0.gguf`.
+2. Open your [Chat model options settings](http://localhost:42110/server/admin/database/chatmodeloptions/) and add any [GGUF chat model](https://huggingface.co/models?library=gguf) to use for offline chat. Make sure to use `Offline` as its type. For a balanced chat model that runs well on standard consumer hardware, we recommend using [Hermes-2-Pro-Mistral-7B by NousResearch](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B-GGUF) by default.
:::tip[Note]
diff --git a/documentation/docs/features/image_generation.md b/documentation/docs/features/image_generation.md
new file mode 100644
index 00000000..da6af1ac
--- /dev/null
+++ b/documentation/docs/features/image_generation.md
@@ -0,0 +1,15 @@
+# Image Generation
+You can use Khoj to generate images from text prompts. You can dig deeper into the details of our image generation flow in this blog post: https://blog.khoj.dev/posts/how-khoj-generates-images/.
+
+To generate images, just include an image generation instruction in your prompt to Khoj. Khoj will automatically detect the image generation intent, augment your generation prompt, and then create the image. Here are some examples:
+| Prompt | Image |
+| --- | --- |
+| Paint a picture of the plants I got last month, pixar-animation | ![plants](/img/plants_i_got.png) |
+| Create a picture of my dream house, based on my interests | ![house](/img/dream_house.png) |
+
+
+## Setup (Self-Hosting)
+
+Right now, we only support integration with OpenAI's DALL-E. You need an OpenAI API key to use this feature. Here's how to set it up:
+1. Set up your OpenAI API key. See instructions [here](/get-started/setup#2-configure).
+2. Create a text to image config at http://localhost:42110/server/admin/database/texttoimagemodelconfig/. We recommend the value `dall-e-3`.
diff --git a/documentation/docs/features/voice_chat.md b/documentation/docs/features/voice_chat.md
new file mode 100644
index 00000000..370a1737
--- /dev/null
+++ b/documentation/docs/features/voice_chat.md
@@ -0,0 +1,14 @@
+# Voice
+
+You can talk to Khoj using your voice. Khoj will respond to your queries using the same models as the chat feature. Voice chat is available on the Web, Desktop, and Obsidian apps. Click the little mic icon to record your voice message. Khoj will transcribe it and show you the text, which you can edit before sending, if required. Try it at https://app.khoj.dev/.
+
+:::info[Voice Response]
+Khoj doesn't yet respond with voice, but it will send back a text response. Let us know if you're interested in voice responses at team at khoj.dev.
+:::
+
+## Setup (Self-Hosting)
+
+Voice chat will automatically be configured when you initialize the application. The default configuration will run locally. If you want to use the OpenAI whisper API for voice chat, you can set it up by following these steps:
+
+1. Set up your OpenAI API key. See instructions [here](/get-started/setup#2-configure).
+2. Create a new configuration at http://localhost:42110/server/admin/database/speechtotextmodeloptions/. We recommend the value `whisper-1` and model type `Openai`.
diff --git a/documentation/docs/get-started/overview.md b/documentation/docs/get-started/overview.md
index 4b571226..b0d2a51c 100644
--- a/documentation/docs/get-started/overview.md
+++ b/documentation/docs/get-started/overview.md
@@ -37,9 +37,7 @@ Welcome to the Khoj Docs! This is the best place to get setup and explore Khoj's
- [Read these instructions](/get-started/setup) to self-host a private instance of Khoj
## At a Glance
-
-
-
+![demo_chat](/img/using_khoj_for_studying.gif)
#### [Search](/features/search)
- **Natural**: Use natural language queries to quickly find relevant notes and documents.
diff --git a/documentation/docs/get-started/setup.mdx b/documentation/docs/get-started/setup.mdx
index 3b2b8db5..4aa2f960 100644
--- a/documentation/docs/get-started/setup.mdx
+++ b/documentation/docs/get-started/setup.mdx
@@ -25,6 +25,10 @@ These are the general setup instructions for self-hosted Khoj.
For Installation, you can either use Docker or install the Khoj server locally.
+:::info[Offline Model + GPU]
+If you want to use the offline chat model and you have a GPU, you should use Installation Option 2 - local setup via the Python package directly. Our Docker image doesn't currently support running the offline chat model on GPU, so inference will be very slow.
+:::
+
### Installation Option 1 (Docker)
#### Prerequisites
@@ -97,6 +101,7 @@ sudo -u postgres createdb khoj --password
##### Local Server Setup
- *Make sure [python](https://realpython.com/installing-python/) and [pip](https://pip.pypa.io/en/stable/installation/) are installed on your machine*
+- Check [llama-cpp-python setup](https://python.langchain.com/docs/integrations/llms/llamacpp#installation) if you hit any llama-cpp issues with the installation
Run the following command in your terminal to install the Khoj backend.
@@ -104,17 +109,36 @@ Run the following command in your terminal to install the Khoj backend.
```shell
+# ARM/M1+ Machines
+MAKE_ARGS="-DLLAMA_METAL=on" python -m pip install khoj-assistant
+
+# Intel Machines
python -m pip install khoj-assistant
```
```shell
- py -m pip install khoj-assistant
+ # 1. (Optional) To use NVIDIA (CUDA) GPU
+ $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
+ # 1. (Optional) To use AMD (ROCm) GPU
+ $env:CMAKE_ARGS = "-DLLAMA_HIPBLAS=on"
+ # 1. (Optional) To use Vulkan GPU
+ $env:CMAKE_ARGS = "-DLLAMA_VULKAN=on"
+
+ # 2. Install Khoj
+ py -m pip install khoj-assistant
```
```shell
-python -m pip install khoj-assistant
+ # CPU
+ python -m pip install khoj-assistant
+ # NVIDIA (CUDA) GPU
+ CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 python -m pip install khoj-assistant
+ # AMD (ROCm) GPU
+ CMAKE_ARGS="-DLLAMA_HIPBLAS=on" FORCE_CMAKE=1 python -m pip install khoj-assistant
+ # Vulkan GPU
+ CMAKE_ARGS="-DLLAMA_VULKAN=on" FORCE_CMAKE=1 python -m pip install khoj-assistant
```
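If you prefer to script the install, the backend selection above boils down to setting build environment variables before invoking pip. This is a hypothetical helper, not part of Khoj; the actual install line to run stays commented out so you can adapt the `CMAKE_ARGS` value to your hardware:

```python
import os
import subprocess
import sys

# Build an environment that tells pip to compile llama-cpp-python
# with the NVIDIA (CUDA) backend enabled. Swap the CMAKE_ARGS value
# for -DLLAMA_HIPBLAS=on (AMD) or -DLLAMA_VULKAN=on (Vulkan) as needed.
env = dict(os.environ, CMAKE_ARGS="-DLLAMA_CUBLAS=on", FORCE_CMAKE="1")

# Uncomment to actually run the install with that environment:
# subprocess.run([sys.executable, "-m", "pip", "install", "khoj-assistant"], env=env, check=True)

print(env["CMAKE_ARGS"], env["FORCE_CMAKE"])
```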
@@ -163,7 +187,31 @@ Khoj should now be running at http://localhost:42110. You can see the web UI in
Note: To start Khoj automatically in the background use [Task scheduler](https://www.windowscentral.com/how-create-automated-task-using-task-scheduler-windows-10) on Windows or [Cron](https://en.wikipedia.org/wiki/Cron) on Mac, Linux (e.g with `@reboot khoj`)
-### 2. Download the desktop client
+### Setup Notes
+
+You can also use Khoj with a custom domain. To do so, set the `KHOJ_DOMAIN` environment variable to your domain (e.g., `export KHOJ_DOMAIN=my-khoj-domain.com` or add it to your `docker-compose.yml`). By default, the Khoj server you set up will not be accessible outside of `localhost` or `127.0.0.1`.
+
+:::warning[Must use an SSL certificate]
+If you're using a custom domain, you must use an SSL certificate. You can use [Let's Encrypt](https://letsencrypt.org/) to get a free SSL certificate for your domain.
+:::
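As a sketch of the behavior described above, assuming the server falls back to localhost when `KHOJ_DOMAIN` is unset (a simplified stand-in, not the actual Khoj server code):

```python
def khoj_base_url(environ: dict) -> str:
    # Hypothetical helper: derive the server URL from KHOJ_DOMAIN.
    # Without it, the server is only reachable on localhost; with it,
    # the custom domain must be served over SSL.
    domain = environ.get("KHOJ_DOMAIN")
    return f"https://{domain}" if domain else "http://localhost:42110"

print(khoj_base_url({}))                                     # local default
print(khoj_base_url({"KHOJ_DOMAIN": "my-khoj-domain.com"}))  # custom domain
```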
+
+### 2. Configure
+1. Go to http://localhost:42110/server/admin and log in with your admin credentials.
+ 1. Go to [OpenAI settings](http://localhost:42110/server/admin/database/openaiprocessorconversationconfig/) in the server admin settings to add an OpenAI processor conversation config. This is where you set your API key. Alternatively, you can go to the [offline chat settings](http://localhost:42110/server/admin/database/offlinechatprocessorconversationconfig/) and simply create a new setting with `Enabled` set to `True`.
+ 2. Go to the ChatModelOptions if you want to add additional models for chat.
+ - Set the `chat-model` field to a supported chat model[^1] of your choice. For example, you can specify `gpt-4-turbo-preview` if you're using OpenAI or `NousResearch/Hermes-2-Pro-Mistral-7B-GGUF` if you're using offline chat.
+ - Make sure to set the `model-type` field to `OpenAI` or `Offline` respectively.
+        - The `tokenizer` and `max-prompt-size` fields are optional. Set them only when using a non-standard model (i.e., not a mistral, gpt, or llama2 model).
+1. Select files and folders to index [using the desktop client](/get-started/setup#3-download-the-desktop-client-optional). When you click 'Save', the files will be sent to your server for indexing.
+ - Select Notion workspaces and Github repositories to index using the web interface.
+
+[^1]: Khoj, by default, can use [OpenAI GPT3.5+ chat models](https://platform.openai.com/docs/models/overview) or [GGUF chat models](https://huggingface.co/models?library=gguf). See [this section](/miscellaneous/advanced#use-openai-compatible-llm-api-server-self-hosting) to use non-standard chat models
+
+:::tip[Note]
+Using Safari on Mac? You might not be able to login to the admin panel. Try using Chrome or Firefox instead.
+:::
+
+### 3. Download the desktop client (Optional)
You can use our desktop executables to select file paths and folders to index. You can simply select the folders or files, and they'll be automatically uploaded to the server. Once you specify a file or file path, you don't need to update the configuration again; it will grab any data diffs dynamically over time.
@@ -171,22 +219,6 @@ You can use our desktop executables to select file paths and folders to index. Y
To use the desktop client, you need to go to your Khoj server's settings page (http://localhost:42110/config) and copy the API key. Then, paste it into the desktop client's settings page. Once you've done that, you can select files and folders to index. Set the desktop client settings to use `http://127.0.0.1:42110` as the host URL.
-### 3. Configure
-1. Go to http://localhost:42110/server/admin and login with your admin credentials.
- 1. Go to [OpenAI settings](http://localhost:42110/server/admin/database/openaiprocessorconversationconfig/) in the server admin settings to add an OpenAI processor conversation config. This is where you set your API key. Alternatively, you can go to the [offline chat settings](http://localhost:42110/server/admin/database/offlinechatprocessorconversationconfig/) and simply create a new setting with `Enabled` set to `True`.
- 2. Go to the ChatModelOptions if you want to add additional models for chat.
- - Set the `chat-model` field to a supported chat model[^1] of your choice. For example, you can specify `gpt-4-turbo-preview` if you're using OpenAI or `mistral-7b-instruct-v0.1.Q4_0.gguf` if you're using offline chat.
- - Make sure to set the `model-type` field to `OpenAI` or `Offline` respectively.
- - The `tokenizer` and `max-prompt-size` fields are optional. Set them only when using a non-standard model (i.e not mistral, gpt or llama2 model).
-1. Select files and folders to index [using the desktop client](/get-started/setup#2-download-the-desktop-client). When you click 'Save', the files will be sent to your server for indexing.
- - Select Notion workspaces and Github repositories to index using the web interface.
-
-[^1]: Khoj, by default, can use [OpenAI GPT3.5+ chat models](https://platform.openai.com/docs/models/overview) or [GPT4All chat models that follow Llama2 Prompt Template](https://github.com/nomic-ai/gpt4all/blob/main/gpt4all-chat/metadata/models2.json). See [this section](/miscellaneous/advanced#use-openai-compatible-llm-api-server-self-hosting) to use non-standard chat models
-
-:::tip[Note]
-Using Safari on Mac? You might not be able to login to the admin panel. Try using Chrome or Firefox instead.
-:::
-
### 4. Install Client Plugins (Optional)
Khoj exposes a web interface to search, chat and configure by default.
diff --git a/documentation/docs/miscellaneous/credits.md b/documentation/docs/miscellaneous/credits.md
index 6f77ed41..d1c3c90c 100644
--- a/documentation/docs/miscellaneous/credits.md
+++ b/documentation/docs/miscellaneous/credits.md
@@ -10,4 +10,4 @@ Many Open Source projects are used to power Khoj. Here's a few of them:
- Charles Cave for [OrgNode Parser](http://members.optusnet.com.au/~charles57/GTD/orgnode.html)
- [Org.js](https://mooz.github.io/org-js/) to render Org-mode results on the Web interface
- [Markdown-it](https://github.com/markdown-it/markdown-it) to render Markdown results on the Web interface
-- [GPT4All](https://github.com/nomic-ai/gpt4all) to chat with local LLM
+- [Llama.cpp](https://github.com/ggerganov/llama.cpp) to chat with local LLM
diff --git a/gunicorn-config.py b/gunicorn-config.py
index bfed49e7..ea382346 100644
--- a/gunicorn-config.py
+++ b/gunicorn-config.py
@@ -1,10 +1,10 @@
import multiprocessing
bind = "0.0.0.0:42110"
-workers = 8
+workers = 1
worker_class = "uvicorn.workers.UvicornWorker"
timeout = 120
keep_alive = 60
-accesslog = "access.log"
-errorlog = "error.log"
+accesslog = "-"
+errorlog = "-"
loglevel = "debug"
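In gunicorn, `"-"` routes the access and error logs to stdout/stderr, which suits container deployments. If you later want to size the worker pool from the machine instead of pinning `workers = 1`, a common (hypothetical) variant of this config is:

```python
import multiprocessing

# Derive the gunicorn worker count from available CPU cores,
# keeping at least one worker on small machines.
workers = max(1, multiprocessing.cpu_count() // 2)
print(workers)
```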
diff --git a/manifest.json b/manifest.json
index a4bdc42c..11e57675 100644
--- a/manifest.json
+++ b/manifest.json
@@ -1,7 +1,7 @@
{
"id": "khoj",
"name": "Khoj",
- "version": "1.7.0",
+ "version": "1.8.0",
"minAppVersion": "0.15.0",
"description": "An AI copilot for your Second Brain",
"author": "Khoj Inc.",
diff --git a/prod.Dockerfile b/prod.Dockerfile
index 413835d0..0da5363a 100644
--- a/prod.Dockerfile
+++ b/prod.Dockerfile
@@ -1,12 +1,9 @@
-# Use Nvidia's latest Ubuntu 22.04 image as the base image
-FROM nvidia/cuda:12.2.0-devel-ubuntu22.04
+FROM ubuntu:jammy
LABEL org.opencontainers.image.source https://github.com/khoj-ai/khoj
# Install System Dependencies
RUN apt update -y && apt -y install python3-pip libsqlite3-0 ffmpeg libsm6 libxext6
-# Install Optional Dependencies
-RUN apt install vim -y
WORKDIR /app
diff --git a/pyproject.toml b/pyproject.toml
index 393d1f10..8a258580 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -7,7 +7,7 @@ name = "khoj-assistant"
description = "An AI copilot for your Second Brain"
readme = "README.md"
license = "AGPL-3.0-or-later"
-requires-python = ">=3.8"
+requires-python = ">=3.9"
authors = [
{ name = "Debanjum Singh Solanky, Saba Imran" },
]
@@ -23,8 +23,8 @@ keywords = [
"pdf",
]
classifiers = [
- "Development Status :: 4 - Beta",
- "License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
+ "Development Status :: 5 - Production/Stable",
+ "License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
@@ -33,7 +33,7 @@ classifiers = [
"Topic :: Internet :: WWW/HTTP :: Indexing/Search",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Human Machine Interfaces",
- "Topic :: Text Processing :: Linguistic",
+ "Intended Audience :: Information Technology",
]
dependencies = [
"beautifulsoup4 ~= 4.12.3",
@@ -62,8 +62,7 @@ dependencies = [
"pymupdf >= 1.23.5",
"django == 4.2.10",
"authlib == 1.2.1",
- "gpt4all == 2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'",
- "gpt4all == 2.1.0; platform_system == 'Windows' or platform_system == 'Darwin'",
+ "llama-cpp-python == 0.2.56",
"itsdangerous == 2.1.2",
"httpx == 0.25.0",
"pgvector == 0.2.4",
diff --git a/src/interface/desktop/chat.html b/src/interface/desktop/chat.html
index 94cde782..f37ae562 100644
--- a/src/interface/desktop/chat.html
+++ b/src/interface/desktop/chat.html
@@ -87,7 +87,7 @@
function generateOnlineReference(reference, index) {
// Generate HTML for Chat Reference
- let title = reference.title;
+ let title = reference.title || reference.link;
let link = reference.link;
let snippet = reference.snippet;
let question = reference.question;
@@ -191,6 +191,15 @@
referenceSection.appendChild(polishedReference);
}
}
+
+ if (onlineReference.webpages && onlineReference.webpages.length > 0) {
+ numOnlineReferences += onlineReference.webpages.length;
+ for (let index in onlineReference.webpages) {
+ let reference = onlineReference.webpages[index];
+ let polishedReference = generateOnlineReference(reference, index);
+ referenceSection.appendChild(polishedReference);
+ }
+ }
}
return numOnlineReferences;
diff --git a/src/interface/desktop/package.json b/src/interface/desktop/package.json
index 75de44c9..bb1a622e 100644
--- a/src/interface/desktop/package.json
+++ b/src/interface/desktop/package.json
@@ -1,6 +1,6 @@
{
"name": "Khoj",
- "version": "1.7.0",
+ "version": "1.8.0",
"description": "An AI copilot for your Second Brain",
"author": "Saba Imran, Debanjum Singh Solanky ",
"license": "GPL-3.0-or-later",
diff --git a/src/interface/emacs/khoj.el b/src/interface/emacs/khoj.el
index a5e41868..c08d8eea 100644
--- a/src/interface/emacs/khoj.el
+++ b/src/interface/emacs/khoj.el
@@ -6,7 +6,7 @@
;; Saba Imran
;; Description: An AI copilot for your Second Brain
;; Keywords: search, chat, org-mode, outlines, markdown, pdf, image
-;; Version: 1.7.0
+;; Version: 1.8.0
;; Package-Requires: ((emacs "27.1") (transient "0.3.0") (dash "2.19.1"))
;; URL: https://github.com/khoj-ai/khoj/tree/master/src/interface/emacs
diff --git a/src/interface/obsidian/manifest.json b/src/interface/obsidian/manifest.json
index a4bdc42c..11e57675 100644
--- a/src/interface/obsidian/manifest.json
+++ b/src/interface/obsidian/manifest.json
@@ -1,7 +1,7 @@
{
"id": "khoj",
"name": "Khoj",
- "version": "1.7.0",
+ "version": "1.8.0",
"minAppVersion": "0.15.0",
"description": "An AI copilot for your Second Brain",
"author": "Khoj Inc.",
diff --git a/src/interface/obsidian/package.json b/src/interface/obsidian/package.json
index 66d4a5c5..aec31710 100644
--- a/src/interface/obsidian/package.json
+++ b/src/interface/obsidian/package.json
@@ -1,6 +1,6 @@
{
"name": "Khoj",
- "version": "1.7.0",
+ "version": "1.8.0",
"description": "An AI copilot for your Second Brain",
"author": "Debanjum Singh Solanky, Saba Imran ",
"license": "GPL-3.0-or-later",
diff --git a/src/interface/obsidian/versions.json b/src/interface/obsidian/versions.json
index 150f851e..10f042ef 100644
--- a/src/interface/obsidian/versions.json
+++ b/src/interface/obsidian/versions.json
@@ -39,5 +39,6 @@
"1.6.0": "0.15.0",
"1.6.1": "0.15.0",
"1.6.2": "0.15.0",
- "1.7.0": "0.15.0"
+ "1.7.0": "0.15.0",
+ "1.8.0": "0.15.0"
}
diff --git a/src/khoj/database/adapters/__init__.py b/src/khoj/database/adapters/__init__.py
index b939b38f..a9e246a6 100644
--- a/src/khoj/database/adapters/__init__.py
+++ b/src/khoj/database/adapters/__init__.py
@@ -43,7 +43,7 @@ from khoj.search_filter.date_filter import DateFilter
from khoj.search_filter.file_filter import FileFilter
from khoj.search_filter.word_filter import WordFilter
from khoj.utils import state
-from khoj.utils.config import GPT4AllProcessorModel
+from khoj.utils.config import OfflineChatProcessorModel
from khoj.utils.helpers import generate_random_name, is_none_or_empty
@@ -399,32 +399,26 @@ class AgentAdapters:
DEFAULT_AGENT_SLUG = "khoj"
@staticmethod
- async def aget_agent_by_id(agent_id: int):
- return await Agent.objects.filter(id=agent_id).afirst()
-
- @staticmethod
- async def aget_agent_by_slug(agent_slug: str):
- return await Agent.objects.filter(slug__iexact=agent_slug.lower()).afirst()
+ async def aget_agent_by_slug(agent_slug: str, user: KhojUser):
+ return await Agent.objects.filter(
+ (Q(slug__iexact=agent_slug.lower())) & (Q(public=True) | Q(creator=user))
+ ).afirst()
@staticmethod
def get_agent_by_slug(slug: str, user: KhojUser = None):
- agent = Agent.objects.filter(slug=slug).first()
- # Check if agent is public or created by the user
- if agent and (agent.public or agent.creator == user):
- return agent
- return None
+ if user:
+ return Agent.objects.filter((Q(slug__iexact=slug.lower())) & (Q(public=True) | Q(creator=user))).first()
+ return Agent.objects.filter(slug__iexact=slug.lower(), public=True).first()
@staticmethod
def get_all_accessible_agents(user: KhojUser = None):
- return Agent.objects.filter(Q(public=True) | Q(creator=user)).distinct().order_by("created_at")
+ if user:
+ return Agent.objects.filter(Q(public=True) | Q(creator=user)).distinct().order_by("created_at")
+ return Agent.objects.filter(public=True).order_by("created_at")
@staticmethod
async def aget_all_accessible_agents(user: KhojUser = None) -> List[Agent]:
- get_all_accessible_agents = sync_to_async(
- lambda: Agent.objects.filter(Q(public=True) | Q(creator=user)).distinct().order_by("created_at").all(),
- thread_sensitive=True,
- )
- agents = await get_all_accessible_agents()
+ agents = await sync_to_async(AgentAdapters.get_all_accessible_agents)(user)
return await sync_to_async(list)(agents)
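The visibility rule the queries above encode — an agent is accessible when it is public or was created by the requesting user — can be sketched over plain dicts (illustrative data, not the Django models):

```python
def accessible_agents(agents: list, user) -> list:
    # Mirror of the Q(public=True) | Q(creator=user) filter.
    return [a for a in agents if a["public"] or a["creator"] == user]

agents = [
    {"name": "khoj", "public": True, "creator": None},    # admin-managed default
    {"name": "tutor", "public": False, "creator": "alice"},
    {"name": "coach", "public": False, "creator": "bob"},
]
print([a["name"] for a in accessible_agents(agents, "alice")])
```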
@staticmethod
@@ -444,26 +438,29 @@ class AgentAdapters:
default_conversation_config = ConversationAdapters.get_default_conversation_config()
default_personality = prompts.personality.format(current_date="placeholder")
- if Agent.objects.filter(name=AgentAdapters.DEFAULT_AGENT_NAME).exists():
- agent = Agent.objects.filter(name=AgentAdapters.DEFAULT_AGENT_NAME).first()
- agent.tuning = default_personality
+ agent = Agent.objects.filter(name=AgentAdapters.DEFAULT_AGENT_NAME).first()
+
+ if agent:
+ agent.personality = default_personality
agent.chat_model = default_conversation_config
agent.slug = AgentAdapters.DEFAULT_AGENT_SLUG
agent.name = AgentAdapters.DEFAULT_AGENT_NAME
agent.save()
- return agent
+ else:
+ # The default agent is public and managed by the admin. It's handled a little differently than other agents.
+ agent = Agent.objects.create(
+ name=AgentAdapters.DEFAULT_AGENT_NAME,
+ public=True,
+ managed_by_admin=True,
+ chat_model=default_conversation_config,
+ personality=default_personality,
+ tools=["*"],
+ avatar=AgentAdapters.DEFAULT_AGENT_AVATAR,
+ slug=AgentAdapters.DEFAULT_AGENT_SLUG,
+ )
+ Conversation.objects.filter(agent=None).update(agent=agent)
- # The default agent is public and managed by the admin. It's handled a little differently than other agents.
- return Agent.objects.create(
- name=AgentAdapters.DEFAULT_AGENT_NAME,
- public=True,
- managed_by_admin=True,
- chat_model=default_conversation_config,
- tuning=default_personality,
- tools=["*"],
- avatar=AgentAdapters.DEFAULT_AGENT_AVATAR,
- slug=AgentAdapters.DEFAULT_AGENT_SLUG,
- )
+ return agent
@staticmethod
async def aget_default_agent():
@@ -482,9 +479,10 @@ class ConversationAdapters:
.first()
)
else:
+ agent = AgentAdapters.get_default_agent()
conversation = (
Conversation.objects.filter(user=user, client=client_application).order_by("-updated_at").first()
- ) or Conversation.objects.create(user=user, client=client_application)
+ ) or Conversation.objects.create(user=user, client=client_application, agent=agent)
return conversation
@@ -514,11 +512,12 @@ class ConversationAdapters:
user: KhojUser, client_application: ClientApplication = None, agent_slug: str = None
):
if agent_slug:
- agent = await AgentAdapters.aget_agent_by_slug(agent_slug)
+ agent = await AgentAdapters.aget_agent_by_slug(agent_slug, user)
if agent is None:
- raise HTTPException(status_code=400, detail="Invalid agent id")
+ raise HTTPException(status_code=400, detail="No such agent currently exists.")
return await Conversation.objects.acreate(user=user, client=client_application, agent=agent)
- return await Conversation.objects.acreate(user=user, client=client_application)
+ agent = await AgentAdapters.aget_default_agent()
+ return await Conversation.objects.acreate(user=user, client=client_application, agent=agent)
@staticmethod
async def aget_conversation_by_user(
@@ -706,8 +705,8 @@ class ConversationAdapters:
conversation_config = ConversationAdapters.get_default_conversation_config()
if offline_chat_config and offline_chat_config.enabled and conversation_config.model_type == "offline":
- if state.gpt4all_processor_config is None or state.gpt4all_processor_config.loaded_model is None:
- state.gpt4all_processor_config = GPT4AllProcessorModel(conversation_config.chat_model)
+ if state.offline_chat_processor_config is None or state.offline_chat_processor_config.loaded_model is None:
+ state.offline_chat_processor_config = OfflineChatProcessorModel(conversation_config.chat_model)
return conversation_config
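The code above loads the offline model once and reuses it across requests. A minimal sketch of that lazy initialization, with a stand-in loader instead of the real llama-cpp model load:

```python
class OfflineChatProcessorModel:
    load_count = 0  # track how often the expensive load runs

    def __init__(self, chat_model: str):
        self.chat_model = chat_model
        self.loaded_model = object()  # stand-in for the real GGUF model load
        OfflineChatProcessorModel.load_count += 1

_state = {"processor": None}

def get_processor(chat_model: str) -> OfflineChatProcessorModel:
    # Only construct (and load) the model the first time it is needed.
    if _state["processor"] is None or _state["processor"].loaded_model is None:
        _state["processor"] = OfflineChatProcessorModel(chat_model)
    return _state["processor"]
```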
diff --git a/src/khoj/database/migrations/0031_agent_conversation_agent.py b/src/khoj/database/migrations/0031_agent_conversation_agent.py
index e3742dbc..1d08a118 100644
--- a/src/khoj/database/migrations/0031_agent_conversation_agent.py
+++ b/src/khoj/database/migrations/0031_agent_conversation_agent.py
@@ -23,7 +23,7 @@ class Migration(migrations.Migration):
("tools", models.JSONField(default=list)),
("public", models.BooleanField(default=False)),
("managed_by_admin", models.BooleanField(default=False)),
- ("slug", models.CharField(blank=True, default=None, max_length=200, null=True)),
+ ("slug", models.CharField(max_length=200)),
(
"chat_model",
models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to="database.chatmodeloptions"),
diff --git a/src/khoj/database/migrations/0031_alter_googleuser_locale.py b/src/khoj/database/migrations/0031_alter_googleuser_locale.py
index 99c4573a..a5b4cdab 100644
--- a/src/khoj/database/migrations/0031_alter_googleuser_locale.py
+++ b/src/khoj/database/migrations/0031_alter_googleuser_locale.py
@@ -3,6 +3,18 @@
from django.db import migrations, models
+def set_default_locale(apps, schema_editor):
+ return
+
+
+def reverse_set_default_locale(apps, schema_editor):
+ GoogleUser = apps.get_model("database", "GoogleUser")
+ for user in GoogleUser.objects.all():
+ if not user.locale:
+ user.locale = "en"
+ user.save()
+
+
class Migration(migrations.Migration):
dependencies = [
("database", "0030_conversation_slug_and_title"),
@@ -14,4 +26,5 @@ class Migration(migrations.Migration):
name="locale",
field=models.CharField(blank=True, default=None, max_length=200, null=True),
),
+ migrations.RunPython(set_default_locale, reverse_set_default_locale),
]
diff --git a/src/khoj/database/migrations/0033_rename_tuning_agent_personality.py b/src/khoj/database/migrations/0033_rename_tuning_agent_personality.py
new file mode 100644
index 00000000..089c86c5
--- /dev/null
+++ b/src/khoj/database/migrations/0033_rename_tuning_agent_personality.py
@@ -0,0 +1,17 @@
+# Generated by Django 4.2.10 on 2024-03-23 16:01
+
+from django.db import migrations
+
+
+class Migration(migrations.Migration):
+ dependencies = [
+ ("database", "0032_merge_20240322_0427"),
+ ]
+
+ operations = [
+ migrations.RenameField(
+ model_name="agent",
+ old_name="tuning",
+ new_name="personality",
+ ),
+ ]
diff --git a/src/khoj/database/models/__init__.py b/src/khoj/database/models/__init__.py
index 3d9cdfc6..cff3f065 100644
--- a/src/khoj/database/models/__init__.py
+++ b/src/khoj/database/models/__init__.py
@@ -80,20 +80,22 @@ class ChatModelOptions(BaseModel):
max_prompt_size = models.IntegerField(default=None, null=True, blank=True)
tokenizer = models.CharField(max_length=200, default=None, null=True, blank=True)
- chat_model = models.CharField(max_length=200, default="mistral-7b-instruct-v0.1.Q4_0.gguf")
+ chat_model = models.CharField(max_length=200, default="NousResearch/Hermes-2-Pro-Mistral-7B-GGUF")
model_type = models.CharField(max_length=200, choices=ModelType.choices, default=ModelType.OFFLINE)
class Agent(BaseModel):
- creator = models.ForeignKey(KhojUser, on_delete=models.CASCADE, default=None, null=True, blank=True)
+ creator = models.ForeignKey(
+ KhojUser, on_delete=models.CASCADE, default=None, null=True, blank=True
+ ) # Creator will only be null when the agents are managed by admin
name = models.CharField(max_length=200)
- tuning = models.TextField()
+ personality = models.TextField()
avatar = models.URLField(max_length=400, default=None, null=True, blank=True)
tools = models.JSONField(default=list) # List of tools the agent has access to, like online search or notes search
public = models.BooleanField(default=False)
managed_by_admin = models.BooleanField(default=False)
chat_model = models.ForeignKey(ChatModelOptions, on_delete=models.CASCADE)
- slug = models.CharField(max_length=200, default=None, null=True, blank=True)
+ slug = models.CharField(max_length=200)
@receiver(pre_save, sender=Agent)
@@ -108,7 +110,10 @@ def verify_agent(sender, instance, **kwargs):
slug = instance.name.lower().replace(" ", "-")
observed_random_numbers = set()
while Agent.objects.filter(slug=slug).exists():
- random_number = choice([i for i in range(0, 10000) if i not in observed_random_numbers])
+ try:
+ random_number = choice([i for i in range(0, 1000) if i not in observed_random_numbers])
+ except IndexError:
+ raise ValidationError("Unable to generate a unique slug for the Agent. Please try again later.")
observed_random_numbers.add(random_number)
slug = f"{slug}-{random_number}"
instance.slug = slug
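The slug uniqueness loop above can be sketched standalone: it appends a random numeric suffix, retrying until the slug is free, and raises once the 0–999 suffix space is exhausted (a simplified mirror of the model hook, with a set in place of the database query):

```python
from random import choice

def unique_slug(name: str, existing: set) -> str:
    slug = name.lower().replace(" ", "-")
    observed = set()
    while slug in existing:
        candidates = [i for i in range(1000) if i not in observed]
        if not candidates:
            raise ValueError("Unable to generate a unique slug for the Agent.")
        number = choice(candidates)
        observed.add(number)
        slug = f"{slug}-{number}"
    return slug

print(unique_slug("My Agent", {"my-agent"}))
```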
diff --git a/src/khoj/interface/web/agent.html b/src/khoj/interface/web/agent.html
index 89ffe279..6e6a5ef7 100644
--- a/src/khoj/interface/web/agent.html
+++ b/src/khoj/interface/web/agent.html
@@ -24,9 +24,9 @@
-