# Ollama
:::info
This is only helpful for self-hosted users. If you're using Khoj Cloud, you're limited to our first-party models.
:::

:::info
Khoj natively supports local LLMs available on HuggingFace in GGUF format. Using an OpenAI API proxy with Khoj may be useful for ease of setup, trying new models, or using commercial LLMs via API.
:::
Ollama allows you to run many popular open-source LLMs locally from your terminal. For folks comfortable with the terminal, Ollama's terminal-based flows can ease setup and management of chat models.

Ollama exposes a local OpenAI API compatible server. This makes it possible to use the chat models served by Ollama to create your personal AI agents with Khoj.
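For example, once Ollama is running, any OpenAI-style client can query it. Here is a quick sanity check with curl, assuming the default Ollama port (11434) and the `llama3` model used in the steps below:

```bash
# Send a chat request to Ollama's OpenAI-compatible endpoint.
# Swap "llama3" for whichever model you pulled with Ollama.
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

If this returns a chat completion, Khoj can use the same endpoint.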
## Setup
1. Setup Ollama: https://ollama.com/
2. Start your preferred model with Ollama. For example:
   ```bash
   ollama run llama3
   ```
3. Create a new OpenAI Processor Conversation Config on your Khoj admin panel
   - Name: `ollama`
   - Api Key: `any string`
   - Api Base Url: `http://localhost:11434/v1/` (default for Ollama; you can verify this with the check below)
4. Create a new Chat Model Option on your Khoj admin panel
   - Name: `llama3` (replace with the name of your local model)
   - Model Type: `Openai`
   - Openai Config: `<the ollama config you created in step 3>`
   - Max prompt size: `1000` (replace with the max prompt size of your model)
5. Create a new Server Chat Setting on your Khoj admin panel
   - Default model: `<name of chat model option you created in step 4>`
   - Summarizer model: `<name of chat model option you created in step 4>`
6. Go to your config and select the model you just created in the chat model dropdown.
That's it! You should now be able to chat with your Ollama model from Khoj. If you want to add additional models running on Ollama, repeat step 4 for each model.
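If chat responses fail, a quick way to debug is to confirm the Api Base Url from step 3 is reachable and that the model name from step 4 exactly matches one served by Ollama. A minimal check, assuming the default Ollama port:

```bash
# List the models Ollama serves via its OpenAI-compatible API.
# The "id" values returned must match the Chat Model Option name in Khoj.
curl http://localhost:11434/v1/models

# Equivalent check using the Ollama CLI
ollama list
```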