diff --git a/documentation/docs/advanced/use-openai-proxy.md b/documentation/docs/advanced/use-openai-proxy.md
index e67b5800..7e52020e 100644
--- a/documentation/docs/advanced/use-openai-proxy.md
+++ b/documentation/docs/advanced/use-openai-proxy.md
@@ -21,17 +21,17 @@ For specific integrations, see our [Ollama](/advanced/ollama), [LMStudio](/advan
 ## General Setup
 
 1. Start your preferred OpenAI API compatible app
-3. Create a new [OpenAI Processor Conversation Config](http://localhost:42110/server/admin/database/openaiprocessorconversationconfig/add) on your Khoj admin panel
-   - Name: `proxy-name`
+2. Create a new [OpenAI Processor Conversation Config](http://localhost:42110/server/admin/database/openaiprocessorconversationconfig/add) on your Khoj admin panel
+   - Name: `any name`
    - Api Key: `any string`
    - Api Base Url: **URL of your Openai Proxy API**
-4. Create a new [Chat Model Option](http://localhost:42110/server/admin/database/chatmodeloptions/add) on your Khoj admin panel.
+3. Create a new [Chat Model Option](http://localhost:42110/server/admin/database/chatmodeloptions/add) on your Khoj admin panel.
    - Name: `llama3` (replace with the name of your local model)
    - Model Type: `Openai`
-   - Openai Config: `<the proxy config you created in step 3>`
+   - Openai Config: `<the proxy config you created in step 2>`
    - Max prompt size: `2000` (replace with the max prompt size of your model)
    - Tokenizer: *Do not set for OpenAI, mistral, llama3 based models*
-5. Create a new [Server Chat Setting](http://localhost:42110/server/admin/database/serverchatsettings/add/) on your Khoj admin panel
-   - Default model: `<the chat model option you created in step 4>`
-   - Summarizer model: `<the chat model option you created in step 4>`
-6. Go to [your config](http://localhost:42110/settings) and select the model you just created in the chat model dropdown.
+4. Create a new [Server Chat Setting](http://localhost:42110/server/admin/database/serverchatsettings/add/) on your Khoj admin panel
+   - Default model: `<the chat model option you created in step 3>`
+   - Summarizer model: `<the chat model option you created in step 3>`
+5. Go to [your config](http://localhost:42110/settings) and select the model you just created in the chat model dropdown.
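+
+To sanity check that your app really speaks the OpenAI API before wiring it into Khoj, you can send it a test chat completion. A minimal sketch using the official `openai` Python client; the base URL `http://localhost:8080/v1` and the model name `llama3` are placeholders for your actual proxy URL and model:
+
+```python
+from openai import OpenAI
+
+# Point the client at your proxy instead of api.openai.com.
+# The API key can be any string if your proxy does not validate it.
+client = OpenAI(base_url="http://localhost:8080/v1", api_key="any string")
+
+response = client.chat.completions.create(
+    model="llama3",  # replace with the name of your local model
+    messages=[{"role": "user", "content": "Say hello"}],
+)
+print(response.choices[0].message.content)
+```
+
+If this prints a reply, the same Api Base Url and model name should work in the admin panel steps above.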
diff --git a/documentation/docs/get-started/setup.mdx b/documentation/docs/get-started/setup.mdx
index 44d2ba06..b1446f60 100644
--- a/documentation/docs/get-started/setup.mdx
+++ b/documentation/docs/get-started/setup.mdx
@@ -260,13 +260,13 @@ Using Ollama? See the [Ollama Integration](/advanced/ollama) section for more cu
 1. Create a new [OpenAI processor conversation config](http://localhost:42110/server/admin/database/openaiprocessorconversationconfig/add) in the server admin settings. This is kind of a misnomer, we know.
    - Add your [OpenAI API key](https://platform.openai.com/api-keys)
    - Give the configuration a friendly name like `OpenAI`
-   - (Optional) Set the API base URL. It is only relevant if you're using another OpenAI-compatible proxy server like [Ollama](/advanced/ollama) or [LMStudio](/advanced/lmstudio).
+   - (Optional) Set the API base URL. It is only relevant if you're using another OpenAI-compatible proxy server like [Ollama](/advanced/ollama) or [LMStudio](/advanced/lmstudio).<br />![example configuration for openai processor](/img/example_openai_processor_config.png)
 2. Create a new [chat model options](http://localhost:42110/server/admin/database/chatmodeloptions/add)
    - Set the `chat-model` field to an [OpenAI chat model](https://platform.openai.com/docs/models). Example: `gpt-4o`.
    - Make sure to set the `model-type` field to `OpenAI`.
    - If your model supports vision, set the `vision enabled` field to `true`. This is currently only supported for OpenAI models with vision capabilities.
-   - The `tokenizer` and `max-prompt-size` fields are optional. Set them only if you're sure of the tokenizer or token limit for the model you're using. Contact us if you're unsure what to do here.
+   - The `tokenizer` and `max-prompt-size` fields are optional. Set them only if you're sure of the tokenizer or token limit for the model you're using. Contact us if you're unsure what to do here.<br />![example configuration for chat model options](/img/example_chatmodel_option.png)
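+
+Before saving these settings, you can verify that your API key works and that the chat model id exists. A short sketch with the official `openai` Python client; it assumes your key is in the `OPENAI_API_KEY` environment variable and that you picked `gpt-4o` as the chat model:
+
+```python
+import os
+from openai import OpenAI
+
+# Uses the same key you added to the conversation config above.
+client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
+
+# Listing models is a cheap, read-only way to confirm the key is valid
+# and that the model id you plan to use is available to your account.
+model_ids = [model.id for model in client.models.list()]
+print("gpt-4o available:", "gpt-4o" in model_ids)
+```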