diff --git a/documentation/docs/advanced/litellm.md b/documentation/docs/advanced/litellm.md
index ccc06170..9dfaaf34 100644
--- a/documentation/docs/advanced/litellm.md
+++ b/documentation/docs/advanced/litellm.md
@@ -31,7 +31,4 @@ Using LiteLLM with Khoj makes it possible to turn any LLM behind an API into you
    - Openai Config: ``
    - Max prompt size: `20000` (replace with the max prompt size of your model)
    - Tokenizer: *Do not set for OpenAI, Mistral, Llama3 based models*
-5. Create a new [Server Chat Setting](http://localhost:42110/server/admin/database/serverchatsettings/add/) on your Khoj admin panel
-   - Default model: ``
-   - Summarizer model: ``
-6. Go to [your config](http://localhost:42110/settings) and select the model you just created in the chat model dropdown.
+5. Go to [your config](http://localhost:42110/settings) and select the model you just created in the chat model dropdown.
diff --git a/documentation/docs/advanced/lmstudio.md b/documentation/docs/advanced/lmstudio.md
index c08aeeec..5c5ab567 100644
--- a/documentation/docs/advanced/lmstudio.md
+++ b/documentation/docs/advanced/lmstudio.md
@@ -24,7 +24,4 @@ LM Studio can expose an [OpenAI API compatible server](https://lmstudio.ai/docs/
    - Openai Config: ``
    - Max prompt size: `20000` (replace with the max prompt size of your model)
    - Tokenizer: *Do not set for OpenAI, mistral, llama3 based models*
-5. Create a new [Server Chat Setting](http://localhost:42110/server/admin/database/serverchatsettings/add/) on your Khoj admin panel
-   - Default model: ``
-   - Summarizer model: ``
-6. Go to [your config](http://localhost:42110/settings) and select the model you just created in the chat model dropdown.
+5. Go to [your config](http://localhost:42110/settings) and select the model you just created in the chat model dropdown.
diff --git a/documentation/docs/advanced/ollama.md b/documentation/docs/advanced/ollama.md
index c65da0b8..7e90f767 100644
--- a/documentation/docs/advanced/ollama.md
+++ b/documentation/docs/advanced/ollama.md
@@ -28,9 +28,6 @@ Ollama exposes a local [OpenAI API compatible server](https://github.com/ollama/
    - Model Type: `Openai`
    - Openai Config: ``
    - Max prompt size: `20000` (replace with the max prompt size of your model)
-5. Create a new [Server Chat Setting](http://localhost:42110/server/admin/database/serverchatsettings/add/) on your Khoj admin panel
-   - Default model: ``
-   - Summarizer model: ``
-6. Go to [your config](http://localhost:42110/settings) and select the model you just created in the chat model dropdown.
+5. Go to [your config](http://localhost:42110/settings) and select the model you just created in the chat model dropdown.
 
-That's it! You should now be able to chat with your Ollama model from Khoj. If you want to add additional models running on Ollama, repeat step 6 for each model.
+That's it! You should now be able to chat with your Ollama model from Khoj. If you want to add additional models running on Ollama, repeat step 4 for each model.
diff --git a/documentation/docs/advanced/use-openai-proxy.md b/documentation/docs/advanced/use-openai-proxy.md
index 7e52020e..ec674767 100644
--- a/documentation/docs/advanced/use-openai-proxy.md
+++ b/documentation/docs/advanced/use-openai-proxy.md
@@ -31,7 +31,4 @@ For specific integrations, see our [Ollama](/advanced/ollama), [LMStudio](/advan
    - Openai Config: ``
    - Max prompt size: `2000` (replace with the max prompt size of your model)
    - Tokenizer: *Do not set for OpenAI, mistral, llama3 based models*
-4. Create a new [Server Chat Setting](http://localhost:42110/server/admin/database/serverchatsettings/add/) on your Khoj admin panel
-   - Default model: ``
-   - Summarizer model: ``
-5. Go to [your config](http://localhost:42110/settings) and select the model you just created in the chat model dropdown.
+4. Go to [your config](http://localhost:42110/settings) and select the model you just created in the chat model dropdown.
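
All four guides point Khoj at a local OpenAI API compatible endpoint, so a quick smoke test of that endpoint before creating the Chat Model Option can save a debugging round trip. Below is a minimal sketch using the `openai` Python client; the base URL, API key, and model name are assumptions (Ollama's defaults are shown), so substitute the values for whichever server you actually run:

```python
# Minimal sketch: verify an OpenAI API compatible server responds before
# pointing Khoj at it. Base URL and model name are assumptions -- shown here
# for Ollama's default endpoint with a pulled `llama3` model. LM Studio's
# default would be http://localhost:1234/v1 instead.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="not-needed",  # local servers typically ignore the key, but the client requires one
)

response = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```

If this prints a completion, the server is reachable, and the same base URL is what the Openai Config entry referenced in the steps above should point to.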