Only auto-load available chat models from the Ollama provider for now

Allowing models from any OpenAI proxy service makes the model list too unwieldy.
And a bunch of those services do not even support the model listing endpoint.
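
For context, Ollama exposes an OpenAI-compatible model listing endpoint (`GET /v1/models`) that auto-loading can query, while many generic proxies cannot be relied on to implement it. Below is a minimal sketch of such a probe, assuming the official `openai` Python client and Ollama's default local base URL; the function name is illustrative, not Khoj's actual code:

```python
# Hypothetical sketch: probe an OpenAI-compatible server for its model
# listing endpoint. Ollama serves GET /v1/models, but many proxies do not.
from openai import OpenAI

def list_available_chat_models(api_base_url: str, api_key: str = "ollama"):
    """Return model ids advertised by an OpenAI-compatible server,
    or None if the server does not implement the /v1/models endpoint."""
    client = OpenAI(base_url=api_base_url, api_key=api_key)
    try:
        return [model.id for model in client.models.list()]
    except Exception:
        # Many OpenAI proxy services reject or 404 this endpoint entirely,
        # which is why auto-loading is limited to Ollama for now.
        return None

# Example: Ollama's default local endpoint
# list_available_chat_models("http://localhost:11434/v1")
```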
Debanjum committed 2024-12-08 18:05:14 -08:00
parent 2c934162d3
commit 3fd8614a4b


@@ -235,6 +235,10 @@ def initialization(interactive: bool = True):
 # Get OpenAI configs with custom base URLs
 custom_configs = AiModelApi.objects.exclude(api_base_url__isnull=True)
+# Only enable for whitelisted provider names (i.e Ollama) for now
+# TODO: This is hacky. Will be replaced with more robust solution based on provider type enum
+custom_configs = custom_configs.filter(name__in=["Ollama"])
 for config in custom_configs:
     try:
         # Create OpenAI client with custom base URL
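
The hunk is truncated above. A hedged sketch of how the loop body plausibly continues, using the official `openai` client; the field names `config.api_key` and `config.api_base_url` are inferred from the hunk and may not match the upstream code exactly:

```python
# Illustrative continuation of the truncated loop, not the verbatim upstream code.
from openai import OpenAI

def load_chat_models(custom_configs):
    """Collect chat model ids advertised by each whitelisted provider."""
    models_by_provider = {}
    for config in custom_configs:
        try:
            # Create OpenAI client with custom base URL
            client = OpenAI(api_key=config.api_key, base_url=config.api_base_url)
            # Ollama's OpenAI-compatible API advertises its pulled models here
            models_by_provider[config.name] = [m.id for m in client.models.list()]
        except Exception:
            # Skip providers that are unreachable or lack the endpoint
            continue
    return models_by_provider
```

Filtering to the `Ollama` name before this loop keeps the try/except from silently swallowing failures across every custom provider, at the cost of hardcoding the whitelist until the provider type enum mentioned in the TODO lands.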