anything-llm/server/utils/AiProviders (last updated 2024-04-16 16:25:32 -07:00)
Directory     Last commit                                         Date
anthropic     Handle Anthropic streamable errors ()               2024-04-16 16:25:32 -07:00
azureOpenAi   Stop generation button during stream-response ()    2024-03-12 15:21:27 -07:00
gemini        Patch Gemini/Google AI errors ()                    2024-03-26 17:20:12 -07:00
groq          [FEAT] Groq LLM support ()                          2024-03-06 14:48:38 -08:00
huggingface   Stop generation button during stream-response ()    2024-03-12 15:21:27 -07:00
lmStudio      Patch LMStudio Inference server bug integration ()  2024-03-22 14:39:30 -07:00
localAi       Refactor LLM chat backend ()                        2024-02-14 12:32:07 -08:00
mistral       Refactor LLM chat backend ()                        2024-02-14 12:32:07 -08:00
native        Stop generation button during stream-response ()    2024-03-12 15:21:27 -07:00
ollama        useMLock for Ollama API chats ()                    2024-04-02 10:43:04 -07:00
openAi        Enable dynamic GPT model dropdown ()                2024-04-16 14:54:39 -07:00
openRouter    Bump all static model providers ()                  2024-04-14 12:55:21 -07:00
perplexity    Bump all static model providers ()                  2024-04-14 12:55:21 -07:00
togetherAi    Bump all static model providers ()                  2024-04-14 12:55:21 -07:00