anything-llm/server/utils/EmbeddingEngines
hdelossantos 304796ec59
feat: support setting maxConcurrentChunks for Generic OpenAI embedder ()
* Exposes the `maxConcurrentChunks` parameter for the Generic OpenAI embedder through configuration, allowing a batch size to be set for endpoints that do not support the default of 500 (see the sketch after this commit entry)

* Surface the new field in the UI; ensure the getter returns the proper type and format

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-11-21 11:29:44 -08:00
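
A minimal sketch of how a configurable batch cap like this can work, assuming the `openai` npm client talking to an OpenAI-compatible `/embeddings` endpoint. The env var names and the class shape below are illustrative assumptions, not necessarily what the repo actually uses:

```js
// Illustrative sketch only: env var names and class shape are assumptions,
// not the actual anything-llm implementation.
const { OpenAI } = require("openai");

class GenericOpenAiEmbedder {
  constructor() {
    this.openai = new OpenAI({
      baseURL: process.env.GENERIC_OPEN_AI_EMBEDDING_BASE_PATH, // assumed name
      apiKey: process.env.GENERIC_OPEN_AI_EMBEDDING_API_KEY ?? "not-needed", // local endpoints often ignore the key
    });
    this.model =
      process.env.GENERIC_OPEN_AI_EMBEDDING_MODEL_PREF ?? "text-embedding-ada-002";
    // Fall back to 500, the previous hard-coded batch size, when unset or invalid.
    const maxChunks = Number(process.env.GENERIC_OPEN_AI_EMBEDDING_MAX_CONCURRENT_CHUNKS);
    this.maxConcurrentChunks =
      Number.isInteger(maxChunks) && maxChunks > 0 ? maxChunks : 500;
  }

  async embedChunks(textChunks = []) {
    const embeddings = [];
    // Send at most maxConcurrentChunks inputs per request, so endpoints with
    // stricter batch limits than the default of 500 do not reject the call.
    for (let i = 0; i < textChunks.length; i += this.maxConcurrentChunks) {
      const batch = textChunks.slice(i, i + this.maxConcurrentChunks);
      const { data } = await this.openai.embeddings.create({
        model: this.model,
        input: batch,
      });
      embeddings.push(...data.map((item) => item.embedding));
    }
    return embeddings;
  }
}

module.exports = { GenericOpenAiEmbedder };
```

Batches are sent sequentially here; firing them all at once with `Promise.all` would be faster but defeats the point of capping request size for constrained endpoints.
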
azureOpenAi Adjust how text is split depending on input type () 2024-04-30 10:11:56 -07:00
cohere [FEAT] Cohere LLM and embedder support () 2024-05-02 10:35:50 -07:00
genericOpenAi feat: support setting maxConcurrentChunks for Generic OpenAI embedder () 2024-11-21 11:29:44 -08:00
liteLLM [FEAT] Add LiteLLM embedding provider support () 2024-06-06 12:43:34 -07:00
lmstudio Patch bad models endpoint path in LM Studio embedding engine () 2024-11-13 12:34:42 -08:00
localAi Bump openai package to latest () 2024-04-30 12:33:42 -07:00
mistral Mistral embedding engine support () 2024-11-21 11:05:55 -08:00
native Prevent concurrent downloads on first-doc upload () 2024-05-02 10:15:11 -07:00
ollama Ollama sequential embedding () 2024-09-06 10:06:46 -07:00
openAi Bump openai package to latest () 2024-04-30 12:33:42 -07:00
voyageAi Patch VoyageAI implementation from LC 2024-11-06 11:43:41 -08:00