Mirror of https://github.com/Mintplex-Labs/anything-llm.git (synced 2025-03-31 01:46:25 +00:00)
* Exposes a `maxConcurrentChunks` parameter for the generic OpenAI embedder through configuration. This allows setting a batch size for endpoints that don't support the default of 500.
* Updates the new field in the UI and adds a getter to ensure the value has the proper type and format.

Co-authored-by: timothycarambat <rambat1010@gmail.com>
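To make the batching behavior concrete, here is a minimal sketch of how a cap like `maxConcurrentChunks` can be applied when calling an OpenAI-compatible embeddings endpoint. Only the parameter name and its default of 500 come from the change described above; the config shape, helper name, and endpoint handling are illustrative assumptions, not the project's actual implementation.

```ts
// Hypothetical sketch: apply a `maxConcurrentChunks` batch-size cap when
// embedding text against an OpenAI-compatible endpoint. The parameter name
// and the default of 500 come from the commit above; everything else
// (config shape, function name, response handling) is assumed for illustration.

interface GenericOpenAiEmbedderConfig {
  basePath: string; // e.g. "http://localhost:8000/v1" (illustrative)
  apiKey: string;
  model: string; // embedding model name
  maxConcurrentChunks?: number; // batch size; fall back to 500 when unset
}

async function embedChunks(
  config: GenericOpenAiEmbedderConfig,
  chunks: string[]
): Promise<number[][]> {
  const batchSize = config.maxConcurrentChunks ?? 500;
  const vectors: number[][] = [];

  // Send the chunks in batches no larger than `batchSize`, since some
  // OpenAI-compatible endpoints reject large input arrays.
  for (let i = 0; i < chunks.length; i += batchSize) {
    const batch = chunks.slice(i, i + batchSize);
    const res = await fetch(`${config.basePath}/embeddings`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${config.apiKey}`,
      },
      body: JSON.stringify({ model: config.model, input: batch }),
    });
    if (!res.ok) throw new Error(`Embedding request failed: ${res.status}`);
    const data = (await res.json()) as {
      data: { index: number; embedding: number[] }[];
    };
    // Preserve chunk order within the batch before appending.
    for (const item of data.data.sort((a, b) => a.index - b.index)) {
      vectors.push(item.embedding);
    }
  }
  return vectors;
}
```

A smaller `maxConcurrentChunks` trades throughput for compatibility: endpoints with stricter request-size limits simply receive more, smaller requests.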
Directory listing:

- azureOpenAi
- cohere
- genericOpenAi
- liteLLM
- lmstudio
- localAi
- mistral
- native
- ollama
- openAi
- voyageAi