anything-llm/server/utils/helpers
hdelossantos 304796ec59
feat: support setting maxConcurrentChunks for Generic OpenAI embedder ()
* exposes `maxConcurrentChunks` parameter for the generic openai embedder through configuration. This allows setting a batch size for endpoints which don't support the default of 500

* Update the new field in the new UI
* Ensure the getter returns the proper type and format

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-11-21 11:29:44 -08:00
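The batching behavior described above can be sketched as a simple loop that splits the chunk list into request-sized batches. This is a hypothetical illustration, not the actual anything-llm implementation; `toChunks` and `embedChunks` are assumed helper names, and the real embedder's request logic is more involved:

```javascript
// Split an array into batches of at most `size` items.
function toChunks(items, size) {
  return Array.from({ length: Math.ceil(items.length / size) }, (_, i) =>
    items.slice(i * size, i * size + size)
  );
}

// Embed all text chunks, sending at most `maxConcurrentChunks` per request.
// `embedBatch` stands in for one call to the embedding endpoint and is
// expected to return one embedding vector per input chunk.
async function embedChunks(textChunks, embedBatch, maxConcurrentChunks = 500) {
  const results = [];
  for (const batch of toChunks(textChunks, maxConcurrentChunks)) {
    // Each batch is a single request, so endpoints that reject the
    // default payload of 500 chunks can be served with a smaller value.
    results.push(...(await embedBatch(batch)));
  }
  return results;
}
```

Lowering `maxConcurrentChunks` trades a higher request count for compatibility with endpoints that enforce stricter batch limits.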
admin/                     | patch admin pwd update                                                   | 2024-02-06 14:39:56 -08:00
chat/                      | 1417 completion timeout ()                                               | 2024-09-25 14:00:19 -07:00
camelcase.js               | Add support for Weaviate VectorDB ()                                     | 2023-08-08 18:02:30 -07:00
customModels.js            | Patch bad models endpoint path in LM Studio embedding engine ()          | 2024-11-13 12:34:42 -08:00
index.js                   | Mistral embedding engine support ()                                      | 2024-11-21 11:05:55 -08:00
portAvailabilityChecker.js | [FEAT] Check port access in docker before showing a default error ()     | 2024-04-02 10:34:50 -07:00
tiktoken.js                | patch text.substring bug from compressor                                 | 2024-07-22 12:53:11 -07:00
updateENV.js               | feat: support setting maxConcurrentChunks for Generic OpenAI embedder () | 2024-11-21 11:29:44 -08:00