Mirror of https://github.com/Mintplex-Labs/anything-llm.git, synced 2025-03-16 15:12:22 +00:00
* Exposes the `maxConcurrentChunks` parameter for the generic OpenAI embedder through configuration. This allows setting a batch size for endpoints which don't support the default of 500.
* Update the new field in the UI to ensure proper type and format.

Co-authored-by: timothycarambat <rambat1010@gmail.com>
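For context, a minimal sketch of what a configurable batch cap like this looks like in practice. The function and client names below (`embedTexts`, `callEmbeddingApi`) are hypothetical placeholders for illustration, not the actual anything-llm code:

```typescript
// Illustrative sketch only: `callEmbeddingApi` and `embedTexts` are
// hypothetical names, not taken from the anything-llm codebase.
const DEFAULT_MAX_CONCURRENT_CHUNKS = 500;

// Stand-in for a real request to an OpenAI-compatible /embeddings endpoint.
async function callEmbeddingApi(batch: string[]): Promise<number[][]> {
  // ...POST `batch` to the embeddings endpoint and parse the response...
  return batch.map(() => []);
}

// Split texts into batches of at most `maxConcurrentChunks` per request,
// so endpoints that reject a 500-item payload can still be used by
// lowering the configured batch size.
async function embedTexts(
  texts: string[],
  maxConcurrentChunks: number = DEFAULT_MAX_CONCURRENT_CHUNKS
): Promise<number[][]> {
  const results: number[][] = [];
  for (let i = 0; i < texts.length; i += maxConcurrentChunks) {
    const batch = texts.slice(i, i + maxConcurrentChunks);
    results.push(...(await callEmbeddingApi(batch)));
  }
  return results;
}
```

The design point is simply that the batch size becomes a configuration value rather than a hard-coded constant, letting deployments tune it to whatever their embedding endpoint accepts.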
vex
.env.example
docker-compose.yml
docker-entrypoint.sh
docker-healthcheck.sh
Dockerfile
HOW_TO_USE_DOCKER.md