anything-llm/server/utils/EmbeddingEngines

Latest commit: 20135835d0 by Timothy Carambat, 2024-09-06 10:06:46 -07:00
Ollama sequential embedding ()
* ollama: Switch from parallel to sequential chunk embedding
* throw error on empty embeddings
Co-authored-by: John Blomberg <john.jb.blomberg@gmail.com>
Directory      Last commit message                                  Date
azureOpenAi    Adjust how text is split depending on input type ()  2024-04-30 10:11:56 -07:00
cohere         [FEAT] Cohere LLM and embedder support ()            2024-05-02 10:35:50 -07:00
genericOpenAi  [FEAT] Generic OpenAI embedding provider ()          2024-06-21 16:27:02 -07:00
liteLLM        [FEAT] Add LiteLLM embedding provider support ()     2024-06-06 12:43:34 -07:00
lmstudio       Adjust how text is split depending on input type ()  2024-04-30 10:11:56 -07:00
localAi        Bump openai package to latest ()                     2024-04-30 12:33:42 -07:00
native         Prevent concurrent downloads on first-doc upload ()  2024-05-02 10:15:11 -07:00
ollama         Ollama sequential embedding ()                       2024-09-06 10:06:46 -07:00
openAi         Bump openai package to latest ()                     2024-04-30 12:33:42 -07:00
voyageAi       Add new Voyage AI embedding models ()                2024-08-29 14:11:00 -07:00
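The "Ollama sequential embedding" commit describes two behavior changes: chunks are embedded one at a time rather than in parallel, and an empty embedding result raises an error. A minimal sketch of that pattern, assuming a hypothetical `embedChunksSequentially` helper and `embedChunk` callback (names are illustrative, not AnythingLLM's actual API):

```javascript
// Sketch of sequential chunk embedding with an empty-result guard.
// Each chunk is awaited before the next request starts (a for...of loop
// with await, instead of firing all requests at once via Promise.all),
// which avoids overwhelming a local Ollama server with parallel calls.
async function embedChunksSequentially(chunks, embedChunk) {
  const results = [];
  for (const chunk of chunks) {
    // One request at a time: the next chunk is not sent until this resolves.
    const embedding = await embedChunk(chunk);
    // Mirror the commit's "throw error on empty embeddings" behavior.
    if (!Array.isArray(embedding) || embedding.length === 0) {
      throw new Error("Embedder returned an empty embedding for a chunk");
    }
    results.push(embedding);
  }
  return results;
}
```

Running chunks sequentially trades throughput for predictable load on the embedding backend, which matters most for locally hosted models.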