* feat: add support for voyage-3-large and voyage-code-3 embedding models
- Add voyage-3-large and voyage-code-3 to VoyageAiOptions dropdown
- Update getMaxEmbeddingLength to support 32k context for the new models (sketched after this entry)
- Update .env.example with new model options
* unset env example
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
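
A minimal sketch of what a per-model context cap like `getMaxEmbeddingLength` might look like. The 32k figure and model names come from this change; the fallback value and exact mapping are assumptions for illustration.

```ts
// Sketch only: per-model lookup for max embedding input length.
// The 32k entries come from this change; the fallback is assumed.
function getMaxEmbeddingLength(model: string): number {
  const limits: Record<string, number> = {
    "voyage-3-large": 32_000,
    "voyage-code-3": 32_000,
  };
  return limits[model] ?? 4_000; // assumed default for older models
}
```
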
* Reranker WIP
* add caching and singleton loading (see the sketch after this entry)
* Add field to workspaces for vectorSearchMode
Add UI for LanceDB to change mode
update all search endpoints to pass in the reranker prop if the provider can use it
* update hint text
* When reranking, swap score to rerank score
* update optional chaining
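
A sketch of the caching + singleton idea from this entry: load the rerank model once and reuse it across requests. The class and method names here are hypothetical, not the repo's actual loader.

```ts
// Hypothetical singleton loader: the model is initialized once and the
// loaded instance is cached for all subsequent rerank calls.
class RerankerSingleton {
  private static instance: RerankerSingleton | null = null;
  private loaded = false;

  static async getInstance(): Promise<RerankerSingleton> {
    if (!this.instance) this.instance = new RerankerSingleton();
    await this.instance.load();
    return this.instance;
  }

  private async load(): Promise<void> {
    if (this.loaded) return; // cached after the first load
    // ...download/initialize the rerank model here...
    this.loaded = true;
  }

  async rerank(query: string, docs: { text: string; score: number }[]) {
    // Per the note above, the rerank score replaces the vector similarity
    // score on each result; actual scoring is left out of this sketch.
    return docs;
  }
}
```
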
* Add support for Google Generative AI (Gemini) embedder
* Add missing example in docker
Fix UI key elements in options
Add Gemini to data handling section
Patch issues with chunk handling during embedding (batching sketched after this entry)
* remove dupe in env
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
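
An illustrative sketch of batching text chunks for an embedder. The `embedBatch` callback stands in for the real Gemini client call, and the batch size is an assumption.

```ts
// Embed chunks in fixed-size batches rather than one giant request.
async function embedChunks(
  chunks: string[],
  embedBatch: (batch: string[]) => Promise<number[][]>,
  batchSize = 100 // assumed batch limit
): Promise<number[][]> {
  const vectors: number[][] = [];
  for (let i = 0; i < chunks.length; i += batchSize) {
    vectors.push(...(await embedBatch(chunks.slice(i, i + batchSize))));
  }
  return vectors;
}
```
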
* Add support for Gemini authenticated models endpoint
add customModels entry
add un-authed fallback to default listing (pattern sketched after this entry)
separate models by experimental status
resolves #2866
* add back improved logic for apiVersion decision making
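
A sketch of the "authed listing with un-authed fallback" pattern from this entry. The endpoint is the public Generative Language API models list; the default entries and the experimental heuristic are assumptions.

```ts
// List models with the API key when available; otherwise fall back to a
// static default listing rather than failing.
async function listGeminiModels(apiKey?: string) {
  const defaults = [{ id: "gemini-1.5-pro", experimental: false }];
  if (!apiKey) return defaults; // un-authed fallback to the default listing

  const res = await fetch(
    `https://generativelanguage.googleapis.com/v1beta/models?key=${apiKey}`
  );
  if (!res.ok) return defaults; // fall back rather than fail

  const { models = [] } = (await res.json()) as { models?: { name: string }[] };
  return models.map((m) => ({
    id: m.name,
    experimental: m.name.includes("exp"), // assumed heuristic for the split
  }));
}
```
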
* wip: remove all docs and clear vector db on embedder/vector db change
* purge all cached docs and remove docs from workspaces on vectordb/embedder change
* lint
* remove unneeded console log
* remove reset vector stores endpoint and move to server-side updateENV with postUpdate check (sketched after this entry)
* reset embed module
* remove unused import
* simplify deletion process
rescoped document deletion to be more general for speed; everything needs to be reset anyway
fixed issue where unembedded docs that were cached but not in any workspace were not removed
* add back missing readme file
update warning text modals
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
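
A sketch of the postUpdate hook shape this entry describes: changing the embedder or vector DB setting triggers a full document purge server-side. `KEY_MAPPING` and `postUpdate` mirror the commit's wording; the exact shapes are assumptions.

```ts
// Map sensitive ENV keys to hooks that run after the value is updated.
const KEY_MAPPING: Record<string, { postUpdate?: (() => Promise<void>)[] }> = {
  EmbeddingEngine: { postUpdate: [purgeAllDocuments] },
  VectorDB: { postUpdate: [purgeAllDocuments] },
};

async function updateENV(updates: Record<string, string>): Promise<void> {
  for (const [key, value] of Object.entries(updates)) {
    process.env[key] = value;
    for (const hook of KEY_MAPPING[key]?.postUpdate ?? []) await hook();
  }
}

async function purgeAllDocuments(): Promise<void> {
  // Remove cached docs and detach documents from every workspace so the new
  // embedder/vector DB starts clean (see "simplify deletion process" above).
}
```
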
* wip: hub connection page FE + backend
* lint
* implement backend for local hub items + placeholder endpoints to fetch hub app data
* fix hebrew translations
* revamp community integration flow
* change sidebar
* Auto import if id in URL param
remove preview in card screen and instead go to import flow
* get user's items + team items from hub + ui improvements to hub settings
* lint
* fix merge conflict
* refresh hook for community items
* add fallback for user items
* Disable bundle items by default on all instances
* remove translations (will complete later)
* loading skeleton
* Make community hub endpoints admin-only (middleware sketched after this entry)
show visibility on items
combine import/apply for items so they are event-logged for review
* improve middleware and import flow
* community hub ui updates
* Adjust importing process
* community hub to dev
* Add webscraper preload into imported plugins
* add runtime property to plugins
* Fix button status on imported skill change
show alert on skill change
Update markdown type and theme on import of agent skill
* update documentation paths
* remove unused import
* linting
* review loading state
---------
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
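
A sketch of admin-gating the community hub endpoints noted above, written in Express style; the middleware name and role check are assumptions.

```ts
import type { Request, Response, NextFunction } from "express";

// Reject any request whose authenticated user is not an admin.
function communityHubAdminOnly(req: Request, res: Response, next: NextFunction) {
  const user = (req as { user?: { role?: string } }).user; // set by earlier auth middleware (assumed)
  if (user?.role !== "admin")
    return res.status(401).json({ error: "Admin access required." });
  next();
}
```
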
* Exposes the `maxConcurrentChunks` parameter for the generic OpenAI embedder through configuration. This allows setting a batch size for endpoints that don't support the default of 500 (see the sketch after this entry)
* Update new field to new UI
add getter to ensure proper type and format
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
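
A sketch of a getter that reads a configurable `maxConcurrentChunks` while keeping the 500 default named above; the env var name here is hypothetical. Chunks would then be sliced into batches of at most this size before each embeddings request.

```ts
// Guard against missing or invalid values and keep the historical default.
function getMaxConcurrentChunks(): number {
  const parsed = Number(process.env.EMBEDDING_MAX_CONCURRENT_CHUNKS); // hypothetical var name
  return Number.isFinite(parsed) && parsed > 0 ? Math.floor(parsed) : 500;
}
```
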
* togetherai llama 3.2 vision models support
* remove console log
* fix listing to reflect what is on the chart
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* feat: add new model provider: Novita AI
* feat: finished novita AI
* fix: code lint
* remove unneeded logging
* add back log for novita stream not self closing
* Clarify ENV vars for LLM/embedder separation for future
Patch ENV check for workspace/agent provider
---------
Co-authored-by: Jason <ggbbddjm@gmail.com>
Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
* Fix incorrect JSON API description.
* small edits and validity checks
* remove console.logs
* unset and recheck changes
---------
Co-authored-by: Adam <phazei@gmail.com>
* Adds support for only the llama 3.2 vision models on Groq. This comes with many conditionals and nuances to handle, as Groq's vision implementation is quite bad right now.
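
A sketch of the kind of conditional this requires: only build multimodal content when the selected Groq model is a llama 3.2 vision variant, and silently drop attachments otherwise. Model ids and the OpenAI-style content-part shape are assumptions.

```ts
// Assumed vision-capable model ids on Groq at the time of this change.
const GROQ_VISION_MODELS = [
  "llama-3.2-11b-vision-preview",
  "llama-3.2-90b-vision-preview",
];

function buildUserMessage(model: string, text: string, imageUrls: string[] = []) {
  if (!GROQ_VISION_MODELS.includes(model) || imageUrls.length === 0)
    return { role: "user", content: text }; // drop attachments for non-vision models
  return {
    role: "user",
    content: [
      { type: "text", text },
      ...imageUrls.map((url) => ({ type: "image_url", image_url: { url } })),
    ],
  };
}
```
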