* WIP: hub connection page frontend + backend
* lint
* implement backend for local hub items + placeholder endpoints to fetch hub app data
* fix Hebrew translations
* revamp community integration flow
* change sidebar
* Auto-import when an item id is present in the URL param (see the sketch after this list)
remove preview in card screen and instead go to import flow
* get user's items + team items from hub + ui improvements to hub settings
* lint
* fix merge conflict
* refresh hook for community items
* add fallback for user items
* Disable bundle items by default on all instances
* remove translations (will complete later)
* loading skeleton
* Make community hub endpoints admin only
show visibility on items
combine import/apply for items so they are event-logged for review
* improve middleware and import flow
* community hub ui updates
* Adjust importing process
* community hub to dev
* Add webscraper preload into imported plugins
* add runtime property to plugins
* Fix button status on imported skill change
show alert on skill change
Update markdown type and theme on import of agent skill
* update documentation paths
* remove unused import
* linting
* review loading state
---------
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
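A minimal sketch of the auto-import-from-URL behavior mentioned above, assuming a browser context; the `id` param name and the `startImportFlow()` helper are hypothetical stand-ins for the real hub import flow:

```ts
// Sketch only: read an item id from the page URL and, if present,
// skip the preview card and jump straight into the import flow.
function autoImportFromUrl(startImportFlow: (itemId: string) => void): void {
  const params = new URLSearchParams(window.location.search);
  const itemId = params.get("id"); // hypothetical param name
  if (itemId) startImportFlow(itemId);
}
```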
* exposes `maxConcurrentChunks` parameter for the generic OpenAI embedder through configuration. This allows setting a batch size for endpoints that don't support the default of 500 (see the sketch below)
* Update new field to new UI
add getter to ensure proper type and format
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
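A rough sketch of what batching by `maxConcurrentChunks` looks like, assuming a simple embed callback; the function shape and the env var name are illustrative, not the embedder's real API:

```ts
// Illustrative only: send texts to the embedding endpoint in batches of a
// configurable size instead of the hard-coded 500, so endpoints with smaller
// request limits still work. The env var name here is an assumption.
async function embedInBatches(
  texts: string[],
  embed: (batch: string[]) => Promise<number[][]>,
  maxConcurrentChunks = Number(process.env.EMBEDDING_MAX_CHUNKS ?? 500)
): Promise<number[][]> {
  const results: number[][] = [];
  for (let i = 0; i < texts.length; i += maxConcurrentChunks) {
    const batch = texts.slice(i, i + maxConcurrentChunks);
    results.push(...(await embed(batch)));
  }
  return results;
}
```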
* togetherai llama 3.2 vision models support
* remove console log
* fix listing to reflect what is on the chart
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* feat: add new model provider: Novita AI
* feat: finished Novita AI
* fix: code lint
* remove unneeded logging
* add back log for novita stream not self closing
* Clarify ENV vars for LLM/embedder separation for future
Patch ENV check for workspace/agent provider
---------
Co-authored-by: Jason <ggbbddjm@gmail.com>
Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
* Fix incorrect JSON API description.
* small edits and validity checks
* remove console.logs
* unset and recheck changes
---------
Co-authored-by: Adam <phazei@gmail.com>
* thread creation: additional params `name` and `slug`, with API support
* typo fix
* Rebuild openai Swagger docs
Handle field validations to prevent invalid inputs for .new
Enforce sluggification of `slug` to prevent breaking URL structures (see the sketch below)
---------
Co-authored-by: abrakadobr <abrakadobr@gmail.com>
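A hedged sketch of the validation and sluggification described above; the field handling and the UUID fallback are assumptions about the real endpoint, shown only to illustrate the intent:

```ts
// Sketch: accept optional name/slug on thread creation, force the slug into a
// URL-safe form, and fall back to sane defaults when inputs are unusable.
import { randomUUID } from "crypto";

function toSlug(value: string): string {
  return value
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-") // collapse anything non-alphanumeric into "-"
    .replace(/^-+|-+$/g, ""); // strip leading/trailing dashes
}

function buildThreadInput(body: { name?: unknown; slug?: unknown }) {
  const name =
    typeof body.name === "string" && body.name.trim().length > 0
      ? body.name.trim()
      : "Thread"; // hypothetical default name
  const cleanSlug = typeof body.slug === "string" ? toSlug(body.slug) : "";
  // Fall back to a random slug when none (or an unusable one) was provided.
  return { name, slug: cleanSlug || randomUUID() };
}
```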
Adds support for only the Llama 3.2 vision models on Groq. This comes with many conditionals and nuances to handle, as Groq's vision implementation is quite bad right now
* Update OpenAI TTS config to allow a custom BaseURL
* uncheck config file
* break generic OpenAI TTS into its own provider (see the sketch below)
* add space
* hide TTS on user msg
---------
Co-authored-by: Adam <phazei@gmail.com>
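A sketch of a generic OpenAI-compatible TTS call with a custom base URL using the official `openai` client; the env var names here are illustrative, not necessarily what the provider actually reads:

```ts
// Sketch: point the openai client at any OpenAI-compatible TTS endpoint via
// baseURL and return the synthesized audio as a Buffer.
import OpenAI from "openai";

async function speak(text: string): Promise<Buffer> {
  const client = new OpenAI({
    baseURL: process.env.TTS_OPEN_AI_BASE_URL, // e.g. a self-hosted compatible endpoint
    apiKey: process.env.TTS_OPEN_AI_KEY ?? "not-needed-for-some-endpoints",
  });
  const response = await client.audio.speech.create({
    model: process.env.TTS_OPEN_AI_MODEL ?? "tts-1",
    voice: "alloy",
    input: text,
  });
  return Buffer.from(await response.arrayBuffer());
}
```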
* support generic openai workspace model
* Update UI for free form input for some providers
---------
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
* set message limit per user
* remove old user message limit + unused admin page
* fix daily message validation
* refactor message limit input
refactor canSendChat into a method on the User model (see the sketch below)
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
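A hedged sketch of a per-user daily limit check expressed as a model-level method; the `dailyMessageLimit` field and the `countChatsSince()` helper are assumptions about the real schema:

```ts
// Sketch: a user can chat if they have no limit, are an admin, or have sent
// fewer messages than their limit in the last 24 hours.
interface User {
  id: number;
  role: string;
  dailyMessageLimit: number | null; // null = unlimited (assumed field)
}

async function canSendChat(
  user: User,
  countChatsSince: (userId: number, since: Date) => Promise<number>
): Promise<boolean> {
  if (!user.dailyMessageLimit || user.role === "admin") return true;
  const dayAgo = new Date(Date.now() - 24 * 60 * 60 * 1000);
  const sent = await countChatsSince(user.id, dayAgo);
  return sent < user.dailyMessageLimit;
}
```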
* Refactor handleDefaultStreamResponseV2 function for better error handling
* run yarn lint
* small error handling changes
* update error handling flow and scope of vars (see the sketch below)
* add back space
---------
Co-authored-by: Roman <rrojaski@gmail.com>
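Not the actual `handleDefaultStreamResponseV2` code, but the general pattern the refactor aims at: keep accumulators in the outer scope, wrap the stream loop in try/catch, and always close the stream for the client:

```ts
// General pattern only: accumulate text outside the try block so an early
// failure doesn't lose it, and emit exactly one closing event, with any error.
async function streamToClient(
  stream: AsyncIterable<{ text?: string }>,
  send: (event: { textResponse: string; close: boolean; error?: string }) => void
): Promise<string> {
  let fullText = ""; // declared outside try so it survives a mid-stream failure
  let error: string | undefined;
  try {
    for await (const chunk of stream) {
      const text = chunk.text ?? "";
      fullText += text;
      send({ textResponse: text, close: false });
    }
  } catch (e) {
    error = e instanceof Error ? e.message : String(e);
  } finally {
    // Always close for the client, surfacing any error exactly once.
    send({ textResponse: "", close: true, error });
  }
  return fullText;
}
```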