Commit graph

166 commits

Author SHA1 Message Date
Timothy Carambat
c4f75feb08
Support historical message image inputs/attachments for n+1 queries ()
* Support historical message image inputs/attachments for n+1 queries

* patch gemini

* OpenRouter vision support cleanup

* xai vision history support

* Mistral logging

---------

Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
2025-01-16 13:49:06 -08:00
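The commit above re-attaches image attachments stored with prior chat turns when building the prompt for follow-up (n+1) queries, so vision-capable providers keep seeing the images. A minimal sketch of the idea, assuming OpenAI-style multimodal content arrays and hypothetical history records whose `attachments` carry base64 data URIs in `contentString`:

```js
// Rebuild provider messages from stored history, re-attaching images as
// image_url content parts next to the original text.
function historyToMessages(history = []) {
  return history.map(({ role, content, attachments = [] }) => {
    if (!attachments.length) return { role, content };
    return {
      role,
      content: [
        { type: "text", text: content },
        ...attachments.map((img) => ({
          type: "image_url",
          // contentString is assumed to be a data URI, e.g. data:image/png;base64,...
          image_url: { url: img.contentString },
        })),
      ],
    };
  });
}

// Usage: pass the reconstructed messages to any OpenAI-compatible chat API.
const messages = historyToMessages([
  {
    role: "user",
    content: "What is in this image?",
    attachments: [{ contentString: "data:image/png;base64,..." }],
  },
]);
console.log(JSON.stringify(messages, null, 2));
```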
Timothy Carambat
ad01df8790
Reranker option for RAG ()
* Reranker WIP

* add caching and singleton loading

* Add field to workspaces for vectorSearchMode
Add UI for lancedb to change mode
update all search endpoints to pass in reranker prop if provider can use it

* update hint text

* When reranking, swap score to rerank score

* update optional chaining
2025-01-02 14:27:52 -08:00
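A rough sketch of the reranking flow described above: when a workspace's `vectorSearchMode` is set to rerank, the vector hits are re-scored by a reranker and the rerank score replaces the similarity score. The `reranker.rank()` interface here is hypothetical, shown only to illustrate the score swap:

```js
// Hypothetical reranker interface: rank(query, texts) -> [{ index, score }]
async function searchWithOptionalRerank({ query, vectorResults, reranker, vectorSearchMode }) {
  if (vectorSearchMode !== "rerank" || !reranker) return vectorResults;

  const ranked = await reranker.rank(query, vectorResults.map((r) => r.text));
  return ranked
    .map(({ index, score }) => ({
      ...vectorResults[index],
      // Swap the vector similarity score for the rerank score so downstream
      // thresholds and sorting operate on the reranked relevance.
      score,
    }))
    .sort((a, b) => b.score - a.score);
}
```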
Timothy Carambat
bb5c3b7e0d
make similarityResponse take object arguments instead of positional ()
* make `similarityResponse` take object arguments instead of positional

* reuse client for qdrant
2025-01-02 12:03:26 -08:00
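Moving `similarityResponse` to a single options object makes call sites self-describing and lets new options (such as a rerank flag) be added without breaking existing callers. A simplified before/after sketch; the parameter names and defaults below are assumptions for illustration:

```js
// Before: positional arguments are easy to misorder at call sites.
// async function similarityResponse(client, namespace, queryVector, similarityThreshold, topN, filterIdentifiers) { ... }

// After: a single destructured object; argument order no longer matters and
// new options (e.g. rerank) can be added without touching existing callers.
async function similarityResponse({
  client,
  namespace,
  queryVector,
  similarityThreshold = 0.25,
  topN = 4,
  filterIdentifiers = [],
  rerank = false,
}) {
  // ...perform the vector search using the options above...
  return { contextTexts: [], sourceDocuments: [], scores: [] };
}
```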
Chaiwat Saithongcum
fa3079bbbf
Add support for Google Generative AI (Gemini) embedder ()
* Add support for Google Generative AI (Gemini) embedder

* Add missing example in docker
Fix UI key elements in options
Add Gemini to data handling section
Patch issues with chunk handling during embedding

* remove dupe in env

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-12-31 09:29:38 -08:00
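A minimal sketch of a Gemini embedder call using the public `@google/generative-ai` SDK and the `text-embedding-004` model; the chunk-handling fix mentioned above amounts to embedding text in bounded batches. The batch size and env var name are assumptions:

```js
const { GoogleGenerativeAI } = require("@google/generative-ai");

// Embed an array of text chunks with Gemini a few at a time, so one oversized
// request cannot fail the whole document.
async function embedChunks(textChunks, { apiKey = process.env.GEMINI_API_KEY, batchSize = 10 } = {}) {
  const genAI = new GoogleGenerativeAI(apiKey);
  const model = genAI.getGenerativeModel({ model: "text-embedding-004" });

  const vectors = [];
  for (let i = 0; i < textChunks.length; i += batchSize) {
    const batch = textChunks.slice(i, i + batchSize);
    const results = await Promise.all(batch.map((text) => model.embedContent(text)));
    vectors.push(...results.map((r) => r.embedding.values));
  }
  return vectors;
}
```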
Timothy Carambat
b082c8e441
Add support for gemini authenticated models endpoint ()
* Add support for gemini authenticated models endpoint
add customModels entry
add un-authed fallback to default listing
separate models by experimental status
resolves 

* add back improved logic for apiVersion decision making
2024-12-17 15:20:26 -08:00
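The Gemini API exposes a public models-listing endpoint that can be queried with the user's key; the commit above uses it to populate the model picker, splits results by experimental status, and falls back to a static default list when the authenticated call fails. A hedged sketch against that REST endpoint (the `exp` substring check and the fallback list are assumptions for illustration):

```js
const DEFAULT_MODELS = ["gemini-1.5-pro", "gemini-1.5-flash"]; // un-authed fallback

async function listGeminiModels(apiKey) {
  try {
    const res = await fetch(
      `https://generativelanguage.googleapis.com/v1beta/models?pageSize=1000&key=${apiKey}`
    );
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    const { models = [] } = await res.json();
    const chatModels = models
      .filter((m) => (m.supportedGenerationMethods || []).includes("generateContent"))
      .map((m) => m.name.replace("models/", ""));
    return {
      stable: chatModels.filter((id) => !id.includes("exp")),
      experimental: chatModels.filter((id) => id.includes("exp")),
    };
  } catch {
    // Fall back to a static default listing when the key is missing or invalid.
    return { stable: DEFAULT_MODELS, experimental: [] };
  }
}
```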
Timothy Carambat
dd7c4675d3
LLM performance metric tracking ()
* WIP performance metric tracking

* fix: patch UI trying to .toFixed() null metric
Anthropic tracking migration
cleanup logs

* Apipie implementation, not tested

* Cleanup Anthropic notes, Add support for AzureOpenAI tracking

* bedrock token metric tracking

* Cohere support

* feat: improve default stream handler to track providers that are actually OpenAI-compliant in usage reporting
add deepseek support

* feat: Add FireworksAI tracking reporting
fix: improve handler when usage:null is reported (why?)

* Add token reporting for GenericOpenAI

* token reporting for koboldcpp + lmstudio

* lint

* support Groq token tracking

* HF token tracking

* token tracking for togetherai

* LiteLLM token tracking

* linting + Mistral token tracking support

* XAI token metric reporting

* native provider runner

* LocalAI token tracking

* Novita token tracking

* OpenRouter token tracking

* Apipie stream metrics

* textwebgenui token tracking

* perplexity token reporting

* ollama token reporting

* lint

* put back comment

* Rip out LC ollama wrapper and use official library

* patch images with new ollama lib

* improve ollama offline message

* fix image handling in ollama llm provider

* lint

* NVIDIA NIM token tracking

* update openai compatibility responses

* UI/UX show/hide metrics on click for user preference

* update bedrock client

---------

Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
2024-12-16 14:31:17 -08:00
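Most providers touched in this PR report token usage in an OpenAI-compatible `usage` object at the end of a stream, so a shared handler can derive metrics such as output tokens per second once; only providers that report nothing (or `usage: null`) need custom counting. A simplified sketch of that measurement, with assumed chunk and field shapes:

```js
// Consume a streamed completion and derive simple performance metrics from
// the OpenAI-style usage block, if the provider reports one.
async function streamWithMetrics(stream, onToken = () => {}) {
  const startedAt = Date.now();
  let usage = null;
  let text = "";

  for await (const chunk of stream) {
    const delta = chunk?.choices?.[0]?.delta?.content || "";
    if (delta) {
      text += delta;
      onToken(delta);
    }
    // OpenAI-compliant providers attach usage on the final chunk; some emit
    // `usage: null` mid-stream, so only keep non-null values.
    if (chunk?.usage) usage = chunk.usage;
  }

  const durationSeconds = (Date.now() - startedAt) / 1000;
  return {
    text,
    metrics: {
      prompt_tokens: usage?.prompt_tokens ?? 0,
      completion_tokens: usage?.completion_tokens ?? 0,
      total_tokens: usage?.total_tokens ?? 0,
      outputTps: usage?.completion_tokens ? usage.completion_tokens / durationSeconds : 0,
      duration: durationSeconds,
    },
  };
}
```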
Sean Hatfield
ae510619f0
Purge cached docs and remove docs from all workspaces on vectorDB/embedder changes ()
* wip remove all docs clear vector db on embedder/vector db change

* purge all cached docs and remove docs from workspaces on vectordb/embedder change

* lint

* remove unneeded console log

* remove reset vector stores endpoint and move to server side updateENV with postUpdate check

* reset embed module

* remove unused import

* simplify deletion process
rescoped document deletion to be more general for speed; everything needs to be reset anyway
fixed issue where cached docs that were not embedded in any workspace were not removed

* add back missing readme file
update warning text modals

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-12-16 12:16:20 -08:00
Sean Hatfield
f651ca8628
APIPie LLM provider improvements ()
* fix apipie streaming/sort by chat models

* lint

* linting

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-12-13 14:18:02 -08:00
timothycarambat
f8e91e1ffa patch gemini-2.0-key 2024-12-11 16:52:31 -08:00
timothycarambat
69b672b625 add gemini 1206 and gemini-2.0-flash exp models
connect 
2024-12-11 09:04:29 -08:00
Timothy Carambat
a69997a715
update chat model filters for openai () 2024-12-11 08:55:10 -08:00
timothycarambat
4b09a06590 persist token window for NIM and not only on model change 2024-12-05 11:57:07 -08:00
Timothy Carambat
b2dd35fe15
Add Support for NVIDIA NIM ()
* Add Support for NVIDIA NIM

* update README

* linting
2024-12-05 10:38:23 -08:00
Sean Hatfield
05c530221b
Community hub integration ()
* wip hub connection page fe + backend

* lint

* implement backend for local hub items + placeholder endpoints to fetch hub app data

* fix hebrew translations

* revamp community integration flow

* change sidebar

* Auto import if id in URL param
remove preview in card screen and instead go to import flow

* get user's items + team items from hub + ui improvements to hub settings

* lint

* fix merge conflict

* refresh hook for community items

* add fallback for user items

* Disable bundle items by default on all instances

* remove translations (will complete later)

* loading skeleton

* Make community hub endpoints admin only
show visibility on items
combine import/apply for items so they are event-logged for review

* improve middleware and import flow

* community hub ui updates

* Adjust importing process

* community hub to dev

* Add webscraper preload into imported plugins

* add runtime property to plugins

* Fix button status on imported skill change
show alert on skill change
Update markdown type and theme on import of agent skill

* update documentation paths

* remove unused import

* linting

* review loading state

---------

Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
2024-11-26 09:59:43 -08:00
hdelossantos
304796ec59
feat: support setting maxConcurrentChunks for Generic OpenAI embedder ()
* exposes `maxConcurrentChunks` parameter for the generic openai embedder through configuration. This allows setting a batch size for endpoints which don't support the default of 500

* Update new field to new UI
make getter to ensure proper type and format

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-11-21 11:29:44 -08:00
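The `maxConcurrentChunks` setting caps how many text chunks are sent to the embedding endpoint per request, since some OpenAI-compatible backends reject the previous hard-coded batch of 500. A rough sketch of applying such a cap; the endpoint path follows the OpenAI embeddings convention, and the env var names are assumptions:

```js
// Split texts into batches no larger than maxConcurrentChunks and embed each
// batch against an OpenAI-compatible /v1/embeddings endpoint.
async function embedTextChunks(texts, {
  basePath = process.env.EMBEDDING_BASE_PATH, // e.g. http://localhost:8080/v1
  model = process.env.EMBEDDING_MODEL_PREF,
  maxConcurrentChunks = Number(process.env.GENERIC_OPEN_AI_EMBEDDING_MAX_CHUNKS || 500),
  apiKey = process.env.GENERIC_OPEN_AI_EMBEDDING_API_KEY,
} = {}) {
  const vectors = [];
  for (let i = 0; i < texts.length; i += maxConcurrentChunks) {
    const batch = texts.slice(i, i + maxConcurrentChunks);
    const res = await fetch(`${basePath}/embeddings`, {
      method: "POST",
      headers: { "Content-Type": "application/json", Authorization: `Bearer ${apiKey}` },
      body: JSON.stringify({ model, input: batch }),
    });
    const { data = [] } = await res.json();
    vectors.push(...data.map((d) => d.embedding));
  }
  return vectors;
}
```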
Sean Hatfield
9f38b9337b
Mistral embedding engine support ()
* add mistral embedding engine support

* remove console log + fix data handling onboarding

* update data handling description

---------

Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
2024-11-21 11:05:55 -08:00
timothycarambat
246152c024 Add gemini-exp-1121
resolves 
2024-11-21 11:02:43 -08:00
Timothy Carambat
26e2d8cc3b
Add more experimental models from Gemini () 2024-11-20 09:52:33 -08:00
Sean Hatfield
27b07d46b3
Patch bad models endpoint path in LM Studio embedding engine ()
* patch bad models endpoint path in lm studio embedding engine

* convert to OpenAI wrapper compatibility

* add URL force parser/validation for LMStudio connections

* remove comment

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-11-13 12:34:42 -08:00
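The LM Studio fix normalizes whatever connection URL the user enters so the OpenAI-compatible client always targets the `/v1` API root. A small sketch of such a force parser; the exact behaviour is an assumption based on the commit notes:

```js
// Normalize a user-supplied LM Studio URL so requests always hit the
// OpenAI-compatible API root (e.g. http://localhost:1234/v1).
function parseLMStudioBasePath(basePath = "") {
  try {
    const url = new URL(basePath);
    // Drop any trailing path the user pasted (e.g. /v1/models) and rebuild
    // it as <origin>/v1 with no query string or fragment.
    url.pathname = "/v1";
    url.search = "";
    url.hash = "";
    return url.toString().replace(/\/$/, "");
  } catch {
    throw new Error(`Invalid LM Studio base path: ${basePath}`);
  }
}

console.log(parseLMStudioBasePath("http://localhost:1234"));           // http://localhost:1234/v1
console.log(parseLMStudioBasePath("http://localhost:1234/v1/models")); // http://localhost:1234/v1
```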
timothycarambat
5aa79128f7 bump Anthropic models 2024-11-06 08:14:08 -08:00
Timothy Carambat
80565d79e0
2488 Novita AI LLM integration ()
* feat: add new model provider: Novita AI

* feat: finished novita AI

* fix: code lint

* remove unneeded logging

* add back log for novita stream not self closing

* Clarify ENV vars for LLM/embedder separation for future
Patch ENV check for workspace/agent provider

---------

Co-authored-by: Jason <ggbbddjm@gmail.com>
Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
2024-11-04 11:34:29 -08:00
Timothy Carambat
dd2756b570
add sessionToken validation for AWS Bedrock connection auth () 2024-10-29 16:34:52 -07:00
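Session-token auth lets Bedrock connections use temporary STS credentials; with the official AWS SDK the optional `sessionToken` simply rides along in the credentials object. A minimal sketch; the env var names and model id are placeholders:

```js
const { BedrockRuntimeClient, ConverseCommand } = require("@aws-sdk/client-bedrock-runtime");

const client = new BedrockRuntimeClient({
  region: process.env.AWS_BEDROCK_REGION || "us-east-1",
  credentials: {
    accessKeyId: process.env.AWS_BEDROCK_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_BEDROCK_ACCESS_KEY,
    // Optional: only present when using temporary (STS) credentials.
    sessionToken: process.env.AWS_BEDROCK_SESSION_TOKEN || undefined,
  },
});

async function chat(prompt) {
  const response = await client.send(
    new ConverseCommand({
      modelId: "anthropic.claude-3-5-sonnet-20240620-v1:0", // placeholder model id
      messages: [{ role: "user", content: [{ text: prompt }] }],
    })
  );
  return response.output.message.content[0].text;
}
```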
Timothy Carambat
2c9cb28d5f
Simple SSO feature for login flows from external services ()
* Simple SSO feature for login flows from external services

* linting
2024-10-29 15:30:53 -07:00
Timothy Carambat
5bc96bca88
Add Grok/XAI support for LLM & agents ()
* Add Grok/XAI support for LLM & agents

* forgot files
2024-10-21 16:32:49 -07:00
Timothy Carambat
0524aadf58
Enable the ability to disable the chat history UI ()
* Enable the ability to disable the chat history UI

* forgot files
2024-10-21 13:19:19 -07:00
Timothy Carambat
3dc0f3f490
TTS OpenAI-compatible endpoints ()
* Update OpenAI TTS config to allow a custom BaseURL

* uncheck config file

* break openai generic TTS into its own provider

* add space

* hide TTS on user msg

---------

Co-authored-by: Adam <phazei@gmail.com>
2024-10-15 21:39:31 -07:00
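Breaking generic OpenAI TTS into its own provider means any server that implements `/v1/audio/speech` can be used by pointing the client at a custom `baseURL`. A brief sketch with the official `openai` Node SDK; the URL, voice, and env var names are placeholders:

```js
const OpenAI = require("openai");
const fs = require("fs");

// Any OpenAI-compatible TTS server works: just override baseURL.
const client = new OpenAI({
  apiKey: process.env.TTS_OPENAI_COMPATIBLE_KEY || "not-needed",
  baseURL: process.env.TTS_OPENAI_COMPATIBLE_ENDPOINT || "http://localhost:8880/v1", // placeholder URL
});

async function speak(text) {
  const response = await client.audio.speech.create({
    model: process.env.TTS_OPENAI_COMPATIBLE_MODEL || "tts-1",
    voice: process.env.TTS_OPENAI_COMPATIBLE_VOICE || "alloy",
    input: text,
  });
  fs.writeFileSync("speech.mp3", Buffer.from(await response.arrayBuffer()));
}

speak("Hello from an OpenAI-compatible TTS endpoint.");
```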
Sean Hatfield
fa528e0cf3
OpenAI o1 model support ()
* support openai o1 models

* Prevent O1 use for agents
getter for isO1Model

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-10-15 19:42:13 -07:00
Timothy Carambat
bce7988683
Integrate Apipie support directly ()
resolves 
resolves 
Note: Streaming not supported
2024-10-15 12:36:06 -07:00
Sean Hatfield
5ac6020480
Tavily search web search agent support ()
* support tavily search web search agent

* lint

* remove unneeded comments
2024-10-01 14:52:57 -07:00
Sean Hatfield
7390bae6f6
Support DeepSeek ()
* add deepseek support

* lint

* update deepseek context length

* add deepseek to onboarding

---------

Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
2024-09-26 12:55:12 -07:00
Timothy Carambat
44dddcd4af
1417 completion timeout ()
* Refactor handleDefaultStreamResponseV2 function for better error handling

* run yarn lint

* small error handling changes

* update error handling flow and scope of vars

* add back space

---------

Co-authored-by: Roman <rrojaski@gmail.com>
2024-09-25 14:00:19 -07:00
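The completion-timeout work hardens the default stream handler so a provider that stops sending chunks cannot hang the request forever. A simplified sketch of the idea, wrapping each chunk wait in a timeout; the timeout value and error text are assumptions:

```js
// Abort a streamed completion if no chunk arrives within timeoutMs.
async function consumeStreamWithTimeout(stream, { timeoutMs = 500 * 1000, onToken = () => {} } = {}) {
  const iterator = stream[Symbol.asyncIterator]();
  let fullText = "";

  while (true) {
    let timer;
    const timeout = new Promise((_, reject) => {
      timer = setTimeout(() => reject(new Error("Stream timed out waiting for next chunk")), timeoutMs);
    });

    try {
      const { value, done } = await Promise.race([iterator.next(), timeout]);
      if (done) break;
      const token = value?.choices?.[0]?.delta?.content || "";
      fullText += token;
      if (token) onToken(token);
    } catch (error) {
      // Surface a readable error to the client instead of hanging forever.
      return { textResponse: fullText, error: error.message };
    } finally {
      clearTimeout(timer);
    }
  }
  return { textResponse: fullText, error: null };
}
```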
Sean Hatfield
4ebc37b4e3
Export embedded chat history ()
export embedded chat history

Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
2024-09-24 16:05:34 -07:00
Timothy Carambat
a30fa9b2ed
1943 Add FireworksAI support ()
* Issue : Add support for LLM provider - Fireworks AI

* Update UI selection boxes
Update base AI keys for future embedder support if needed
Add agent capabilites for FireworksAI

* class only return

---------

Co-authored-by: Aaron Van Doren <vandoren96+1@gmail.com>
2024-09-16 12:10:44 -07:00
Timothy Carambat
c612239ecb
Add Gemini exp models ()
Add Gemini  models
resolves 
2024-09-11 13:03:14 -07:00
Timothy Carambat
b8b55b5899
Feature/add SearchApi web browsing ()
* Add SearchApi to web browsing

* UI modifications for SearchAPI

---------

Co-authored-by: Sebastjan Prachovskij <sebastjan.prachovskij@gmail.com>
2024-09-05 10:36:46 -07:00
Timothy Carambat
fdc3add53c
Api session id support ()
* Refactor api endpoint chat handler to its own function
remove legacy `chatWithWorkspace` and cleanup `index.js`

* Add `sessionId` in dev API to logically partition chats while remaining stateless
2024-08-21 15:25:47 -07:00
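A `sessionId` lets API consumers keep separate chat threads inside one workspace without the server storing any session state: history is simply filtered by the id the caller supplies. A hedged sketch of calling the dev API with a session id; the route and body shape are assumptions based on the commit notes, and the host is a placeholder:

```js
// POST a chat to the developer API, partitioning history by sessionId.
async function chatInSession({ baseUrl, apiKey, workspaceSlug, sessionId, message }) {
  const res = await fetch(`${baseUrl}/api/v1/workspace/${workspaceSlug}/chat`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({ message, mode: "chat", sessionId }),
  });
  return res.json();
}

// Two different sessionIds keep two independent histories in the same workspace.
chatInSession({
  baseUrl: "http://localhost:3001", // placeholder host
  apiKey: process.env.API_KEY,
  workspaceSlug: "my-workspace",
  sessionId: "user-123",
  message: "Hello!",
});
```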
Timothy Carambat
c8fe254d45
Omit invalid response.text values and prompts ()
* Omit invalid `response.text` values and `prompts`
resolves 

* remove import
2024-08-15 14:22:27 -07:00
Timothy Carambat
99f2c25b1c
Agent Context window + context window refactor. ()
* Enable agent context windows to be accurate per provider:model

* Refactor model mapping to external file
Add token count to document length instead of char-count
reference promptWindowLimit from AIProvider in central location

* remove unused imports
2024-08-15 12:13:28 -07:00
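Centralizing the model-to-context-window mapping means every provider asks one place for `promptWindowLimit`, and document budgeting can be done in tokens rather than characters. A small sketch of that shape; the model ids, limits, and fallback below are illustrative, not the project's actual table:

```js
// Illustrative central mapping of model id -> context window (tokens).
const MODEL_CONTEXT_WINDOWS = {
  "gpt-4o": 128000,
  "claude-3-5-sonnet-20240620": 200000,
  "gemini-1.5-pro": 2097152,
};

function promptWindowLimit(model) {
  return MODEL_CONTEXT_WINDOWS[model] ?? 8192; // conservative fallback
}

// Budget documents by token count instead of character count.
function fitsInWindow(documentTokenCount, model, reservedForResponse = 1024) {
  return documentTokenCount + reservedForResponse <= promptWindowLimit(model);
}
```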
Timothy Carambat
d072875e43
Add piperTTS in-browser text-to-speech ()
* Add piperTTS in-browser text-to-speech

* update vite config

* Add voice default + change prod public URL

* uncheck file

* Error handling
bump package for better quality and voices

* bump package

* Remove pre-packed WASM - will not support an offline-first solution for docker

* attach TTSProvider telem
2024-08-07 11:09:51 -07:00
Sean Hatfield
7273c892a1
Ollama performance mode option ()
* ollama performance mode option

* Change ENV prop
Move perf setting to advanced

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-08-02 13:29:17 -07:00
RahSwe
c55ef33fce
Gemini Pro 1.5, API support for 2M context and new experimental model () 2024-08-02 10:24:31 -07:00
Timothy Carambat
38fc181238
Add multimodality support ()
* Add multimodality support

* Add Bedrock, KoboldCpp, LocalAI, and TextWebGenUI multi-modal

* temp dev build

* patch bad import

* noscrolls for windows dnd

* noscrolls for windows dnd

* update README

* update README

* add multimodal check
2024-07-31 10:47:49 -07:00
Timothy Carambat
5e73dce506
Enable editing of OpenRouter stream timeout for slower connections () 2024-07-29 11:49:14 -07:00
Timothy Carambat
61e214aa8c
Add support for Groq /models endpoint ()
* Add support for Groq /models endpoint

* linting
2024-07-24 08:35:52 -07:00
Timothy Carambat
9366e69d88
Add AWS bedrock support for LLM + agents ()
add AWS bedrock support for LLM + agents
2024-07-23 16:35:37 -07:00
Timothy Carambat
76aa2a4fd4
Implement support for selecting basic keep_alive times for Ollama () 2024-07-22 14:44:47 -07:00
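Ollama's `keep_alive` parameter controls how long a model stays loaded in memory after a request; the setting above forwards the user's chosen duration. A short sketch against Ollama's REST chat endpoint; the host is the default local address and the model name is a placeholder:

```js
// keep_alive accepts durations like "5m", "1h", or 0 to unload immediately.
async function ollamaChat(prompt, { keepAlive = "5m", model = "llama3.1" } = {}) {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: prompt }],
      keep_alive: keepAlive, // keeps the model resident between chats
      stream: false,
    }),
  });
  const data = await res.json();
  return data.message?.content;
}
```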
timothycarambat
2185753068 patch text.substring bug from compressor 2024-07-22 12:53:11 -07:00
timothycarambat
86a66ba569 change alpaca format to include citations and system prompt 2024-07-15 16:05:53 -07:00
Timothy Carambat
0b845fbb1c
Deprecate .isSafe moderation ()
Add type defs to helpers
2024-06-28 15:32:30 -07:00
Sean Hatfield
dde8bc238b
[FIX] Remove Azure URL validation ()
remove azure url validation
2024-06-25 12:10:51 -07:00