Sean Hatfield
75790e7e90
Remove native LLM option ( #3024 )
...
* remove native llm
* remove node-llama-cpp from dockerfile
* remove unneeded items from dockerfile
---------
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
2025-01-27 13:42:52 -08:00
Jason
c757c3fb5f
feat: update novita AI logo and default model ( #3037 )
2025-01-27 08:41:12 -08:00
Timothy Carambat
2ca22abc9c
Add Version to AzureOpenAI ( #3023 )
2025-01-24 13:41:37 -08:00
Sean Hatfield
48dcb22b25
Dynamic fetching of TogetherAI models ( #3017 )
...
* implement dynamic fetching of togetherai models
* implement caching for togetherai models
* update gitignore for togetherai model caching
* Remove models.json from git tracking
* Remove .cached_at from git tracking
* lint
* revert unneeded change
---------
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
2025-01-24 11:06:59 -08:00
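A minimal sketch of the dynamic fetch plus on-disk cache described in the commit above. The file names (`models.json`, `.cached_at`) mirror the commit bullets; the directory, TTL, and fetch details are assumptions, not the project's actual implementation.

```js
// Sketch only: fetch the TogetherAI model list and cache it on disk.
const fs = require("fs");
const path = require("path");

const CACHE_DIR = path.resolve(__dirname, "models", "togetherAi"); // hypothetical location
const CACHE_TTL_MS = 1000 * 60 * 60 * 24; // assume a 1-day cache lifetime

async function togetherAiModels(apiKey) {
  const modelFile = path.join(CACHE_DIR, "models.json");
  const stampFile = path.join(CACHE_DIR, ".cached_at");

  // Serve from cache while it is still fresh.
  if (fs.existsSync(modelFile) && fs.existsSync(stampFile)) {
    const age = Date.now() - Number(fs.readFileSync(stampFile, "utf8"));
    if (age < CACHE_TTL_MS) return JSON.parse(fs.readFileSync(modelFile, "utf8"));
  }

  // Otherwise refetch and rewrite the cache files.
  const res = await fetch("https://api.together.xyz/v1/models", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`TogetherAI /models failed: ${res.status}`);
  const models = await res.json();

  fs.mkdirSync(CACHE_DIR, { recursive: true });
  fs.writeFileSync(modelFile, JSON.stringify(models));
  fs.writeFileSync(stampFile, String(Date.now()));
  return models;
}
```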
timothycarambat
273d116586
linting
2025-01-23 16:43:18 -08:00
Sean Hatfield
57f4f46a39
Bump perplexity models ( #3014 )
...
* bump perplexity models
---------
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
2025-01-23 16:35:38 -08:00
Timothy Carambat
c4f75feb08
Support historical message image inputs/attachments for n+1 queries ( #2919 )
...
* Support historical message image inputs/attachments for n+1 queries
* patch gemini
* OpenRouter vision support cleanup
* xai vision history support
* Mistral logging
---------
Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
2025-01-16 13:49:06 -08:00
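A sketch of what carrying image attachments through prior turns can look like: stored history entries are folded into OpenAI-style multimodal content parts so a follow-up (n+1) query still sees earlier images. The history and attachment shape here is an assumption, not AnythingLLM's actual schema.

```js
// Sketch: map stored chat history (with optional attachments) into
// OpenAI-style message content arrays. Field names are illustrative.
function toOpenAiMessages(history = []) {
  return history.map(({ role, content, attachments = [] }) => {
    if (!attachments.length) return { role, content };
    return {
      role,
      content: [
        { type: "text", text: content },
        ...attachments.map((att) => ({
          type: "image_url",
          image_url: { url: att.contentString }, // e.g. a base64 data URL
        })),
      ],
    };
  });
}
```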
Timothy Carambat
21af81085a
Add caching to Gemini /models ( #2969 )
...
rename file typo
2025-01-13 13:12:03 -08:00
timothycarambat
4b2bb529c9
enable leftover mlock setting
2024-12-28 17:48:24 -08:00
Timothy Carambat
a51de73aaa
update ollama performance mode ( #2874 )
2024-12-18 11:21:35 -08:00
Timothy Carambat
b082c8e441
Add support for gemini authenticated models endpoint ( #2868 )
...
* Add support for gemini authenticated models endpoint
add customModels entry
add un-authed fallback to default listing
separate models by experimental status
resolves #2866
* add back improved logic for apiVersion decision making
2024-12-17 15:20:26 -08:00
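A sketch of the authenticated listing with an un-authed fallback described above, against the public generativelanguage.googleapis.com models endpoint. The default list and the "experimental" heuristic are illustrative assumptions.

```js
// Sketch: list Gemini models when a key is present, otherwise (or on failure)
// fall back to a static default set. The fallback entries are placeholders.
const DEFAULT_GEMINI_MODELS = ["gemini-1.5-pro-latest", "gemini-1.5-flash-latest"];

async function geminiModels(apiKey = null, apiVersion = "v1beta") {
  if (!apiKey) return DEFAULT_GEMINI_MODELS;

  const url = `https://generativelanguage.googleapis.com/${apiVersion}/models?key=${apiKey}`;
  const res = await fetch(url);
  if (!res.ok) return DEFAULT_GEMINI_MODELS; // un-authed/default fallback on any failure

  const { models = [] } = await res.json();
  // The commit also separates models by experimental status; a simple heuristic:
  return models.map((m) => ({
    id: m.name.replace("models/", ""),
    experimental: /exp/i.test(m.name),
  }));
}
```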
Timothy Carambat
dd7c4675d3
LLM performance metric tracking ( #2825 )
...
* WIP performance metric tracking
* fix: patch UI trying to .toFixed() null metric
Anthropic tracking migration
cleanup logs
* Apipie implementation, not tested
* Cleanup Anthropic notes, Add support for AzureOpenAI tracking
* bedrock token metric tracking
* Cohere support
* feat: improve default stream handler to track providers that are actually OpenAI-compliant in usage reporting
add deepseek support
* feat: Add FireworksAI tracking reporting
fix: improve handler when usage:null is reported (why?)
* Add token reporting for GenericOpenAI
* token reporting for koboldcpp + lmstudio
* lint
* support Groq token tracking
* HF token tracking
* token tracking for togetherai
* LiteLLM token tracking
* linting + Mistral token tracking support
* XAI token metric reporting
* native provider runner
* LocalAI token tracking
* Novita token tracking
* OpenRouter token tracking
* Apipie stream metrics
* textwebgenui token tracking
* perplexity token reporting
* ollama token reporting
* lint
* put back comment
* Rip out LC ollama wrapper and use official library
* patch images with new ollama lib
* improve ollama offline message
* fix image handling in ollama llm provider
* lint
* NVIDIA NIM token tracking
* update openai compatibility responses
* UI/UX show/hide metrics on click for user preference
* update bedrock client
---------
Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
2024-12-16 14:31:17 -08:00
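A sketch of the kind of per-response metrics the tracking work above reports: time to first token and output tokens per second, derived from whatever usage block a provider returns on its stream. All names here are illustrative, not the project's actual tracker.

```js
// Sketch: collect timing while tokens stream in, then combine with the
// provider-reported usage (which some providers return as null).
function createMetricTracker() {
  const startedAt = Date.now();
  let firstTokenAt = null;

  return {
    onToken() {
      if (firstTokenAt === null) firstTokenAt = Date.now();
    },
    finalize(usage = null) {
      const durationSec = (Date.now() - startedAt) / 1000;
      const completionTokens = usage?.completion_tokens ?? null;
      return {
        ttftMs: firstTokenAt ? firstTokenAt - startedAt : null,
        durationSec,
        completionTokens,
        outputTps: completionTokens ? completionTokens / durationSec : null,
      };
    },
  };
}
```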
wolfganghuse
d145602d5a
Add attachments to GenericOpenAI prompt ( #2831 )
...
* added attachments to genericopenai prompt
* add devnote
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-12-16 12:03:51 -08:00
Sean Hatfield
f651ca8628
APIPie LLM provider improvements ( #2695 )
...
* fix apipie streaming/sort by chat models
* lint
* linting
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-12-13 14:18:02 -08:00
timothycarambat
f8e91e1ffa
patch gemini-2.0-key
2024-12-11 16:52:31 -08:00
timothycarambat
69b672b625
add gemini 1206 and gemini-2.0-flash exp models
...
connect #2788
2024-12-11 09:04:29 -08:00
Timothy Carambat
a69997a715
update chat model filters for openai ( #2803 )
2024-12-11 08:55:10 -08:00
timothycarambat
4b09a06590
persist token window for NIM, not only on model change
2024-12-05 11:57:07 -08:00
Timothy Carambat
b2dd35fe15
Add Support for NVIDIA NIM ( #2766 )
...
* Add Support for NVIDIA NIM
* update README
* linting
2024-12-05 10:38:23 -08:00
timothycarambat
62be0cd0c5
add gemini-exp-1121 to experimental set
2024-11-22 09:36:44 -08:00
timothycarambat
246152c024
Add gemini-exp-1121
...
resolves #2657
2024-11-21 11:02:43 -08:00
Sean Hatfield
55fc9cd6b1
TogetherAI Llama 3.2 vision models support ( #2666 )
...
* togetherai llama 3.2 vision models support
* remove console log
* fix listing to reflect what is on the chart
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-11-21 10:42:42 -08:00
Timothy Carambat
26e2d8cc3b
Add more experimental models from Gemini ( #2663 )
2024-11-20 09:52:33 -08:00
timothycarambat
af16332c41
remove dupe key in ModelMap
2024-11-19 20:20:28 -08:00
Sean Hatfield
e29f054706
Bump TogetherAI models ( #2645 )
...
* bump together ai models
* Run post-bump command
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-11-18 13:08:26 -08:00
Sean Hatfield
27b07d46b3
Patch bad models endpoint path in LM Studio embedding engine ( #2628 )
...
* patch bad models endpoint path in lm studio embedding engine
* convert to OpenAI wrapper compatibility
* add URL force parser/validation for LMStudio connections
* remove comment
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-11-13 12:34:42 -08:00
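A sketch of the "force parser/validation" idea for the LM Studio connection URL noted above: reject anything that is not a parseable http(s) URL and normalize the path so the OpenAI-compatible `/v1` routes resolve. The exact rules are assumptions.

```js
// Sketch: validate and normalize a user-supplied LM Studio base path.
function parseLMStudioBasePath(input = "") {
  let url;
  try {
    url = new URL(input);
  } catch {
    throw new Error("LM Studio base path must be a valid URL, e.g. http://localhost:1234");
  }
  if (!["http:", "https:"].includes(url.protocol))
    throw new Error("LM Studio base path must use http or https");

  // Strip trailing slashes and ensure a single /v1 suffix.
  const origin = `${url.protocol}//${url.host}`;
  const path = url.pathname.replace(/\/+$/, "");
  return path.endsWith("/v1") ? `${origin}${path}` : `${origin}${path}/v1`;
}
```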
timothycarambat
5aa79128f7
bump Anthropic models
2024-11-06 08:14:08 -08:00
Timothy Carambat
80565d79e0
2488 novita ai llm integration ( #2582 )
...
* feat: add new model provider: Novita AI
* feat: finished novita AI
* fix: code lint
* remove unneeded logging
* add back log for novita stream not self closing
* Clarify ENV vars for LLM/embedder separation for future
Patch ENV check for workspace/agent provider
---------
Co-authored-by: Jason <ggbbddjm@gmail.com>
Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
2024-11-04 11:34:29 -08:00
Timothy Carambat
dd2756b570
Add sessionToken validation to AWS Bedrock connection auth ( #2554 )
2024-10-29 16:34:52 -07:00
Timothy Carambat
5bc96bca88
Add Grok/XAI support for LLM & agents ( #2517 )
...
* Add Grok/XAI support for LLM & agents
* forgot files
2024-10-21 16:32:49 -07:00
Timothy Carambat
446164d7b9
Add Groq vision preview support ( #2511 )
...
Adds support for only the Llama 3.2 vision models on Groq. This comes with many conditionals and nuances to handle, as Groq's vision implementation is quite rough right now.
2024-10-21 12:37:39 -07:00
Timothy Carambat
7342839e77
Passthrough agentModel for LMStudio ( #2499 )
2024-10-18 11:44:48 -07:00
Timothy Carambat
93d7ce6d34
Handle Bedrock models that cannot use system prompts ( #2489 )
2024-10-16 12:31:04 -07:00
Sean Hatfield
fa528e0cf3
OpenAI o1 model support ( #2427 )
...
* support openai o1 models
* Prevent O1 use for agents
Add getter for isO1Model
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-10-15 19:42:13 -07:00
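A sketch of the isO1Model getter mentioned above and how it might gate request parameters, since early o1 models only accepted the default temperature. This is an illustration, not the project's actual provider class.

```js
// Sketch: detect o1 models by prefix and drop unsupported parameters.
class OpenAiProvider {
  constructor(model = "gpt-4o") {
    this.model = model;
  }

  get isO1Model() {
    return this.model.startsWith("o1");
  }

  buildRequest(messages, temperature = 0.7) {
    return {
      model: this.model,
      messages,
      // o1 models only accept the default temperature, so omit overrides.
      ...(this.isO1Model ? {} : { temperature }),
    };
  }
}
```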
Sean Hatfield
6674e5aab8
Support free-form input for workspace model for providers with no /models endpoint ( #2397 )
...
* support generic openai workspace model
* Update UI for free form input for some providers
---------
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
2024-10-15 15:24:44 -07:00
Timothy Carambat
bce7988683
Integrate Apipie support directly ( #2470 )
...
resolves #2464
resolves #989
Note: Streaming not supported
2024-10-15 12:36:06 -07:00
a4v2d4
cadc09d71a
[FEAT] Add Llama 3.2 models to Fireworks AI's LLM selection dropdown ( #2384 )
...
Add Llama 3.2 3B and 1B models to Fireworks AI LLM selection
2024-09-28 15:30:56 -07:00
Sean Hatfield
7390bae6f6
Support DeepSeek ( #2377 )
...
* add deepseek support
* lint
* update deepseek context length
* add deepseek to onboarding
---------
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
2024-09-26 12:55:12 -07:00
Timothy Carambat
a781345a0d
Enable Mistral Multimodal ( #2343 )
...
* Enable Mistral Multimodal
* remove console
2024-09-21 16:17:17 -05:00
Timothy Carambat
a30fa9b2ed
1943 add fireworksai support ( #2300 )
...
* Issue #1943 : Add support for LLM provider - Fireworks AI
* Update UI selection boxes
Update base AI keys for future embedder support if needed
Add agent capabilities for FireworksAI
* class only return
---------
Co-authored-by: Aaron Van Doren <vandoren96+1@gmail.com>
2024-09-16 12:10:44 -07:00
Timothy Carambat
906eb70ca1
bump Perplexity models ( #2275 )
2024-09-12 13:13:47 -07:00
Timothy Carambat
c612239ecb
Add Gemini exp models ( #2268 )
...
Add Gemini models
resolves #2263
2024-09-11 13:03:14 -07:00
Timothy Carambat
b4651aff35
Support gpt-4o for Azure deployments ( #2182 )
2024-08-26 14:35:42 -07:00
timothycarambat
cb7cb2d976
Add 405B to perplexity
2024-08-19 12:26:22 -07:00
Timothy Carambat
99f2c25b1c
Agent Context window + context window refactor. ( #2126 )
...
* Enable agent context windows to be accurate per provider:model
* Refactor model mapping to external file
Add token count to document length instead of char-count
reference promptWindowLimit from AIProvider in a central location
* remove unused imports
2024-08-15 12:13:28 -07:00
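A sketch of the external model-map refactor described above: context windows keyed by provider and model, with one central promptWindowLimit lookup that both chat and agent paths can share. The structure and numbers are illustrative.

```js
// Sketch: a shared map of prompt window limits with a single lookup helper.
const MODEL_MAP = {
  openai: { "gpt-4o": 128000, "gpt-3.5-turbo": 16385 },
  anthropic: { "claude-3-5-sonnet-20240620": 200000 },
};

function promptWindowLimit(provider, model, fallback = 4096) {
  return MODEL_MAP?.[provider]?.[model] ?? fallback;
}

// e.g. an agent can size its context from the same source as the chat path:
// const limit = promptWindowLimit("openai", "gpt-4o");
```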
Shahar
4365d69359
Fix TypeError by replacing this.openai.createChatCompletion with the correct function call ( #2117 )
...
fixed new api syntax
2024-08-14 14:39:48 -07:00
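The TypeError above comes from calling a v3-style helper that no longer exists on the v4 OpenAI Node client. A minimal before/after, assuming the official `openai` v4 SDK:

```js
// Sketch: the v3 helper was removed; v4 uses chat.completions.create.
const OpenAI = require("openai");
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function chat(messages, model = "gpt-4o") {
  // v3 (throws a TypeError on a v4 client):
  // const res = await openai.createChatCompletion({ model, messages });

  // v4 replacement:
  const res = await openai.chat.completions.create({ model, messages });
  return res.choices[0].message.content;
}
```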
PyKen
a2571024a9
Add prompt window limits for gpt-4o-* models ( #2104 )
2024-08-13 09:13:36 -07:00
Timothy Carambat
f06ef6180d
add exp model to v1Beta ( #2082 )
2024-08-09 14:19:49 -07:00
Sean Hatfield
7273c892a1
Ollama performance mode option ( #2014 )
...
* ollama performance mode option
* Change ENV prop
Move perf setting to advanced
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-08-02 13:29:17 -07:00
Timothy Carambat
ba8e4e5d3e
handle OpenRouter exceptions on streaming ( #2033 )
2024-08-02 12:23:39 -07:00
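A sketch of handling the mid-stream failures referenced above: keep whatever text already arrived and surface the error instead of crashing the response. The chunk shape assumes an OpenAI-compatible stream; names are illustrative.

```js
// Sketch: collect a streamed reply, returning partial text plus any error.
async function collectStream(stream, onToken = () => {}) {
  let text = "";
  try {
    for await (const chunk of stream) {
      const token = chunk?.choices?.[0]?.delta?.content ?? "";
      if (!token) continue;
      text += token;
      onToken(token);
    }
  } catch (error) {
    // OpenRouter can error or drop a stream mid-flight; return the partial
    // answer alongside the error rather than discarding the whole reply.
    return { text, error: error.message };
  }
  return { text, error: null };
}
```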