anything-llm/server
timothycarambat 7bee849c65 chore: Force VectorCache to always be on; update file picker spacing for attributes
2023-12-20 10:45:03 -08:00
endpoints Add ability to grab youtube transcripts via doc processor () 2023-12-18 17:17:26 -08:00
models patch: API key to localai service calls () 2023-12-11 14:18:28 -08:00
prisma Add user PFP support and context to logo () 2023-12-07 14:11:51 -08:00
storage feat: Embed on-instance Whisper model for audio/mp4 transcribing () 2023-12-15 11:20:13 -08:00
swagger AnythingLLM UI overhaul () 2023-10-23 13:10:34 -07:00
utils chore: Force VectorCache to always be on; 2023-12-20 10:45:03 -08:00
.env.example chore: Force VectorCache to always be on; 2023-12-20 10:45:03 -08:00
.gitignore AnythingLLM UI overhaul () 2023-10-23 13:10:34 -07:00
.nvmrc Implement Chroma Support () 2023-06-07 21:31:35 -07:00
index.js GitHub loader extension + extension support v1 () 2023-12-18 15:48:02 -08:00
nodemon.json Full developer api () 2023-08-23 19:15:07 -07:00
package.json [Feature] AnythingLLM use locally hosted Llama.cpp and GGUF files for inferencing () 2023-12-07 14:48:27 -08:00
yarn.lock [Feature] AnythingLLM use locally hosted Llama.cpp and GGUF files for inferencing () 2023-12-07 14:48:27 -08:00