Mirror of https://github.com/Mintplex-Labs/anything-llm.git (synced 2025-03-24 22:54:43 +00:00)
* Implement use of native embedder (all-MiniLM-L6-v2); stop showing Prisma queries during dev
* Add native embedder as an available embedder selection
* Wrap model loader in try/catch
* Print progress on download
* Add built-in LLM support (experimental)
* Update progress output for embedder
* Move embedder selection options to component
* Safety checks for modelfile
* Update ref
* Hide selection when on hosted subdomain
* Update documentation; hide localLlama when on hosted
* Safety checks for storage of models
* Update Dockerfile to pre-build llama.cpp bindings
* Update lockfile
* Add LangChain doc comment
* Remove extraneous --no-metal option
* Show data handling for private LLM
* Persist model in memory for N+1 chats
* Update import; update dev comment on token model size
* Update primary README
* chore: more readme updates and remove screenshots - too much to maintain, just use the app!
* Remove screenshot link
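The try/catch-wrapped model loader and the "persist model in memory for N+1 chats" items above could be sketched roughly as follows. This is a minimal illustration only, not the project's actual code: `loadModelFromDisk`, the stub embedder, and the module-level cache variable are all hypothetical stand-ins.

```javascript
// Module-level cache: the loaded model persists in memory across chats
// within the same server process, so chats 2..N skip the load cost.
let cachedModel = null;

// Hypothetical stand-in for the real native-embedder loader, which would
// load local model weights (e.g. an all-MiniLM-L6-v2 build) from disk.
async function loadModelFromDisk() {
  return { embed: (text) => text.trim().split(/\s+/).length };
}

async function getEmbedder() {
  if (cachedModel) return cachedModel; // reuse for N+1 chats
  try {
    cachedModel = await loadModelFromDisk();
  } catch (err) {
    // Safety check: surface a clear error instead of crashing the server
    // when the modelfile is missing or corrupt.
    throw new Error(`Native embedder failed to load: ${err.message}`);
  }
  return cachedModel;
}
```

The design point is simply that loading is lazy and failure is contained: the first caller pays the load cost, any later caller gets the cached instance, and a bad modelfile produces a descriptive error rather than an unhandled crash.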
Files in this directory:

- apiKeys.js
- cacheData.js
- documents.js
- invite.js
- systemSettings.js
- telemetry.js
- user.js
- vectors.js
- welcomeMessages.js
- workspace.js
- workspaceChats.js
- workspaceUsers.js