Common Issues with Ollama

If you encounter an error stating llama:streaming - could not stream chat. Error: connect ECONNREFUSED 172.17.0.1:11434 when using AnythingLLM in a Docker container, it means Ollama cannot be reached from inside the Docker network. By default, Ollama binds only to localhost (127.0.0.1), so the host's address on the Docker bridge network (172.17.0.1) is not listening on port 11434. To allow the Dockerized AnythingLLM to reach the Ollama service, you must configure Ollama to bind to 0.0.0.0 or to a specific IP address that is reachable from the container.
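
As a quick sanity check, you can verify that Ollama is reachable from inside the container once it is bound to 0.0.0.0. The command below is a sketch that assumes the default Docker bridge network (gateway 172.17.0.1), a container named anythingllm, and that curl is available inside it; adjust these to your setup.

    docker exec -it anythingllm curl http://172.17.0.1:11434/api/tags

If the binding is correct, this returns Ollama's list of local models instead of a connection refused error.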

Setting Environment Variables on Mac

If Ollama is run as a macOS application, environment variables should be set using launchctl:

  1. For each environment variable, call launchctl setenv.
    launchctl setenv OLLAMA_HOST "0.0.0.0"
    
  2. Restart the Ollama application.
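
As an optional check, you can read the value back from launchd to confirm it was set:

    launchctl getenv OLLAMA_HOST

This should print 0.0.0.0.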

Setting Environment Variables on Linux

If Ollama is run as a systemd service, environment variables should be set using systemctl:

  1. Edit the systemd service by calling systemctl edit ollama.service. This will open an editor.
  2. For each environment variable, add an Environment line under the [Service] section:
    [Service]
    Environment="OLLAMA_HOST=0.0.0.0"
    
  3. Save and exit.
  4. Reload systemd and restart Ollama:
    systemctl daemon-reload
    systemctl restart ollama
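
After the restart, you can optionally confirm that the override took effect by asking systemd for the unit's environment:

    systemctl show ollama --property=Environment

The output should include OLLAMA_HOST=0.0.0.0.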
    

Setting Environment Variables on Windows

On Windows, Ollama inherits your user and system environment variables.

  1. First, quit Ollama by clicking on it in the taskbar.
  2. Edit system environment variables from the Control Panel.
  3. Edit or create new variables for your user account, such as OLLAMA_HOST and OLLAMA_MODELS.
  4. Click OK/Apply to save.
  5. Run ollama from a new terminal window.
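
If you prefer the command line, the same user-level variable can be set with setx instead of the Control Panel (this persists it for future terminal sessions, not the current one):

    setx OLLAMA_HOST "0.0.0.0"

Open a new terminal afterwards and run ollama from there so the change is picked up.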