Ollama is a powerful framework for running large language models (LLMs) locally, supporting various language models including Llama 2, Mistral, and more. Now, LobeChat supports integration with Ollama, meaning you can easily enhance your application by using the language models provided by Ollama in LobeChat.
This document will guide you on how to use Ollama in LobeChat:
Download Ollama for macOS and unzip/install it.
Because Ollama's default configuration restricts access to localhost only, you need to set the environment variable OLLAMA_ORIGINS to enable cross-origin access and port listening. Use launchctl to set the environment variable:
launchctl setenv OLLAMA_ORIGINS "*"
After setting up, restart the Ollama application.
Now, you can start conversing with the local LLM in LobeChat.
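LobeChat talks to Ollama over its local HTTP API, which listens on port 11434 by default. If you want to exercise that same API directly, here is a minimal sketch in Python (the function names are illustrative; it assumes the default port and a model you have already pulled):

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default listen address

def build_generate_payload(model: str, prompt: str) -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint (non-streaming)."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(model: str, prompt: str) -> str:
    """Send a single prompt to the local Ollama server and return its reply."""
    req = request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=build_generate_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama server and a pulled model):
# print(generate("llama3", "Why is the sky blue?"))
```

This is the same endpoint LobeChat's Ollama integration relies on, which is why OLLAMA_ORIGINS must permit cross-origin requests from the browser.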
Download Ollama for Windows and install it.
Since Ollama's default configuration allows local access only, you need to set the environment variable OLLAMA_ORIGINS to enable cross-origin access and port listening.
On Windows, Ollama inherits your user and system environment variables.
In the system environment variable settings, create or edit the variable OLLAMA_ORIGINS for your user account, setting its value to *.
Click OK/Apply to save, then restart the system.
Run Ollama again.
Now, you can start conversing with the local LLM in LobeChat.
Install using the following command:
curl -fsSL https://ollama.com/install.sh | sh
Alternatively, you can refer to the Linux manual installation guide.
Due to Ollama's default configuration, which allows local access only, you need to set the environment variable OLLAMA_ORIGINS for cross-origin access and port listening. If Ollama runs as a systemd service, use systemctl to set the environment variable:
sudo systemctl edit ollama.service
Add a line Environment under the [Service] section for each environment variable:
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"
Save and exit. Reload systemd and restart Ollama:
sudo systemctl daemon-reload
sudo systemctl restart ollama
Now, you can start conversing with the local LLM in LobeChat.
If you prefer using Docker, Ollama provides an official Docker image that you can pull using the following command:
docker pull ollama/ollama
Since Ollama's default configuration allows local access only, you need to set the environment variable OLLAMA_ORIGINS for cross-origin access and port listening.
If Ollama runs as a Docker container, you can add the environment variable to the docker run command.
docker run -d --gpus=all -v ollama:/root/.ollama -e OLLAMA_ORIGINS="*" -p 11434:11434 --name ollama ollama/ollama
Now, you can start conversing with the local LLM in LobeChat.
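To verify that the container's API is actually reachable from the host, you can probe Ollama's version endpoint. A small sketch (the helper name is my own; it assumes the default port mapping from the docker run command above):

```python
import json
from urllib import error, request

def ollama_reachable(base_url: str = "http://localhost:11434") -> bool:
    """Return True if an Ollama server answers on base_url's /api/version endpoint."""
    try:
        with request.urlopen(f"{base_url}/api/version", timeout=2) as resp:
            return "version" in json.loads(resp.read())
    except (error.URLError, OSError, ValueError):
        return False
```

If this returns False, check that the container is running (docker ps) and that port 11434 is published to the host.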
Ollama supports various models, which you can view in the Ollama Library and choose the appropriate model based on your needs.
In LobeChat, we have enabled some common large language models by default, such as llama3, Gemma, Mistral, etc. When you select a model for conversation, we will prompt you to download that model.
Once downloaded, you can start conversing.
Alternatively, you can install models by executing the following command in the terminal, using llama3 as an example:
ollama pull llama3
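Ollama also reports which models are installed locally via its /api/tags endpoint, which is how a client can decide whether a model still needs to be pulled. A sketch of that check (helper names are illustrative):

```python
import json
from urllib import request

def installed_models(base_url: str = "http://localhost:11434") -> list[str]:
    """Fetch the names of locally installed models from Ollama's /api/tags endpoint."""
    with request.urlopen(f"{base_url}/api/tags") as resp:
        data = json.loads(resp.read())
    return [m["name"] for m in data.get("models", [])]

def has_model(names: list[str], wanted: str) -> bool:
    """True if `wanted` matches an installed model name, ignoring the :tag suffix."""
    return any(n.split(":")[0] == wanted for n in names)

# Example (requires a running Ollama server):
# print(has_model(installed_models(), "llama3"))
```

After ollama pull llama3 completes, the model appears in this listing as llama3:latest.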
You can find Ollama's configuration options in Settings -> Language Models, where you can configure Ollama's proxy, model names, etc.
Visit Integrating with Ollama to learn how to deploy LobeChat to meet integration needs with Ollama.