This guide walks you through setting up DeepSeek locally (or on a server) on Ubuntu 22.04 using Ollama. It is aimed at developers, researchers, and organizations who want more control and privacy in AI-driven applications. Ollama makes it straightforward to deploy and manage large language models (LLMs) on your own hardware or within a private network.
We use Open WebUI to access DeepSeek in the browser. Open WebUI is an extensible, self-hosted AI chat interface that works with Ollama and can run entirely offline.
Prerequisites
- Ubuntu 22.04 server (or desktop) up and running
- SSH access if the server is remote
- Sudo access on the server
- At least 8 GB RAM (16 GB recommended for smoother inference)
- At least 4 CPU cores; more cores improve throughput
Ensure the server is up to date
Before proceeding, ensure your server is up to date.
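For example:

```bash
sudo apt update && sudo apt upgrade -y
```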
Set the hostname on the server. Mine will be ai-beast.citizix.com.
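With hostnamectl (substitute your own hostname):

```bash
sudo hostnamectl set-hostname ai-beast.citizix.com
```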
Edit /etc/hosts and add the hostname so that it resolves locally.
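Open the file with your preferred editor, for example:

```bash
sudo vim /etc/hosts
```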
Update this line
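On a default Ubuntu install this is usually the 127.0.1.1 entry; point it at your hostname, for example:

```
127.0.1.1 ai-beast.citizix.com ai-beast
```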
Install python and git
Install Python, Git, and pip. These are needed for Ollama and Open WebUI to run.
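On Ubuntu 22.04 this is typically:

```bash
sudo apt install -y python3 python3-pip git
```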
Confirm that the versions installed are correct
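For example:

```bash
python3 --version
pip3 --version
git --version
```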
Install ollama
Use this command to install ollama
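Ollama publishes an official install script at ollama.com:

```bash
curl -fsSL https://ollama.com/install.sh | sh
```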
Then confirm installation:
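For example:

```bash
ollama --version
```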
Start the ollama service
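The installer registers a systemd service named ollama:

```bash
sudo systemctl start ollama
```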
Confirm Ollama is running:
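A quick check with systemd:

```bash
sudo systemctl status ollama
```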
If you see “no compatible GPUs were discovered”, Ollama will use CPU for inference. That works fine for smaller models like deepseek-r1:7b; add a supported GPU later if you want faster inference.
Enable the ollama service so it starts on boot.
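Again with systemctl:

```bash
sudo systemctl enable ollama
```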
Download deepseek model
Download and run the DeepSeek model - DeepSeek-R1-Distill-Qwen-7B, published in the Ollama library as deepseek-r1:7b.
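This pulls the model (a few GB) and drops you into an interactive chat prompt:

```bash
ollama run deepseek-r1:7b
```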
Exit the prompt with Ctrl+d
List available models
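Ollama's built-in list command:

```bash
ollama list
```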
Example output:
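Illustrative output (the ID and timestamp will differ on your machine; deepseek-r1:7b is roughly 4.7 GB):

```
NAME              ID              SIZE      MODIFIED
deepseek-r1:7b    <model id>      4.7 GB    2 minutes ago
```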
For more DeepSeek and other models, see the Ollama library.
Set up Open WebUI for DeepSeek
Open WebUI is Python based, which means we need a Python environment to set it up. We will use a virtualenv.
First, install the Python virtualenv package.
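On Ubuntu, either python3-venv or python3-virtualenv does the job; for example:

```bash
sudo apt install -y python3-venv
```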
Then create a virtualenv at ~/open-webui-venv that we can use.
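Using the built-in venv module:

```bash
python3 -m venv ~/open-webui-venv
```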
Finally activate the virtualenv
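Your shell prompt should change once it is active:

```bash
source ~/open-webui-venv/bin/activate
```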
Install Open WebUI
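Open WebUI is published on PyPI, so inside the activated virtualenv:

```bash
pip install open-webui
```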
Start Open WebUI
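Still inside the virtualenv:

```bash
open-webui serve
```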
This starts the Open WebUI service. It is available at http://localhost:8080 or http://<server-ip>:8080. If you access it from another machine, ensure port 8080 is allowed (e.g. sudo ufw allow 8080/tcp && sudo ufw reload).
Accessing Open WebUI
Once set up, you can load the UI in the browser. After the welcome screen, please proceed to do the following:
- Create the admin account when prompted
- Select the model you installed (e.g. deepseek-r1:7b) from the dropdown
- Start interacting with DeepSeek
Running Open WebUI persistently
The open-webui serve process stops when you close the terminal. To keep it running after logout, run it in the background or under a process manager. Example with nohup:
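A minimal sketch (the log path is just a suggestion):

```bash
nohup open-webui serve > ~/open-webui.log 2>&1 &
```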
For a proper always-on setup, use a systemd user service or install Open WebUI via Docker; see the Open WebUI docs for options.
Verifying the setup
- Ollama: curl http://127.0.0.1:11434/api/tags should return JSON listing your models.
- Open WebUI: Open http://localhost:8080 (or your server IP:8080), sign in, pick deepseek-r1:7b, and send a test message.
Summary
You now have DeepSeek running locally with Ollama and a web interface via Open WebUI. You can add more models with ollama run <model> and manage them from the same UI.