How to Set Up DeepSeek Locally on Ubuntu 22.04 Server With Ollama

Step-by-step guide to run DeepSeek R1 locally on Ubuntu 22.04 with Ollama and Open WebUI. Self-host LLMs for privacy and control.

This guide walks you through setting up DeepSeek locally (or on a server) on Ubuntu 22.04 using Ollama. It is aimed at developers, researchers, and organizations that want more control and privacy in AI-driven applications. Ollama makes it straightforward to deploy and manage large language models (LLMs) on your own hardware or within a private network.

We use Open WebUI to access DeepSeek in the browser. Open WebUI is an extensible, self-hosted AI chat interface that works with Ollama and can run entirely offline.

Prerequisites

  • Ubuntu 22.04 server (or desktop) up and running
  • SSH access if the server is remote
  • Sudo access on the server
  • At least 8 GB RAM (16 GB recommended for smoother inference)
  • At least 4 CPUs; more cores improve throughput
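
You can quickly confirm that the machine has enough memory and CPU cores with standard tools:

free -h     # total and available memory
nproc       # number of CPU cores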

Ensure the server is up to date

Before proceeding, ensure your server is up to date.

sudo apt update
sudo apt upgrade -y

Set the hostname on the server. Mine will be ai-beast.citizix.com.

sudo hostnamectl set-hostname ai-beast.citizix.com

Edit /etc/hosts and add the hostname so that it resolves locally.

sudo vim /etc/hosts

Update the 127.0.0.1 line so it looks like this:

127.0.0.1 localhost ai-beast.citizix.com ai-beast
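
To confirm the change, check that the hostname resolves and that the system reports the fully qualified name:

getent hosts ai-beast.citizix.com
hostname -f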

Install Python and Git

Install Python, Git, and pip. These are needed for Ollama and Open WebUI to run.

sudo apt install python3 python3-pip git -y

Confirm that the versions installed are correct

$ python3 --version
Python 3.12.3

$ pip3 --version
pip 24.0 from /usr/lib/python3/dist-packages/pip (python 3.12)

$ git --version
git version 2.43.0

Install Ollama

Use this command to install Ollama:

curl -fsSL https://ollama.com/install.sh | sh

Then confirm installation:

ollama --version

Start the Ollama service

sudo systemctl start ollama

Confirm Ollama is running:

$ sudo systemctl status ollama
● ollama.service - Ollama Service
     Loaded: loaded (/etc/systemd/system/ollama.service; enabled; preset: enabled)
     Active: active (running) since Tue 2025-02-18 08:05:43 UTC; 58s ago
   Main PID: 21515 (ollama)
      Tasks: 7 (limit: 4586)
     Memory: 30.8M (peak: 31.2M)
        CPU: 80ms
     CGroup: /system.slice/ollama.service
             └─21515 /usr/local/bin/ollama serve

Feb 18 08:05:43 ai-beast.citizix.com ollama[21515]: Couldn't find '/usr/share/ollama/.ollama/id_ed25519'. Generating new private key.
Feb 18 08:05:43 ai-beast.citizix.com ollama[21515]: Your new public key is:
Feb 18 08:05:43 ai-beast.citizix.com ollama[21515]: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGgjIKB86+V3H5Fs8dFiOeryo5kiMCqDAySLlqFa26e5
Feb 18 08:05:43 ai-beast.citizix.com ollama[21515]: 2025/02/18 08:05:43 routes.go:1186: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_>
Feb 18 08:05:43 ai-beast.citizix.com ollama[21515]: time=2025-02-18T08:05:43.307Z level=INFO source=images.go:432 msg="total blobs: 0"
Feb 18 08:05:43 ai-beast.citizix.com ollama[21515]: time=2025-02-18T08:05:43.308Z level=INFO source=images.go:439 msg="total unused blobs removed: 0"
Feb 18 08:05:43 ai-beast.citizix.com ollama[21515]: time=2025-02-18T08:05:43.309Z level=INFO source=routes.go:1237 msg="Listening on 127.0.0.1:11434 (version 0.5.11)"
Feb 18 08:05:43 ai-beast.citizix.com ollama[21515]: time=2025-02-18T08:05:43.316Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Feb 18 08:05:43 ai-beast.citizix.com ollama[21515]: time=2025-02-18T08:05:43.329Z level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
Feb 18 08:05:43 ai-beast.citizix.com ollama[21515]: time=2025-02-18T08:05:43.329Z level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compu>

If you see “no compatible GPUs were discovered”, Ollama will use CPU for inference. That works fine for smaller models like deepseek-r1:7b; add a supported GPU later if you want faster inference.
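
You can also confirm that the API is responding on the address shown in the log (127.0.0.1:11434) by querying the version endpoint, which should return a small JSON document:

curl http://127.0.0.1:11434/api/version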

Enable the Ollama service to start on boot.

sudo systemctl enable ollama
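
Confirm that the service is both enabled and running:

systemctl is-enabled ollama
systemctl is-active ollama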

Download the DeepSeek model

Download and run the DeepSeek model (DeepSeek-R1-Distill-Qwen-7B):

ollama run deepseek-r1:7b

Exit the interactive prompt with Ctrl+d.
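
You don't have to use the interactive prompt. You can also pass a one-off prompt on the command line or call Ollama's HTTP API directly; the prompt text below is only an illustration:

# one-off prompt from the shell
ollama run deepseek-r1:7b "Explain what a virtual environment is in one sentence."

# the same request via the HTTP API
curl http://127.0.0.1:11434/api/generate -d '{
  "model": "deepseek-r1:7b",
  "prompt": "Explain what a virtual environment is in one sentence.",
  "stream": false
}'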

List available models

ollama list

Example output:

$ ollama list

NAME              ID              SIZE      MODIFIED
deepseek-r1:7b    0a8c26691023    4.7 GB    27 minutes ago

For more DeepSeek and other models, see the Ollama library.
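
For example, if 8 GB of RAM is tight, the library also publishes smaller distilled variants (deepseek-r1:1.5b at the time of writing), which you can pull and run the same way:

ollama pull deepseek-r1:1.5b
ollama run deepseek-r1:1.5b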

Set up Open WebUI for DeepSeek

Open WebUI is Python based, so we need a Python environment to set it up. We will use a virtual environment (venv).

First, install the Python venv package:

sudo apt install python3-venv -y

Then create a virtual environment at ~/open-webui-venv:

python3 -m venv ~/open-webui-venv

Finally, activate the virtual environment:

source ~/open-webui-venv/bin/activate

Install Open WebUI

pip install open-webui

Start Open WebUI

open-webui serve

This starts the Open WebUI service. It is available at http://localhost:8080 or http://<server-ip>:8080. If you access it from another machine, ensure port 8080 is allowed (e.g. sudo ufw allow 8080/tcp && sudo ufw reload).
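
If you need to bind to a specific address or a different port, open-webui serve accepts host and port options (run open-webui serve --help to confirm the exact flags for your version), and the firewall rule can be adjusted to match:

open-webui serve --host 0.0.0.0 --port 8080
sudo ufw allow 8080/tcp
sudo ufw reload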

Accessing Open WebUI

Once everything is set up, load the UI in your browser. After the welcome screen, do the following:

  • Create the admin account when prompted
  • Select the model you installed (e.g. deepseek-r1:7b) from the dropdown
  • Start interacting with DeepSeek

Running Open WebUI persistently

The open-webui serve process stops when you close the terminal. To keep it running after logout, run it in the background or under a process manager. Example with nohup:

nohup open-webui serve > ~/open-webui.log 2>&1 &

For a proper always-on setup, use a systemd user service or install Open WebUI via Docker; see the Open WebUI docs for options.
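
As one option, here is a sketch of a systemd user service that wraps the virtual environment created above; the unit contents are illustrative, so adapt the paths to your setup:

# create a user-level unit file for Open WebUI
mkdir -p ~/.config/systemd/user
cat > ~/.config/systemd/user/open-webui.service <<'EOF'
[Unit]
Description=Open WebUI

[Service]
ExecStart=%h/open-webui-venv/bin/open-webui serve
Restart=on-failure

[Install]
WantedBy=default.target
EOF

# reload, enable, and start the service
systemctl --user daemon-reload
systemctl --user enable --now open-webui

# keep user services running after you log out
sudo loginctl enable-linger "$USER"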

Verifying the setup

  • Ollama: curl http://127.0.0.1:11434/api/tags should return JSON listing your models.
  • Open WebUI: Open http://localhost:8080 (or your server IP:8080), sign in, pick deepseek-r1:7b, and send a test message.

Summary

You now have DeepSeek running locally with Ollama and a web interface via Open WebUI. You can add more models with ollama run <model> and manage them from the same UI.
