Django is a free and open-source, Python-based web framework that follows the model–template–views architectural pattern. Django advertises itself as “the web framework for perfectionists with deadlines” and “Django makes it easier to build better Web apps more quickly and with less code”. Django is known for the speed at which you can develop apps without compromising on robustness.
Docker is an open platform that performs operating-system-level virtualization, also known as containerization. It makes it possible to build, ship, and run distributed applications in controlled environments with defined rules.
Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.
Docker basics
- The Dockerfile: It’s a text file with a recipe to build an image of your app. Here you will add all the dependencies, from the OS and OS-level packages to Python packages, and also the application source code and anything else needed to run it.
- Image: The Dockerfile is used to build an image with everything needed packed inside. Image sizes can vary from a few megabytes to several gigabytes, and it’s recommended to keep them small, adding only what is needed. Images can also start from other base images, so you don’t have to start from scratch.
- Registry: It’s a repository of Docker images. Docker Hub is a public registry where you can find official images for most operating systems and programming languages.
- Layers: Instructions from the Dockerfile are executed from top to bottom and each line adds a layer to the image. Layers are cached, so they are reused in the next build if not modified. But the order of the instructions matters: if you modify a line in the Dockerfile, that layer and all the layers below it will be rebuilt. So to optimize build times, you should add the layers that you expect to change less frequently first and the ones that change more often later.
- Stages: With multi-stage builds, it is possible to build more than one artifact from a single Dockerfile. Also, the code and/or any generated artifacts of one stage can be reused in another stage, reducing duplication and making Dockerfiles more maintainable (see the sketch after this list).
- Container: It’s a running instance of an image. You can spin up several containers from the same image or from different images.
- Bind Mounts: They allow you to mount part of the host filesystem in the container to share files. You can think of them as shared folders. They are useful during development, and in some other use cases, but you should avoid them in production because they break the container isolation.
- Volumes: They are like virtual hard drives for the containers. They use a directory in the host filesystem to persist data, but the data can only be accessed from within Docker containers.
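To make layers and stages concrete, here is a minimal multi-stage Dockerfile sketch. It is illustrative only, not the file we build later in this guide: a builder stage compiles the Python dependencies into wheels, and the final image copies in only those wheels, so compilers and build tools never end up in the shipped layers.
# Stage 1: build wheels for all Python dependencies
FROM python:3.10-slim AS builder
WORKDIR /build
COPY requirements.txt .
RUN pip wheel --wheel-dir /build/wheels -r requirements.txt

# Stage 2: runtime image; only the prebuilt wheels are copied in
FROM python:3.10-slim
WORKDIR /usr/src/app
COPY --from=builder /build/wheels /wheels
RUN pip install --no-cache-dir /wheels/*
COPY . .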
In this guide we will learn how to package a Django application as a Docker image and run it with a linked PostgreSQL container.
Related content:
- How to Run Postgresql 14 with Docker and Docker-Compose
- Getting started with Django – A Simple CRUD application
Prerequisites
Before creating a container for your Django application and shipping it off, you need to install Docker on your local machine. Consult the Docker installation page for installation instructions specific to your machine.
Once installed, confirm that it is working as expected by checking the version in the terminal:
➜ docker --version
Docker version 20.10.17, build 100c701
Setting up the application
For this guide, it is assumed that you have a working Django application that connects to a PostgreSQL database. If you don’t, you can get up and running quickly:
Create and activate a virtual environment (the venv module ships with Python 3, so no separate install is needed):
python3 -m venv <env-folder>
source <env-folder>/bin/activate
Then install Django:
pip install django
Next, create a Django project:
django-admin startproject citizixproj
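At this point you can verify that the project starts by running Django’s built-in development server from the new project directory:
cd citizixproj
python manage.py runserver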
To connect to a PostgreSQL database, first install the database driver:
python3 -m pip install psycopg2-binary
Then update the Django database settings (note that settings.py does not import os by default, so add import os at the top of the file):
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': os.environ.get('DATABASE_DB'),
        'USER': os.environ.get('DATABASE_USER'),
        'PASSWORD': os.environ.get('DATABASE_PASSWORD'),
        'HOST': os.environ.get('DATABASE_HOST'),
        'PORT': os.environ.get('DATABASE_PORT'),
    },
}
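If you want to run the app outside Docker first, those variables must exist in your shell. A quick way for local testing, with placeholder values pointing at a local PostgreSQL instance:
export DATABASE_DB=citizix
export DATABASE_USER=citizix
export DATABASE_PASSWORD=citizix
export DATABASE_HOST=localhost
export DATABASE_PORT=5432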
Finally, we will need to install gunicorn to serve our application. Install it with this command:
pip install gunicorn
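You can sanity-check gunicorn locally before containerizing. Assuming the project is named citizixproj as created above, and the database environment variables are set, run this from the directory containing manage.py:
gunicorn citizixproj.wsgi:application --bind 0.0.0.0:8000 -w 2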
To save all the requirements, let us add them to a requirements.txt file:
pip freeze > requirements.txt
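The resulting file will look something like this; the exact pins depend on when you run the install, so your versions will differ:
asgiref==3.5.2
Django==4.0.6
gunicorn==20.1.0
psycopg2-binary==2.9.3
sqlparse==0.4.2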
For more information on how to work with Django, please check out Getting started with Django – A Simple CRUD application.
Dockerize django
The Dockerfile will have a set of instructions on how Docker will build a container image for your application. We will create a Dockerfile in the root of our application. In the file, we instruct Docker to use a Python slim base image, then install dependencies, both OS-level packages and Python requirements. Finally, we copy our code and serve it with gunicorn when the container is started.
Save this as Dockerfile
FROM python:3.10-slim
ENV PYTHONUNBUFFERED=1
WORKDIR /usr/src/app
RUN apt-get update && apt-get install -y --no-install-recommends gcc g++ libpq-dev \
    && rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN python3 -m pip install -r requirements.txt --no-cache-dir
COPY . .
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "citizixproj.wsgi:application", "-w", "2"]
The first directive in the Dockerfile, `FROM python:3.10-slim`, tells Docker which image to base our container on. We use the official Python image from Docker Hub, which comes with Python and Linux set up and ready for use in a Python project.
The next directive, `ENV PYTHONUNBUFFERED=1`, sets an environment variable that instructs Python not to buffer its output, but simply send it straight to the terminal, so application logs show up immediately.
The `WORKDIR` directive sets the working directory: all the directives that follow in the Dockerfile, and the container’s default command, will be executed in that directory. Here we set it to `/usr/src/app`.
The directives that start with `RUN` instruct Docker to execute whatever command comes after, as if you were executing it in a terminal on a server. The first `RUN` installs OS dependencies using apt-get, while the second runs pip to install the requirements listed in requirements.txt. Note that requirements.txt is copied and installed before the rest of the code: this keeps the dependency layers cached when only the source code changes.
The `COPY . .` directive then copies all files and directories from the project root, where the Dockerfile is, into the `/usr/src/app` directory in the image. Finally, `CMD` defines the default command to run when a container starts from this image: here, gunicorn serving the project on port 8000 with two workers.
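Since `COPY . .` pulls in everything from the build context, it is worth adding a .dockerignore file next to the Dockerfile so that local clutter stays out of the image. The entries below are typical suggestions, not part of the original setup:
.git
__pycache__/
*.pyc
<env-folder>/
.env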
Docker compose
A typical deployment to a production environment will require you to use more than one container: a separate container for the web server and a separate container for the database server. Docker Compose will assist you in specifying how you want the containers to be built and connected, using a single command.
Recent Docker releases bundle Docker Compose V2 as a CLI plugin. If your installation doesn’t include it, the standalone tool can also be installed as a Python pip package:
pip install docker-compose
Once installed, confirm that it is working as expected by checking the version:
➜ docker-compose --version
Docker Compose version v2.6.1
If you have Docker Compose on your development machine, go ahead and create a docker-compose.yml file in your root directory, where the Dockerfile resides.
Open the docker-compose.yml file and add the following lines:
version: '3.9'

services:
  app:
    image: citizix/app:latest
    working_dir: /usr/src/app
    ports:
      - 8000:8000
    depends_on:
      - db
    environment:
      - ENV=local
      - DEBUG=1
      - ALLOWED_HOSTS=*
      - SECRET_KEY='django-insecure-zhrt2jo)!csk&d-x55-mzkugq)iw9)l43)4(^msx@t70zm6l(+'
      - APP_PATH=/var/www/citizix
      - DATABASE_DB=citizix
      - DATABASE_USER=citizix
      - DATABASE_PASSWORD=citizix
      - DATABASE_HOST=db
      - DATABASE_PORT=5432
    volumes:
      - .:/usr/src/app
    networks:
      - citizix_net
  db:
    image: postgres:14.4-alpine
    restart: always
    ports:
      - 5432:5432
    volumes:
      - citizix_pg:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=citizix
      - POSTGRES_USER=citizix
      - POSTGRES_DB=citizix
    networks:
      - citizix_net

volumes:
  citizix_pg:

networks:
  citizix_net:
The first line in the docker-compose.yml file specifies which version of the Compose file syntax you want to use.
Next, we define two services, `app` and `db`.
The `image` directive tells Docker Compose which Docker image to run for the service. That image needs to exist, so we build it by running this command in the directory where the Dockerfile is:
docker build -t citizix/app:latest .
The `ports` directive publishes port 8000 from the container to port 8000 on our local machine.
The `depends_on` directive tells Compose to start the `db` container before the `app` container, since the Django app requires the database to be running. Note that this only controls start order: it does not wait until PostgreSQL is actually ready to accept connections.
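If start order alone is not enough, a common pattern is to give the db service a healthcheck and have app wait for it to pass. The snippet below is a sketch, not part of the file above; the long depends_on form is supported by Compose V2:
services:
  db:
    # ...image, environment and volumes as above...
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U citizix"]
      interval: 5s
      timeout: 5s
      retries: 5
  app:
    # ...image, ports and environment as above...
    depends_on:
      db:
        condition: service_healthy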
The `volumes` directive uses a bind mount to share the source code between the host and the container, so every edit to the source is immediately visible inside the container without rebuilding the image. Keep in mind that the image’s default command is gunicorn, which does not reload on code changes unless you pass it the `--reload` flag or override the command for development, as shown below.
Finally, the `environment` directive defines the environment variables that will be available to the application when it starts.
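For example, a development-only tweak (an illustrative snippet, not part of the file above) is to override the image’s gunicorn command with Django’s auto-reloading dev server:
services:
  app:
    # ...rest of the service definition as above...
    command: python manage.py runserver 0.0.0.0:8000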
The `db` service contains the PostgreSQL database server:
- `image`: We tell Docker Compose to use a prebuilt image from Docker Hub that includes PostgreSQL server version 14. Postgres maintains official, ready-to-use images, so there is no need to define a Dockerfile or do a manual build in this case.
- `restart`: The container will be automatically restarted whenever it stops (for example, if it runs out of memory and exits).
- `ports`: We publish the database port to the host machine in case we want to connect to the database through a local psql client. This is optional and only done for development, as it can be useful for inspecting the DB.
- `environment`: Here we pass environment variables to the database container. For simplicity this example hardcodes the credentials; to keep sensitive data out of the repo, you can instead reference host environment variables and set them locally before spinning up the containers.
- `volumes`: This adds a persistent named volume that stores the database data in the host filesystem, so the data survives container restarts.
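With the port published, you can connect from the host with any local client, for example (assuming psql is installed on your machine):
psql -h localhost -p 5432 -U citizix citizix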
Starting the Containers
We can now start the containers using this command. The -d flag starts them in detached mode:
docker-compose up -d
To check the logs generated, use this command. Appending `-f` will follow the logs as they stream in, similar to `tail -f`; you can also name a single service to see only its logs:
docker-compose logs -f
docker-compose logs [-f] <service>
Check the running containers:
docker-compose ps
You can execute commands inside a running container, for example to run migrations, create a superuser, or open a shell:
docker-compose exec <service> <command>
docker-compose exec app python manage.py migrate
docker-compose exec app python manage.py createsuperuser
docker-compose exec app /bin/sh
Stop all the services:
docker-compose down
Stop a single service:
docker-compose stop app
Reset the database:
docker-compose down --volumes
docker-compose up
Conclusion
In this guide we learnt how to dockerize a Django application. Docker and containerization have become beneficial in solving the classic “it works on my machine, but does not work on the production machine” problem, since the same container image can be used for all environments with the same requirements.
Other benefits of packaging software in containers are:
- Flexibility: Even the most complex applications can be containerized.
- Lightweight: Containers leverage and share the host kernel.
- Interchangeable: You can deploy updates and upgrades on-the-fly.
- Portability: You can build locally, deploy to the cloud, and run anywhere.
- Scalability: You can increase and automatically distribute container replicas.
- Stackable: You can stack services vertically and on-the-fly.