Lab 9 - Containers¶
Danger
Before starting this lab, make sure you have set and remember the root password for your virtual machine, as there is a danger of a network conflict which may render your machine unreachable over the network.
Danger
Be sure to run docker
commands as a non-root user, without sudo
Welcome to the 9th lab. Here is a short overview of what we will cover in this lab:
- Install and get familiar with Docker
- Run a Docker container
- Write a Dockerfile and build an image from it
- Try different methods of routing traffic to a container
- Mount a directory from the host system into a container
1. Introduction to Docker¶
Docker is a containerisation solution that allows you to separate your application layer from the underlying infrastructure so that developers can deliver, build, and manage software without focusing too much on the underlying operating systems or hardware.
Containers are logical units separated from the underlying OS by a container engine (we will use Docker), creating an abstraction layer between the two. It is not unlike a Virtual Machine, but the main difference here is the fact that a VM has a full operating system running inside it. In contrast, Docker runs directly inside the OS with only the necessary bits and a small amount of compatibility code.
2. Installing Docker¶
Installing Docker means installing the Docker runtime and client packages, setting up the configuration, and starting the service.
Danger
As Docker's default container network unfortunately overlaps with the University network, it is very important that you do not start Docker before you have set up the configuration file.
Create the following file /etc/docker/daemon.json
, and put the following inside of it:
{
    "bip": "192.168.67.1/24",
    "fixed-cidr": "192.168.67.0/24",
    "storage-driver": "overlay2",
    "default-address-pools": [
        { "base": "192.168.167.1/24", "size": 24 },
        { "base": "192.168.168.1/24", "size": 24 },
        { "base": "192.168.169.1/24", "size": 24 },
        { "base": "192.168.170.1/24", "size": 24 },
        { "base": "192.168.171.1/24", "size": 24 },
        { "base": "192.168.172.1/24", "size": 24 },
        { "base": "192.168.173.1/24", "size": 24 },
        { "base": "192.168.174.1/24", "size": 24 }
    ]
}
Then follow this guide to install Docker. Our recommendation is to use the Repository installation method and install the latest Docker version you can get from there.
If you did the configuration step appropriately first, then you can start up your Docker service.
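For reference, on a systemd-based machine starting and enabling the daemon might look roughly like this (a sketch; the service name is assumed to be docker, as installed from the official repository):

```shell
# Start the Docker daemon now and have it start again on every boot.
sudo systemctl enable --now docker
# Confirm that the service is running and enabled.
systemctl is-active docker
systemctl is-enabled docker
```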
Before you continue, also add your user (i.e. centos
) to the docker
group. After adding the user, close the current terminal session and log in again for the change to take effect.
sudo usermod -aG docker $USER
The first step to test if everything works is to run a Hello World
container.
docker run registry.hpc.ut.ee/mirror/library/hello-world:latest
Verify
You should get a "Hello from Docker! ..."
message from the hello-world
container.
Note
You may notice we are using a docker image from registry.hpc.ut.ee/mirror/library/hello-world:latest
instead of hello-world
from the official Docker registry (we'll talk about registries a bit later). registry.hpc.ut.ee/mirror
caches the images from the official Docker registry, because to Docker all connections from within the university network appear to come from the same IP. This would get rate limited quite fast.
You can explore docker info
for some fun information.
Task 1 - Install Docker recap
- Create the Docker daemon configuration file /etc/docker/daemon.json and add the provided content
- Install Docker by adding the repository and using dnf
- Start the Docker daemon and ensure it is enabled
- Run the hello-world container from the University cache registry
3. Docker images¶
Another way containers are very different from VMs is that they usually utilise something called images. An image is a static, inert, immutable file that is essentially a snapshot or a container template.
Containers are always started from an image. Basically, the image is taken, and a copy of it is started up. This is what we did with the docker run hello-world
command. You can list the existing images on your machine with the command: docker image ls
Because running a container from an image in no way impacts the image, you can spin up thousands of containers from the same image if you want to, and they will always start from the same initial state - the image.
You can see all the currently running containers with the command docker ps
. We did not start any persistent containers (hello-world
only prints some text and then exits), so it will not show anything. Appending the -a
flag to the command shows all containers. You should see your hello-world
container there.
Docker images are hosted in something called a registry. Registries are basically customized object storage servers that know how to deal with images. There are even public registries - Docker Hub or AWS Public ECR, but they have different policies which sometimes make using them difficult.
This is why we use the University of Tartu cache registry, https://registry.hpc.ut.ee
, which pulls and caches images. If you run your container without specifying a registry, it will use the hub.docker.com
one. As that has a maximum limit of 60 pulls per 4 hours for the whole University internal network, it is unlikely that you will be able to pull anything.
So, instead of running your container with: docker run hello-world
, we recommend you specify our cache registry, https://registry.hpc.ut.ee/mirror
hence, please do the following instead:
docker run registry.hpc.ut.ee/mirror/library/hello-world
4. Running Docker containers¶
For a Docker container to be persistent - to stay up and consistently respond to queries - it needs to be run in detached
mode. Let's take an example container that prints the hostname, IP address and a bit more information about the environment when queried over HTTP. Run the container like so:
docker run -d --name whoami registry.hpc.ut.ee/mirror/containous/whoami
After running it, you will get back a long cryptic ID. This is the ID of the running container. Because we specified --name whoami
we can also refer to this container with the name of whoami
. Checking docker ps
should now show a running container.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eb16cf413128 registry.hpc.ut.ee/mirror/containous/whoami "/whoami" 2 days ago Up 2 days 80/tcp whoami
You can see some information from the previous command. The main question now is: how do we query it? It has 80/tcp
listed under PORTS. This means that the container itself listens on port 80
, but this is not port 80
of your machine's main network interface.
When you set up Docker, it creates a new network interface, usually called docker0
. You can see this network interface with the command ip a
. We specified it to get the IP address 192.168.67.1/24
in the /etc/docker/daemon.json
file, but it can actually be any valid IP address space.
When you start a container, it is given an IP address from that specified range, in our case 192.168.67.0/24
. To see which IP address your container got, check the command docker inspect whoami
. You are interested in the NetworkSettings
section.
"NetworkSettings": {
    "Bridge": "",
    "SandboxID": "fa03bbe4998f048e5c2a78cf7aa27dad8f262ddf5dcecf363d899d7a958eb08f",
    "HairpinMode": false,
    "LinkLocalIPv6Address": "",
    "LinkLocalIPv6PrefixLen": 0,
    "Ports": {
        "80/tcp": null
    },
    "SandboxKey": "/var/run/docker/netns/fa03bbe4998f",
    "SecondaryIPAddresses": null,
    "SecondaryIPv6Addresses": null,
    "EndpointID": "0467b498ba8fe20bcd86f052ef80230744c518ebdccaaefba48e5f472189a59d",
    "Gateway": "192.168.67.1",
    "GlobalIPv6Address": "",
    "GlobalIPv6PrefixLen": 0,
    "IPAddress": "192.168.67.2",
    "IPPrefixLen": 24,
    "IPv6Gateway": "",
    "MacAddress": "02:42:c0:a8:43:02",
    "Networks": {
        "bridge": {
            "IPAMConfig": null,
            "Links": null,
            "Aliases": null,
            "NetworkID": "2054a8ccadfe58741bc92b4ca9212e459e43e23b8a36eac9dbc7798fb725240a",
            "EndpointID": "0467b498ba8fe20bcd86f052ef80230744c518ebdccaaefba48e5f472189a59d",
            "Gateway": "192.168.67.1",
            "IPAddress": "192.168.67.2",
            "IPPrefixLen": 24,
            "IPv6Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "MacAddress": "02:42:c0:a8:43:02",
            "DriverOpts": null
        }
    }
}
This container got the IP address of 192.168.67.2
. If we now query this IP address, we should get an appropriate response:
$ curl 192.168.67.2
Hostname: 8052649b4dcb
IP: 127.0.0.1
IP: 192.168.67.2
RemoteAddr: 192.168.67.1:49636
GET / HTTP/1.1
Host: 192.168.67.2
User-Agent: curl/7.61.1
Accept: */*
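Instead of reading the full docker inspect output each time, the IP address can be extracted directly with a Go-template format string (a sketch, assuming the container is named whoami and is attached to the default bridge network):

```shell
# Print only the container's IP address on the default bridge network.
docker inspect -f '{{ .NetworkSettings.Networks.bridge.IPAddress }}' whoami
# Combine both steps and query the container in one go.
curl "$(docker inspect -f '{{ .NetworkSettings.Networks.bridge.IPAddress }}' whoami)"
```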
Now, we have a nice working container accessible inside the machine. Accessing the container from the outside world is a bit more complicated; we will return to it later.
Task 2 - Running Docker container recap
- Run the whoami container in detached mode from registry.hpc.ut.ee/mirror/containous/whoami
- Find its IP address
- curl your whoami container and get a response
5. Building Docker images¶
Before running containers visible to the outside world, we should learn how to build, debug and audit containers ourselves. Public containers are a great resource, but if you are careless, they are also a way for a bad actor to gain access to your machine. You should always know what is running inside your container. Otherwise, you will open up the possibility of being targeted by several types of attacks, including but not limited to supply chain attacks, attacks against non-updated software, attacks against misconfiguration, etc.
Building a container image yourself is one of the best ways to ensure you know what is happening inside your container. Image building is not black magic - anyone can do it. You need two things - docker build
command and a Dockerfile
, the code for the image.
The Dockerfile
uses a simple syntax that describes how to assemble a working container, which is then snapshotted. That snapshot is the image. The first step to building an image is to choose a base image. Think of a base image as an operating system or a Linux distribution. The only difference is that a base image can be any image; you could even use the image we used before. However, as we are worried about unknown stuff inside our image and container, let's use an image from a trusted source - an image called alpine
. This is a small Linux distribution that specialises in producing small images. This is a benefit for Docker, as larger images require more resources to run. More information here: https://hub.docker.com/_/alpine
Let us set up our environment by building a Docker image. First, create a folder called docker_lab
(you can put it anywhere). Inside that folder, create two files, one named Dockerfile
and the second called server.py
.
The logic will be the following - we will build the container using the Dockerfile
. Inside that Dockerfile, there are instructions to install python3
, py3-flask
and to copy our Flask server.py file into the container. It is also instructed to expose the container port 5000
on startup, and to run the server.py
file.
Dockerfile
:
FROM registry.hpc.ut.ee/mirror/library/alpine
RUN apk add --no-cache python3 py3-flask
COPY server.py /opt/server.py
EXPOSE 5000
CMD python3 /opt/server.py
server.py
:
#!/bin/env python3
from flask import Flask, request, jsonify
import socket

app = Flask(__name__)

@app.route("/")
def hello():
    response = "Client IP: " + request.remote_addr + "\nHostname: " + socket.gethostname() + "\n"
    return response, 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
After having both of these files inside the same folder, we need to build our container image with the command:
docker build -t docker_lab .
This line means that a container image is built with the tag of docker_lab
using the Dockerfile in the current directory. After entering the command, it starts running the commands in the same order we specified. Every step it runs creates something called a "layer". Every layer is one difference from the last layer, and this is how images are built. Some layers modify the filesystem inside a container and contribute to the image size, while some layers are 0B in size; in our case, the CMD
directive did not modify the filesystem and just added metadata or configuration to the image. The layering provides benefits, like reusing layers to save disk space when you rerun the build. On the second run, you should see how, for every layer, the cache is used, and the command is instantaneous. You can explore layers with docker history docker_lab
command.
After the image has been built, the only thing left to do is to run a container using the image we built. Let's run a detached container called docker_lab
from our image with the tag of docker_lab
.
docker run -d --name docker_lab docker_lab
After finding out the IP address of the container, and using curl against the container's exposed port, you should see output similar to the following:
Client IP: 192.168.67.1
Hostname: d1409f26cb5f
You can try deleting the container with docker rm -f docker_lab
, and rerunning it, to see the Hostname field change. The Client IP
field stays the same, as this is the IP the container sees the client query come from, which will always be the Docker IP address if we query from another network.
Task 3 - Building an image recap
- Create Dockerfile and server.py inside the docker_lab directory
- Build the image and run it in detached mode
Verify
- Find the container's IP address and curl against its port 5000
- Delete and rerun the container to see the Hostname change
6. Docker networking¶
We have now run containers in different ways, but a container that is only accessible from inside the machine itself is of limited use, as most services need someone to connect to them somehow.
This is why Docker and its ecosystem support several ways to publish a container to the network. We are going to introduce three different ways: two of them in this lab, and the last one in the 11th lab, which introduces Kubernetes.
- Opening a port from host port to container port.
- Using a static proxy to route traffic from public network (in our case the VM's external network) to the container.
- Using a dynamic proxy, but with the autodetection of containers.
Opening a port to the outside world¶
Warning
Before we continue with this part, make sure to open port 5005/tcp
in both firewalld and ETAIS firewall.
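As a reminder from the earlier labs, opening the port in firewalld could look roughly like this (a sketch; assumes the default zone, and note the ETAIS side is configured in its web interface):

```shell
# Permanently open 5005/tcp and reload firewalld to apply the rule.
sudo firewall-cmd --permanent --add-port=5005/tcp
sudo firewall-cmd --reload
# Verify that the port is now listed.
sudo firewall-cmd --list-ports
```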
The easiest way to publish a container to the network is to open a port between the host's public interface and the container's port.
To test this out, let's deploy a new container using the same image, but this time with the -p
flag to bind the exposed port 5000
to the host machine's port 5005
. Also, to run the container with the same name, you first have to stop and delete the previous one:
docker stop docker_lab
docker rm docker_lab
docker run -d --name docker_lab -p 5005:5000 docker_lab
After running this, you should be able to access the website via your machine's name or IP, on port 5005.
Note
The problem with this approach is that, first, you need to open arbitrary ports, which is not always possible, and second, users do not really want to type arbitrary ports into their browsers.
Task 4 - Opening a port to the outside world recap
- Open port 5005/tcp in your VM and in ETAIS
- Delete the previous docker_lab container
- Run the container in detached mode, binding the exposed port to the machine's port 5005
- curl your machine's name/IP on port 5005
Proxy traffic from public network to the container using apache2
¶
Just as we proxied an application in the Web lab, we can do the same with containers.
Let's proxy the container we built ourselves through our Apache server.
Warning
Before we continue with this part, point a DNS name container-proxy.<vm-name>.sa.cs.ut.ee
to your machine.
Create a virtual host in your web server.
<VirtualHost *:80>
ServerName container-proxy.<vm_name>.sa.cs.ut.ee
# ServerName sets the name to listen for with requests
ErrorLog /var/log/httpd/container-proxy-error_log
CustomLog /var/log/httpd/container-proxy-access_log common
ForensicLog /var/log/httpd/container-proxy-forensic_log
ProxyPreserveHost On
ProxyPass / http://<container_ip>:5000/
ProxyPassReverse / http://<container_ip>:5000/
</VirtualHost>
After restarting the webserver, you should be able to access the container on the specified name.
Note
This approach is better than the port opening one, because here you can also specify extra TLS, security configuration and logging on the webserver level.
The problem with this approach is that every time you recreate the container, you need to come and change the IP.
Proxying the connections to the container without fear of container IP changing¶
To combine the best of both worlds, you can forward port 5005
from your host's public interface and keep it firewalled. Or, even better, only forward the port from your host's local interface (127.0.0.1). After that, you can proxy pass to your localhost's port 5005
without having to worry about the container's IP changing.
Edit your virtual host config for container-proxy
address.
<VirtualHost *:80>
ServerName container-proxy.<vm_name>.sa.cs.ut.ee
# ServerName sets the name to listen for with requests
ErrorLog /var/log/httpd/container-proxy-error_log
CustomLog /var/log/httpd/container-proxy-access_log common
ForensicLog /var/log/httpd/container-proxy-forensic_log
ProxyPreserveHost On
ProxyPass / http://127.0.0.1:5005/
ProxyPassReverse / http://127.0.0.1:5005/
</VirtualHost>
And recreate your docker_lab
container to forward only traffic from the local
interface to the container:
docker stop docker_lab
docker rm docker_lab
docker run -d --name docker_lab -p 127.0.0.1:5005:5000 docker_lab
After restarting the webserver, you should be able to access the container on the specified name.
Verify
Check the link container-proxy.<vm-name>.sa.cs.ut.ee
in your browser or using curl
. After having made sure that the link is reachable, make sure to firewall port 5005
again.
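Closing the port mirrors the command used to open it (a sketch, assuming 5005/tcp was previously added as a permanent rule in the default zone):

```shell
# Remove the permanent 5005/tcp rule and reload firewalld to apply the change.
sudo firewall-cmd --permanent --remove-port=5005/tcp
sudo firewall-cmd --reload
```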
Task 5 - static proxy recap
- Forward port 5005 from your local interface to the docker_lab container
- Add a container-proxy.<vm-name>.sa.cs.ut.ee record to your DNS
- Add a virtual host to your web server
- Close port 5005 in the firewall
7. Manipulating Docker¶
In this part of the lab we will go over a few debugging methods. This is not mandatory, but will help you in the next lab.
You can view a container's logs like so: docker logs <container_name>
. You can also add a -f
flag to follow the logs (keep receiving log updates): docker logs -f <container_name>
This prints out all the information the container has printed into its stdout.
You can also execute commands inside a container. This only works if the container has bash
or sh
built into it.
The command looks like this: docker exec -ti <container_name> /bin/bash
OR docker exec -ti <container_name> /bin/sh
If it worked, then you can traverse and use commands inside the container itself. Remember, the changes are not persistent - if you delete the container and then start a new one, it will be a fresh slate. Making persistent changes is possible and we will discuss it in the next chapter.
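For example, to look around inside the docker_lab container built earlier (a sketch; the alpine base image ships sh, not bash):

```shell
# Open an interactive shell inside the running container.
docker exec -ti docker_lab /bin/sh
# Once inside, you can inspect the container's filesystem, e.g.:
#   cat /etc/os-release
#   ls /opt
# Return to the host shell with `exit`.
```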
8. Persistent files in containers¶
Sometimes you need the changes in a container to survive a restart (e.g. editing a configuration file, or containerizing databases). To do that, we can check the output of docker run --help
and find that there is a flag for mounting directories in the host machine to a container: -v, --volume list Bind mount a volume
.
To read a bit more about it, run man docker-run
. From there you can see the syntax to actually use the -v
flag: -v|--volume[=[[HOST-DIR:]CONTAINER-DIR[:OPTIONS]]]
.
Create a directory on your host machine to store the persistent files of your container¶
Let's now create a directory called docker_lab_persistent_data
. The directory can be located anywhere your user has sufficient permissions, but it is recommended to create it in the same directory where you created your Dockerfile
in.
mkdir docker_lab_persistent_data
Create a file in the directory¶
Now create a file named motd.txt
(motd
is shorthand for Message of the day
) in this directory and populate it with a message of your liking. For example What a beautiful sunny day it is!
touch docker_lab_persistent_data/motd.txt
echo "What a beautiful sunny day it is!" > docker_lab_persistent_data/motd.txt
Edit the server.py
file to serve the contents of the persistent file¶
After that edit the server.py
file you created before so you can display the contents of your file from your web server.
#!/bin/env python3
from flask import Flask, request, jsonify
import socket
from traceback import format_exc
import os
import time

# https://stackoverflow.com/questions/4060221/how-to-reliably-open-a-file-in-the-same-directory-as-the-currently-running-scrip
__location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))

app = Flask(__name__)

def log_access_to_persistent_storage():
    try:
        with open(os.path.join(__location__, "persistent_storage/last_access.txt"), "w+") as f:
            time_string = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
            f.write(time_string + "\n")
    except FileNotFoundError:
        app.logger.error(format_exc())

@app.route("/")
def hello():
    response = "Client IP: " + request.remote_addr + "\nHostname: " + socket.gethostname() + "\n"
    return response, 200

@app.route("/motd")
def message_of_the_day():
    log_access_to_persistent_storage()
    response = "<h1>The message of the day is:</h1><br>"
    try:
        with open(os.path.join(__location__, "persistent_storage/motd.txt"), "r") as f:
            response += f.read()
    except FileNotFoundError:
        response += "No message today"
        app.logger.error(format_exc())
    return response, 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
Note
Reading and writing files is a fairly slow process and it would not be a good idea to do it on each request in a production server. But it works well enough for testing purposes of this task.
Rebuild your docker image¶
Now to update the python file in the container, you have to rebuild it. Navigate to the directory your Dockerfile
and server.py
files are in, if you're not already there, and rerun the docker build -t docker_lab .
command.
docker build -t docker_lab .
And finally, run the container again, this time with the -v
flag to mount the previously created docker_lab_persistent_data
directory into the container. (The -u
flag could additionally be used to run the container process as your own user and avoid the permission issues described below, but the command here does not use it.)
Danger
Make sure you are one directory above the previously created docker_lab_persistent_data
directory. You can verify it using the ls
command. docker_lab_persistent_data
should be in the output of the ls
command, among the other files in the directory you are in.
docker stop docker_lab
docker rm docker_lab
docker run -d --name docker_lab -p 127.0.0.1:5005:5000 -v $(pwd)/docker_lab_persistent_data:/opt/persistent_storage docker_lab
Check that everything is working¶
Now navigate to container-proxy.<vm_name>.sa.cs.ut.ee/motd
in your browser to see if the file contents are displayed. If everything worked, you should see something like:
The message of the day is:
What a beautiful sunny day it is!
The script also logs the time of the last visit in a file called last_access.txt
on the persistent storage. On your host machine run cat docker_lab_persistent_data/last_access.txt
to see the last access time.
Also note the permissions of the last_access.txt
using, for example, ls -l
. If you did not create the file beforehand, it is extremely probable that the file is owned by the root
user. This is because it was created by the script in the container, which is running as the root
user. To avoid complications in the future, change the ownership of all files in this directory to be your user
sudo chown -R $(id -u):$(id -g) docker_lab_persistent_data
Change the motd
¶
You can even make changes to the files in the directory mounted into the container, and the changed files will be accessible to the container without rebuilding it. To test this, change the contents of motd.txt
to something else of your liking. For example, let's say the weather has changed and you want the message of the day to reflect that:
echo "Oh shoot, it started raining! Let's hope it'll get sunny again soon." > docker_lab_persistent_data/motd.txt
Refresh the page or navigate again to container-proxy.<vm_name>.sa.cs.ut.ee/motd
and see if the new message is shown. You should see something like this:
The message of the day is:
Oh shoot, it started raining! Let's hope it'll get sunny again soon.
Finally, set your Docker container's restart policy to always. This ensures that Docker
brings the container up automatically when it exits due to an error and on system boot.
docker update --restart=always docker_lab
Also do this for the whoami
container created earlier. This is to keep this lab's checks from failing if you happen to restart your host.
docker update --restart=always whoami
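You can confirm that the restart policy took effect with docker inspect (a sketch; it should print always once for each container):

```shell
# Print the restart policy of both containers.
docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' docker_lab whoami
```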
Verify
Check the link container-proxy.<vm-name>.sa.cs.ut.ee/motd
in your browser or using curl
. Make sure it renders the new contents if you change the motd.txt
file.
Task 6 - Persistent files in containers
- Create a directory for persistent files on the host machine called docker_lab_persistent_data
- Create a file in the directory called motd.txt
- Populate motd.txt with some message
- Update the server.py file with the new content
- Rebuild the docker_lab image
- Run the docker_lab container with the new image and the docker_lab_persistent_data directory mounted into the container
- Make sure the restart policy of the docker_lab and whoami containers is set to always
9. Docker and security concerns¶
Danger
Before doing this task, back up /etc/shadow
, /etc/passwd
and /etc/group
files on your virtual machine. You can, for example, use cp
to copy these files to the /root
directory.
Without special configuration (not covered in this lab), docker
service runs as the root
user. As we saw in the previous task, this meant that user permissions and ownership from within the container translated to the same permissions and ownership on the host machine (the last_access.txt
file created by the script running in the container was owned by root
). This has some fairly serious security implications - a bad actor able to break out of the service served by the container can potentially run commands on the host system as root
. It also means that any user in the docker
group can gain root
privileges with very little effort. To illustrate this, let's do a small exercise.
Run a new container with host system's /etc/shadow
, /etc/group
and /etc/passwd
mounted to the container¶
docker run -it -v /etc/passwd:/etc/passwd -v /etc/shadow:/etc/shadow -v /etc/group:/etc/group quay.io/centos/centos:stream9
After a bit of waiting you should see something like this:
[root@0c119569e99f /]#
The hostname part (0c119569e99f
) will differ, but if you otherwise see something similar, it means you're using an interactive shell (bash
) as the root
user inside the CentOS Stream 9
container you just ran.
Add a user to the /etc/passwd
file¶
/etc/passwd
is the file containing information about the system's users. But before we can add a user to the file, we should figure out what the ID of the new user should be. To do that, we'll find the largest user ID (excluding 65534) in /etc/passwd
and add 1
to this ID. Here are a few handy commands to do that:
[root@0c119569e99f /]# LARGEST_USER_ID=$(cat /etc/passwd | awk -F: '{print $3}' | grep -v 65534 | sort -n | tail -n 1)
[root@0c119569e99f /]# NEW_USER_ID=$(( LARGEST_USER_ID + 1 ))
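If you want to see what these two commands compute without touching a real system, here is the same pipeline run against a throwaway sample passwd file (hypothetical sample data, not your real /etc/passwd):

```shell
# Build a small sample passwd file to demonstrate the UID-picking logic.
cat > /tmp/sample_passwd <<'EOF'
root:x:0:0:root:/root:/bin/bash
centos:x:1000:1000::/home/centos:/bin/bash
app:x:1001:1001::/home/app:/sbin/nologin
nobody:x:65534:65534:Nobody:/:/sbin/nologin
EOF
# Same pipeline as above: take field 3 (the UID), drop 65534, sort numerically,
# keep the largest, then add one to get a free UID.
LARGEST_USER_ID=$(awk -F: '{print $3}' /tmp/sample_passwd | grep -v 65534 | sort -n | tail -n 1)
NEW_USER_ID=$(( LARGEST_USER_ID + 1 ))
echo "$NEW_USER_ID"   # prints 1002
```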
Now that we have figured out what the user ID will be for our new user, let's add a line into /etc/passwd
to create a new user called evil_hacker
[root@0c119569e99f /]# echo "evil_hacker:x:$NEW_USER_ID:$NEW_USER_ID::/home/evil_hacker:/bin/bash" >> /etc/passwd
Add a password for the new user in /etc/shadow
¶
Now that we have created a user, let's generate and add a password for the user. First, to generate a password, we are going to use openssl
. You can run the openssl
command on your own PC if you have the command installed, or you can open a second terminal and SSH into your virtual machine, which should already have openssl
installed. Figure out a password you would like evil_hacker
to have and use the following command to create a /etc/shadow
compatible hash of this password:
[<YOUR USER>@<YOUR MACHINE>]$ openssl passwd -6
Password:
Verifying - Password:
$6$gNV33khHuymCeGkg$7i5fOhBcL0xyubuYin.HW091Sdak3ZxIpwo3J0XMo55/aYX5PjsfHHxQTT5yZnAWAEmAEkpqHFqBEOVetS1nN/
After you have generated the password string, hop back to the terminal where the shell of the container is open. Run the following command to add a password to evil_hacker
. Be sure to use single quotes with the echo
command this time; they keep the $
characters from being treated as variable references.
[root@0c119569e99f /]# echo 'evil_hacker:<PASSWORD STRING GENERATED BEFORE>:20172:0:99999:7:::' >> /etc/shadow
Give the new user sudo permissions by editing /etc/group
¶
Finally, let's give our user sudo permissions. For that we're going to have to add evil_hacker
into the wheel
group.
First, let's save the contents of /etc/group
into a variable called NEW_GROUP
and add evil_hacker
to the end of the line containing the wheel
group.
[root@0c119569e99f /]# NEW_GROUP=$(sed '/^wheel/ s/$/,evil_hacker/' /etc/group)
And then replace the content of /etc/group
with the content of the variable:
[root@0c119569e99f /]# echo "$NEW_GROUP" > /etc/group
Let's test drive our new user¶
You can now exit the container shell and be dropped back into the shell of the host machine. Use the su
command to try and log in as evil_hacker
. Use the password you gave as the input to the openssl
command before. If the login succeeds, try to log in as the root
user by invoking sudo su -
. Use the same password as before.
[centos@some-machine ~]$ su - evil_hacker
Password:
Last login: Thu Apr 20 16:16:16 EEST 1616 on pts/0
su: warning: cannot change directory to /home/evil_hacker: No such file or directory
[evil_hacker@some-machine centos]$ sudo su -
[sudo] password for evil_hacker:
Last login: Thu Apr 10 20:12:44 EEST 2025 on pts/0
[root@some-machine ~]#
Recap¶
We have just created a user with permissions to become root
, and the only thing we needed was our user being in the docker
group. So this is something to keep in mind: with a default docker
install (not in rootless mode), being in the docker
group is just having sudo
permissions with extra steps.
10. Ansible and Docker¶
Putting the whole lab into Ansible is not something that would result in an idempotent playbook. So we will list the things that can be automated in an idempotent way, but some parts of the lab will require manual intervention by the user. The following suggestions and steps are just guidelines; if you think you can do better, feel free to do so.
- Create the /etc/docker directory
- Copy the daemon.json file into that directory
- Perform the necessary steps to install Docker on your VM according to this
- Add your user to the docker group
- Create a separate directory for the Flask app
- Copy server.py and the necessary Dockerfile into it
- Create the directory for persistent storage and motd.txt inside it
- Add a DNS name container-proxy.<vm-name>.sa.cs.ut.ee
- Copy the VirtualHost file into the appropriate place
Pay attention here. We don't really suggest building and starting containers with Ansible, as that would result in a non-idempotent playbook. But setting up all the necessary files for a few manual commands is welcome.
11. Keep your playbook safe¶
As usual, always push your playbook to the course's GitLab.
- In your ansible playbook directory:
git add .
git commit -m "Docker lab"
git push -u origin main
Go to your GitLab page to verify that your latest push reached the repository. If you play around with your repository and make changes to the Ansible code that you wish to use in the future, always remember to commit and push them.