Containers

Overview

Containers are a lightweight form of virtualization that packages an application and its dependencies into an isolated unit that runs on a shared operating system kernel. Unlike virtual machines, containers do not include a full OS — they share the host kernel and only bundle the application layer, making them much faster to start and more resource-efficient.

Container images are immutable templates from which containers are instantiated. This means every container started from the same image begins in an identical state, enabling reproducible deployments. Container registries store and distribute images. Understanding container networking, storage (volumes), and the build process (Dockerfiles) is essential for modern system administration.

How It Works

Containers vs Virtual Machines

Containers and VMs both provide isolation, but at different levels:

  • VMs run a full operating system inside a hypervisor. Each VM has its own kernel, consuming significant resources.
  • Containers share the host OS kernel and only bundle the application layer (libraries, binaries, config files). This makes them much faster to start (seconds vs minutes) and more resource-efficient.

The container engine (e.g., Docker) creates an abstraction layer between the application and the underlying OS, using Linux kernel features like namespaces (isolation) and cgroups (resource limits).
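These kernel primitives are visible on any Linux host. A minimal sketch that inspects the current process's namespace memberships and, on a cgroup v2 system, its memory limit:

```shell
# Every process belongs to a set of namespaces, shown as symlinks in /proc.
# Two processes are isolated from each other when these inode numbers differ.
ls -l /proc/self/ns

# Resource limits come from cgroups. On a cgroup v2 host the memory ceiling
# for the current cgroup is a plain file ("max" means unlimited):
cat /sys/fs/cgroup/memory.max 2>/dev/null || echo "no cgroup v2 memory controller here"
```

A container engine combines these: it puts the containerized process into fresh namespaces (so it sees its own PIDs, network interfaces, and mounts) and into a cgroup that caps its CPU and memory.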

Images and Containers

An image is a static, immutable template — a snapshot of everything needed to run an application. A container is a running instance created from an image.

Key properties:

  • Images are built in layers — each instruction in a Dockerfile creates a layer
  • Layers are cached and reused, making rebuilds fast and storage efficient
  • Running a container from an image never modifies the image itself
  • You can spin up thousands of containers from the same image, each starting from the identical initial state
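You can see the layers behind an image with `docker history` (shown here against a hypothetical locally built image named myapp; requires a running Docker daemon):

```shell
docker history myapp     # one row per Dockerfile instruction, with layer sizes
```

Layers that didn't change between builds show as cached, which is why ordering Dockerfile instructions from least- to most-frequently-changed speeds up rebuilds.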

Building Images with Dockerfiles

A Dockerfile is a text file containing instructions to assemble a container image:

# Base image (small Linux distro)
FROM alpine
# Install dependencies
RUN apk add --no-cache python3
# Copy application code
COPY server.py /opt/server.py
# Document the exposed port
EXPOSE 5000
# Default command (exec form, so signals reach python3 directly)
CMD ["python3", "/opt/server.py"]

Build with: docker build -t myapp .

Choosing a trusted, minimal base image (like alpine) is important for both security and size.
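Assuming the Dockerfile above sits next to a server.py, a typical build-and-run cycle looks like this (requires a running Docker daemon; "myapp" is a name we choose):

```shell
docker build -t myapp .                        # assemble the image, layer by layer
docker run -d --name myapp -p 8080:5000 myapp  # start it, mapping host port 8080
docker logs myapp                              # check that the server came up
```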

Container Registries

Container images are stored and distributed through registries — specialized servers that host images. Examples:

  • Docker Hub (hub.docker.com) — the default public registry
  • Private registries — organizations run their own for internal images
  • Cache/mirror registries — local mirrors that cache images from public registries to avoid rate limits

Images are referenced as registry/repository:tag (e.g., registry.example.com/library/nginx:latest).
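The reference format is plain string structure, so it can be taken apart with shell parameter expansion; a small sketch:

```shell
ref="registry.example.com/library/nginx:latest"

registry=${ref%%/*}   # text before the first "/"  -> registry.example.com
rest=${ref#*/}        # everything after it        -> library/nginx:latest
repo=${rest%%:*}      # text before the ":"        -> library/nginx
tag=${rest##*:}       # text after the ":"         -> latest

echo "$registry | $repo | $tag"   # -> registry.example.com | library/nginx | latest
```

Real clients are stricter: the first component is only treated as a registry hostname if it contains a dot or a port, and the tag defaults to latest when omitted.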

Container Networking

Docker creates a virtual network (typically docker0 bridge) with its own IP address range. Containers get IPs on this network and can communicate with each other, but are not directly accessible from outside the host.

Three common patterns for exposing containers:

  1. Port mapping (-p 8080:80) — binds a host port to a container port. Simple but requires managing arbitrary ports.
  2. Reverse proxy — a web server (Apache, Nginx, Traefik) routes traffic from standard ports (80/443) to container IPs. More flexible, supports TLS and virtual hosting.
  3. Localhost binding (-p 127.0.0.1:5005:5000) — binds the container port to the host's loopback interface only; a reverse proxy on the host then forwards public traffic to it. Best of both worlds: the service is never exposed directly, and the container IP can change without breaking the proxy config.
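Pattern 3 in practice (hypothetical image "myapp" listening on port 5000 inside the container; requires a running Docker daemon and a reverse proxy on the host):

```shell
# Expose the container on the loopback interface only:
docker run -d --name myapp -p 127.0.0.1:5005:5000 myapp

# Reachable locally ...
curl http://127.0.0.1:5005/

# ... while the host's reverse proxy forwards public traffic to that fixed
# local port, e.g. an nginx location block like:
#   location / { proxy_pass http://127.0.0.1:5005; }
```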

Persistent Storage (Volumes)

By default, container filesystems are ephemeral — when a container is deleted, all changes are lost. For persistent data (databases, config files, uploads), Docker provides volumes:

docker run -v /host/path:/container/path myapp

The -v flag mounts a host directory into the container. Changes in either location are immediately visible in the other. Files created by the container in the mounted directory persist on the host after the container is removed.
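Besides bind mounts, Docker also manages named volumes, which it stores itself (under /var/lib/docker/volumes by default). A sketch of both forms (requires a running Docker daemon; the container and volume names are examples):

```shell
# Bind mount: a host directory you manage yourself
docker run -d --name db1 -v /srv/pgdata:/var/lib/postgresql/data postgres

# Named volume: Docker manages where the data lives
docker volume create pgdata
docker run -d --name db2 -v pgdata:/var/lib/postgresql/data postgres

docker volume ls    # list managed volumes
```

Named volumes are the usual choice when you don't care where the data lives on disk; bind mounts are handy when the host path matters (e.g., editing config files in place).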

Container Lifecycle

Common Docker commands for managing containers:

  • docker run -d --name myapp myimage — start a detached container
  • docker ps — list running containers (-a for all)
  • docker logs myapp — view container output
  • docker exec -ti myapp /bin/sh — open a shell inside a running container
  • docker stop myapp / docker rm myapp — stop and remove
  • docker update --restart=always myapp — auto-restart on failure or reboot
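Put together, a typical lifecycle looks like this (hypothetical image "myimage"; requires a running Docker daemon):

```shell
docker run -d --name myapp myimage     # create and start, detached
docker ps                              # confirm it is running
docker logs myapp                      # inspect its output
docker exec -ti myapp /bin/sh          # debug interactively inside it
docker update --restart=always myapp   # survive daemon and host restarts
docker stop myapp && docker rm myapp   # tear down
```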

Key Terminology

Image
An immutable template containing the application and all its dependencies. Containers are instantiated from images.
Container
A running instance of an image. Ephemeral by default — stopping and removing it loses all changes not stored in volumes.
Dockerfile
A text file with instructions for building a container image, processed layer by layer.
Layer
A single step in the image build process. Layers are cached and shared between images to save space and build time.
Registry
A server that stores and distributes container images.
Volume
A mechanism for persisting data outside the container's ephemeral filesystem, typically by mounting a host directory.
Bridge Network
Docker's default network mode where containers get IPs on a private virtual network and communicate through the docker0 bridge interface.

Why It Matters

As a system administrator, you will:

  • Deploy applications as containers for consistent, reproducible environments
  • Build custom images with Dockerfiles to control exactly what runs in your containers
  • Manage container networking to expose services securely
  • Configure persistent storage for stateful applications (databases, file uploads)
  • Debug running containers using logs and exec commands
  • Understand the security implications of running containers (especially as root)

Common Pitfalls

  1. Running as root — the Docker daemon runs as root, and processes inside containers run as root by default. Any user in the docker group effectively has root access to the host. Mounting sensitive host files (e.g., /etc/shadow) into a container can compromise the entire system.
  2. Using untrusted images — public images may contain vulnerabilities or malicious code. Always know what's in your images; build from trusted base images when possible.
  3. Forgetting network conflicts — Docker's default bridge network may conflict with existing network ranges. Configure daemon.json with safe address pools before starting Docker.
  4. Not persisting data — databases and other stateful services lose all data when the container is removed if volumes aren't configured.
  5. Container IP changes — container IPs change on recreation. Use port mapping with localhost binding + reverse proxy instead of hardcoding container IPs.
  6. Not setting restart policies — without --restart=always, containers don't survive host reboots.
  7. Image bloat — installing unnecessary packages or not cleaning up in Dockerfiles creates large images. Use minimal base images like alpine.
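For pitfall 3, the address pools Docker draws on for its bridge networks can be pinned in /etc/docker/daemon.json before the daemon first starts. A sketch (the range below is an example; pick one that doesn't collide with your LAN or VPN):

```json
{
  "default-address-pools": [
    { "base": "10.200.0.0/16", "size": 24 }
  ]
}
```

With this in place, each new Docker network gets a /24 carved out of 10.200.0.0/16 instead of the built-in defaults.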

Further Reading