What is Docker?

Docker is a container platform that lets you build, package, ship, and run applications in a consistent way across different environments.
By putting your app and its dependencies into containers, you can treat infrastructure more like code and shorten the time between writing software and running it in production.


The Docker platform

Docker lets you package an application inside a container – a lightweight, isolated environment that contains everything the app needs.
Because containers are isolated and self-contained, you can run many of them on one host without having to worry about what’s installed on that host.

With Docker you can manage the lifecycle of containers along a simple flow:

  • Develop your app and its supporting services as containers.

  • Use the container image as the unit you ship and test.

  • Deploy the same container image to production, whether that’s on-prem, in the cloud, or a hybrid setup.


What can I use Docker for?

Fast, consistent delivery of your applications

Containers give developers consistent, reproducible environments on every machine, which makes them a natural fit for continuous integration and continuous delivery (CI/CD) workflows.

Typical flow:

  • Developers write code locally and package it into container images.

  • Those images are pushed to a test environment where automated and manual tests run.

  • Bugs are fixed in the same containerized setup and re-tested.

  • Releasing a fix is just pushing an updated image to the production environment.
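
As a rough sketch of that flow with the CLI (the image name myorg/myapp and the registry details are placeholders, not part of any particular setup):

  # Build an image from the Dockerfile in the current directory
  docker build -t myorg/myapp:1.0 .

  # Push the image to a registry so the test environment can pull it
  docker push myorg/myapp:1.0

  # In the test or production environment, pull and run the same image
  docker pull myorg/myapp:1.0
  docker run -d -p 8080:8080 myorg/myapp:1.0

The point is that the image, not the source tree, is what moves between environments.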

Responsive deployment and scaling

Containerized workloads are highly portable:

  • The same container can run on a laptop, a VM, a bare-metal server, or a cloud instance.

  • Because containers are small and start quickly, you can scale services up or down in near real time as demand changes.
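
One way to do this is with Docker Compose, which can run several replicas of a service. A minimal sketch, assuming a compose.yaml that defines a service named web:

  # Run three replicas of the web service
  docker compose up -d --scale web=3

  # Scale back down when demand drops
  docker compose up -d --scale web=1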

Running more workloads on the same hardware

Containers have less overhead than full virtual machines:

  • More of the host’s resources can be used for actual workloads instead of guest OS overhead.

  • This is attractive for dense deployments and for smaller setups where cost and efficiency matter.


Docker architecture

Docker follows a client–server model:

  • The Docker daemon does the heavy lifting: building images, running containers, managing networks and volumes.

  • The Docker client is the CLI you use (docker commands). It talks to the daemon over a REST API (Unix socket or network).

  • Docker Compose is another client that lets you define and run multi-container applications from a single config file (see the example below).

The client and daemon can be on the same machine or on separate systems.
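
For the Compose client, a multi-container application is described in one file. A minimal, hypothetical compose.yaml might look like this (the service names and ports are assumptions):

  # compose.yaml: a web app plus a Redis cache
  services:
    web:
      build: .            # build the web image from the local Dockerfile
      ports:
        - "8080:8080"     # map host port 8080 to container port 8080
    cache:
      image: redis:7      # use a ready-made image from a registry

Running docker compose up -d then builds, creates, and starts both containers.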

The Docker daemon

  • Process name: dockerd.

  • Listens for Docker API requests.

  • Manages Docker objects: images, containers, networks, volumes, and services.

  • Can coordinate with other daemons when using orchestration features.

The Docker client

  • Command: docker.

  • Main interface for most users, sending commands like docker run, docker build, etc., to dockerd.

  • Can connect to multiple daemons if needed (for example, local and remote).
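
A sketch of pointing one client at different daemons (the host name remote-host and the user are placeholders; the remote machine must be running dockerd and reachable over SSH):

  # Talk to the local daemon (the default)
  docker ps

  # Run a single command against a remote daemon over SSH
  docker -H ssh://user@remote-host ps

  # Or save the remote endpoint as a named context and switch to it
  docker context create remote --docker "host=ssh://user@remote-host"
  docker context use remote
  docker ps          # now talks to the remote daemon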

Docker Desktop

Docker Desktop is a bundled app for macOS, Windows, and Linux that includes:

  • The Docker daemon (dockerd)

  • The Docker CLI (docker)

  • Docker Compose

  • Additional components like Docker Content Trust, Kubernetes, and credential helpers

It provides an integrated experience for building and sharing containerized applications on a developer machine.

Docker registries

A Docker registry stores images:

  • Docker Hub is the default public registry.

  • You can host your own private registry for internal images.

Common workflows:

  • docker pull / docker run — download images from a registry.

  • docker push — upload your image to a registry for others (or other environments) to use.
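
A sketch of both directions, using a hypothetical self-hosted registry at registry.example.com:

  # Pull a public image from Docker Hub (the default registry)
  docker pull ubuntu:24.04

  # Re-tag it for the private registry and push it there
  docker tag ubuntu:24.04 registry.example.com/base/ubuntu:24.04
  docker push registry.example.com/base/ubuntu:24.04

  # Other machines can now pull the image from that registry
  docker pull registry.example.com/base/ubuntu:24.04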

Docker objects

When working with Docker you deal with several object types:

  • Images

  • Containers

  • Networks

  • Volumes

  • Plugins

  • (and more advanced objects in orchestration scenarios)

Below is a brief look at the key ones.

Images

  • An image is a read-only template with instructions for creating a container.

  • Images are often based on other images (for example, starting from ubuntu and adding a web server plus your app).

  • You define how to build an image in a Dockerfile, a text file describing each build step.

  • Each Dockerfile instruction creates a layer in the image.

  • When you change the Dockerfile and rebuild, only changed layers are rebuilt, which makes images efficient to build and reuse.
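
A minimal, hypothetical Dockerfile that follows this pattern (base image, a dependency layer, then the application code):

  # Start from an existing image
  FROM ubuntu:24.04

  # Dependency layer; cached across rebuilds while this line is unchanged
  RUN apt-get update && apt-get install -y python3

  # Application code layer
  COPY app.py /opt/app/app.py

  # Default command when a container starts from this image
  CMD ["python3", "/opt/app/app.py"]

Building it with docker build -t myorg/myapp . produces the image; if you later change only app.py, the FROM and RUN layers are reused from cache and only the COPY step onward is rebuilt.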

Containers

  • A container is a running (or stopped) instance of an image.

  • You can create, start, stop, move, or delete containers using the Docker CLI or API (see the lifecycle sketch after this list).

  • You can connect containers to networks, attach storage, and even turn a modified container back into a new image.

  • By default, a container is well isolated from other containers and from the host machine; you can control how isolated its network, storage, and other subsystems are.

  • A container is defined by:

    • The image it’s based on.

    • The runtime configuration you pass at start (environment variables, mounts, ports, etc.).

  • If you remove a container, any state that wasn’t stored in persistent storage is lost.
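
A sketch of that lifecycle with the CLI (the names web and myorg/web-snapshot are placeholders):

  # Create and start a container from the nginx image, publishing a port
  docker run -d --name web -p 8080:80 nginx

  # Stop and restart it; its writable layer is kept
  docker stop web
  docker start web

  # Turn the modified container into a new image (usually a Dockerfile is preferable)
  docker commit web myorg/web-snapshot

  # Remove the container; any state not saved elsewhere is gone
  docker rm -f web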

Example docker run command:

  docker run -it ubuntu /bin/bash

What happens when this command runs (assuming default registry settings):

  1. If the ubuntu image isn’t present locally, Docker pulls it from the configured registry (for most setups, Docker Hub).

  2. Docker creates a new container object from that image.

  3. A writable layer (container filesystem) is added on top of the image layers so the container can modify files.

  4. Docker creates a network interface, connects it to the default network, and assigns the container an IP; outbound connectivity uses the host’s network.

  5. Docker starts the container and runs /bin/bash. Because of the -it flags, the container is attached to your terminal interactively, so you can type commands and see their output.

  6. When you type exit, the shell process stops and the container stops too, but it is kept around until you explicitly remove it.
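
To tidy up afterwards, you can list and remove the stopped container, or ask Docker to remove it automatically on exit:

  # List all containers, including stopped ones
  docker ps -a

  # Remove the stopped container by ID or name
  docker rm <container-id>

  # Or start an interactive container that removes itself when the shell exits
  docker run -it --rm ubuntu /bin/bash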


The underlying technology

Docker is implemented in the Go programming language and relies on Linux kernel features such as namespaces to isolate containers:

  • Each container gets its own set of namespaces, giving it an isolated view of processes, networking, and other resources.

  • This isolation means processes inside a container can only see and interact with what’s inside that container’s assigned namespaces.
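
You can observe this from inside a container: listing processes there shows only the container's own processes, regardless of what else the host is running.

  # The only process visible is ps itself, running as PID 1
  # inside the container's own PID namespace
  docker run --rm alpine ps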

