Lesson 4.5: The Container Network Model (CNM)
Welcome to the final lesson of Phase 4! You've learned about the bridge, host, and none networks, and how to publish ports. Now it's time to look under the hood. Docker's networking is built on the Container Network Model (CNM) – a design that provides pluggable networking for containers. In this lesson, you'll explore the CNM architecture, understand its components, and see how Docker implements network isolation and connectivity. By the end, you'll have a deeper appreciation for how container networking works and how you can extend it with plugins.
Learning Objectives
TIP
By the end of this lesson, you will be able to:
- Describe the three main components of the Container Network Model: Sandbox, Endpoint, and Network.
- Explain the role of `libnetwork` in Docker networking.
- Understand how network drivers (bridge, overlay, etc.) fit into the CNM.
- Differentiate between CNM and the older `docker` daemon networking implementation.
- Inspect CNM objects using `docker network inspect` and `docker inspect`.
- Recognize the potential for third‑party network plugins.
1. What is the Container Network Model (CNM)?
The Container Network Model is a specification proposed by Docker that defines the building blocks for container networking. It was developed as part of the libnetwork project, which is the native network management library for Docker.
CNM aims to:
- Provide a pluggable architecture: you can swap out the network driver (e.g., use a third‑party overlay driver) without changing the container runtime.
- Separate the concerns of network management from the container runtime.
- Enable multi‑host networking (overlay networks) for orchestration tools like Docker Swarm. (Kubernetes, by contrast, uses the competing CNI specification rather than CNM.)
INFO
In practice, when you run a container, Docker (via libnetwork) creates a CNM‑compliant network stack that isolates the container's network namespace, connects it to a network, and provides service discovery.
2. The Three CNM Components
The model defines three primary objects:
2.1. Sandbox
A sandbox is an isolated network stack. It contains the container's network configuration: interfaces, routing tables, DNS settings, etc. In Linux, a sandbox corresponds to a network namespace. Each container has its own sandbox, ensuring network isolation.
- A sandbox can hold multiple endpoints.
- The sandbox can be moved between networks (by disconnecting and reconnecting endpoints).
2.2. Endpoint
An endpoint connects a sandbox to a network. It's the virtual network interface (veth pair) that plugs into the network. The endpoint has an IP address, MAC address, and other properties.
- An endpoint belongs to exactly one sandbox and exactly one network.
- In a bridge network, the endpoint is the container's side of the veth pair; the other side is plugged into the bridge.
2.3. Network
A network is a collection of endpoints that can communicate with each other. It provides the rules for connectivity (e.g., bridge, overlay). Networks are created and managed by network drivers.
- A network can contain multiple endpoints.
- Each network is isolated from others (unless explicitly connected via routing).
When you run `docker run --network mynet`, Docker:
- Creates a sandbox (if not already present).
- Creates an endpoint with an IP address from the network's IP pool.
- Attaches the endpoint to the sandbox (plugs the veth into the container's namespace).
- Attaches the endpoint to the network (plugs the other veth into the bridge or other driver).
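Under the hood, these steps map onto ordinary Linux primitives. The dry‑run sketch below prints (rather than executes) the approximate `ip` commands involved; all names and addresses (`ns-c1`, `veth0`/`veth1`, `br-demo`, `172.20.0.0/16`) are illustrative, not what Docker actually chooses, and the bridge `br-demo` is assumed to already exist (step 1 of the flow).

```shell
# Dry-run sketch of the plumbing libnetwork performs for a bridge endpoint.
# By default the commands are only printed; set RUN=1 and run as root to apply.
run() { if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi; }

run ip netns add ns-c1                             # sandbox: a new network namespace
run ip link add veth0 type veth peer name veth1    # endpoint: a veth pair
run ip link set veth1 netns ns-c1                  # one end goes into the sandbox
run ip link set veth0 master br-demo up            # the other end plugs into the bridge
run ip netns exec ns-c1 ip addr add 172.20.0.2/16 dev veth1   # assign the endpoint IP
run ip netns exec ns-c1 ip link set veth1 up
run ip netns exec ns-c1 ip route add default via 172.20.0.1   # default route via the gateway
```

If you do execute it for real, remember to clean up afterwards with `ip netns delete ns-c1`.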
3. libnetwork and Network Drivers
libnetwork is the reference implementation of CNM. It is integrated into Docker and provides the API for managing networks. It delegates the actual network creation to network drivers.
3.1. Built‑in Drivers
| Driver | Description |
|---|---|
| bridge | Creates a Linux bridge and veth pairs. |
| overlay | Creates an overlay network across multiple hosts (used by Swarm). |
| macvlan | Assigns a MAC address to containers, making them appear as physical devices. |
| ipvlan | Similar to macvlan but at layer 3. |
| host | Removes isolation, shares host network stack. |
| none | Disables all networking. |
3.2. Remote Drivers
Third‑party vendors can implement the CNM driver API to provide their own network solutions (e.g., Cisco Contiv, VMware NSX, Calico, Weave). These are loaded as plugins.
When you run `docker network create -d weave mynet`, Docker loads the Weave plugin, which implements the CNM driver interface.
4. How Docker Uses CNM (Step‑By‑Step)
Let's trace the creation of a container with a custom bridge network.
1. User runs `docker network create mybridge`:
   - Docker creates a network object with driver `bridge`.
   - The `bridge` driver creates a Linux bridge on the host (e.g., `br-abc123`).
   - The network gets a subnet and gateway (either default or user‑specified).
2. User runs `docker run --network mybridge --name c1 alpine sleep 3600`:
   - Docker (via `libnetwork`) creates a sandbox for the container (a new network namespace).
   - It creates an endpoint (veth pair) with one end placed in the sandbox and the other plugged into the bridge.
   - The endpoint is assigned an IP from the network's subnet.
   - The container's network namespace is configured with that IP and a default route via the bridge gateway.
   - The endpoint is added to the network's internal list.
Second container on same network:
- Similar steps, except the network already exists; the new container still gets its own new sandbox (one sandbox per container).
- Endpoint added to the same bridge.
- Now the two containers can communicate because they are on the same bridge.
DNS resolution:
- Docker runs an embedded DNS server that listens on 127.0.0.11 inside each container.
- When container `c2` tries to resolve `c1`, the DNS server returns the IP of `c1` (from the network's endpoint list).
- This works for user‑defined bridge networks but not the default bridge.
5. Inspecting CNM Objects
Docker's CLI exposes CNM objects through network and container inspection.
5.1. Inspect a Network
```bash
docker network inspect mybridge
```
You'll see:
- Driver: bridge
- Scope: local
- IPAM (IP Address Management): subnet, gateway, IP range
- Containers: list of endpoints attached, with their IPs and container names
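For a concrete picture, here is an abridged, made‑up `docker network inspect` result and a quick `sed` extraction of the IPAM fields. A real result contains many more keys (and `jq` is the nicer tool if installed); the container ID and all values below are illustrative.

```shell
# Save an abridged, made-up inspect result; a real one has many more fields.
cat > /tmp/mybridge.json <<'EOF'
[
    {
        "Name": "mybridge",
        "Driver": "bridge",
        "Scope": "local",
        "IPAM": {
            "Config": [ { "Subnet": "172.20.0.0/16", "Gateway": "172.20.0.1" } ]
        },
        "Containers": {
            "3f2a9c": { "Name": "c1", "IPv4Address": "172.20.0.2/16" }
        }
    }
]
EOF

# Pull out the IPAM fields (sed is always available, unlike jq).
subnet=$(sed -n 's/.*"Subnet": "\([^"]*\)".*/\1/p' /tmp/mybridge.json)
gateway=$(sed -n 's/.*"Gateway": "\([^"]*\)".*/\1/p' /tmp/mybridge.json)
echo "subnet=$subnet gateway=$gateway"
```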
5.2. Inspect a Container's Networking
```bash
docker inspect c1
```
Look at the `NetworkSettings` section. It contains:
- Networks: For each network the container is attached to, you'll find the endpoint IP, gateway, and aliases.
- SandboxKey: The path to the network namespace (e.g., `/var/run/docker/netns/...`).
5.3. Advanced: See Namespaces
On Linux, you can enter the container's network namespace with nsenter (requires root and the PID). Example:
```bash
# Get the container's PID
docker inspect -f '{{.State.Pid}}' c1

# Enter its network namespace
sudo nsenter -t <PID> -n ip addr
```
This shows the network interfaces inside the sandbox.
6. CNM vs. Legacy Networking
Before libnetwork, Docker's networking was tightly coupled to the docker daemon. The introduction of CNM brought:
- Pluggable drivers (third‑party network plugins).
- Better multi‑host networking (overlay).
- Clear separation between the container runtime and network management.
- Improved service discovery (DNS) for user‑defined networks.
The default bridge network (the old `docker0`) still exists for backward compatibility, but user‑defined bridges are the CNM way.
7. Hands‑On Tasks
Task 1: Explore CNM Components with a User‑Defined Bridge
- Create a network:
  ```bash
  docker network create cnm-net
  ```
- Inspect the network and note the `Containers` section (empty).
- Run two containers on this network:
  ```bash
  docker run -d --name c1 --network cnm-net alpine sleep 3600
  docker run -d --name c2 --network cnm-net alpine sleep 3600
  ```
- Inspect the network again. Now you'll see the containers listed under `Containers` with their endpoint IDs and IPs.
- Inspect one container (`docker inspect c1`) and look at `NetworkSettings.Networks["cnm-net"]`. You'll see its IP, gateway, and endpoint ID.
Task 2: View the Underlying Linux Interfaces (Linux Only)
- Find the bridge name associated with the network:
  ```bash
  docker network inspect cnm-net | grep -i bridge
  ```
  If the `com.docker.network.bridge.name` option is not set (the default for user‑defined networks), the bridge is named `br-` followed by the first 12 characters of the network ID (e.g., `br-abc123`).
- Use `brctl show` or `ip link show` to list bridges. You'll see the bridge created by Docker.
- List veth pairs: `ip link show | grep veth`. One end of each veth is attached to the bridge; the other is inside a container.
- If you have root access, enter the container's network namespace (see section 5.3) and observe the interfaces.
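To make the veth output easier to read, here is a made‑up `ip link show` excerpt and a one‑liner that picks out the host‑side veths attached to a given bridge. Interface names and indices are illustrative; the `@if5` suffix means the peer end of the veth is interface index 5 inside some container's namespace.

```shell
# Made-up `ip link show` excerpt illustrating how to read veth entries.
cat > /tmp/links.txt <<'EOF'
4: br-abc123: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
6: vethd1e2f3@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br-abc123 state UP
EOF

# Host-side veths enslaved to a bridge carry "master <bridge-name>";
# strip the "@if..." peer-index suffix to get just the interface name.
grep 'master br-abc123' /tmp/links.txt | grep -o 'veth[^@]*'
```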
Task 3: Create a Network with a Remote Driver (Simulate)
Though you don't have a third‑party plugin installed, you can see the remote‑driver concept by listing installed plugins with `docker plugin ls` or by looking at Docker's plugin directory:
```bash
ls /var/lib/docker/plugins/   # may show plugins if installed
```
(Not necessary for the exercise.)
Task 4: Compare Default Bridge vs. User‑Defined Bridge
- Inspect the default bridge: `docker network inspect bridge`.
  - Note the `Containers` section – it shows containers attached to the default bridge (if any).
  - There is no DNS service; the `Options` field may show `"com.docker.network.bridge.default_bridge": "true"`.
- Compare with the `cnm-net` you created. The user‑defined network has an embedded DNS server and service discovery.
Task 5: Endpoint Details
- Create a new container on a user‑defined network, but do not start it:
  ```bash
  docker create --name c3 --network cnm-net alpine sleep 3600
  ```
- Inspect the network: you'll see the container listed under `Containers` but with a placeholder (it doesn't have an IP yet because it's not running).
- Start the container:
  ```bash
  docker start c3
  ```
- Inspect again; now it has an IP.
- Disconnect the container from the network:
  ```bash
  docker network disconnect cnm-net c3
  ```
- Inspect the network – `c3` is gone. The endpoint is removed.
Task 6: Multi‑Endpoint Sandbox
- Run a container attached to two networks:
  ```bash
  docker run -d --name c4 --network cnm-net alpine sleep 3600
  docker network create other-net
  docker network connect other-net c4
  ```
- Inspect the container (`docker inspect c4`): under `NetworkSettings.Networks` you'll see two networks listed, each with its own endpoint IP.
- This container has a single sandbox (one network namespace) but two endpoints (two virtual interfaces) inside that namespace.
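You can check the one‑sandbox, two‑endpoints relationship directly from the inspect output. The snippet below parses a made‑up excerpt of a container's `NetworkSettings` (field values are illustrative): one `SandboxKey` means one network namespace, while each entry under `Networks` is a separate endpoint.

```shell
# Hypothetical excerpt of `docker inspect c4` NetworkSettings, saved for parsing.
cat > /tmp/c4-netsettings.json <<'EOF'
{
    "SandboxKey": "/var/run/docker/netns/1a2b3c4d5e6f",
    "Networks": {
        "cnm-net":   { "IPAddress": "172.20.0.3" },
        "other-net": { "IPAddress": "172.21.0.2" }
    }
}
EOF

# One sandbox (one SandboxKey), but one endpoint per attached network.
sandboxes=$(grep -c '"SandboxKey"' /tmp/c4-netsettings.json)
endpoints=$(grep -c '"IPAddress"' /tmp/c4-netsettings.json)
echo "sandboxes=$sandboxes endpoints=$endpoints"
```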
Summary
Key Takeaways
- The Container Network Model (CNM) defines how container networking is structured, with sandbox, endpoint, and network as key abstractions.
- libnetwork is the reference implementation of CNM used by Docker.
- Network drivers (bridge, overlay, etc.) implement the underlying connectivity.
- User‑defined networks use CNM features like embedded DNS and service discovery.
- You can inspect CNM objects via
docker network inspectanddocker inspect. - CNM enables pluggable networking, allowing third‑party drivers to be used.
Check Your Understanding
- What are the three core components of the Container Network Model?
- How does a sandbox relate to a container's network namespace?
- What is the role of a network driver in CNM?
- How does Docker's embedded DNS work on user‑defined networks?
- Why does the default
bridgenetwork not support container name resolution? - How can you list the veth pairs created for a specific bridge network on Linux?
Click to see answers
- Sandbox (isolated network stack), Endpoint (connection between sandbox and network), and Network (collection of endpoints).
- A sandbox corresponds directly to a Linux network namespace—a container's isolated view of network interfaces, routing tables, and DNS configuration.
- Network drivers implement the actual network creation and management. They handle creating bridges, overlay tunnels, or other network types, and provide the connectivity rules for endpoints.
- Docker runs an embedded DNS server at 127.0.0.11 inside each container on user-defined networks. When a container queries for another container's name, the DNS server returns the IP from the network's endpoint list.
- The default bridge predates CNM and doesn't include the embedded DNS server. Only user-defined bridges have the DNS service that enables name resolution.
- Use
ip link show | grep vethto see veth pairs, or inspect the specific bridge withip link show br-xxxto see which veths are attached.
Additional Resources
- Docker libnetwork GitHub repository
- Container Network Model (CNM) design document
- Network plugins
- Understanding Docker networking with CNM
Next Up
This concludes Phase 4: Networking in Docker. You now have a comprehensive understanding of Docker's networking capabilities, from basic bridge networks to the underlying CNM architecture. In Phase 5, you'll learn Docker Compose – the tool for defining and running multi‑container applications. See you there!