Lesson 4.3: Host and None Networks

Welcome to Lesson 4.3! You've learned about bridge networks, which provide isolation and communication between containers. Now we'll explore two special network drivers: host and none. These networks give you either full host access or complete isolation. Understanding them will help you choose the right networking model for performance-critical applications or when you need to disable networking entirely.


Learning Objectives

TIP

By the end of this lesson, you will be able to:

  • Explain the host network driver and its use cases (performance, binding to all host ports).
  • Describe the limitations and security implications of the host network.
  • Use the none network driver to disable networking in a container.
  • Identify scenarios where host or none networks are appropriate.
  • Compare host and none networks with bridge networks.

1. The Host Network Driver

The host network driver removes network isolation between the container and the Docker host. The container shares the host's network stack directly.

1.1. How It Works

When you run a container with --network host:

  • The container does not get its own IP address; it uses the host's IP.
  • The container sees the same network interfaces as the host (e.g., eth0, lo).
  • Port bindings (-p) are ignored because the container already has full access to the host's ports.
  • The container can bind to any port on the host, and that port becomes directly available on the host.
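One way to see the first point in practice is to inspect a host-network container: its `host` network entry carries no container-specific IP. This is a sketch assuming Docker on a Linux host; the container name `tmp-host` is illustrative.

```shell
# Start a throwaway container on the host network
docker run -d --name tmp-host --network host alpine sleep 30

# The "host" network entry has no container-specific IP
# (this prints an empty line)
docker inspect -f '{{.NetworkSettings.Networks.host.IPAddress}}' tmp-host

# Clean up
docker rm -f tmp-host
```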

WARNING

The host network driver is only fully supported on Linux hosts. On Docker Desktop (macOS/Windows), containers run inside a Linux VM, so --network host shares the VM's network stack, not your machine's.

1.2. When to Use Host Network

TIP

Host network is useful for:

  • Performance‑sensitive applications: Avoiding the overhead of bridge NAT and port mapping can improve throughput and reduce latency.
  • When a container needs to bind to many ports: Instead of publishing hundreds of ports with -p, you can use host network and let the container bind directly.
  • When you need the container to behave like a native process on the host (e.g., running a network monitoring tool that needs to see host interfaces).

1.3. Security Implications

DANGER

  • Reduced isolation: The container can access all host network interfaces and potentially interfere with other services.
  • No port namespace: The container can bind to ports already in use by other containers or host processes, causing conflicts.
  • Expanded attack surface: a compromised container has the same network reach as any host process. Treat host networking as a privileged capability and avoid it with untrusted containers.
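Because host-network containers compete for the same ports as host processes, it can help to check a port before starting one. Here is a minimal sketch using bash's /dev/tcp pseudo-device; the port_free helper is illustrative, not a Docker feature.

```shell
# Return success if nothing is listening on the given local TCP port.
# A successful connect via bash's /dev/tcp means the port is taken.
port_free() {
  ! (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

if port_free 80; then
  echo "port 80 is free; safe to start the host-network container"
else
  echo "port 80 is in use; expect a bind conflict"
fi
```

Note that /dev/tcp is a bash feature; plain POSIX sh does not support it.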

1.4. Example: Running an Nginx Container with Host Network

bash
docker run -d --name nginx-host --network host nginx

Now Nginx is accessible on the host's port 80 directly (no -p). You can test with curl localhost.


2. The None Network Driver

The none network driver disables all networking for the container. The container gets only a loopback interface (lo), with no external network access.

2.1. How It Works

When you run a container with --network none:

  • No external network interfaces are created.
  • The container cannot communicate with other containers or the host.
  • It can only communicate with itself via localhost.
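You can confirm the interface list with a one-off container. This sketch assumes Docker is installed; it works on Linux and Docker Desktop alike, since the none driver doesn't depend on the host's network stack.

```shell
# Under --network none only the loopback interface exists;
# no eth0 appears in the list
docker run --rm --network none alpine ip -o link
```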

2.2. When to Use None Network

TIP

None network is ideal for:

  • Isolated batch jobs: Containers that process data but don't need any network.
  • Security‑sensitive tasks: When you want to guarantee no network leaks.
  • Testing: When you need to test how an application behaves without network access.
  • Running containers that will later be attached to a network (e.g., you can start with none and later connect to a bridge using docker network connect).

2.3. Example: Running a Container with None Network

bash
docker run -it --network none alpine sh

Inside, run ip addr – you'll see only lo (loopback). Attempting to ping anything will fail.


3. Comparing Host, None, and Bridge

| Feature | Bridge (default) | Host | None |
| --- | --- | --- | --- |
| Isolation | Containers isolated on private network | No isolation; shares host stack | Complete isolation (no network) |
| IP Address | Unique IP on bridge subnet | Uses host IP | None (loopback only) |
| Port Mapping | Required for host access (-p) | Not used; container binds directly | Not applicable |
| Performance | Slight overhead (NAT) | Near-native performance | No network traffic |
| Use Case | Multi-container apps, default | Performance, host integration | Isolated processes, security |
| Platform | All | Linux only (full feature) | All |

4. Combining Networks

A container can be attached to multiple networks. For example, you could start a container with --network none to prevent any initial access, then later connect it to a bridge network using docker network connect. This is useful for adding networking only when needed.

bash
# Start with no network
docker run -d --name isolated --network none alpine sleep 3600

# Create a bridge network, then connect the container to it
docker network create mynet
docker network connect mynet isolated

5. Hands-On Tasks

Task 1: Run a Container with Host Network (Linux Only)

  1. If you are on Linux, run:
    bash
    docker run -d --name host-nginx --network host nginx
  2. Access Nginx at http://localhost (port 80). If you have another web server on port 80, you may get a conflict.
  3. Stop and remove the container:
    bash
    docker stop host-nginx && docker rm host-nginx

Task 2: Inspect Network Interfaces

  1. Run an Alpine container with host network:
    bash
    docker run --rm --network host alpine ip addr
    Compare the output with ip addr on your host. On Linux they should be identical; on Docker Desktop you'll see the VM's interfaces instead of your machine's.
  2. Run an Alpine container with bridge network:
    bash
    docker run --rm alpine ip addr
    Observe that it shows a virtual Ethernet interface (eth0) with a private IP.

Task 3: None Network Isolation

  1. Run a container with none network:
    bash
    docker run -it --network none alpine sh
  2. Inside, run ip addr – only lo appears.
  3. Try ping 8.8.8.8 – it will fail (network unreachable).
  4. Exit the container.

Task 4: Connect a None‑Network Container to a Bridge

  1. Create a bridge network:
    bash
    docker network create mybridge
  2. Start a container with --network none:
    bash
    docker run -d --name isolated --network none alpine sleep 3600
  3. Connect it to mybridge:
    bash
    docker network connect mybridge isolated
  4. Exec into the container and check interfaces:
    bash
    docker exec -it isolated ip addr
    Now you'll see both lo and a new interface (eth0) with an IP from the bridge network.
  5. Clean up:
    bash
    docker rm -f isolated
    docker network rm mybridge

Task 5: Host Network on Docker Desktop (Simulation)

If you are on macOS or Windows, host networking applies to the Linux VM that runs your containers, not to your machine itself, so a layer of isolation remains. Run:

bash
docker run --rm --network host alpine ip addr

You'll see the VM's network interfaces, not your host's. This is expected.


Summary

Key Takeaways

  • The host network driver removes isolation, making the container use the host's network stack directly.
  • It is useful for performance‑critical applications or when a container needs to bind to many ports.
  • Host network is only fully functional on Linux; on Docker Desktop, it runs in the VM context.
  • The none network driver disables all networking, giving only a loopback interface.
  • None network is ideal for isolated tasks or as a starting point before attaching to a network.
  • Both drivers have specific use cases; for most multi‑container applications, a user‑defined bridge remains the best choice.

Check Your Understanding

  1. What does --network host do, and when might you use it?
  2. What are the security implications of using the host network driver?
  3. How do you run a container with no network access?
  4. Can you later connect a container that was started with --network none to a bridge network? How?
  5. Why is host network not as effective on Docker Desktop (macOS/Windows) as on Linux?
  6. A container on the host network tries to bind to port 80. What happens if another process on the host is already using port 80?
Answers
  1. --network host makes the container share the host's network stack directly. Use it for performance‑critical applications or when you need the container to bind to many ports without -p.
  2. Reduced isolation (can interfere with host services), no port namespace (conflicts possible), and requires additional privileges. Avoid with untrusted containers.
  3. docker run -it --network none imagename
  4. Yes. Use docker network connect networkname containername to add a network interface to an existing container, even one that started with --network none.
  5. On Docker Desktop, containers run inside a Linux VM. The host network driver shows the VM's network interfaces, not the host's, so it's not true host networking.
  6. Docker doesn't manage ports in host mode, so the conflict surfaces inside the container: the application fails to bind with an "address already in use" error (and typically exits), rather than Docker reporting a port-allocation error.


Next Up

In the next lesson, we'll cover Exposing Ports in depth, focusing on how to publish ports correctly and understand the underlying NAT mechanics. See you there!