# 05 - Docker Networking

## Networking Overview

Docker creates a virtual networking layer so containers can communicate with each other and with the outside world.
```
┌─── Host Machine ───────────────────────────────────────┐
│                                                        │
│  eth0 (host NIC)                                       │
│   │                                                    │
│  docker0 (bridge: 172.17.0.1)                          │
│   ├── veth1 ←→ Container A (172.17.0.2)                │
│   ├── veth2 ←→ Container B (172.17.0.3)                │
│   └── veth3 ←→ Container C (172.17.0.4)                │
│                                                        │
│  my-network (bridge: 172.18.0.1)                       │
│   ├── veth4 ←→ Container D (172.18.0.2)                │
│   └── veth5 ←→ Container E (172.18.0.3)                │
│                                                        │
└────────────────────────────────────────────────────────┘
```
## Network Drivers
| Driver | Use Case | Container-to-Container | Isolation |
|---|---|---|---|
| bridge | Default. Containers on same host | Via IP or DNS (user-defined) | Per-network |
| host | Performance-critical apps | N/A (shares host network) | None |
| none | Maximum isolation | No networking | Complete |
| overlay | Multi-host (Swarm/K8s) | Across hosts | Per-network |
| macvlan | Containers need real IPs on LAN | Like physical devices | Per-VLAN |
| ipvlan | Similar to macvlan, L3 routing | L2 or L3 mode | Per-network |
## Bridge Network (Default)

### Default Bridge (docker0)

```bash
# Containers on the default bridge communicate by IP only (no DNS)
docker run -d --name web nginx
docker run -d --name api myapp

# Get a container's IP
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' web
# 172.17.0.2

# From the api container, web is reachable by IP only
docker exec api ping 172.17.0.2   # Works
docker exec api ping web          # FAILS! No DNS on the default bridge
```
### User-Defined Bridge (Recommended)

```bash
# Create a custom network
docker network create mynet

# Run containers on the custom network
docker run -d --name web --network mynet nginx
docker run -d --name api --network mynet myapp

# DNS resolution works!
docker exec api ping web      # Works! Resolves to 172.18.0.2
docker exec api curl web:80   # Works! DNS + port

# Containers on different networks are isolated
docker run -d --name isolated --network bridge nginx
docker exec isolated ping web   # FAILS -- different network
```
Why user-defined bridges are better:
- Automatic DNS resolution (container name → IP)
- Better isolation (containers must share a network)
- Connect/disconnect without restarting
- Configurable subnets
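The per-network isolation comes from each bridge getting its own subnet. A quick Python sketch with the standard `ipaddress` module, using the addresses from the diagram above (Docker allocates subnets from its default address pool, so the actual values on your host may differ):

```python
import ipaddress

# Subnets from the diagram above; illustrative, not guaranteed per host
docker0 = ipaddress.ip_network("172.17.0.0/16")   # default bridge
mynet = ipaddress.ip_network("172.18.0.0/16")     # user-defined bridge

container_a = ipaddress.ip_address("172.17.0.2")  # on docker0
container_d = ipaddress.ip_address("172.18.0.2")  # on my-network

print(container_a in docker0)    # True  -- same bridge segment
print(container_d in docker0)    # False -- different bridge, isolated
print(docker0.overlaps(mynet))   # False -- disjoint subnets
```

Because the subnets are disjoint, traffic between them must cross the host's routing layer, where Docker's firewall rules drop it by default.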
## Network Operations

```bash
# List networks
docker network ls

# Inspect a network
docker network inspect mynet

# Connect a running container to an additional network
docker network connect mynet existing-container

# Disconnect from a network
docker network disconnect mynet existing-container

# Remove a network
docker network rm mynet

# Remove all unused networks
docker network prune

# Create with a specific subnet
docker network create --subnet=10.0.0.0/24 --gateway=10.0.0.1 mynet
```
## Host Network

The container shares the host's network stack directly:

```bash
# Container uses the host's IP and ports directly
docker run -d --network host nginx
# nginx is now accessible at host_ip:80
# No port mapping needed (or possible)

# Check -- no separate IP
docker exec <container> ip addr   # Shows the host's interfaces
```
When to use:
- Maximum network performance (no NAT overhead)
- Container needs to access host network services
- Tools that need to see all host traffic
Downsides:
- No port isolation (port conflicts possible)
- No network-level security isolation
- Only works on Linux (not macOS Docker Desktop)
## None Network

Completely isolated -- no networking at all:

```bash
docker run -d --network none alpine sleep 3600
docker exec <container> ip addr   # Only the loopback interface (127.0.0.1)
```
When to use:
- Security-sensitive batch processing
- Containers that need zero network access
## Overlay Network (Multi-Host)

For containers spread across multiple Docker hosts (Docker Swarm):

```bash
# Initialize Swarm (required for overlay networks)
docker swarm init

# Create an overlay network
docker network create --driver overlay --attachable my-overlay

# Containers on different hosts can communicate
docker run -d --name web --network my-overlay nginx

# On another node:
docker run -d --name api --network my-overlay myapi
docker exec api ping web   # Works across hosts!
```
How it works:

```
   Host A                          Host B
┌────────────┐                 ┌────────────┐
│ Container1 │                 │ Container2 │
│  10.0.0.2  │                 │  10.0.0.3  │
└─────┬──────┘                 └─────┬──────┘
      │   VXLAN Tunnel (UDP 4789)   │
      └─────────────────────────────┘
```
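Each inner frame is wrapped in outer IP/UDP/VXLAN headers, which costs 50 bytes of overhead per packet; this is why Docker overlay networks default to a 1450-byte MTU on a standard 1500-byte link. The arithmetic, as a sketch (IPv4, no options):

```python
# VXLAN encapsulation overhead per packet
outer_ip = 20    # outer IPv4 header
outer_udp = 8    # UDP header (destination port 4789)
vxlan = 8        # VXLAN header
inner_eth = 14   # inner Ethernet header carried as payload

overhead = outer_ip + outer_udp + vxlan + inner_eth
print(overhead)          # 50
print(1500 - overhead)   # 1450 -- Docker's default overlay MTU
```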
## Macvlan Network

Containers get real MAC addresses and appear as physical devices on your LAN:

```bash
# Create a macvlan network
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  my-macvlan

# Container gets a real LAN IP
docker run -d --network my-macvlan --ip=192.168.1.100 nginx
# Accessible directly at 192.168.1.100 from your LAN
```
## DNS Resolution

### Built-in DNS Server (127.0.0.11)

User-defined networks include an embedded DNS server:

```bash
docker network create mynet
docker run -d --name db --network mynet postgres:16
docker run -d --name api --network mynet myapp

# Inside the api container:
#   DNS resolves "db" → 172.18.0.2
#   ping db      ✓
#   psql -h db   ✓
```
### DNS Aliases

```bash
# Give a container network aliases
docker run -d --name postgres-primary --network mynet \
  --network-alias db \
  --network-alias database \
  postgres:16

# Both "db" and "database" resolve to this container
docker exec api ping db         # Works
docker exec api ping database   # Works too
```
Multiple containers with the same alias (round-robin DNS):

```bash
docker run -d --name web1 --network mynet --network-alias web nginx
docker run -d --name web2 --network mynet --network-alias web nginx
docker run -d --name web3 --network mynet --network-alias web nginx

# "web" round-robins across web1, web2, web3
docker exec api ping web   # Different IP each time
```
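Conceptually, the embedded DNS holds the set of IPs behind the shared alias and rotation spreads traffic across them. A toy model of that behavior (the IPs are illustrative, following the 172.18.0.x addressing used above -- this is not Docker's resolver code):

```python
from itertools import cycle

# Hypothetical A records behind the "web" alias
records = ["172.18.0.2", "172.18.0.3", "172.18.0.4"]

resolver = cycle(records)  # rotate through the record set
picks = [next(resolver) for _ in range(5)]
print(picks)
# ['172.18.0.2', '172.18.0.3', '172.18.0.4', '172.18.0.2', '172.18.0.3']
```

Note the rotation gives crude load spreading, not health-aware load balancing: a stopped container's record disappears only when Docker removes it from the network.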
## How Port Mapping Works (iptables)

When you use `-p 8080:80`:

```bash
# Docker creates iptables rules
iptables -t nat -L -n | grep 8080

# DNAT rule: redirect host:8080 → container:80
# Chain DOCKER
#   target  prot opt  source     destination
#   DNAT    tcp  --   0.0.0.0/0  0.0.0.0/0    tcp dpt:8080 to:172.17.0.2:80
```
```
External Request → Host:8080
        ↓ (iptables DNAT)
Container:80 (172.17.0.2)
        ↓
Response returns via the same path
```
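Logically, the DNAT step is a lookup from published host port to (container IP, container port). A toy model of that mapping -- not how netfilter is implemented, just the behavior `-p 8080:80` produces:

```python
# Toy DNAT table: host port -> (container IP, container port)
# The entry mirrors `-p 8080:80`; the container IP is illustrative
dnat = {
    8080: ("172.17.0.2", 80),
}

def rewrite_destination(host_port: int) -> tuple[str, int]:
    """Return the rewritten destination, as the DNAT rule would."""
    if host_port not in dnat:
        raise LookupError(f"port {host_port} is not published")
    return dnat[host_port]

print(rewrite_destination(8080))   # ('172.17.0.2', 80)
```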
## Container Communication Patterns

### 1. Same Network (Direct)

```bash
docker network create backend
docker run -d --name db --network backend postgres:16
docker run -d --name api --network backend -e DB_HOST=db myapi
# api connects to postgres at "db:5432"
```
### 2. Different Networks (Isolated by Default)

```bash
docker network create frontend
docker network create backend
docker run -d --name web --network frontend nginx
docker run -d --name db --network backend postgres:16
# web CANNOT reach db (different networks)

# To bridge them: connect a container to both networks
docker run -d --name api --network backend myapi
docker network connect frontend api
# Now api can reach both web and db
```
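The same split is common in Compose, where the networks are declared once and `api` joins both. A sketch (service names and the `myapi` image are placeholders):

```yaml
# docker-compose.yml -- frontend/backend isolation with a bridging api
networks:
  frontend:
  backend:

services:
  web:
    image: nginx
    networks: [frontend]
  db:
    image: postgres:16
    networks: [backend]
  api:
    image: myapi                    # placeholder image
    networks: [frontend, backend]   # member of both networks
```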
### 3. Host Services from a Container

```bash
# Linux: reach the host at 172.17.0.1 (the docker0 gateway)
# Or use host.docker.internal (built in on Docker Desktop; on Linux,
# add --add-host=host.docker.internal:host-gateway to the run command)
docker run -d -e DB_HOST=host.docker.internal myapp
```
## Network Troubleshooting

```bash
# Run a debug container on the same network
docker run --rm -it --network mynet nicolaka/netshoot

# Inside netshoot:
ping db                 # Test connectivity
dig db                  # DNS lookup
nslookup web            # DNS lookup
curl -v http://web:80   # HTTP test
traceroute api          # Route tracing
iftop                   # Live traffic monitor
tcpdump -i eth0         # Packet capture

# From the host: inspect network details
docker network inspect mynet
docker inspect --format='{{json .NetworkSettings.Networks}}' mycontainer | jq
```
## FAANG Interview Angle
Common questions:
- "How do containers on the same host communicate?"
- "What's the difference between bridge and overlay networks?"
- "How does Docker DNS work?"
- "How would you isolate frontend and backend containers?"
- "When would you use host networking?"
Key answers:
- Default bridge: IP only. User-defined bridge: DNS + isolation
- Overlay uses VXLAN tunnels for multi-host communication
- Embedded DNS (127.0.0.11) resolves container names on user-defined networks
- Use separate networks + connect shared containers to both
- Host networking for performance; no NAT overhead, but no isolation