Welcome to the final technical installment of Linux Networking Mastery!
By now you have a comprehensive toolkit:
- Part 1 – network stack basics and inspection tools
- Part 2 – interface and IP configuration (temporary + persistent via Netplan, nmcli, systemd-networkd)
- Part 3 – routing tables, static/policy routing, namespaces, simple router setup
- Part 4 – name resolution, systemd-resolved, per-link/global DNS, troubleshooting
- Part 5 – firewalls with nftables, firewalld, ufw, stateful rules
- Part 6 – services (hardened SSH, Nginx basics, NFS/Samba shares, DHCP with dnsmasq)
- Part 7 – monitoring (ss, tcpdump, iperf3, iftop), troubleshooting workflows
- Part 8 – bonding, VLANs, bridges, WireGuard
- Part 9 – wireless client & AP (nmcli, hostapd)
In Part 10 we tie everything together by exploring how containers and virtual machines handle networking — one of the most common real-world applications of the concepts we’ve covered.
We’ll look at:
- Docker and Podman networking modes
- Bridge, host, macvlan, ipvlan, overlay networks
- libvirt / QEMU bridge networking
- Basic Kubernetes networking concepts
- A capstone hands-on project combining multiple techniques
1. Docker Networking Basics
Docker (still widely used in 2026) creates its own bridge network by default.
Common modes:
# Default bridge network (NAT + port publishing)
docker run -d -p 8080:80 nginx
# Host network (shares host’s network namespace)
docker run --network host nginx
# No network (completely isolated)
docker run --network none busybox
# Custom user-defined bridge
docker network create mybridge
docker run --network mybridge nginx
Inspect:
docker network ls
docker network inspect bridge
Under the hood: Docker uses a Linux bridge (docker0), iptables/nftables NAT rules, and veth pairs connecting each container to the bridge.
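To make that plumbing concrete, here is a minimal sketch that hand-builds what Docker does for each container. It must run as root; the names demo-ns, br-demo, and the 172.30.0.0/24 subnet are arbitrary choices for the demo, not anything Docker itself uses:

```shell
#!/bin/sh
# Run as root. Recreates Docker's per-container wiring by hand:
# a namespace ("the container"), a bridge ("docker0"), and a veth pair.
set -e

ip netns add demo-ns                        # the "container" namespace
ip link add br-demo type bridge             # stand-in for docker0
ip addr add 172.30.0.1/24 dev br-demo       # host-side gateway address
ip link set br-demo up

ip link add veth-host type veth peer name veth-ctr   # the veth pair
ip link set veth-ctr netns demo-ns          # one end into the namespace
ip link set veth-host master br-demo        # other end onto the bridge
ip link set veth-host up

ip -n demo-ns link set lo up
ip -n demo-ns link set veth-ctr up
ip -n demo-ns addr add 172.30.0.2/24 dev veth-ctr
ip -n demo-ns route add default via 172.30.0.1

ip netns exec demo-ns ping -c 1 172.30.0.1  # "container" reaches the host bridge

# Clean up
ip netns del demo-ns
ip link del br-demo
```

The only missing piece relative to Docker is the NAT masquerade rule that lets the namespace reach beyond the host.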
Important 2026 note: Many distributions and new deployments prefer Podman (daemonless, rootless by default).
2. Podman Networking (Rootless & Modern Preference)
Podman 4 and later default to the Netavark network backend (with aardvark-dns for container name resolution); earlier releases used the same CNI (Container Network Interface) plugins as Kubernetes.
Default (bridge-like) behavior:
podman run -d -p 8080:80 docker.io/library/nginx
Rootless containers get a user-mode network stack (pasta in recent releases, slirp4netns previously):
podman network ls
podman network inspect podman
Create custom network:
podman network create mynet
podman run --network mynet -d nginx
Advanced modes (same as Docker):
podman run --network host ...
podman run --network mymacvlan ...   # requires a macvlan network created first (section 3)
3. Specialized Network Drivers
macvlan
Container gets its own MAC address and appears directly on the parent network (no NAT).
ip link add macvlan0 link enp0s3 type macvlan mode bridge
podman network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=enp0s3 mymacvlan
podman run --network mymacvlan --ip 192.168.1.200 nginx
ipvlan
Similar to macvlan, but endpoints share the parent interface’s MAC address; typically used in L3 mode, where the host routes traffic instead of bridging it.
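By analogy with the macvlan commands above, an ipvlan setup might look like this (a sketch; enp0s3 and the 192.168.1.0/24 subnet follow the earlier examples):

```shell
# Kernel-level interface (requires root), L3 mode
ip link add ipvlan0 link enp0s3 type ipvlan mode l3
# Or as a Podman network, using the ipvlan driver
podman network create -d ipvlan --subnet=192.168.1.0/24 -o parent=enp0s3 myipvlan
```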
overlay (multi-host)
Used with Docker Swarm or Kubernetes (requires a control plane — Swarm’s built-in store or an external key-value store).
4. libvirt / QEMU Virtual Machine Networking
libvirt (used by virt-manager, GNOME Boxes, etc.) supports:
- NAT (default) — similar to Docker bridge
- bridge — connect VM directly to physical network (recommended for servers)
Create persistent bridge (from Part 8):
# Already created br0 with IP on it
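If br0 does not exist yet, a Netplan definition along these lines creates it persistently (as covered in Part 8; the interface name enp0s3 and DHCP addressing are placeholders — adjust to your host):

```yaml
# /etc/netplan/01-br0.yaml — then run: netplan apply
network:
  version: 2
  ethernets:
    enp0s3:
      dhcp4: false          # IP moves to the bridge, not the NIC
  bridges:
    br0:
      interfaces: [enp0s3]  # enslave the physical NIC
      dhcp4: true           # bridge gets the host's address
```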
Attach VM to bridge (edit XML or use virt-manager):
<interface type='bridge'>
<source bridge='br0'/>
<model type='virtio'/>
</interface>
VM gets IP from the same subnet as the host bridge interface (via DHCP or static).
5. Kubernetes Networking Concepts (High-Level)
Kubernetes delegates pod networking to CNI plugins (Calico, Flannel, Cilium, etc.).
Core abstractions (2026 perspective):
- Pod → gets its own IP (usually from a large overlay or underlay subnet)
- ClusterIP Service → internal VIP, kube-proxy NAT/rules
- NodePort → exposes the service on every node’s IP at a high port (default range 30000–32767)
- LoadBalancer → integrates with cloud LB or MetalLB
- Ingress → HTTP/HTTPS routing (nginx-ingress, Traefik, etc.)
Most users in 2026 run lightweight distributions (k3s, MicroK8s, kind) where networking is pre-configured.
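The Service abstractions above can be sketched in manifest form. This assumes a hypothetical deployment whose pods carry the label app: web; names and ports are illustrative:

```yaml
# ClusterIP Service: internal VIP routing to pods labeled app: web
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80          # cluster-internal port
      targetPort: 8080  # container port
---
# NodePort variant: same selector, reachable on every node's IP
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080   # must fall in the default 30000-32767 range
```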
Capstone Project: Multi-Container Routed Application
Goal: Deploy a small web app with backend, database, and reverse proxy — using custom networking, routing, firewall, DNS, and monitoring.
Scenario
- Container A: Nginx reverse proxy (port 80 → backend)
- Container B: Simple Python/Flask API
- Container C: PostgreSQL
- All on custom bridge network
- Expose only proxy to host/external
- Optional: macvlan for direct backend access, or WireGuard tunnel to another host
Steps outline (detailed commands in your lab VM):
Create custom bridge network
podman network create app-net
Run Postgres:
podman run -d --name db --network app-net -e POSTGRES_PASSWORD=secret postgres:16
Run backend API (example image or your own):
podman run -d --name api --network app-net my-flask-api
Run Nginx proxy:
podman run -d --name proxy -p 8080:80 --network app-net -v ./nginx.conf:/etc/nginx/conf.d/default.conf nginx
Sample nginx.conf (proxy_pass http://api:5000;)
Firewall: allow only 8080/tcp inbound (nftables/firewalld/ufw)
Test & monitor:
ss -tnlp
podman logs proxy
tcpdump -i any port 8080
curl http://localhost:8080
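A minimal nginx.conf for the proxy container might look like this (the hostname api resolves via the network’s internal DNS; port 5000 matches the Flask default):

```nginx
# /etc/nginx/conf.d/default.conf — mounted into the proxy container
server {
    listen 80;
    location / {
        proxy_pass http://api:5000;            # "api" = container name on app-net
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

And the firewall step could be sketched with nftables as follows — an assumption-laden fragment (table name web is arbitrary, and the drop policy means you must keep your own management rules, e.g. SSH, if administering remotely):

```shell
# Allow only 8080/tcp inbound, plus loopback and established traffic
nft add table inet web
nft add chain inet web input '{ type filter hook input priority 0; policy drop; }'
nft add rule inet web input iif lo accept
nft add rule inet web input ct state established,related accept
nft add rule inet web input tcp dport 22 accept    # keep SSH if managing remotely
nft add rule inet web input tcp dport 8080 accept
```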
Extensions
- Add healthchecks & restarts
- Use macvlan for API to get real LAN IP
- Add WireGuard tunnel → access from remote host
- Move to Kubernetes (kind cluster) and use Services/Ingress
Hands-On Exercises
- Compare default bridge vs host vs macvlan performance with iperf3 between containers/host.
- Set up a Podman pod (shared namespace) with sidecar pattern.
- Create libvirt VM attached to custom bridge → ping between VM and container.
- Build the capstone project — intentionally break connectivity, then debug using tools from Part 7.
Congratulations! You now have production-grade Linux networking knowledge.