Docker Networking
Normally, Docker creates a new network namespace for each container we run. When we attach the container to a network, we define an endpoint that connects the container's network namespace with the actual network. This way, we have one container per network namespace. Docker also provides a way to choose the network namespace in which a container runs: when creating a new container, we can specify that it should be attached to (or rather, included in) the network namespace of an existing container. With this technique, we can run multiple containers in a single network namespace.
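A minimal sketch of sharing a namespace with an existing container (container and image names are illustrative):
docker run -d --name app nginx
docker run --rm --network container:app busybox ip addr   # shows app's interfaces and IP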
When you install Docker, it creates three networks automatically (quick usage sketches for all three follow the list below):
- bridge
- docker-host NIC goes into promiscuous mode (it accepts all L2 packets without checking the destination MAC; in other words, MAC filtering is disabled)
- the docker0 bridge is effectively a switch inside the docker-host; this switch interconnects the docker-host and the docker containers
- network used by default when you run a container
- containers in this network can communicate with each other
- containers are assigned IPs from the 172.17.0.0/16 subnet
- to be reachable from the outside world, containers must have their ports mapped to the docker-host IP
- Overview:
- new network namespace created for container
- docker0 bridge is automatically created in the docker-host network namespace; container traffic reaches the outside world through the docker-host NIC via NAT
- veth (Virtual Ethernet) interface:
- automatically created
- attached to the docker0 bridge
- attached to the container NIC
- the veth pair is like a media/cable connecting a docker0 bridge/switch port to the container NIC
- none
- to use container without network: --network=none
- the container is not attached to any network and cannot communicate with any other container
- host
- to use host network: --network=host
- in this case the container uses the same IP as the docker-host
- ports are shared between docker-host and all containers connected to the "host" network
- container has direct access to the docker-host's NIC
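A quick sketch of all three default networks (image names are illustrative):
docker run -d -p 8080:80 --name web nginx        # bridge: host port 8080 -> container port 80
docker run --rm --network=none busybox ip addr   # none: only the loopback interface inside the container
docker run --rm --network=host busybox ip addr   # host: shows the docker-host's own interfaces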
To create a custom network:
docker network create \
--driver bridge \
--subnet 192.168.190.0/24 \
custom_isolated_network
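User-defined bridge networks also provide built-in DNS, so containers on them can reach each other by name; a minimal check (image names are illustrative):
docker run -d --name web --network custom_isolated_network nginx
docker run --rm --network custom_isolated_network busybox ping -c 1 web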
List all docker networks:
docker network ls
To view bridges only:
brctl show
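To view a network's subnet, gateway, and attached containers:
docker network inspect bridge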
Other types of networks supported by docker:
- macvlan (requires at least kernel 3.9 on the docker-host) - the docker-host NIC uses unicast filtering, so L2 frames with unknown destination MACs would be discarded (except in passthru mode, which uses promiscuous mode)
- this type allows you to assign several MAC (and therefore IP) addresses to the same physical NIC
- macvlan lets you configure subinterfaces (slave devices) of a parent (master) device
- each subinterface gets its own randomly generated MAC address and, consequently, its own IP address
- subinterfaces cannot interact directly with the parent interface
- to communicate with the parent interface, assign a macvlan subinterface to the docker-host (see the walkthrough further below)
- macvlan subinterfaces are named, for example, mac0@eth0 (this notation clearly identifies the subinterface's parent)
- macvlan is a trivial bridge: it knows every MAC address it can receive, so it doesn't need to implement MAC learning or STP, which makes it simple, stupid, and fast
- Each subinterface can be in one of 4 modes that affect possible traffic flows (these are general macvlan modes, and not all of them are exposed by Docker's macvlan driver - currently Docker supports only the macvlan bridge mode); the iproute2 sketch after this list shows how a mode is selected outside Docker:
- private - traffic goes only from the subinterfaces out through the parent; subinterfaces on the same parent cannot communicate with each other. This is not a bridge.
- VEPA (Virtual Ethernet Port Aggregator) - this mode needs a VEPA-compatible switch. Subinterfaces of one parent can communicate with each other with the help of the VEPA hardware switch, which reflects back all frames whose source and destination are both local to the macvlan interfaces
- bridge - all subinterfaces on a parent interface are interconnected with a simple bridge. Frames from one subinterface to another are delivered directly (through the bridge) and not sent out. All MAC addresses are known, so the macvlan bridge doesn't need STP or MAC learning
- passthru - allows a single VM (or container) to be connected directly to the physical interface. The advantage of this mode is that the VM can then change the MAC address and other interface parameters.
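- Outside Docker, the mode is chosen when the subinterface is created with iproute2, e.g. (interface names are illustrative):
- ip link add mac0 link eth0 type macvlan mode bridge
- ip link add mac1 link eth0 type macvlan mode private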
- docker network create --driver macvlan --subnet=10.0.0.0/24 --gateway=10.0.0.1 --opt parent=eth0 macvlanNetworkName
- gateway - an external gateway (not on the docker-host itself)
- parent - docker-host physical interface
- docker-host eth0 can be for example 10.0.0.2
- you can also use macvlan with VLAN interfaces. In this case the subinterfaces use different parent interfaces (e.g. eth0.10 and eth0.20) and can communicate with each other only via the gateway:
- create VLAN interfaces eth0.10 and eth0.20 (see the iproute2 sketch after this block)
- docker network create --driver macvlan --subnet=10.0.10.0/24 --gateway=10.0.10.1 --opt parent=eth0.10 macvlan10
- docker network create --driver macvlan --subnet=10.0.20.0/24 --gateway=10.0.20.1 --opt parent=eth0.20 macvlan20
- docker run --name='container0' --hostname='container0' --net=macvlan10 --ip=10.0.10.2 --detach=true centos
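- A sketch of creating the VLAN parent interfaces with iproute2 (VLAN IDs 10 and 20 as in the example above):
- ip link add link eth0 name eth0.10 type vlan id 10
- ip link add link eth0 name eth0.20 type vlan id 20
- ip link set eth0.10 up
- ip link set eth0.20 up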
- To add additional IP to a container:
- docker network connect --ip=10.0.20.3 macvlan20 container1
- How to connect from macvlan subinterface to the host:
- This will prevent Docker from assigning the 192.168.1.223 address to a container; the --ip-range option tells Docker's IPAM to allocate IP addresses only from the given sub-range:
- docker network create -d macvlan -o parent=eno1 --subnet 192.168.1.0/24 --gateway 192.168.1.1 --ip-range 192.168.1.192/27 --aux-address 'host=192.168.1.223' mynet
- Next, we create a new macvlan interface on the host. You can call it whatever you want:
- ip link add mynet-aux link eno1 type macvlan mode bridge
- Now we need to configure the interface with the address we reserved and bring it up:
- ip addr add 192.168.1.223/32 dev mynet-aux
- ip link set mynet-aux up
- The last thing we need to do is to tell our host to use that interface when communicating with the containers. This is relatively easy because we have restricted our containers to a particular CIDR subset of the local network; we just add a route to that range like this:
- ip route add 192.168.1.192/27 dev mynet-aux
- With that route in place, your host will automatically use this mynet-aux interface when communicating with containers on the mynet network.
- the NIC-based configuration above is not persistent and will be lost after a reboot, so add the related settings to the appropriate configuration files (NIC and route); one possible approach is sketched below
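- One hedged way to make the shim survive reboots on a systemd-based host (the unit path and names are assumptions; adjust to your setup), e.g. /etc/systemd/system/mynet-aux.service:
[Unit]
Description=macvlan shim interface for the Docker mynet network
After=network-online.target
[Service]
Type=oneshot
ExecStart=/sbin/ip link add mynet-aux link eno1 type macvlan mode bridge
ExecStart=/sbin/ip addr add 192.168.1.223/32 dev mynet-aux
ExecStart=/sbin/ip link set mynet-aux up
ExecStart=/sbin/ip route add 192.168.1.192/27 dev mynet-aux
[Install]
WantedBy=multi-user.target
then enable it: systemctl enable mynet-aux.service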
- ipvlan is similar to macvlan but uses the same MAC address for all endpoints (docker containers). It's useful when the switch the docker-host is connected to restricts the maximum number of MAC addresses per physical port. ipvlan requires at least kernel 4.1 on the docker-host; see the creation sketch below
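- A minimal ipvlan creation sketch, analogous to the macvlan command above (subnet, parent interface, and network name are illustrative; ipvlan_mode=l2 selects the driver's L2 mode):
- docker network create -d ipvlan --subnet=10.0.0.0/24 --gateway=10.0.0.1 -o parent=eth0 -o ipvlan_mode=l2 ipvlanNetworkName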
An IPAM (IP Address Management) driver lets you delegate IP lease management to an external component. This way you can coordinate IP use with other virtual or bare metal servers in your datacenter.
Docker controls the IP address assignment for network and endpoint interfaces via the IPAM driver(s). Libnetwork has a default, built-in IPAM driver and allows third-party IPAM drivers to be dynamically plugged in. On network creation, the user can specify which IPAM driver libnetwork should use for the network's IP address management (see the sketch below). For the time being, there is no IPAM driver that would communicate with an external DHCP server, so you need to rely on Docker's default IPAM driver for container IP address and settings configuration. Containers use the host's DNS settings by default, so there is no need to configure DNS servers.
The IPAM driver ensures the container gets an IPv4 and an IPv6 address from the subnets configured for the macvlan network.
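A sketch of pinning the IPAM driver and address pools explicitly (the built-in default driver is shown; values and the network name are illustrative):
docker network create --ipam-driver default --subnet 10.10.0.0/24 --ip-range 10.10.0.128/25 --gateway 10.10.0.1 ipamExampleNet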
If you use Hyper-V:
macvlan uses a unique MAC address per Ethernet interface; by default, Hyper-V only allows traffic whose MAC address matches the one registered on the virtual switch port, so we need to enable "MAC address spoofing" to prevent the virtual switch from dropping the macvlan traffic (a PowerShell sketch follows below).
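A hedged PowerShell sketch, run on the Hyper-V host (the VM name is an assumption):
Set-VMNetworkAdapter -VMName docker-host-vm -MacAddressSpoofing On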