Integrating Multus CNI
Available in VPC
Describes how to create a Pod with multiple network interfaces using Multus CNI.
Introduction to Multus CNI
Multus CNI is a CNI plug-in that enables advanced network configuration: it lets you attach multiple network interfaces to a single pod, each with a different address range.
- In Kubernetes, each Pod has only one network interface (in addition to the loopback) by default. With Multus CNI, you can create a multi-homed Pod with multiple interfaces.
- Multus CNI acts as a meta plug-in that calls other CNI plug-ins. Accordingly, an additional CNI plug-in for Multus CNI to delegate to is required.
For a detailed description of Multus CNI, see the Multus CNI repository.
Install Multus CNI
The following describes how to install Multus CNI.
Disable CNI-Exclusive mode of Cilium CNI using the command below.
```shell
kubectl patch daemonset cilium -n kube-system --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/lifecycle/postStart/exec/command/2", "value": "--cni-exclusive=false"}]'
```
After cloning the GitHub repository of Multus CNI, install the DaemonSet.

```shell
git clone https://github.com/k8snetworkplumbingwg/multus-cni.git && cd multus-cni
cat ./deployments/multus-daemonset.yml | kubectl apply -f -
```
- The Multus DaemonSet places the Multus executable files in the /opt/cni/bin path on each worker node, and creates the /etc/cni/net.d/00-multus.conf configuration file to configure CNI.
- You can check that a Multus Pod is running on each node with kubectl get pods --all-namespaces | grep -i multus.
- Download ipvlan to be connected through Multus CNI to the worker node.
```shell
root@afaf-w-1vhl:~# curl -L https://github.com/containernetworking/plugins/releases/download/v1.2.0/cni-plugins-linux-amd64-v1.2.0.tgz | tar zx -C /opt/cni/bin ./host-local ./ipvlan
```
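For reference, the 00-multus.conf file generated by the Multus DaemonSet typically has a structure like the following. This is an illustrative sketch: the actual delegate entry is copied from the existing CNI configuration on your node, so the Cilium delegate shown here is an assumption about that configuration.

```json
{
  "cniVersion": "0.3.1",
  "name": "multus-cni-network",
  "type": "multus",
  "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig",
  "delegates": [
    {
      "cniVersion": "0.3.1",
      "name": "cilium",
      "plugins": [
        { "type": "cilium-cni" }
      ]
    }
  ]
}
```

The delegates list holds the cluster's default CNI configuration, which Multus invokes for the pod's primary interface before attaching any additional networks.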
Create an additional interface.
- After creating a network interface by referring to Network Interface of the server user guide, assign it to the worker node.
- Assign secondary IPs to the created interface. At this time, the secondary IPs must form a contiguous range.
- You can check if eth1 interface is successfully created as follows:
```shell
root@afaf-w-1vhl:~# ip a
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc mq state UP group default qlen 1000
    link/ether f2:20:af:24:62:41 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.104/26 brd 192.168.1.127 scope global eth0
       valid_lft forever preferred_lft forever
1304: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether f2:20:af:a6:0a:e2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.12/24 brd 192.168.0.255 scope global eth1
       valid_lft forever preferred_lft forever
```
Create Network Attachment Definition CRD.
- To provide the configuration for the additional ipvlan interface to be used in the pod, create a NetworkAttachmentDefinition. A NetworkAttachmentDefinition is a custom resource that defines how to attach a network to a pod.
- Create the setting for using ipvlan as follows:
```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ipvlan-conf-1
spec:
  config: '{
      "cniVersion": "0.3.0",
      "type": "ipvlan",
      "master": "eth1",
      "mode": "l3",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24",
        "rangeStart": "192.168.1.13",
        "rangeEnd": "192.168.1.17",
        "gateway": "192.168.1.1"
      }
    }'
```
- Specify the range of secondary IPs assigned to the interface earlier using rangeStart and rangeEnd.
- You can check the created settings through kubectl get network-attachment-definitions.
- In this example, the host-local IPAM manages IP pools on a per-node basis. If you require a cluster-wide IP pool, you must use the whereabouts IPAM plug-in.
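As a sketch of the cluster-wide alternative, a NetworkAttachmentDefinition using the whereabouts IPAM might look like the following. The name ipvlan-conf-whereabouts and the address range are illustrative, and whereabouts must be installed in the cluster separately.

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ipvlan-conf-whereabouts
spec:
  config: '{
      "cniVersion": "0.3.0",
      "type": "ipvlan",
      "master": "eth1",
      "mode": "l3",
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.1.0/24",
        "range_start": "192.168.1.13",
        "range_end": "192.168.1.17"
      }
    }'
```

Unlike host-local, whereabouts tracks allocations in a cluster-scoped datastore, so the same definition can be used safely by pods on different nodes without IP collisions.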
Integrating Multus CNI
The following describes how to connect Multus CNI.
Create a pod that uses an additional interface.
- Now, you can create a pod that uses an additional interface. Specify the additional interface through the k8s.v1.cni.cncf.io/networks annotation. The value of this annotation is the name of the NetworkAttachmentDefinition created earlier.
- If you want to attach multiple interfaces, specify several networks in this annotation, separated by commas.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sampleapp-1
  annotations:
    k8s.v1.cni.cncf.io/networks: ipvlan-conf-1
spec:
  containers:
  - name: multitool
    command: ["sh", "-c", "trap : TERM INT; sleep infinity & wait"]
    image: praqma/network-multitool
---
apiVersion: v1
kind: Pod
metadata:
  name: sampleapp-2
  annotations:
    k8s.v1.cni.cncf.io/networks: ipvlan-conf-1
spec:
  containers:
  - name: multitool
    command: ["sh", "-c", "trap : TERM INT; sleep infinity & wait"]
    image: praqma/network-multitool
```
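For the comma-separated case, the annotation could look like the fragment below, where ipvlan-conf-2 is a hypothetical second NetworkAttachmentDefinition that you would need to create first:

```yaml
metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: ipvlan-conf-1,ipvlan-conf-2
```

Each listed network produces one additional interface in the pod (net1, net2, and so on, in order).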
Check the additional interface.
- Check if the k8s.v1.cni.cncf.io/network-status annotation displays all the network interfaces through kubectl describe pod sampleapp-1.
```shell
$ kubectl describe pod sampleapp-1
Name:         sampleapp-1
Namespace:    default
Priority:     0
Node:         afaf-w-293f/192.168.1.104
Start Time:   Mon, 06 Feb 2023 16:18:38 +0900
Labels:       <none>
Annotations:  k8s.v1.cni.cncf.io/networks: ipvlan-conf-1
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "cilium",
                    "interface": "eth0",
                    "ips": [
                        "198.18.1.173"
                    ],
                    "mac": "12:7d:62:5b:2e:57",
                    "default": true,
                    "dns": {}
                },{
                    "name": "default/ipvlan-conf-1",
                    "interface": "net1",
                    "ips": [
                        "192.168.1.13"
                    ],
                    "mac": "f2:20:af:a6:0a:e2",
                    "dns": {}
                }]
```
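If you want to extract the attached interface names from this annotation in a script, a minimal sketch without a jq dependency is shown below. The STATUS variable embeds a sample payload modeled on the output above; in a live cluster you would fetch the annotation from the API server instead (for example with kubectl get pod sampleapp-1 -o jsonpath, escaping the dots in the annotation key).

```shell
# Sample network-status payload, modeled on the describe output above.
STATUS='[{"name":"cilium","interface":"eth0","ips":["198.18.1.173"],"default":true},{"name":"default/ipvlan-conf-1","interface":"net1","ips":["192.168.1.13"]}]'

# List the interface names attached to the pod by pattern-matching
# the "interface" keys and cutting out their values.
echo "$STATUS" | grep -o '"interface":"[^"]*"' | cut -d'"' -f4
```

For the sample payload this prints eth0 and net1, one per line.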
- Check if the additional interface (net1) actually connected to the pod is running with the IP listed above.
```shell
$ kubectl exec -it sampleapp-1 -- ip a
...
2: net1@if1304: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether f2:20:af:a6:0a:e2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.13/24 brd 192.168.1.255 scope global net1
       valid_lft forever preferred_lft forever
1975: eth0@if1976: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue state UP group default
    link/ether 12:7d:62:5b:2e:57 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 198.18.1.173/32 scope global eth0
       valid_lft forever preferred_lft forever
```
- Check the routing information for the added net1 interface and communication availability.
```shell
# kubectl exec -it sampleapp-1 -- route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         198.18.1.209    0.0.0.0         UG    0      0        0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 net1
198.18.1.209    0.0.0.0         255.255.255.255 UH    0      0        0 eth0

# kubectl exec -it sampleapp-1 -- ping -I net1 192.168.1.14
PING 192.168.1.14 (192.168.1.14) from 192.168.1.13 net1: 56(84) bytes of data.
64 bytes from 192.168.1.14: icmp_seq=1 ttl=64 time=0.055 ms
64 bytes from 192.168.1.14: icmp_seq=2 ttl=64 time=0.055 ms
```
Precautions
Manually added network interfaces are not automatically applied to newly added worker nodes. Also, if an additional network interface is removed from an assigned worker node while changing node pools, pods that are already running may be affected.