Cilium v1.5 Documentation
… For Docker to request networking of a container from Cilium, a container must be started with a network of driver type "cilium". With Cilium, all containers are connected to a single logical network, with isolation … network named 'cilium-net' for all containers: $ docker network create --ipv6 --subnet ::1/112 --driver cilium --ipam-driv… Step 6: Start an Example Service with Docker. In this tutorial, we'll use a container … cilium ... ==> default: Creating cilium Creating cilium ... done ==> default: Installing loopback driver... ==> default: Installing cilium-cni to /host/opt/cni/bin/ ... ==> default: Installing new /host/etc/cni/net…
740 pages | 12.52 MB | 1 year ago
Cilium v1.6 Documentation
… For Docker to request networking of a container from Cilium, a container must be started with a network of driver type "cilium". With Cilium, all containers are connected to a single logical network, with isolation … named 'cilium-net' for all containers: $ docker network create --ipv6 --subnet ::1/112 --driver cilium --ipam-driver cilium cilium-net … Step 6: Start an Example Service with Docker. In this tutorial, we'll … cilium ... ==> default: Creating cilium Creating cilium ... done ==> default: Installing loopback driver... ==> default: Installing cilium-cni to /host/opt/cni/bin/ ... ==> default: Installing new /host/etc/cni/net…
734 pages | 11.45 MB | 1 year ago
Cilium v1.7 Documentation
… For Docker to request networking of a container from Cilium, a container must be started with a network of driver type "cilium". With Cilium, all containers are connected to a single logical network, with isolation … create a single network named 'cilium-net' for all containers: $ docker network create --driver cilium --ipam-driver cilium cilium-net … Step 6: Start an Example Service with Docker. In this tutorial, we'll … cilium ... ==> default: Creating cilium Creating cilium ... done ==> default: Installing loopback driver... ==> default: Installing cilium-cni to /host/opt/cni/bin/ ... ==> default: Installing new /host/etc/cni/net…
885 pages | 12.41 MB | 1 year ago
Cilium v1.8 Documentation
… at the XDP (eXpress Data Path) layer, where BPF operates directly in the networking driver instead of a higher layer. The mode setting global.nodePort.acceleration allows enabling this acceleration … the underlying driver for eth0 must have native XDP support on all Cilium-managed nodes. A list of drivers supporting native XDP can be found in the table below. The corresponding network driver name of an interface can be determined as follows: # ethtool -i eth0 driver: nfp [...] Vendor / Driver / XDP Support: Amazon ena >= 5.6; Broadcom bnxt_en >= 4.11; Cavium thunderx >= 4.12…
1124 pages | 21.33 MB | 1 year ago
Cilium v1.10 Documentation
… at the XDP (eXpress Data Path) layer, where eBPF operates directly in the networking driver instead of a higher layer. The mode setting loadBalancer.acceleration allows enabling this acceleration … devices, the XDP acceleration is enabled on all devices. This means that each underlying device's driver must have native XDP support on all Cilium-managed nodes. In addition, for performance reasons … in the table below. The corresponding network driver name of an interface can be determined as follows: # ethtool -i eth0 driver: nfp [...] Vendor / Driver / XDP Support: Amazon ena >= 5.6; Broadcom bnxt_en…
1307 pages | 19.26 MB | 1 year ago
Cilium v1.9 Documentation
… --memory=4096 … Note: If minikube is deployed as a container (that is, if docker is the configured driver), then kube-proxy replacement features like host-reachable services may not work (GitHub issue [https://github… ). … service load-balancing issues, then set [https://minikube.sigs.k8s.io/docs/commands/config/] any other driver from the supported list [https://minikube.sigs.k8s.io/docs/drivers/]. minikube start --cni=cilium … at the XDP (eXpress Data Path) layer, where eBPF operates directly in the networking driver instead of a higher layer. The mode setting loadBalancer.acceleration allows enabling this acceleration…
1263 pages | 18.62 MB | 1 year ago
Cilium v1.11 Documentation
… at the XDP (eXpress Data Path) layer, where eBPF operates directly in the networking driver instead of a higher layer. The mode setting loadBalancer.acceleration allows enabling this acceleration … devices, the XDP acceleration is enabled on all devices. This means that each underlying device's driver must have native XDP support on all Cilium-managed nodes. In addition, for performance reasons … in the table below. The corresponding network driver name of an interface can be determined as follows: # ethtool -i eth0 driver: nfp [...] Vendor / Driver / XDP Support: Amazon ena >= 5.6; Broadcom bnxt_en…
1373 pages | 19.37 MB | 1 year ago
Cilium 的网络加速秘诀 (Cilium's Network Acceleration Secrets)
[Slide diagram: the nodePort request path on a worker node — iptables filter FORWARD, mangle POSTROUTING, and nat POSTROUTING chains with kube-proxy DNAT/SNAT, then tc egress, routing, and XDP in the kernel Ethernet driver, with tc eBPF steering traffic to the backend endpoint]
14 pages | 11.97 MB | 1 year ago
8 results in total