Cilium v1.10 Documentation
… relying on eBPF, all visibility is programmable and allows for a dynamic approach that minimizes overhead while providing deep and detailed visibility as required by users. Hubble has been created and … specifically in the Linux kernel’s socket layer (e.g. at TCP connect time) such that per-packet NAT operations overhead can be avoided in lower layers. Bandwidth Management: Cilium implements bandwidth management through … If you are running in an environment with more than 250 nodes, 5k pods, or if you observe a high overhead in state propagation caused by Kubernetes events. If you do not want Cilium to store state in Kubernetes …
0 码力 | 1307 pages | 19.26 MB | 1 year ago
Cilium v1.11 Documentation
… relying on eBPF, all visibility is programmable and allows for a dynamic approach that minimizes overhead while providing deep and detailed visibility as required by users. Hubble has been created and … specifically in the Linux kernel’s socket layer (e.g. at TCP connect time) such that per-packet NAT operations overhead can be avoided in lower layers. Bandwidth Management: Cilium implements bandwidth management through … of reasons when to use a kvstore: If you are running in an environment where you observe a high overhead in state propagation caused by Kubernetes events. If you do not want Cilium to store state in Kubernetes …
0 码力 | 1373 pages | 19.37 MB | 1 year ago
Cilium v1.9 Documentation
… relying on eBPF, all visibility is programmable and allows for a dynamic approach that minimizes overhead while providing deep and detailed visibility as required by users. Hubble has been created and … specifically in the Linux kernel’s socket layer (e.g. at TCP connect time) such that per-packet NAT operations overhead can be avoided in lower layers. Bandwidth Management: Cilium implements bandwidth management through … If you are running in an environment with more than 250 nodes, 5k pods, or if you observe a high overhead in state propagation caused by Kubernetes events. If you do not want Cilium to store state in Kubernetes …
0 码力 | 1263 pages | 18.62 MB | 1 year ago
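The v1.9–v1.11 entries above describe two of the mechanisms in question: service handling in the kernel's socket layer (so per-packet NAT in lower layers is avoided) and eBPF-based bandwidth management. The sketch below shows roughly how these are switched on via Helm; the option names and the pod/limit values are assumptions that vary across the chart versions listed here, so verify them against the matching documentation.

    # Sketch only: enable socket-level service handling and the bandwidth manager.
    # Helm value names are assumptions; check the chart for your Cilium version.
    helm upgrade --install cilium cilium/cilium \
      --namespace kube-system \
      --set hostServices.enabled=true \
      --set bandwidthManager=true

    # Bandwidth limits are then requested per pod via the standard annotation
    # (pod name and rate below are placeholders):
    kubectl annotate pod my-pod kubernetes.io/egress-bandwidth=10M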
Cilium v1.8 Documentation
… relying on BPF, all visibility is programmable and allows for a dynamic approach that minimizes overhead while providing deep and detailed visibility as required by users. Hubble has been created and … If you are running in an environment with more than 250 nodes, 5k pods, or if you observe a high overhead in state propagation caused by Kubernetes events. If you do not want Cilium to store state in Kubernetes … number of open connections. Thus, clients are encouraged to cache their connections rather than the overhead of reopening TCP connections every time they need to store or retrieve data. Multiple clients can …
0 码力 | 1124 pages | 21.33 MB | 1 year ago
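The v1.8 entry above (like several of the others) points to running Cilium against a dedicated kvstore such as etcd, rather than storing state in Kubernetes, once a cluster grows beyond roughly 250 nodes or 5k pods. A minimal sketch of what that configuration can look like, assuming an external etcd is already reachable; the endpoint, file path, and Helm keys are placeholders to verify against the version-specific docs.

    # Sketch only: point Cilium at an external etcd kvstore (values are placeholders).
    helm upgrade --install cilium cilium/cilium \
      --namespace kube-system \
      --set etcd.enabled=true \
      --set "etcd.endpoints[0]=http://192.168.1.10:2379"

    # Roughly equivalent agent flags when configuring cilium-agent directly:
    cilium-agent --kvstore etcd \
      --kvstore-opt etcd.config=/var/lib/etcd-config/etcd.config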
Cilium v1.5 Documentation
… number of open connections. Thus, clients are encouraged to cache their connections rather than the overhead of reopening TCP connections every time they need to store or retrieve data. Multiple clients can benefit … in-kernel verifier ensures that BPF programs are safe to run and a JIT compiler converts the bytecode to CPU architecture specific instructions for native execution efficiency. BPF programs can be run at various … between 10 seconds and 30 minutes, or 12 hours for LRU based maps. This should automatically optimize CPU consumption as much as possible while keeping the connection tracking table utilization below 25%. If needed …
0 码力 | 740 pages | 12.52 MB | 1 year ago
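The v1.5 excerpt above describes how the agent adapts its connection-tracking garbage-collection interval (between 10 seconds and 30 minutes, or 12 hours for LRU-based maps) to keep table utilization below 25%. If the automatic behaviour is not suitable, the interval can be pinned; the flag name below is an assumption to check against the exact release, and the inspection command assumes the standard cilium CLI inside the agent pod.

    # Sketch only: pin the conntrack GC interval instead of relying on the
    # automatic adjustment (flag name is an assumption; verify per release).
    cilium-agent --conntrack-gc-interval=5m0s

    # Inspect current connection-tracking entries from within the Cilium pod:
    cilium bpf ct list global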
Cilium v1.6 Documentation
… If you are running in an environment with more than 250 nodes, 5k pods, or if you observe a high overhead in state propagation caused by Kubernetes events. If you do not want Cilium to store state in Kubernetes … number of open connections. Thus, clients are encouraged to cache their connections rather than the overhead of reopening TCP connections every time they need to store or retrieve data. Multiple clients can …
    … --min-cpu-platform "Intel Broadwell" \
      kata-testing
    gcloud compute ssh kata-testing
    # While ssh'd into the VM:
    $ [ -z "$(lscpu|grep GenuineIntel)" ] && { echo "ERROR: Need an Intel CPU"; exit 1; } …
0 码力 | 734 pages | 11.45 MB | 1 year ago
Cilium v1.7 Documentation
… If you are running in an environment with more than 250 nodes, 5k pods, or if you observe a high overhead in state propagation caused by Kubernetes events. If you do not want Cilium to store state in Kubernetes … number of open connections. Thus, clients are encouraged to cache their connections rather than the overhead of reopening TCP connections every time they need to store or retrieve data. Multiple clients can … kernel verifier ensures that BPF programs are safe to run and a JIT compiler converts the bytecode to CPU architecture specific instructions for native execution efficiency. BPF programs can be run at various …
0 码力 | 885 pages | 12.41 MB | 1 year ago
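The v1.5 and v1.7 excerpts both note that the in-kernel verifier checks BPF programs before they run and that a JIT compiler translates the bytecode into native CPU instructions. A quick way to observe this on a node uses standard kernel knobs and bpftool rather than anything Cilium-specific:

    # Confirm the BPF JIT is enabled (1 = on) and list loaded programs;
    # the "jited" size in the bpftool output is the natively compiled code.
    sysctl net.core.bpf_jit_enable
    bpftool prog show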
Cilium的网络加速秘诀 (Cilium's Network Acceleration Secrets)
… Main performance gains: • in different scenarios, reduces packet "forwarding latency" to varying degrees • in different scenarios, raises packet "throughput" to varying degrees • in different scenarios, lowers the "CPU overhead" needed to forward packets to varying degrees. Introduction to eBPF: eBPF was introduced starting with Linux kernel 3.19; eBPF programs are written in user space and, after compilation, are dynamically loaded onto designated hook points in the kernel, where they run safely in a VM-like sandbox, and they can, through … resolution and forwarding of LoadBalancer services; its forwarding performance rivals DPDK while saving a large amount of CPU. The higher the PPS load, the more pronounced the gains. Compared with kube-proxy, the measurements show: 1. TC-based forwarding doubles throughput under 10 Mpps of input load and saves 30% of CPU utilization under 2 Mpps; 2. XDP's performance ceiling is far higher, likely around 10x that of TC. raw PREROUTING …
0 码力 | 14 pages | 11.97 MB | 1 year ago
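The slide deck above compares TC- and XDP-based forwarding for Cilium's eBPF kube-proxy replacement. A hedged sketch of how that mode is usually enabled follows; the option names and values are assumptions that differ across the versions listed above (and native XDP acceleration additionally depends on NIC driver support), so treat this as illustrative only.

    # Sketch only: enable the eBPF kube-proxy replacement and, where the NIC
    # driver supports it, native XDP acceleration for the load balancer.
    # Option names are assumptions; verify against your Cilium version.
    helm upgrade --install cilium cilium/cilium \
      --namespace kube-system \
      --set kubeProxyReplacement=strict \
      --set loadBalancer.acceleration=native

    # Check which mode the running agent actually picked:
    kubectl -n kube-system exec ds/cilium -- cilium status | grep KubeProxyReplacement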
8 results in total













