Cilium v1.5 Documentation
… DefaultDependencies=no
Before=local-fs.target umount.target
After=swap.target
[Mount]
What=bpffs
Where=/sys/fs/bpf
Type=bpf
[Install]
WantedBy=multi-user.target
EOF
… Container Runtimes … CRIO … If you …
… suggested upgrade transitions, from a specified current version running in a cluster to a specified target version. If a specific combination is not listed in the table below, then it may not be safe. In that case, first upgrade from 1.1.x to the latest 1.1.y release before subsequently upgrading to 1.2.z.
Current version | Target version | DaemonSet upgrade | L3 impact | L7 impact
1.0.x           | 1.1.y          | Required          | N/A       | Clients must reconnect [1]
(740 pages, 12.52 MB, 1 year ago)

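The fragment above is most of the systemd mount unit Cilium's documentation uses to keep the BPF filesystem mounted across reboots; the trailing EOF shows it was written via a heredoc. A minimal sketch of the complete command, assuming the conventional unit name sys-fs-bpf.mount (systemd requires the unit file name to encode the mount point) and a generic Description line:

$ sudo tee /etc/systemd/system/sys-fs-bpf.mount <<'EOF'
[Unit]
Description=Mount BPF filesystem (bpffs) at /sys/fs/bpf
DefaultDependencies=no
Before=local-fs.target umount.target
After=swap.target

[Mount]
What=bpffs
Where=/sys/fs/bpf
Type=bpf

[Install]
WantedBy=multi-user.target
EOF
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now sys-fs-bpf.mount
$ mount | grep /sys/fs/bpf    # expect a line like: bpffs on /sys/fs/bpf type bpf

A persistent bpffs mount matters because Cilium pins maps and programs under /sys/fs/bpf, and those pins must outlive agent restarts.
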
Cilium v1.11 Documentation
… INFO Consuming OpenShift Install (Manifests) from target directory
INFO Consuming Master Machines from target directory
INFO Consuming Worker Machines from target directory
INFO Consuming Openshift Manifests from target directory
INFO Consuming Common Manifests from target directory
INFO Credentials loaded from the "default" profile in file "/home/twp/.aws/credentials"
INFO Creating infrastructure resources...
INFO Waiting up to 20m0s …
… application target: Be sure to configure the application to have cluster-wide scope. Configure any additional values for the Cilium chart and click Next. The application should deploy within the target cluster.
(1373 pages, 19.37 MB, 1 year ago)

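These INFO lines are standard openshift-install output. A sketch of the invocation that produces them; CLUSTER_DIR is a placeholder for the directory holding install-config.yaml:

$ openshift-install create cluster --dir "${CLUSTER_DIR}" --log-level=info
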
Cilium v1.10 Documentation
… INFO Consuming OpenShift Install (Manifests) from target directory
INFO Consuming Master Machines from target directory
INFO Consuming Worker Machines from target directory
INFO Consuming Openshift Manifests from target directory
INFO Consuming Common Manifests from target directory
INFO Credentials loaded from the "default" profile in file "/home/twp/.aws/credentials"
INFO Creating infrastructure resources...
INFO Waiting up to 20m0s …
… \
  --allow=tcp:4240,udp:8472,icmp \
  --source-tags="${infraID}-worker,${infraID}-master" \
  --target-tags="${infraID}-worker,${infraID}-master" \
  "${infraID}-cilium"
… Azure: enable Cilium ports …
(1307 pages, 19.26 MB, 1 year ago)

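The flags belong to a GCP firewall rule opening Cilium's inter-node ports (tcp/4240 health checks, udp/8472 VXLAN, ICMP). A hedged completion of the truncated command; the "gcloud compute firewall-rules create" prefix and the --network flag are assumptions, only the remaining flags are verbatim from the snippet:

# tcp/4240 = cilium-health, udp/8472 = VXLAN overlay.
$ gcloud compute firewall-rules create "${infraID}-cilium" \
    --network="${infraID}-network" \
    --allow=tcp:4240,udp:8472,icmp \
    --source-tags="${infraID}-worker,${infraID}-master" \
    --target-tags="${infraID}-worker,${infraID}-master"
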
Cilium v1.6 Documentation
… destination IP is checked for an existing service IP and one of the service backends is selected as a target, meaning, while the application is assuming its connection to the service address, the corresponding …
… DefaultDependencies=no
Before=local-fs.target umount.target
After=swap.target
[Mount]
What=bpffs
Where=/sys/fs/bpf
Type=bpf
[Install]
WantedBy=multi-user.target
EOF
… Container Runtimes … CRIO … If you want to …
… work as intended, the kernel configuration must include the following modules:
CONFIG_NETFILTER_XT_TARGET_TPROXY=m
CONFIG_NETFILTER_XT_MATCH_MARK=m
CONFIG_NETFILTER_XT_MATCH_SOCKET=m
When xt_socket kernel …
(734 pages, 11.45 MB, 1 year ago)

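A quick way to confirm those options on a running system; the config file path varies by distro (zgrep on /proc/config.gz is the usual fallback), and the module names below are the standard netfilter ones:

$ grep -E 'CONFIG_NETFILTER_XT_(TARGET_TPROXY|MATCH_MARK|MATCH_SOCKET)=' \
    "/boot/config-$(uname -r)"
CONFIG_NETFILTER_XT_TARGET_TPROXY=m
CONFIG_NETFILTER_XT_MATCH_MARK=m
CONFIG_NETFILTER_XT_MATCH_SOCKET=m
# =m means built as a module; load them on demand:
$ sudo modprobe -a xt_TPROXY xt_mark xt_socket
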
Cilium v1.8 Documentation
… the target directory because its dependencies are dirty and it needs to be regenerated
INFO Consuming OpenShift Install (Manifests) from target directory
INFO Consuming Master Machines from target directory
INFO Consuming Worker Machines from target directory
INFO Consuming Bootstrap Ignition Config from target directory
INFO Consuming Common Manifests from target directory
INFO Consuming Openshift Manifests from target directory
… \
  --allow=tcp:4240,udp:8472,icmp \
  --source-tags="${infraID}-worker,${infraID}-master" \
  --target-tags="${infraID}-worker,${infraID}-master" \
  "${infraID}-cilium"
… Accessing the cluster … To access …
(1124 pages, 21.33 MB, 1 year ago)

Cilium v1.9 Documentation
… the target directory because its dependencies are dirty and it needs to be regenerated
INFO Consuming OpenShift Install (Manifests) from target directory
INFO Consuming Master Machines from target directory
INFO Consuming Worker Machines from target directory
INFO Consuming Bootstrap Ignition Config from target directory
INFO Consuming Common Manifests from target directory
INFO Consuming Openshift Manifests from target directory
… \
  --allow=tcp:4240,udp:8472,icmp \
  --source-tags="${infraID}-worker,${infraID}-master" \
  --target-tags="${infraID}-worker,${infraID}-master" \
  "${infraID}-cilium"
… Accessing the cluster … To access …
(1263 pages, 18.62 MB, 1 year ago)

Cilium v1.7 Documentation
… entirely in software on the Kubernetes worker node, and is policy driven, allowing inspection to target only selected network connectivity. This type of visibility is extremely valuable to be able to …
… destination IP is checked for an existing service IP and one of the service backends is selected as a target, meaning, while the application is assuming its connection to the service address, the corresponding …
… Before=local-fs.target umount.target
After=swap.target
[Mount]
What=bpffs
Where=/sys/fs/bpf
Type=bpf
Options=rw,nosuid,nodev,noexec,relatime,mode=700
[Install]
WantedBy=multi-user.target
EOF
… Container …
(885 pages, 12.41 MB, 1 year ago)

Steering connections to sockets with BPF socket lookup hook
$ make echo_dispatch.bpf.o
clang -I…/linux/usr/include -I…/linux/tools/lib -g -O2 -Wall -Wextra -target bpf \
  -c -o echo_dispatch.bpf.o echo_dispatch.bpf.c
# bpftool prog load echo_dispatch.bpf.o /sys…
pidfd_getfd(pidfd_open(1289, 0), 3, 0)   # args: target PID, target FD -> duplicates the socket FD. WOW!
sockmap_update.c - put a socket FD in a BPF map:
$ ./sockmap-update
Usage: ./sockmap-update <target pid> <target fd>
…rog", …) = 3
openat(…, "/proc/self/ns/net", …) = 4
bpf(BPF_LINK_CREATE, {link_create={prog_fd=3, target_fd=4, attach_type=BPF_SK_LOOKUP, …) = 5
bpf(BPF_OBJ_PIN, {pat…
(23 pages, 441.22 KB, 1 year ago)

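The strace fragments sketch the attach sequence: open the pinned program (fd 3), open the current network namespace (fd 4), create a BPF_SK_LOOKUP link binding the two, then pin the link. The load-and-pin half can be reproduced with stock bpftool; a hedged sketch with placeholder pin paths (to my knowledge bpftool has no subcommand that creates the sk_lookup link itself, which is why the talk's loader calls bpf(BPF_LINK_CREATE) directly):

$ clang -g -O2 -Wall -target bpf -c -o echo_dispatch.bpf.o echo_dispatch.bpf.c
# bpftool prog load echo_dispatch.bpf.o /sys/fs/bpf/echo_dispatch
# bpftool prog show pinned /sys/fs/bpf/echo_dispatch
# bpftool link show        # once a loader has created and pinned the link
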
Debugging the BPF Virtual Machine
… nokaslr" \
  -serial stdio -display none
Start the test VM:
cd /source/linux
gdb build/vmlinux
(gdb) target remote localhost:1234
(gdb) b bpf/syscall.c:4180
(gdb) b bpf/syscall.c:796
(gdb) b bpf/syscall.c:121
(10 pages, 233.09 KB, 1 year ago)

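For context, a minimal sketch of the full setup these fragments come from; the bzImage path and the -s flag are assumptions, since the talk's qemu command line is truncated above:

$ qemu-system-x86_64 -kernel build/arch/x86/boot/bzImage \
    -append "console=ttyS0 nokaslr" \
    -serial stdio -display none -s
# -s exposes a gdbstub on tcp::1234; nokaslr keeps kernel symbol
# addresses stable so source breakpoints in bpf/syscall.c resolve.
$ gdb build/vmlinux \
    -ex 'target remote localhost:1234' \
    -ex 'b bpf/syscall.c:796' -ex continue
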
eBPF Summit 2020 Lightning Talk
$ sudo tc qdisc add dev [device name] clsact
$ sudo tc filter add dev [device name] ingress \
    bpf da obj target/bpf/programs/limit/limit.elf \
    sec tc_action/limit
… Rabbit(MQ) … Protected … BPF (Kernel) vs. Application …
(22 pages, 1.81 MB, 1 year ago)

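The same attachment with a concrete interface (eth0 is a placeholder) plus verify and cleanup steps; "da" (direct-action) lets the BPF classifier return the TC verdict itself:

$ sudo tc qdisc add dev eth0 clsact
$ sudo tc filter add dev eth0 ingress \
    bpf da obj target/bpf/programs/limit/limit.elf sec tc_action/limit
$ sudo tc filter show dev eth0 ingress   # confirm the BPF filter is attached
$ sudo tc qdisc del dev eth0 clsact      # removes the qdisc and all its filters
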













