Cilium v1.10 Documentation
…demand. This results in a large number of application containers being started in a short period of time. Typical container firewalls secure workloads by filtering on source IP addresses and destination … efficient service-to-backend translation right in the Linux kernel's socket layer (e.g. at TCP connect time) such that per-packet NAT operation overhead can be avoided in lower layers. Bandwidth Management: Cilium implements bandwidth management through efficient EDT-based (Earliest Departure Time) rate-limiting with eBPF for container traffic that is egressing a node. This makes it possible to significantly reduce …
1307 pages | 19.26 MB | 1 year ago
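The EDT pacing this entry refers to can be modelled outside the kernel: instead of queueing packets, each one is stamped with the earliest time it may depart, derived from the configured rate. The following Python sketch illustrates only the idea; the function names and the fixed rate are made up, not Cilium's implementation.

```python
def make_edt_pacer(rate_bytes_per_sec):
    """Return a function assigning each packet an Earliest Departure Time
    (nanoseconds), mimicking the per-packet EDT pacing that Cilium's
    bandwidth manager performs with eBPF. Illustrative sketch only."""
    state = {"next_edt_ns": 0}

    def stamp(now_ns, pkt_len):
        # A packet may leave no earlier than now, and no earlier than the
        # previous packet's departure plus its own serialization delay.
        edt = max(now_ns, state["next_edt_ns"])
        delay_ns = pkt_len * 1_000_000_000 // rate_bytes_per_sec
        state["next_edt_ns"] = edt + delay_ns
        return edt

    return stamp

# At 1 GB/s a 1000-byte packet occupies the link for 1000 ns, so two
# packets both submitted at t=0 get spaced 1000 ns apart.
pacer = make_edt_pacer(1_000_000_000)
print(pacer(0, 1000))  # 0
print(pacer(0, 1000))  # 1000
```

Because departure times are computed rather than enforced by a queue, bursts are smoothed without buffering, which is the property the documentation credits for reduced latency.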
Cilium v1.8 Documentation
…demand. This results in a large number of application containers being started in a short period of time. Typical container firewalls secure workloads by filtering on source IP addresses and destination … hearts, we strive to provide better tooling for troubleshooting. This includes tooling to provide: Event monitoring with metadata: when a packet is dropped, the tool doesn't just report the source and destination … --no-headers=true | grep '- ' | awk '{print "-n "$1" "$2}' | xargs -L 1 -r kubectl delete pod
pod "event-exporter-v0.2.3-f9c896d75-cbvcz" deleted
pod "fluentd-gcp-scaler-69d79984cb-nfwwk" deleted
pod "heapster-v1…
1124 pages | 21.33 MB | 1 year ago
Cilium v1.11 Documentation
…demand. This results in a large number of application containers being started in a short period of time. Typical container firewalls secure workloads by filtering on source IP addresses and destination … efficient service-to-backend translation right in the Linux kernel's socket layer (e.g. at TCP connect time) such that per-packet NAT operation overhead can be avoided in lower layers. Bandwidth Management: Cilium implements bandwidth management through efficient EDT-based (Earliest Departure Time) rate-limiting with eBPF for container traffic that is egressing a node. This makes it possible to significantly reduce …
1373 pages | 19.37 MB | 1 year ago
Cilium v1.9 Documentation
…demand. This results in a large number of application containers being started in a short period of time. Typical container firewalls secure workloads by filtering on source IP addresses and destination … efficient service-to-backend translation right in the Linux kernel's socket layer (e.g. at TCP connect time) such that per-packet NAT operation overhead can be avoided in lower layers. Bandwidth Management: Cilium implements bandwidth management through efficient EDT-based (Earliest Departure Time) rate-limiting with eBPF for container traffic that is egressing a node. This makes it possible to significantly reduce …
1263 pages | 18.62 MB | 1 year ago
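The socket-layer service translation these entries describe rewrites a service address to a backend address once, at connect time, so later packets need no per-packet NAT. A toy Python model of the mapping step (the service table, addresses, and function names below are invented for illustration, not Cilium's API):

```python
# Hypothetical service table: (ClusterIP, port) -> list of backends.
SERVICES = {
    ("10.96.0.10", 80): [("192.168.1.5", 8080), ("192.168.1.6", 8080)],
}

def translate_connect(dst_ip, dst_port, flow_hash):
    """Rewrite a connect() destination to a concrete backend, the way an
    eBPF hook at the socket layer would, selecting a backend by hashing
    so the choice is stable for a given flow."""
    backends = SERVICES.get((dst_ip, dst_port))
    if not backends:
        return dst_ip, dst_port  # not a service VIP; leave untouched
    return backends[flow_hash % len(backends)]

print(translate_connect("10.96.0.10", 80, 7))  # ('192.168.1.6', 8080)
print(translate_connect("8.8.8.8", 53, 7))     # ('8.8.8.8', 53)
```

Doing this once per connection is what lets the datapath skip NAT work on every subsequent packet of the flow.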
Cilium v1.6 Documentation
…demand. This results in a large number of application containers being started in a short period of time. Typical container firewalls secure workloads by filtering on source IP addresses and destination … hearts, we strive to provide better tooling for troubleshooting. This includes tooling to provide: Event monitoring with metadata: when a packet is dropped, the tool doesn't just report the source and destination … In order for the entire system to come up, the following components have to be running at the same time: kube-dns or coredns, cilium-xxx, cilium-etcd-operator, etcd-operator, etcd-xxx. All timeouts are configured …
734 pages | 11.45 MB | 1 year ago
Cilium v1.7 Documentation
…demand. This results in a large number of application containers being started in a short period of time. Typical container firewalls secure workloads by filtering on source IP addresses and destination … hearts, we strive to provide better tooling for troubleshooting. This includes tooling to provide: Event monitoring with metadata: when a packet is dropped, the tool doesn't just report the source and destination … --no-headers=true | grep '- ' | awk '{print "-n "$1" "$2}' | xargs -L 1 -r kubectl delete pod
pod "event-exporter-v0.2.3-f9c896d75-cbvcz" deleted
pod "fluentd-gcp-scaler-69d79984cb-nfwwk" deleted
pod "heapster-v1…
885 pages | 12.41 MB | 1 year ago
Cilium v1.5 Documentation
…hearts, we strive to provide better tooling for troubleshooting. This includes tooling to provide: Event monitoring with metadata: when a packet is dropped, the tool doesn't just report the source and destination … performed for kube-dns:
$ kubectl delete pods -n kube-system $(kubectl get pods -n kube-system -o …
pod "event-exporter-v0.2.3-f9c896d75-cbvcz" deleted
pod "fluentd-gcp-scaler-69d79984cb-nfwwk" deleted
pod "heapster-v1…
… terminal window for A-Wing, set A-Wing's coordinates:
>>> client.set("awing-coord","4309.432,918.980",time=2400)
True
>>> client.get("awing-coord")
'4309.432,918.980'
In your main terminal window, have …
740 pages | 12.52 MB | 1 year ago
Buzzing Across Space
…enabling users to programmatically extend almost any functionality of the operating system. eBPF is an event-driven architecture that runs specific programs when the kernel or an application passes a certain … assembly language with a stable instruction set. eBPF programs can be loaded and upgraded in real time without the need to restart the kernel. System calls … Bees of various talents took many roles in the lead. [illustration: bytecode bit patterns] The Just-in-Time (JIT) compilation step translates the generic bytecode of the program into the machine-specific instruction …
32 pages | 32.98 MB | 1 year ago
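The JIT step this entry mentions decodes generic bytecode once and emits host instructions, so execution no longer pays an interpretation cost per opcode. That effect can be mimicked in miniature by translating a toy bytecode into Python closures up front, then running the result without re-decoding. The opcode set below is invented and has nothing to do with real eBPF bytecode.

```python
def jit_compile(bytecode):
    """Translate a toy stack bytecode of ('push', n) | ('add',) | ('mul',)
    instructions into a callable, decoding each opcode exactly once --
    a miniature analogue of a JIT's decode-ahead-of-time strategy."""
    ops = []
    for instr in bytecode:
        if instr[0] == "push":
            ops.append(lambda stack, n=instr[1]: stack.append(n))
        elif instr[0] == "add":
            ops.append(lambda stack: stack.append(stack.pop() + stack.pop()))
        elif instr[0] == "mul":
            ops.append(lambda stack: stack.append(stack.pop() * stack.pop()))
        else:
            raise ValueError(f"unknown opcode: {instr[0]}")

    def run():
        stack = []
        for op in ops:  # no opcode dispatch on string names at run time
            op(stack)
        return stack[-1]

    return run

# (2 + 3) * 4
prog = jit_compile([("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)])
print(prog())  # 20
```

A real JIT goes further and emits machine code, but the structural point is the same: translation happens once, execution many times.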
Hardware Breakpoint implementation in BCC
    void bpf_attach_breakpoint(uint64_t symbol_addr, int pid, int progfd, int bp_type) {
        struct perf_event_attr attr = {};  /* zero-initialized; no memset needed */
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_BREAKPOINT;
        …
        bpf_prog_type prog_type = BPF_PROG_TYPE_PERF_EVENT;
        /* perf_event_open takes a pointer to the attr struct */
        int efd = syscall(__NR_perf_event_open, &attr, pid, cpu, -1, PERF_FLAG_FD_CLOEXEC);
        if (efd < 0) {
            printf("event fd %d err %s\n", efd, strerror(errno));
            return;
    …
8 pages | 2.02 MB | 1 year ago
Understanding Ruby with BPF - rbperf
…tracing write(2) calls:
- $ rbperf record \
    --pid=124 event \
    --tracepoint=syscalls:sys_enter_write
- $ rbperf report
[...]
Architecture … 2. Event (timer, syscall, etc) … BPF code (bpf/rbperf.c) … Read …
19 pages | 972.07 KB | 1 year ago
15 results in total