Cilium v1.10 Documentation | 1307 pages | 19.26 MB | 1 year ago
    …support · Official Cilium repositories · Update cilium-builder and cilium-runtime images · Nightly Docker image · Code Overview · High-level · Cilium · Hubble · Important common packages · Debugging toFQDNs and DNS · Debugging… Containers: cilium-operator Running: 2, cilium Running: 2 · Image versions: cilium quay.io/cilium/cilium:v1.9.5: 2, cilium-operator…
Cilium v1.9 Documentation | 1263 pages | 18.62 MB | 1 year ago
    …images · Official Cilium repositories · Update cilium-builder and cilium-runtime images · Nightly Docker image · Code Overview · High-level · Cilium · Hubble · Important common packages · Debugging toFQDNs and DNS · Debugging…
    …the version of Kubernetes from when the kind release was created is used. To change the version of Kubernetes being run, an image has to be defined for each node. See the Node Configuration [https://kind.sigs.k8s.io/docs/user/c…] …cilium https://helm.cilium.io/ … Preload the cilium image into each worker node in the kind cluster:
        docker pull quay.io/cilium/cilium:v1.9.18
        kind load docker-image quay.io/cilium/cilium:v1.9.18
    Then, install…
Cilium v1.11 Documentation | 1373 pages | 19.37 MB | 1 year ago
    …support · Official Cilium repositories · Update cilium-builder and cilium-runtime images · Nightly Docker image · Image Building Process · Code Overview · High-level · Cilium · Hubble · Important common packages · Debugging… Containers: cilium-operator Running: 2, cilium Running: 2 · Image versions: cilium quay.io/cilium/cilium:v1.9.5: 2, cilium-operator…
Cilium v1.5 Documentation | 740 pages | 12.52 MB | 1 year ago
    …4-a" gcloud container --project $GKE_PROJECT clusters create cluster1 \
        --username "admin" --image-type COS --num-nodes 2 --zone ${GKE_ZONE}
    When done, you should be able to access your cluster like… etcd version 3.1.11 and the latest CoreOS stable image, which satisfies the minimum kernel version requirement of Cilium. To get the latest CoreOS AMI image, you can change the region value to your choice… eu-west-1b,eu-west-1c: zones where the worker nodes will be deployed; --image 595879546273/CoreOS-stable-1745.3.1-hvm: image name to be deployed (Cilium requires kernel version 4.8 and above, so ensure…
Cilium v1.6 Documentation | 734 pages | 11.45 MB | 1 year ago
    …CREATED  MIN SIZE  MAX SIZE  DESIRED CAPACITY  INSTANCE TYPE  IMAGE ID
    test-cluster  ng-25560078  2019-07-23T06:05:35Z  0  2  0  t…
    …4-a" gcloud container --project $GKE_PROJECT clusters create cluster1 \
        --username "admin" --image-type COS --num-nodes 2 --zone ${GKE_ZONE}
    When done, you should be able to access your cluster like… etcd version 3.1.11 and the latest CoreOS stable image, which satisfies the minimum kernel version requirement of Cilium. To get the latest CoreOS AMI image, you can change the region value to your choice…
Cilium v1.7 Documentation | 885 pages | 12.41 MB | 1 year ago
    …Developer images · Official release images · Update cilium-builder and cilium-runtime images · Nightly Docker image · Release Management · Organization · Release tracking · Release Cadence · Backporting process · Backport…
    …role: worker
    networking:
      disableDefaultCNI: true
    To change the version of Kubernetes being run, an image has to be defined for each node. See the Node Configuration [https://kind.sigs.k8s.io/docs/user/configuration/#nodes]…
    …cluster so each worker doesn't have to pull them:
        docker pull cilium/cilium:v1.7.16
        kind load docker-image cilium/cilium:v1.7.16
    Install Cilium release via Helm: helm install cilium cilium/cilium --version…
Cilium v1.8 Documentation | 1124 pages | 21.33 MB | 1 year ago
    …Developer images · Official release images · Update cilium-builder and cilium-runtime images · Nightly Docker image · Code Overview · High-level · Cilium · Hubble · Important common packages · Debugging toFQDNs and DNS · Debugging…
    …the version of Kubernetes from when the kind release was created is used. To change the version of Kubernetes being run, an image has to be defined for each node. See the Node Configuration [https://kind.sigs.k8s.io/docs/user/c…] …cilium https://helm.cilium.io/ … Preload the cilium image into each worker node in the kind cluster:
        docker pull cilium/cilium:v1.8.13
        kind load docker-image cilium/cilium:v1.8.13
    Then, install Cilium release…
Building a Secure and Maintainable PaaS | 20 pages | 2.26 MB | 1 year ago
    …Requirements for Scaling Up: ❏ Secure Network Isolation … Evaluating Cilium and Hubble … Cilium Benefits: ❏ Pod network filtering uses eBPF rather than iptables ❏ More flexible network policy … Traffic Allowed by Policy … Hubble Benefits: ❏ Durable log storage and enterprise Security Information and Event Management…
Debugging the BPF Virtual Machine | 10 pages | 233.09 KB | 1 year ago
    …lives in the kernel AND the kernel can be debugged using gdb. The approach — we need: a kernel image, a root filesystem, an eBPF program that doesn't work, and gdb. First - the environment:
        git clone …x86_64_defconfig
        make O=$PWD/build ARCH=x86_64 menuconfig
        make O=$PWD/build ARCH=x86_64 -j16
    Kernel image — remember to enable debugging symbols under Kernel Hacking -> compile options.
        git clone git://git… /source/buildroot
        cd buildroot
        make menuconfig
        make -j16
    Rootfs — remember to: select ext4 as filesystem image; enable networking; enable the SSH daemon.
        cd /source/linux
        qemu-system-x86_64 -kernel build/arch/x86/boot/bzImage…
9 results total