Kubernetes on vSphere Deep Dive — KubeCon China (VMware SIG)
VMware SIG Deep Dive into Kubernetes Scheduling: performance and high availability options for vSphere. Steve Wong, Hui Luo, VMware Cloud Native Applications Business Unit, November 12, 2018. …(sites, affinity groups, NUMA, etc.). This session will explain the options to gain better performance, resource optimization and availability through tuning of vSphere and Kubernetes configuration… …the thread runs on, but can potentially come from other nodes, with broad performance implications. Unpredictable performance? Swapping? This basically comes down to a choice of whether you would rather…
25 pages | 2.22 MB | 1 year ago

VMware SIG Deep Dive into Kubernetes Scheduling
Performance and high availability options for vSphere. Steve Wong, Michael Gasch. KubeCon North America, December 13, 2018. Open Source Community Relations… …(sites, affinity groups, NUMA, etc.). This session will explain the options to gain better performance, resource optimization and availability through tuning of vSphere and Kubernetes configuration… …come from this node the thread runs on, but can potentially come from other nodes, with broad performance implications. This basically comes down to a choice of whether you would rather have a fast…
28 pages | 1.85 MB | 1 year ago

Kubernetes开源书 (Kubernetes Open Source Book) — 周立 (Zhou Li)
Label examples: "canary"; "environment": "dev" / "qa" / "production"; "tier": "frontend" / "backend" / "cache"; "partition": "customerA" / "customerB". Given the selectors environment = production and tier != frontend, the former selects all resources with key environment and value production; the latter selects resources whose tier key has a value other than frontend, as well as resources without the tier key. A comma combines filters, e.g. resources in production that are not frontend: environment=production,tier!=frontend. Set-based label requirements filter a key against a set of values and support three operators: in, notin, and exists (key identifier only). For example: environment in (production, qa); tier notin (frontend, backend).
135 pages | 21.02 MB | 1 year ago

Kubernetes + OAM: 让开发者更简单 (Making Things Simpler for Developers)
OAM Component example: apiVersion: core.oam.dev/v1alpha2, kind: Component, metadata: name: frontend, annotations: description: Container workload; spec: workload: apiVersion: apps/v1, kind: Deployment … protocol: TCP. $ kubectl get deployment → frontend-c8bb659c5; $ kubectl get components → frontend / deployment.apps.k8s.io. A Component is one constituent part of an application, e.g. a container, a Function, or a cloud service. Application components and operational capabilities: scaling policy, release policy, batching policy, access control, traffic configuration. Trait example: componentName: frontend, traits: apiVersion: autoscaling/v2beta2, kind: HorizontalPodAutoscaler, spec: minReplicas: …
22 pages | 10.58 MB | 1 year ago

首云容器产品 Kubernetes 操作指南 (首云 Container Product Kubernetes Operations Guide)
Snippet from a WordPress deployment manifest: nodePort: 30080 (# set the mapped port to 30080); selector: app: wordpress, tier: frontend; apiVersion: apps/v1 (for versions before 1.9.0 use apps/v1beta2), kind: Deployment … labels: app: wordpress, tier: frontend; strategy: type: Recreate … targetPort: 80 … Section 3 (p. 89): deploying the WordPress pods.
94 pages | 9.98 MB | 1 year ago

Model and Operate Datacenter by Kubernetes at eBay (submitted version)
…2015–now, 2010–now, 2012–now. Bare metals, the way to Kubernetes: Search, Grid, Hadoop, PoP, Database, Frontend, VM. Kubernetes plays magic: api, etcd; kind: / metadata: / spec:; control loop, control loop, control…
25 pages | 3.60 MB | 1 year ago

全球架构师峰会 2019 北京 / 云原生 (Global Architect Summit 2019 Beijing / Cloud Native): Lessons Learned from Alibaba's Kubernetes Application Management in Practice
kind: ApplicationConfiguration, metadata: name: my-awesome-app; spec: components: componentName: frontend, instanceName: web-front-end; traits: name: Ingress, properties: name: path, value: "/"; applicationScopes: …
26 pages | 6.91 MB | 1 year ago

State of the UI: Leveraging Kubernetes Dashboard and Shaping its Future
Top requested changes: 1. Third-party plugins or integrations — which third-party…; 2. Feature parity with kubectl; 3. Multi-cluster management — how important is it to you to view resources from multiple clusters in one place? (https://github.com/kubernetes/dashboard/issues/3256#issuecomment-437199403); 4. Improved security — "During the week of June 1st, 2018, [researchers] discovered more than 21,000 publicly … Kubernetes represented more than 78% of all open IP's." (Lacework: Container Security Research). Securely running Dashboard is possible! (bit.ly/securing-dashboard) "We operate a cluster…"
41 pages | 5.09 MB | 1 year ago

k8s 操作手册 2.3 (k8s Operations Manual 2.3)
…only the 6443/tcp port of the master nodes needs to be reachable. High-availability cluster topology: first configure the HA reverse proxy. In this example the VIP is 10.99.1.54 (the three master IPs are 10.99.1.51–53) and haproxy is used as the reverse proxy: frontend k8s_api_tcp_6443 / bind *:6443 / mode tcp / default_backend my_k8s_cluster_6443 / backend …
126 pages | 4.33 MB | 1 year ago

绕过 conntrack,使用 eBPF 增强 IPVS,优化 K8s 网络性能 (Bypassing conntrack: Enhancing IPVS with eBPF to Optimize K8s Network Performance)
Agenda: 01 Problems with K8s Service; 02 How to optimize; 03 Comparison with industry; 04 Performance measurement; 05 Future work; 06 Lessons from eBPF. What is a K8s Service? It exposes a set of… Pros: control/data plane; stably runs for two decades; supports rich scheduling algorithms. Cons: performance cost caused by conntrack; some bugs. How to optimize — guidelines: use fewer CPU instructions, less modification to the kernel. Comparison with industry — pitfalls: performance of clusters with the same configuration may differ; performance of a cluster in different time slots may differ, due to CPU oversold…
24 pages | 1.90 MB | 1 year ago

18 results in total