Kubernetes开源书 - 周立 (The Kubernetes Open-Source Book, by Zhou Li)

The API will keep changing and evolving, but for a long time it will not break compatibility with existing clients. In general, new API resources and new resource fields may be added frequently; removing a resource or a field must follow the API deprecation policy. The API change document describes in detail what counts as a compatible change and how the API may be changed.

OpenAPI and Swagger definitions: the complete API details are documented with Swagger v1.2 and OpenAPI.

Labels and Selectors (chapter 09): Kubernetes allows equality-based and set-based requirements to be mixed, for example: partition in (customerA, customerB),environment!=qa.

Filtering LIST and WATCH: LIST and WATCH operations can specify a label selector to filter the returned object set via a query parameter. Both kinds of requirements are permitted (shown here as they appear in the URL query string):

    labelSelector=environment+in+%28production%2Cqa%29%2Ctier+in+%28frontend%29

Both kinds of label selectors can be used to LIST or WATCH resources from a REST client. For example, pointing kubectl at the apiserver, an equality-based selector is written as:

    $ kubectl get pods -l environment=production
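A hedged sketch of how the two selector styles map onto kubectl and onto the raw LIST query string; the label names and values here are only illustrative:

    # Equality-based selector
    $ kubectl get pods -l environment=production,tier=frontend

    # Set-based selector (quoted so the shell does not interpret the parentheses)
    $ kubectl get pods -l 'environment in (production, qa),tier in (frontend)'

    # The same set-based filter sent directly as a LIST query parameter
    $ kubectl get --raw '/api/v1/pods?labelSelector=environment+in+%28production%2Cqa%29%2Ctier+in+%28frontend%29'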

K8S安装部署开放服务 (Installing and deploying K8s and exposing services)

    # Add the Docker CE repository
    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    # List the available docker-ce packages
    yum list | grep docker-ce
    # Install docker-ce
    yum install docker-ce-19.03.12-3.el7 -y
    systemctl start docker

/etc/docker/daemon.json:

    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2",
      "storage-opts": ...

Deploying a k8s node. Step 1: on the k8s master, list/create a token and generate the certificate digest:

    kubeadm token list
    kubeadm token create --ttl 0
    kubeadm token list
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa
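A hedged sketch of how these pieces come together when joining a worker node; the digest pipeline follows the standard kubeadm documentation rather than the truncated command above, and the master address, token, and hash are placeholders:

    # On the master: compute the CA certificate digest used for --discovery-token-ca-cert-hash
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'

    # On the worker node: join the cluster
    kubeadm join 192.168.0.10:6443 --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash>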

k8s操作手册 2.3 (K8s Operations Manual 2.3)

Chrony time-sync configuration (end of the heredoc) and timezone setup:

    driftfile /var/lib/chrony/drift
    makestep 1.0 3
    rtcsync
    keyfile /etc/chrony.keys
    leapsectz right/UTC
    logdir /var/log/chrony
    EOF
    # timedatectl set-timezone Asia/Shanghai    # set the timezone
    # systemctl restart chronyd

Opening the pod network in firewalld:

    # ... family="ipv4" source address="10.244.0.0/16" accept'
    # firewall-cmd --runtime-to-permanent
    # firewall-cmd --list-all

★ If a hardware switch enforces ACLs, or cloud security groups handle access control, the firewall software on the servers can be turned off instead.

⑨ Load the ipvs modules:

    # cat > /etc/modules-load.d/k8s-ipvs

③ Initialize the k8s cluster:

    # kubeadm version                # check the k8s version first
    #   GitVersion:"v1.19.4"
    # kubeadm config images list     # list the Docker images of the other k8s components; by default they come from the k8s.gcr.io registry
    k8s.gcr.io/kube-apiserver:v1.19.4
    k8s.gcr.io/kub...
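A hedged sketch of the complete commands these fragments usually expand to: the firewalld rich rule, the ipvs module list, and an image-repository override for kubeadm init. The flannel pod CIDR and the Aliyun mirror are assumptions for illustration, not values taken from the manual:

    # Allow the pod network (flannel's default CIDR) through firewalld
    firewall-cmd --add-rich-rule='rule family="ipv4" source address="10.244.0.0/16" accept'
    firewall-cmd --runtime-to-permanent

    # Load the IPVS kernel modules that kube-proxy needs in ipvs mode
    cat > /etc/modules-load.d/k8s-ipvs.conf <<EOF
    ip_vs
    ip_vs_rr
    ip_vs_wrr
    ip_vs_sh
    nf_conntrack
    EOF

    # Initialize the cluster, pulling control-plane images from a domestic mirror instead of k8s.gcr.io
    kubeadm init --kubernetes-version v1.19.4 \
      --image-repository registry.aliyuncs.com/google_containers \
      --pod-network-cidr 10.244.0.0/16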

QCon Beijing 2018 - "Kubernetes + 面向未来的开发和部署" (Kubernetes + development and deployment for the future), Michael Chen

How NCP (the NSX Container Plugin) reacts to Namespace events:
2. A user creates a new K8s Namespace.
3. The K8s API Server notifies NCP of the change (addition) of Namespaces.
4. NCP creates the network topology for the Namespace: a) Requests ...

A later slide on visibility ("UI and API"): VMware vRealize Log Insight provides log analytics, aggregation, and search over context, unstructured data, logs, and messages; vRealize Ops and Log Insight together give comprehensive visibility into virtual applications.
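Step 3 is the standard Kubernetes watch mechanism; a minimal way to observe the same Namespace-addition events from the command line (not something shown in the talk itself):

    # Stream Namespace changes the way a controller such as NCP would see them
    kubectl get namespaces --watch

    # Or consume the raw watch event stream from the API server
    kubectl get --raw '/api/v1/namespaces?watch=true'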

Kubernetes Native DevOps Practice

Architecture labels from the slides: a DevOps Service / DevOps Manager exposes a RESTful API (real-time log, history) and creates build jobs; a build task's configuration maps to a k8s Job (it can also be a raw k8s resource) and is represented as a buildjob object, which a DevOps Operator List/Watches, submits, and whose status it updates back onto the buildjob. Jobs run as Pods on Nodes, with a logging-service agent collecting log data into ElasticSearch; the surrounding platform includes a Monitor/Alert Service, CronJobs, a Cluster AutoScaler, a Unified Service layer, and the k8s API.
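A minimal sketch of the List/Watch side of that flow using kubectl against a hypothetical buildjobs custom resource; the resource and API group names are invented for illustration, the talk does not give them:

    # List the build jobs the operator reconciles
    kubectl get buildjobs.devops.example.com -A

    # Watch the same stream of created/updated buildjob objects that the operator consumes
    kubectl get buildjobs.devops.example.com -A --watch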

Go Programming Pattern in Kubernetes Philosophy

Slide labels on the kubelet/CRI architecture: the api-server (backed by etcd) binds Pods to Nodes and lists Pods; the kubelet's GenericRuntime SyncPod path talks over the CRI gRPC interface to dockershim, a remote runtime, or a no-op implementation, which exposes Sandbox (Create/Delete/List), Container (Create/Start/Exec), and Image (Pull/List) operations and drives dockerd through a shim client API.

The sidecar-logging Pod spec fragment from the slides:

      - image: gcr.io/google_containers/testapp:v1
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      - name: logging-agent
        image: gcr.io/google_containers/fluentd:1.30
        env:
        - name: FLUENTD_ARGS
          value: -c /etc/fluentd-config/fluentd.conf
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: config-volume
          mountPath: /etc/fluentd-config
      volumes:
      - name: varlog
        emptyDir:
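With such a Pod running, the shared varlog emptyDir is what lets the fluentd sidecar read the application's log files; a small usage sketch, with the pod name as a placeholder:

    # Read the sidecar's own output
    kubectl logs <pod-name> -c logging-agent

    # Confirm the sidecar sees the app's files through the shared volume
    kubectl exec <pod-name> -c logging-agent -- ls /var/log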

在大规模Kubernetes集群上实现高SLO的方法 (Achieving a high SLO on large-scale Kubernetes clusters)

Slide labels: typical failure signals include ContainerCrashLoopBackOff, FailedPostStartHook, Unhealthy, and so on. Raising the SLO relies on a trace system over the infrastructure: data collection (audit log and events for the whole cluster, fed into storage and an analysis platform), data analysis, and trace reports that expose weaknesses, so that failures and machine problems are handled automatically for the end user. Unhealthy nodes go through monitoring, isolation (fast taint, weight adjust), recovery, degrade, or manual handling, and the strategy improves by combining automatic handling with human experience. The listed steps: 1. Collect data from metrics, NPD, the trace system, and logs. 2. Analyze the ...
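A hedged sketch of the "fast taint / isolate" handling for an unhealthy node in plain kubectl; the taint key and node name are illustrative, the talk does not specify them:

    # Keep new Pods off the node while it is investigated
    kubectl cordon node-10-0-0-5
    kubectl taint nodes node-10-0-0-5 health=unhealthy:NoSchedule

    # After recovery, remove the taint and allow scheduling again
    kubectl taint nodes node-10-0-0-5 health=unhealthy:NoSchedule-
    kubectl uncordon node-10-0-0-5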

User Interface - State of the UI: Leveraging Kubernetes Dashboard and Shaping its Future

Recent work items:
● Migration from ng1 to ng2 (#3152)
● Migrating metrics from Heapster to Kubernetes Metrics API (#2986)
● Apps list page (#2980)

Demo. Future of Dashboard: how do people use Dashboard today, and how do they want to use it? One survey response: "... them would find the Dashboard extremely useful, but if we could ... have them [log in] with the same creds used to log into the cluster to only see their resources, that would be a huge win." → Survey
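For the credential wish in that quote: the usual way to sign in to Dashboard is with a ServiceAccount bearer token scoped by RBAC. A hedged sketch with placeholder names; kubectl create token requires a reasonably recent kubectl:

    # Expose the Dashboard locally
    kubectl proxy

    # Print a bearer token for an existing ServiceAccount to paste into the Dashboard login screen
    kubectl -n <namespace> create token <serviceaccount>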

01. K8s扩展功能解析 (01. A look at Kubernetes extension mechanisms), deck © 2017 Rancher Labs, Inc.

Slide on the controller pattern: a resource item carries a desired spec (e.g. "running") and an observed status (running / stopped / deleted); the controller list/watches resource changes and reconciles the resource's status toward its spec. The next topic, API Aggregation: "What ..."
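The reconcile loop in that diagram can be watched with any built-in controller; a small sketch using a Deployment, with illustrative names:

    # Declare the desired spec: 3 replicas
    kubectl create deployment web --image=nginx --replicas=3

    # Disturb the observed state; the Deployment/ReplicaSet controllers reconcile status back to spec
    kubectl delete pod -l app=web
    kubectl get deployment web --watch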

Over-engineering the core of Kubernetes: kops

Slides on the old templating workflow: kops parsed its Go "text/template" templates at runtime, so the development loop was literally a flowchart for "How do we develop?": run make go-bindata, then go build ./kops, ask "did the change work?", and loop on No until Yes. In Kops 1.4 "we were really good at dealing with YAML", and the team had to test their "text/template" code; in Kops 1.5 "we would still get panics at runtime". The deck then begins a "list of things we ne..."
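The edit-compile-check loop from that flowchart, written out in shell; the two build commands come from the slides, the final check is an assumption about how a change would be exercised:

    make go-bindata     # re-embed the text/template files into generated Go source
    go build ./kops     # rebuild the kops binary
    ./kops version      # exercise the change; template mistakes only surface at runtime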
共 40 条
- 1
- 2
- 3
- 4














 
 