OpenShift Container Platform 4.4 构建 (build)
"Performing and interacting with builds in OpenShift Container Platform". Last Updated: 2021-03-11.
Table of contents (excerpt): Chapter 1, Understanding image builds (1.1 Builds); Chapter 2, Understanding build configurations (2.1 BuildConfig); Chapter 3, Creating build inputs (3.1 Build inputs; 3.2 Dockerfile).
From Chapter 1: a build is the process of transforming input parameters into a resulting object. This process is most often used to transform input parameters or source code into a runnable image. The BuildConfig object is the definition of the entire build process.
101 pages | 1.12 MB | 1 year ago
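The snippet's closing sentence describes the BuildConfig object as the definition of the entire build process. As an illustration only (not taken from the document), a minimal BuildConfig sketch might look like the following; the name, Git URL, and builder image are hypothetical:

```yaml
# Hypothetical minimal BuildConfig: source pulled from Git, built with
# the source-to-image (S2I) strategy, output pushed to an image stream tag.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: example-app                                # hypothetical name
spec:
  source:
    type: Git
    git:
      uri: https://example.com/example/app.git     # hypothetical repository
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:latest                        # hypothetical builder image
  output:
    to:
      kind: ImageStreamTag
      name: example-app:latest
```

Applying such an object (e.g. with `oc apply -f`) defines the build; triggering it produces the runnable image described in the snippet.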
Using Istio to Build the Next 5G Platform
David Lenrow (Open Source Service Mesh Evangelist) and Neeraj Poddar (Co-founder & Chief Architect, Aspen Mesh), February 22, 2021. ©2021 Aspen Mesh. All rights reserved.
Observability, debugging: uniform metrics and tracing for all CNF traffic. Enforcement: primitives to build zero trust, strong identity for users, workloads, devices, etc., encrypting inter-CNF traffic via…
18 pages | 3.79 MB | 1 year ago
Apache Karaf 3.0.5 Guides
…centralized in the etc folder. Any change in a configuration file is taken into account on the fly. Advanced logging system: Apache Karaf supports a large set of logging frameworks (slf4j, log4j, etc.), whatever the logging…
Download the binary distribution that matches your system (zip for Windows, tar.gz for Unix) from http://karaf.apache.org/index/community/download.html, then extract the archive into a new folder on your hard drive. …for commands, and '[cmd] --help' for help on a specific command.
QUICK START: hit '' or type 'system:shutdown' or 'logout' to shut down Karaf. karaf@root()> SOME SHELL BASICS: you can now run your…
203 pages | 534.36 KB | 1 year ago
Apache Karaf Container 4.x - Documentation
Table of contents (excerpt): 4.3.2 Stop; 4.3.3 Status; 4.3.4 Restart; 4.3.5 SystemMBean; 4.4 Integration in the operating system; 4.4.1 Service Wrapper; 4.4.2 Service Script Templates; 4.5 Using the console; 4.5.1 Available… JMX LogMBean; 4.7.6 Advanced configuration; 4.8 Configuration; 4.8.1 Environment Variables & System Properties; 4.8.2 Files; 4.8.3 config:* commands; 4.8.4 JMX ConfigMBean; 4.9 Artifacts repositories; … 4.14.7 Security providers; 4.15 Docker; 4.15.1 Docker images; 4.15.2 Docker feature; 4.15.3 System-wide information; 4.15.4 Show the Docker version information; 4.15.5 Search image; 4.15.6 Pull…
370 pages | 1.03 MB | 1 year ago
Apache Kyuubi 1.5.1 Documentation
…other. For example, you can use Kyuubi, Spark and Apache Iceberg [https://iceberg.apache.org/] to build and manage a data lake with pure SQL, for both data processing (e.g. ETL) and analytics (e.g. BI). All workloads… create the Spark application; 2) a user account can only access data and metadata from a storage system, e.g. Apache Hadoop HDFS [https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign]… (JDBC) to handle massive data. It helps you focus on the design and implementation of your business system. Run anywhere: Kyuubi can submit Spark applications to all supported cluster managers, including…
267 pages | 5.80 MB | 1 year ago
Apache Kyuubi 1.5.2 Documentation
Snippet identical to the Apache Kyuubi 1.5.1 entry above.
267 pages | 5.80 MB | 1 year ago
Apache Kyuubi 1.5.0 Documentation
Snippet identical to the Apache Kyuubi 1.5.1 entry above.
267 pages | 5.80 MB | 1 year ago
Apache Kyuubi 1.9.0-SNAPSHOT Documentation
…to combine some of the components above to build a modern data stack. For example, you can use Kyuubi, Spark and Iceberg [https://iceberg.apache.org/] to build and manage a Data Lakehouse with pure SQL for… (ODBC) interface over a JDBC-to-ODBC bridge to communicate with Kyuubi. RESTful APIs: it provides system management APIs, including engines, sessions, operations, and miscellaneous ones. It provides methods… single point of failure. It helps achieve zero downtime for planned system maintenance. Failure detectability: failures and system load of the Kyuubi server and engines are visible via metrics, logs, and…
405 pages | 4.96 MB | 1 year ago
Apache Kyuubi 1.7.0-rc0 Documentation
Snippet identical to the Apache Kyuubi 1.9.0-SNAPSHOT entry above.
404 pages | 5.25 MB | 1 year ago
Apache Kyuubi 1.7.0 Documentation
Snippet identical to the Apache Kyuubi 1.9.0-SNAPSHOT entry above.
400 pages | 5.25 MB | 1 year ago
459 results in total