OpenShift Container Platform 4.10 Specialized hardware and driver enablement

    core:
    #  labelWhiteList:
    #  noPublish: false
      sleepInterval: 60s
    #  sources: [all]
    #  klog:
    #    addDirHeader: false
    #    alsologtostderr: false
    #    logDir:
    #    logFile:
    #    logFileMaxSize: 1800
    #    skipLogHeaders: false
    sources:
      cpu:
        cpuid:
    #      NOTE: whitelist has priority over blacklist

core.sleepInterval: if specified, this value is overridden by the deprecated --sleep-interval command-line flag. Usage example: see the worker configuration above. Default: 60s.

core.sources: specifies the list of enabled feature sources. The special value all enables all feature sources. If specified, this value is overridden by the deprecated --sources command-line flag. Default: [all]. Usage example: see the worker configuration above.

core.labelWhiteList: …
OpenShift Container Platform 4.12 Specialized hardware and driver enablement

    core:
    #  labelWhiteList:
    #  noPublish: false
      sleepInterval: 60s
    #  sources: [all]
    #  klog:
    #    addDirHeader: false
    #    alsologtostderr: false
    #    logDir:
    #    logFile:
    #    logFileMaxSize: 1800
    #    skipLogHeaders: false
    sources:
      cpu:
        cpuid:
    #      NOTE: whitelist has priority over blacklist

core.sleepInterval: if specified, this value is overridden by the deprecated --sleep-interval command-line flag. Usage example: see the worker configuration above. Default: 60s.

core.sources: specifies the list of enabled feature sources. The special value all enables all feature sources. If specified, this value is overridden by the deprecated --sources command-line flag. Default: [all]. Usage example: see the worker configuration above.

core.labelWhiteList: …
Scalable Stream Processing - Spark Streaming and Flink

Streaming sources:
1. Basic sources directly available in the StreamingContext API, e.g., file systems, socket connections.
2. Advanced sources, e.g., Kafka, Flume, Kinesis, Twitter.
3. Custom sources, e.g., user-provided sources.

Input Operations
▶ Every input DStream is associated with a Receiver object.
  • It receives the data from a source and stores it in Spark's memory for processing.
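A minimal sketch of the basic socket source described in the excerpt above, written in Scala against the DStream API. The hostname localhost, port 9999, and the 5-second batch interval are illustrative choices rather than values taken from the slides; a test feed can be supplied with a tool such as nc -lk 9999.

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object SocketWordCount {
      def main(args: Array[String]): Unit = {
        // Local StreamingContext with two threads and a 5-second batch interval.
        val conf = new SparkConf().setMaster("local[2]").setAppName("SocketWordCount")
        val ssc  = new StreamingContext(conf, Seconds(5))

        // Basic source from the StreamingContext API: a TCP socket text stream.
        val lines  = ssc.socketTextStream("localhost", 9999)
        val counts = lines.flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
        counts.print()

        ssc.start()             // start the Receiver and the processing pipeline
        ssc.awaitTermination()  // block until the context is stopped
      }
    }

The socketTextStream call registers a Receiver that pulls lines from the socket and stores them in Spark's memory, which is the role the excerpt attributes to the Receiver object of an input DStream.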
Apache Kyuubi 1.7.0-rc0 Documentation

… etc., to query massive datasets distributed over fleets of machines from heterogeneous data sources. The Kyuubi Server lane of the swimlane below divides our prospective users into end users and administrators.

• Familiar SQL for various workloads.
• Extensive and secure data access capability across diverse data sources.
• High performance on large volumes of data with scalable computing resources.

Besides, Kyuubi also … Interacting with Different Versions of Hive Metastore [https://spark.apache.org/docs/latest/sql-data-sources-hive-tables.html#interacting-with-different-versions-of-hive-metastore]. Further Readings: Hive …
Apache Kyuubi 1.7.0-rc0 Documentation

… Trino, etc., to query massive datasets distributed over fleets of machines from heterogeneous data sources. The Kyuubi Server lane of the swimlane below divides our prospective users into end users and administrators.

• Familiar SQL for various workloads.
• Extensive and secure data access capability across diverse data sources.
• High performance on large volumes of data with scalable computing resources.

Besides, Kyuubi … The Kyuubi Spark SQL Query Engine uses the Spark DataSource APIs (V1/V2) to access data from different data sources. By default, it provides accessibility to Hive warehouses with various file formats supported, such …
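Because Kyuubi exposes a HiveServer2-compatible Thrift/JDBC endpoint, an end user can submit SQL through the ordinary Hive JDBC driver. A minimal sketch, assuming a Kyuubi server reachable at kyuubi.example.com on the default port 10009 and the Hive JDBC driver on the classpath; the host, user, and query are placeholders.

    import java.sql.DriverManager

    object KyuubiJdbcExample {
      def main(args: Array[String]): Unit = {
        // Hypothetical endpoint; Kyuubi's Thrift binary frontend listens on 10009 by default.
        val url  = "jdbc:hive2://kyuubi.example.com:10009/default"
        val conn = DriverManager.getConnection(url, "end_user", "")
        try {
          val stmt = conn.createStatement()
          // The statement is executed by the engine (e.g. a Spark SQL engine) that
          // Kyuubi launches on behalf of the connecting user; the client only speaks JDBC.
          val rs = stmt.executeQuery("SELECT 1")
          while (rs.next()) println(rs.getInt(1))
        } finally {
          conn.close()
        }
      }
    }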
Oracle VM VirtualBox 6.0.24 Programming Guide and Reference

API: Starting with version 4.3, VirtualBox offers a C binding which allows using the same C client sources for all platforms, covering Windows, Linux, Mac OS X and Solaris. It is the preferred way to write … specification (see chapter 3.4, VirtualBox events, page 50), including how to aggregate multiple event sources for processing in one event loop. As mentioned, the sample illustrates the practical aspects of how … Essentially, it is a mechanism to aggregate multiple event sources into a single one and then work with this single aggregated event source instead of the original sources. As an example, one can evaluate the demo recorder …
Oracle VM VirtualBox 6.0.0_BETA2 Programming Guide and Reference

API: Starting with version 4.3, VirtualBox offers a C binding which allows using the same C client sources for all platforms, covering Windows, Linux, Mac OS X and Solaris. It is the preferred way to write … specification (see chapter 3.4, VirtualBox events, page 50), including how to aggregate multiple event sources for processing in one event loop. As mentioned, the sample illustrates the practical aspects of how … Essentially, it is a mechanism to aggregate multiple event sources into a single one and then work with this single aggregated event source instead of the original sources. As an example, one can evaluate the demo recorder …
Apache Kyuubi 1.3.0 Documentation

… manager, including YARN, Kubernetes, Mesos, etc. Or, you can manipulate data from different data sources with the Spark Datasource API, e.g. Delta Lake, Apache Hudi, Apache Iceberg, Apache Kudu, and … Interacting with Different Versions of Hive Metastore [http://spark.apache.org/docs/latest/sql-data-sources-hive-tables.html#interacting-with-different-versions-of-hive-metastore] 3.6. Further Readings: Hive … Hive Tables [http://spark.apache.org/docs/latest/sql-data-sources-hive-tables.html] 4. Kyuubi High Availability Guide: As an enterprise-class ad-hoc SQL query service …
Apache Kyuubi 1.3.1 Documentation

… manager, including YARN, Kubernetes, Mesos, etc. Or, you can manipulate data from different data sources with the Spark Datasource API, e.g. Delta Lake, Apache Hudi, Apache Iceberg, Apache Kudu, and … Interacting with Different Versions of Hive Metastore [http://spark.apache.org/docs/latest/sql-data-sources-hive-tables.html#interacting-with-different-versions-of-hive-metastore] 3.6. Further Readings: Hive … Hive Tables [http://spark.apache.org/docs/latest/sql-data-sources-hive-tables.html] 4. Kyuubi High Availability Guide: As an enterprise-class ad-hoc SQL query service …
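A minimal sketch of the Spark DataSource API pattern mentioned in the Kyuubi excerpts above, using the built-in Parquet format as a stand-in; Delta Lake, Hudi, Iceberg, or Kudu follow the same read/write pattern once the matching connector is on the classpath. The paths are placeholders.

    import org.apache.spark.sql.SparkSession

    object DataSourceExample {
      def main(args: Array[String]): Unit = {
        // local[*] keeps the sketch self-contained; under Kyuubi the session is provided.
        val spark = SparkSession.builder()
          .appName("DataSourceExample")
          .master("local[*]")
          .getOrCreate()

        // Read through the DataSource API; swapping "parquet" for "delta", "hudi",
        // or "iceberg" only changes the format name and the required packages.
        val df = spark.read.format("parquet").load("/tmp/input_table")
        df.printSchema()

        df.write.format("parquet").mode("overwrite").save("/tmp/output_table")

        spark.stop()
      }
    }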
Apache Kyuubi 1.7.1-rc0 Documentation

… etc., to query massive datasets distributed over fleets of machines from heterogeneous data sources. The Kyuubi Server lane of the swimlane below divides our prospective users into end users and administrators.

• Familiar SQL for various workloads.
• Extensive and secure data access capability across diverse data sources.
• High performance on large volumes of data with scalable computing resources.

Besides, Kyuubi also … Interacting with Different Versions of Hive Metastore [https://spark.apache.org/docs/latest/sql-data-sources-hive-tables.html#interacting-with-different-versions-of-hive-metastore]. Further Readings: Hive …














 
 