Zabbix 6.0 Manual
What's new in Zabbix 6.0.0. High availability cluster for Zabbix server: the new version ships a native high-availability solution for Zabbix server. It consists of multiple zabbix_server instances (nodes), of which only one is active (working) at a time while the others stand by, ready to take over if the current node stops or fails. See also: High availability cluster.
Services: service monitoring received several updates. Service monitoring provides …
Kubernetes cluster state monitoring:
• Kubernetes API server by HTTP
• Kubernetes Controller manager by HTTP
• Kubernetes Scheduler by HTTP
• Kubernetes kubelet by HTTP
To enable Kubernetes monitoring, you need the new Zabbix Helm Chart tool, which installs Zabbix in the Kubernetes cluster.
Runtime control options: alerting - alert manager statistics; lld - LLD manager statistics; locks - list of mutexes (empty on *BSD systems); ha_status - logs the high availability (HA) cluster status; ha_remove_node=target - removes the HA node specified by its number in the list (note that an active/standby node cannot be removed); target - the node number in the list (can be obtained by running ha_status).
1741 pages | 22.78 MB | 1 year old
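The HA node behavior mentioned in the snippet is configured per zabbix_server instance. A minimal sketch of the relevant zabbix_server.conf lines follows; the node name and address are illustrative placeholders, not values from the manual excerpt:

```ini
# Giving the node a unique name switches this server instance into HA mode.
HANodeName=node1
# Address and port the frontend (and other nodes) use to reach this node.
NodeAddress=192.0.2.10:10051
```

Once the nodes are running, `zabbix_server -R ha_status` prints the node list whose numbering `ha_remove_node=target` refers to.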
MySQL Enterprise Edition Feature Overview (MySQL 企业版功能介绍)
Self-healing replication clusters improve scalability, performance, and availability. Online schema changes adapt to evolving business requirements. Performance Schema monitors the performance and resource consumption of individual users and applications. SQL and NoSQL access support complex queries as well as fast, simple key-value operations. Platform independence gives you the flexibility to develop and deploy across multiple operating systems. Use MySQL as … Hadoop and Cassandra …
6 pages | 509.78 KB | 1 year old
Zabbix 5.0 Manual
• Apache Cassandra by JMX: see setup instructions for JMX templates
• Apache Kafka by JMX: see setup instructions for JMX templates
• Hadoop by HTTP: see setup instructions for HTTP templates
• ZooKeeper by HTTP: see setup instructions for HTTP templates
Morningstar: Morningstar ProStar MPPT SNMP monitoring …
Item ceph.status[connString,user,password]: overall cluster status. Returns a JSON object. Parameters: connString - URI or session name; user, password - Ceph login credentials. This item is supported by the Ceph plugin (plugins supplied out of the box).
2715 pages | 28.60 MB | 1 year old
Apache Kyuubi 1.7.0-rc0 Documentation
Contents: Kyuubi Basics, Configurations, Engines, Security (Authentication, Authorization, Kinit, Auxiliary Service, Hadoop Credentials Manager), Monitoring (1. Monitoring Kyuubi: Logging System; 2. Monitoring Kyuubi: Server …) … start. To install Spark, you need to unpack the tarball. For example: $ tar zxf spark-3.3.2-bin-hadoop3.tgz. Configuration: the kyuubi-env.sh file is used to set system environment variables to the kyuubi … Maven dependency: org.apache.hadoop:hadoop-common:2.7.4.
404 pages | 5.25 MB | 1 year old
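The flattened Maven coordinates at the end of the snippet (org.apache.hadoop / hadoop-common / 2.7.4) correspond to a POM dependency declaration of roughly this shape:

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>2.7.4</version>
</dependency>
```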
Apache Kyuubi 1.6.1 Documentation
… resource managers, e.g. Apache Hadoop YARN, Kubernetes (K8s), to create the Spark application; 2) a user account can only access data and metadata from a storage system, e.g. Apache Hadoop HDFS, with permissions … it in $KYUUBI_HOME/conf/kyuubi-env.sh too. For example, SPARK_HOME=~/Downloads/spark-3.2.0-bin-hadoop3.2 … Flink Engine Setup … similar to … SPARK_HOME: /Users/kentyao/Downloads/spark/spark-3.2.0-bin-hadoop3.2; SPARK_CONF_DIR: /Users/kentyao/Downloads/spark/spark-3.2.0-bin-hadoop3.2/conf; HADOOP_CONF_DIR: (empty); YARN_CONF_DIR: (empty); Starting org.apache.kyuubi.server …
199 pages | 3.89 MB | 1 year old
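The SPARK_HOME lines quoted in the snippet live in $KYUUBI_HOME/conf/kyuubi-env.sh. A minimal sketch follows; the Spark path is the example from the excerpt, and deriving SPARK_CONF_DIR from it is an assumption, not something the snippet states:

```shell
# kyuubi-env.sh sketch: point Kyuubi at an unpacked Spark distribution.
# Kyuubi sources this file at startup; the path is an example only.
export SPARK_HOME="$HOME/Downloads/spark-3.2.0-bin-hadoop3.2"
# Spark client configuration conventionally lives under $SPARK_HOME/conf.
export SPARK_CONF_DIR="$SPARK_HOME/conf"
```

Adjust the path to wherever the tarball was actually unpacked.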
Apache Kyuubi 1.7.0-rc0 Documentation
… versions. To install Spark, you need to unpack the tarball. For example: $ tar zxf spark-3.3.2-bin-hadoop3.tgz … Configuration: the kyuubi-env.sh file is used to … Java example: import java.io.IOException; import java.security.PrivilegedExceptionAction; import java.sql.*; import org.apache.hadoop.security.UserGroupInformation; public class JDBCTest { private static String driverName = "org… } … Maven dependency: org.apache.hadoop:hadoop-common:2.7.4.
210 pages | 3.79 MB | 1 year old
Apache Kyuubi 1.6.0 Documentation
… resource managers, e.g. Apache Hadoop YARN, Kubernetes (K8s), to create the Spark application; 2) a user account can only access data and metadata from a storage system, e.g. Apache Hadoop HDFS, with permissions … it in $KYUUBI_HOME/conf/kyuubi-env.sh too. For example, SPARK_HOME=~/Downloads/spark-3.2.0-bin-hadoop3.2 … Flink Engine Setup … similar to … SPARK_HOME: /Users/kentyao/Downloads/spark/spark-3.2.0-bin-hadoop3.2; SPARK_CONF_DIR: /Users/kentyao/Downloads/spark/spark-3.2.0-bin-hadoop3.2/conf; HADOOP_CONF_DIR: (empty); YARN_CONF_DIR: (empty); Starting org.apache.kyuubi.server …
195 pages | 3.88 MB | 1 year old
Apache Kyuubi 1.7.1-rc0 Documentation
Contents: Kyuubi Basics, Configurations, Engines, Security (Authentication, Authorization, Kinit, Auxiliary Service, Hadoop Credentials Manager), Monitoring (1. Monitoring Kyuubi: Logging System; 2. Monitoring Kyuubi: Server …) … start. To install Spark, you need to unpack the tarball. For example: $ tar zxf spark-3.3.2-bin-hadoop3.tgz … The kyuubi-env.sh file is used to set system environment variables to the kyuubi … and the Spark derivatives, which are prefixed with spark.hive. or spark.hadoop., e.g. spark.hive.metastore.uris or spark.hadoop.hive.metastore.uris, will be loaded as Hive primitives by the Hive client.
401 pages | 5.25 MB | 1 year old
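The prefix rule described in the Kyuubi 1.7.1-rc0 snippet can be sketched as a pair of equivalent settings; the thrift URI below is a placeholder, and per the snippet both spellings reach the Hive client as the same hive.metastore.uris property:

```properties
# Either prefix carries the property through to the Hive client.
spark.hive.metastore.uris=thrift://metastore-host:9083
spark.hadoop.hive.metastore.uris=thrift://metastore-host:9083
```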
Apache Kyuubi 1.6.1 Documentation
… Queue Access Control Lists … from cluster managers, e.g. Apache Hadoop YARN, Kubernetes (K8s), to create the Spark application; 2) a user account can only access data and metadata from a storage system, e.g. Apache Hadoop HDFS, with permissions. Ease of Use: you only need to be familiar with Structured Query Language …
401 pages | 5.42 MB | 1 year old
Apache Kyuubi 1.6.0 Documentation
… Queue Access Control Lists … from cluster managers, e.g. Apache Hadoop YARN, Kubernetes (K8s), to create the Spark application; 2) a user account can only access data and metadata from a storage system, e.g. Apache Hadoop HDFS, with permissions. Ease of Use: you only need to be familiar with Structured Query Language …
391 pages | 5.41 MB | 1 year old
213 documents in total












