Apache Kyuubi 1.7.0-rc0 Documentation (404 pages, 5.25 MB, 1 year ago)
…cloud storage or an on-prem HDFS cluster. Lakehouse formation and analytics: easily build an ACID table storage layer via Hudi, Iceberg, and/or Delta Lake. Logical data warehouse: provide a relational… beeline console. For instance: > SHOW DATABASES; You will see a wall of operation logs, and a result table in the beeline console. [omitted logs]
+------------+
| namespace  |
+------------+
| default    |
+------------+
…method name: 'get_table_req' at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_table_req(ThriftHiveMetastore…
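The quick-start fragment in this snippet reduces to a single statement. A sketch of the session, assuming a beeline client already connected to a Kyuubi server (connection details are not part of the snippet), is:

```sql
-- Inside a beeline session connected to Kyuubi.
SHOW DATABASES;
-- Operation logs stream back first, then the result table; on a fresh
-- deployment it contains only the default namespace:
-- +------------+
-- | namespace  |
-- +------------+
-- | default    |
-- +------------+
```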
Apache Kyuubi 1.7.0 Documentation (400 pages, 5.25 MB, 1 year ago)
Apache Kyuubi 1.7.0-rc1 Documentation (400 pages, 5.25 MB, 1 year ago)
Apache Kyuubi 1.7.0-rc1 Documentation (206 pages, 3.78 MB, 1 year ago)
Apache Kyuubi 1.7.0-rc0 Documentation (210 pages, 3.79 MB, 1 year ago)
Apache Kyuubi 1.7.0 Documentation (206 pages, 3.78 MB, 1 year ago)
Apache Kyuubi 1.6.1 Documentation (199 pages, 3.89 MB, 1 year ago)
…etc. Or, you can manipulate data from different data sources with the Spark DataSource / Flink Table API, e.g. Delta Lake, Apache Hudi, Apache Iceberg, Apache Kudu, etc. Installation: to install… any query supported by Flink SQL now. For example:
0: jdbc:hive2://127.0.0.1:10009/default> CREATE TABLE T (a INT, b VARCHAR(10)) WITH ('connector.type' = 'filesystem', 'connector.path' = 'file:///tmp/T…
…query[22a73e39-d9d7-479b-a118-33f9d2a5ad3f]: INITIALIZED_STATE -> PENDING_STATE, statement: CREATE TABLE T(a INT, b VARCHAR(10)) WITH ('connector.type' = 'filesystem', 'connector.path' = 'file:///tmp/T…
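The truncated Flink SQL DDL in this snippet can be fleshed out as follows. This is a sketch, not the documentation's full example: the snippet cuts off mid-statement, so the 'format.type' property and the follow-up INSERT/SELECT are assumptions added for completeness, following the legacy Flink connector descriptor syntax.

```sql
-- Reconstructed sketch of the snippet's DDL, run from beeline against a
-- Kyuubi Flink SQL engine. 'format.type' = 'csv' is an assumption: the
-- snippet truncates before any format clause appears.
CREATE TABLE T (
  a INT,
  b VARCHAR(10)
) WITH (
  'connector.type' = 'filesystem',
  'connector.path' = 'file:///tmp/T',
  'format.type' = 'csv'
);

-- Hypothetical follow-up statements to exercise the table:
INSERT INTO T VALUES (1, 'hi'), (2, 'hello');
SELECT * FROM T;
```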
Apache Kyuubi 1.6.1 Documentation (401 pages, 5.42 MB, 1 year ago)
Apache Kyuubi 1.6.0 Documentation (195 pages, 3.88 MB, 1 year ago)
Apache Kyuubi 1.7.3 Documentation (211 pages, 3.79 MB, 1 year ago)
44 results in total