Apache Kyuubi 1.8.0-rc0 Documentation
…operation execution thread pool of SQL engine applications (int, since 1.0.0). kyuubi.backend.engine.exec.pool.wait.queue.size = 100: size of the wait queue for the operation execution thread pool in SQL engine applications. …backend.server.event.kafka.topic. Note: For the configs of the Kafka producer, please specify them with the prefix: kyuubi.backend.server.event.kafka. For example, kyuubi.backend.server.event.kafka.bootstrap… …credentials.update.wait.timeout = PT1M: how long to wait until the credentials are ready (duration, since 1.5.0). Ctl: Key / Default / Meaning / Type / Since. kyuubi.ctl.batch.log.on.failure.timeout = PT10S: the timeout for…
428 pages | 5.28 MB | 1 year ago

Apache Kyuubi 1.8.0-rc1 Documentation
…operation execution thread pool of SQL engine applications (int, since 1.0.0). kyuubi.backend.engine.exec.pool.wait.queue.size = 100: size of the wait queue for the operation execution thread pool in SQL engine applications. …backend.server.event.kafka.topic. Note: For the configs of the Kafka producer, please specify them with the prefix: kyuubi.backend.server.event.kafka. For example, kyuubi.backend.server.event.kafka.bootstrap… …credentials.update.wait.timeout = PT1M: how long to wait until the credentials are ready (duration, since 1.5.0). Ctl: Key / Default / Meaning / Type / Since. kyuubi.ctl.batch.log.on.failure.timeout = PT10S: the timeout for…
429 pages | 5.28 MB | 1 year ago

Apache Kyuubi 1.8.0 Documentation
…operation execution thread pool of SQL engine applications (int, since 1.0.0). kyuubi.backend.engine.exec.pool.wait.queue.size = 100: size of the wait queue for the operation execution thread pool in SQL engine applications. …backend.server.event.kafka.topic. Note: For the configs of the Kafka producer, please specify them with the prefix: kyuubi.backend.server.event.kafka. For example, kyuubi.backend.server.event.kafka.bootstrap… …credentials.update.wait.timeout = PT1M: how long to wait until the credentials are ready (duration, since 1.5.0). Ctl: Key / Default / Meaning / Type / Since. kyuubi.ctl.batch.log.on.failure.timeout = PT10S: the timeout for…
429 pages | 5.28 MB | 1 year ago
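The config keys quoted in the snippets above live in kyuubi-defaults.conf. A minimal sketch of setting them, writing to a temp file for illustration (a real deployment uses $KYUUBI_HOME/conf/kyuubi-defaults.conf; the Kafka bootstrap.servers property name is an assumption, since the snippet truncates after "bootstrap"):

```shell
# Sketch: write the configs quoted in the snippets above into a
# kyuubi-defaults.conf-style file. Target path is a temp file here;
# a real deployment edits $KYUUBI_HOME/conf/kyuubi-defaults.conf.
KYUUBI_CONF=$(mktemp)

cat >> "$KYUUBI_CONF" <<'EOF'
# Wait-queue size for the engine's operation execution thread pool
kyuubi.backend.engine.exec.pool.wait.queue.size=100
# Kafka producer configs take the kyuubi.backend.server.event.kafka. prefix;
# bootstrap.servers is an assumed example of such a producer config
kyuubi.backend.server.event.kafka.bootstrap.servers=localhost:9092
# How long to wait until the credentials are ready
kyuubi.credentials.update.wait.timeout=PT1M
EOF

grep -c '^kyuubi\.' "$KYUUBI_CONF"   # prints 3
```

Values follow the defaults shown in the snippet (100, PT1M); only the bootstrap address is invented for the example.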
Apache Kyuubi 1.7.1-rc0 Documentation
…$0 -r docker.io/myrepo -t v1.4.0 build; $0 -r docker.io/myrepo -t v1.4.0 push. Build and push with tag "v1.4.0" and Spark-3.2.1 as base image to docker.io/myrepo: $0 -r docker.io/myrepo -t v1.4.0 -b BASE_IMAGE=repo/spark:3.2.1 build; $0 -r docker.io/myrepo -t v1.4.0 push. Build and push for multiple archs to docker.io/myrepo: $0 -r docker.io/myrepo -t v1.4.0 -X build. Build with Spark placed at "/path/spark": -s ${SPARK_HOME}. Of course, if you have an image that contains the Spark binary package, you don't have to copy Spark locally; make your Spark image the BASE_IMAGE by using -S ${SPARK_HOME_IN_DOCKER}…
401 pages | 5.25 MB | 1 year ago

Apache Kyuubi 1.7.0-rc0 Documentation
…$0 -r docker.io/myrepo -t v1.4.0 build; $0 -r docker.io/myrepo -t v1.4.0 push. Build and push with tag "v1.4.0" and Spark-3.2.1 as base image to docker.io/myrepo: $0 -r docker.io/myrepo -t v1.4.0 -b BASE_IMAGE=repo/spark:3.2.1 build; $0 -r docker.io/myrepo -t v1.4.0 push. Build and push for multiple archs to docker.io/myrepo: $0 -r docker.io/myrepo -t v1.4.0 -X build. Build with Spark placed at "/path/spark": -s ${SPARK_HOME}. Of course, if you have an image that contains the Spark binary package, you don't have to copy Spark locally; make your Spark image the BASE_IMAGE by using -S ${SPARK_HOME_IN_DOCKER}…
404 pages | 5.25 MB | 1 year ago

Apache Kyuubi 1.7.0 Documentation
…$0 -r docker.io/myrepo -t v1.4.0 build; $0 -r docker.io/myrepo -t v1.4.0 push. Build and push with tag "v1.4.0" and Spark-3.2.1 as base image to docker.io/myrepo: $0 -r docker.io/myrepo -t v1.4.0 -b BASE_IMAGE=repo/spark:3.2.1 build; $0 -r docker.io/myrepo -t v1.4.0 push. Build and push for multiple archs to docker.io/myrepo: $0 -r docker.io/myrepo -t v1.4.0 -X build. Build with Spark placed at "/path/spark": -s ${SPARK_HOME}. Of course, if you have an image that contains the Spark binary package, you don't have to copy Spark locally; make your Spark image the BASE_IMAGE by using -S ${SPARK_HOME_IN_DOCKER}…
400 pages | 5.25 MB | 1 year ago

Apache Kyuubi 1.7.0-rc1 Documentation
…$0 -r docker.io/myrepo -t v1.4.0 build; $0 -r docker.io/myrepo -t v1.4.0 push. Build and push with tag "v1.4.0" and Spark-3.2.1 as base image to docker.io/myrepo: $0 -r docker.io/myrepo -t v1.4.0 -b BASE_IMAGE=repo/spark:3.2.1 build; $0 -r docker.io/myrepo -t v1.4.0 push. Build and push for multiple archs to docker.io/myrepo: $0 -r docker.io/myrepo -t v1.4.0 -X build. Build with Spark placed at "/path/spark": -s ${SPARK_HOME}. Of course, if you have an image that contains the Spark binary package, you don't have to copy Spark locally; make your Spark image the BASE_IMAGE by using -S ${SPARK_HOME_IN_DOCKER}…
400 pages | 5.25 MB | 1 year ago

Apache Kyuubi 1.7.0-rc0 Documentation
…$0 -r docker.io/myrepo -t v1.4.0 build; $0 -r docker.io/myrepo -t v1.4.0 push. Build and push with tag "v1.4.0" and Spark-3.2.1 as base image to docker.io/myrepo: $0 -r docker.io/myrepo -t v1.4.0 -b BASE_IMAGE=repo/spark:3.2.1 build; $0 -r docker.io/myrepo -t v1.4.0 push. Build and push for multiple archs to docker.io/myrepo: $0 -r docker.io/myrepo -t v1.4.0 -X build. Build with Spark placed at "/path/spark": -s ${SPARK_HOME}. Of course, if you have an image that contains the Spark binary package, you don't have to copy Spark locally; make your Spark image the BASE_IMAGE by using -S ${SPARK_HOME_IN_DOCKER}…
210 pages | 3.79 MB | 1 year ago

Apache Kyuubi 1.7.2-rc0 Documentation
…$0 -r docker.io/myrepo -t v1.4.0 build; $0 -r docker.io/myrepo -t v1.4.0 push. Build and push with tag "v1.4.0" and Spark-3.2.1 as base image to docker.io/myrepo: $0 -r docker.io/myrepo -t v1.4.0 -b BASE_IMAGE=repo/spark:3.2.1 build; $0 -r docker.io/myrepo -t v1.4.0 push. Build and push for multiple archs to docker.io/myrepo: $0 -r docker.io/myrepo -t v1.4.0 -X build. Build with Spark placed at "/path/spark": -s ${SPARK_HOME}. Of course, if you have an image that contains the Spark binary package, you don't have to copy Spark locally; make your Spark image the BASE_IMAGE by using -S ${SPARK_HOME_IN_DOCKER}…
405 pages | 5.26 MB | 1 year ago

Apache Kyuubi 1.7.2 Documentation
…$0 -r docker.io/myrepo -t v1.4.0 build; $0 -r docker.io/myrepo -t v1.4.0 push. Build and push with tag "v1.4.0" and Spark-3.2.1 as base image to docker.io/myrepo: $0 -r docker.io/myrepo -t v1.4.0 -b BASE_IMAGE=repo/spark:3.2.1 build; $0 -r docker.io/myrepo -t v1.4.0 push. Build and push for multiple archs to docker.io/myrepo: $0 -r docker.io/myrepo -t v1.4.0 -X build. Build with Spark placed at "/path/spark": -s ${SPARK_HOME}. Of course, if you have an image that contains the Spark binary package, you don't have to copy Spark locally; make your Spark image the BASE_IMAGE by using -S ${SPARK_HOME_IN_DOCKER}…
405 pages | 5.26 MB | 1 year ago
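The $0 in the usage examples above is the build script itself. A sketch of assembling the build-and-push invocations into concrete commands (the ./bin/docker-image-tool.sh path is an assumption about where the script lives; the commands are only echoed, since running them needs docker and a Kyuubi checkout):

```shell
# Sketch: compose the build/push commands shown in the usage text.
# The script path is assumed, and nothing is executed against docker.
REPO=docker.io/myrepo
TAG=v1.4.0
TOOL=./bin/docker-image-tool.sh   # assumed location of the build script

BUILD_CMD="$TOOL -r $REPO -t $TAG build"
PUSH_CMD="$TOOL -r $REPO -t $TAG push"

echo "$BUILD_CMD"   # ./bin/docker-image-tool.sh -r docker.io/myrepo -t v1.4.0 build
echo "$PUSH_CMD"    # ./bin/docker-image-tool.sh -r docker.io/myrepo -t v1.4.0 push
```

The -b BASE_IMAGE=repo/spark:3.2.1, -X, and -s/-S flags from the usage text slot in before the final build/push word in the same way.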
44 results in total