Apache Kyuubi 1.8.0-rc0 Documentation
…use resources more efficiently. On the one hand, we need to rely on the resource manager's capabilities for efficient resource allocation, resource isolation, and sharing. On the other hand, we need to enable Spark's … When free disk space falls below the specified value (controlled by FREE_SPACE_THRESHOLD), a deep clean is triggered (the expiration time of the files it removes is controlled by DEEP_CLEAN_FILE_EXPIRED_TIME). Usage: before you start using Spark Block Cleaner, set CACHE_DIRS (e.g. /data/data1,/data/data2), FILE_EXPIRED_TIME (e.g. 604800), DEEP_CLEAN_FILE_EXPIRED_TIME (e.g. 432000), and FREE_SPACE_THRESHOLD. The most important thing, …
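The environment variables in the snippet above can be sketched as a Kubernetes container `env` fragment. This is an illustrative reconstruction from the snippet's values, not a complete Spark Block Cleaner manifest; tune the values for your cluster.

```yaml
# Illustrative env section for a Spark Block Cleaner container
# (values taken from the snippet above; FREE_SPACE_THRESHOLD of 60
# is the example value shown in the docs, meaning 60% free space).
env:
  - name: CACHE_DIRS                    # comma-separated block-manager dirs to scan
    value: /data/data1,/data/data2
  - name: FILE_EXPIRED_TIME             # normal clean: remove files older than 7 days (seconds)
    value: "604800"
  - name: DEEP_CLEAN_FILE_EXPIRED_TIME  # deep clean: remove files older than 5 days (seconds)
    value: "432000"
  - name: FREE_SPACE_THRESHOLD          # deep clean triggers when free space drops below this percentage
    value: "60"
```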
Apache Kyuubi 1.5.1 Documentation
… OR i_size = 'small')) OR (i_category = 'Men' AND (i_color = 'floral' OR i_color = 'deep') AND (i_units = 'N/A' OR i_units = 'Dozen') AND (i_size = 'petite' OR i_size = 'large')) … …use resources more efficiently. On the one hand, we need to rely on the resource manager's capabilities for efficient resource allocation, resource isolation, and sharing. On the other hand, we need to enable Spark's … data, e.g. minimum and maximum values; good data clustering makes pushed-down filters more efficient. 3.1.1. Supported table formats:

Table Format | Supported
parquet      | Y
orc          | Y
json         | N
csv          | N
text         | N
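The min/max-statistics point above can be illustrated with a small, self-contained sketch (hypothetical file-level stats, not Kyuubi's actual implementation): a reader can skip any file whose [min, max] range cannot satisfy the pushed-down predicate, so tighter clustering means narrower ranges and more files skipped.

```python
# Minimal data-skipping sketch: each "file" carries min/max statistics for a
# column; a pushed-down range predicate skips files whose range cannot
# contain matching rows. All names here are illustrative, not Kyuubi APIs.
from dataclasses import dataclass

@dataclass
class FileStats:
    path: str
    col_min: int
    col_max: int

def files_to_scan(files, lo, hi):
    """Keep only files whose [col_min, col_max] overlaps the predicate [lo, hi]."""
    return [f.path for f in files if not (f.col_max < lo or f.col_min > hi)]

files = [
    FileStats("part-0", 0, 99),     # well-clustered data: narrow, disjoint ranges
    FileStats("part-1", 100, 199),
    FileStats("part-2", 200, 299),
]

# Predicate: col BETWEEN 120 AND 150 -> only part-1 must be read.
print(files_to_scan(files, 120, 150))  # → ['part-1']
```

With poorly clustered data each file's [min, max] range is wide and overlapping, so the same predicate would skip nothing; that is the sense in which "good data clustering" makes the pushed-down filter more effective.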













