《Efficient Deep Learning Book》[EDL] Chapter 5 - Advanced Compression Techniques
sparsify_smallest() sets the smallest-magnitude weights in the input weight matrix to zero. The number of weights to zero out is computed from the sparsity_rate parameter, which denotes the fraction of weights to prune. w_1d_sorted_indices = np.argsort(np.abs(w_1d)) # Compute the number of elements to zero. num_elements_to_zero = int(w_1d.shape[0] * sparsity_rate) # Set the respective elements to zero. … We also define a sparsity_rate variable initialized with the value 0.4 to sparsify 40% of the total number of weights. Finally, we compute the original weight matrix size, the compressed weight matrix size, and …
0 码力 | 34 pages | 3.18 MB | 1 year ago
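The code fragments in the snippet above outline plain magnitude pruning. A minimal, self-contained sketch of how such a sparsify_smallest() routine could look, assuming NumPy; the exact signature and return convention are assumptions, not taken from the book:

```python
import numpy as np

def sparsify_smallest(w, sparsity_rate):
    # Work on a flattened copy so the original matrix is left untouched.
    w_1d = w.reshape(-1).copy()
    # Rank elements by absolute magnitude, smallest first.
    w_1d_sorted_indices = np.argsort(np.abs(w_1d))
    # Compute the number of elements to zero.
    num_elements_to_zero = int(w_1d.shape[0] * sparsity_rate)
    # Set the respective smallest-magnitude elements to zero.
    w_1d[w_1d_sorted_indices[:num_elements_to_zero]] = 0.0
    return w_1d.reshape(w.shape)

# Sparsify 40% of the weights, as in the snippet.
sparsity_rate = 0.4
w = np.random.randn(4, 5)
w_sparse = sparsify_smallest(w, sparsity_rate)
print((w_sparse == 0).mean())  # roughly 0.4 of the entries are now zero
```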
《Efficient Deep Learning Book》[EDL] Chapter 7 - Automation
Hyperparameter optimization improves two aspects of the training process: performance and convergence. Hyperparameters like the number of filters in a convolutional network … Grid search has a couple of additional drawbacks. First, it suffers from the curse of dimensionality: the total number of trials grows quickly with each additional hyperparameter value or each new hyperparameter. Second, it does not differentiate between unimportant and important hyperparameters; important hyperparameters have a larger number of subspaces or subranges that need to be searched for an optimal value than unimportant ones.
0 码力 | 33 pages | 2.48 MB | 1 year ago
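To make the curse-of-dimensionality point concrete, here is a small illustration; the hyperparameter names and values below are hypothetical, not taken from the chapter. Every value added to any dimension, and every new hyperparameter, multiplies the number of grid-search trials:

```python
from itertools import product

# Hypothetical search space; each entry multiplies the total trial count.
search_space = {
    "num_filters": [16, 32, 64],
    "kernel_size": [3, 5],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

grid = list(product(*search_space.values()))
print(len(grid))  # 3 * 2 * 3 = 18 exhaustive grid-search trials

for num_filters, kernel_size, learning_rate in grid:
    # One training-and-evaluation run per combination would go here.
    pass
```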
《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures
… numerical format, having an algorithmic way to meaningfully represent these inputs using a small number of numerical features will help us solve tasks related to these inputs. … It is useful because it is often computationally infeasible to work with data that has a large number of features. However, not all features might be equally important, thus selecting the most informative … (… / Not Suitable), since there were very few examples. What if you have multiple classes, a large number of examples, or more than two features? In those cases, we could use classical machine learning algorithms.
0 码力 | 53 pages | 3.92 MB | 1 year ago
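As a generic illustration of the idea of reducing an input to a small number of informative features and then applying a classical machine learning algorithm; this is not the book's own example, and scikit-learn, the feature counts, and the random data below are assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

# Toy data: 200 examples with 1000 raw features and 3 classes (all hypothetical).
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1000))
y = rng.integers(0, 3, size=200)

# Reduce the input to a small number of numerical features, then fit a
# classical classifier on the compact representation.
x_small = PCA(n_components=8).fit_transform(x)
clf = LogisticRegression(max_iter=1000).fit(x_small, y)
print(x_small.shape, clf.score(x_small, y))
```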
Apache Kyuubi 1.4.1 Documentation
… SQL 2. Auxiliary SQL Functions for Spark SQL 3. Z-order Benchmark · Tools 1. Kubernetes Tools: Spark Block Cleaner · Kyuubi Insider: Overview, Architecture, Kyuubi v.s. HiveServer2, Kyuubi v.s. Spark Thrift JDBC/ODBC … Only applicable if --outputformat=table. --incrementalBufferRows=NUMROWS: the number of rows to buffer when printing rows on stdout, defaults to …; … historic behavior of printing null as empty string. --maxHistoryRows=MAXHISTORYROWS: the maximum number of rows to store beeline history. --help: display this message. Example: …
0 码力 | 233 pages | 4.62 MB | 1 year ago
Apache Kyuubi 1.4.0 Documentation
… SQL 2. Auxiliary SQL Functions for Spark SQL 3. Z-order Benchmark · Tools 1. Kubernetes Tools: Spark Block Cleaner · Kyuubi Insider: Overview, Architecture, Kyuubi v.s. HiveServer2, Kyuubi v.s. Spark Thrift JDBC/ODBC … Only applicable if --outputformat=table. --incrementalBufferRows=NUMROWS: the number of rows to buffer when printing rows on stdout, defaults to …; … historic behavior of printing null as empty string. --maxHistoryRows=MAXHISTORYROWS: the maximum number of rows to store beeline history. --help: display this message. Example: …
0 码力 | 233 pages | 4.62 MB | 1 year ago
Apache Kyuubi 1.3.0 Documentation
… extension for Spark SQL 2. Auxiliary SQL Functions for Spark SQL · Tools 1. Kubernetes Tools: Spark Block Cleaner · Kyuubi Insider: Overview, Architecture, Kyuubi v.s. HiveServer2, Kyuubi v.s. Spark Thrift JDBC/ODBC … Only applicable if --outputformat=table. --incrementalBufferRows=NUMROWS: the number of rows to buffer when printing rows on stdout, defaults to …; … historic behavior of printing null as empty string. --maxHistoryRows=MAXHISTORYROWS: the maximum number of rows to store beeline history. --help: display this message. Example: …
0 码力 | 199 pages | 4.42 MB | 1 year ago
Apache Kyuubi 1.3.1 Documentation
… extension for Spark SQL 2. Auxiliary SQL Functions for Spark SQL · Tools 1. Kubernetes Tools: Spark Block Cleaner · Kyuubi Insider: Overview, Architecture, Kyuubi v.s. HiveServer2, Kyuubi v.s. Spark Thrift JDBC/ODBC … Only applicable if --outputformat=table. --incrementalBufferRows=NUMROWS: the number of rows to buffer when printing rows on stdout, defaults to …; … historic behavior of printing null as empty string. --maxHistoryRows=MAXHISTORYROWS: the maximum number of rows to store beeline history. --help: display this message. Example: …
0 码力 | 199 pages | 4.44 MB | 1 year ago
Apache Kyuubi 1.5.1 Documentation
… Functions for Spark SQL 3. Z-order introduction 4. Z-order Benchmark · Tools 1. Kubernetes Tools: Spark Block Cleaner · Kyuubi Insider: Overview, Architecture, Kyuubi v.s. HiveServer2, Kyuubi v.s. Spark Thrift JDBC/ODBC … Only applicable if --outputformat=table. --incrementalBufferRows=NUMROWS: the number of rows to buffer when printing rows on stdout, defaults to 1000; … historic behavior of printing null as empty string. --maxHistoryRows=MAXHISTORYROWS: the maximum number of rows to store beeline history. --help: display this message. Example: …
0 码力 | 267 pages | 5.80 MB | 1 year ago
Apache Kyuubi 1.5.2 Documentation
… Functions for Spark SQL 3. Z-order introduction 4. Z-order Benchmark · Tools 1. Kubernetes Tools: Spark Block Cleaner · Kyuubi Insider: Overview, Architecture, Kyuubi v.s. HiveServer2, Kyuubi v.s. Spark Thrift JDBC/ODBC … Only applicable if --outputformat=table. --incrementalBufferRows=NUMROWS: the number of rows to buffer when printing rows on stdout, defaults to 1000; … historic behavior of printing null as empty string. --maxHistoryRows=MAXHISTORYROWS: the maximum number of rows to store beeline history. --help: display this message. Example: …
0 码力 | 267 pages | 5.80 MB | 1 year ago
Apache Kyuubi 1.5.0 Documentation
… Functions for Spark SQL 3. Z-order introduction 4. Z-order Benchmark · Tools 1. Kubernetes Tools: Spark Block Cleaner · Kyuubi Insider: Overview, Architecture, Kyuubi v.s. HiveServer2, Kyuubi v.s. Spark Thrift JDBC/ODBC … Only applicable if --outputformat=table. --incrementalBufferRows=NUMROWS: the number of rows to buffer when printing rows on stdout, defaults to 1000; … historic behavior of printing null as empty string. --maxHistoryRows=MAXHISTORYROWS: the maximum number of rows to store beeline history. --help: display this message. Example: …
0 码力 | 267 pages | 5.80 MB | 1 year ago
379 results in total.