《Efficient Deep Learning Book》[EDL] Chapter 5 - Advanced Compression Techniques
… # Compute the number of elements to zero out. num_elements_to_zero = int(w_1d.shape[0] * sparsity_rate) # Set the respective indices to zero. w_1d[w_1d_sorted_indices[:num_elements_to_zero]] = 0.0 w = np… … centroids as well as we can, so that they closely mimic the original distribution of the tensor’s elements. For a moment, let’s assume that the centroids we obtain are optimal, i.e. the reconstruction error … … in the codebook, which will only take up … bits. For a tensor with … elements, the cost would be … bytes. Originally, storing all the elements would have cost … bytes. Therefore, the compression ratio turns out …
0 码力 | 34 pages | 3.18 MB | 1 year ago
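To make this entry's truncated snippet easier to follow, here is a minimal, self-contained sketch of the two ideas it touches on: magnitude-based pruning of a flattened weight tensor, and the compression ratio from replacing weights with indices into a k-entry codebook. The float32 assumption, the tensor shape, and the codebook_compression_ratio helper are illustrative choices, not the chapter's exact code.

```python
import numpy as np

def magnitude_prune(w, sparsity_rate):
    """Zero out the smallest-magnitude fraction of the weights (illustrative sketch)."""
    w_1d = w.flatten()
    # Indices of elements sorted by absolute magnitude, smallest first.
    w_1d_sorted_indices = np.argsort(np.abs(w_1d))
    # Compute the number of elements to zero out.
    num_elements_to_zero = int(w_1d.shape[0] * sparsity_rate)
    # Set the respective indices to zero.
    w_1d[w_1d_sorted_indices[:num_elements_to_zero]] = 0.0
    return w_1d.reshape(w.shape)

def codebook_compression_ratio(n, k, bytes_per_element=4):
    """Approximate compression ratio when n weights are replaced by indices into a
    k-entry codebook (assumes float32 weights and ceil(log2(k))-bit indices)."""
    index_bits = np.ceil(np.log2(k))
    original_bytes = n * bytes_per_element
    compressed_bytes = n * index_bits / 8 + k * bytes_per_element
    return original_bytes / compressed_bytes

w = np.random.randn(64, 64).astype(np.float32)
w_pruned = magnitude_prune(w, sparsity_rate=0.5)
print((w_pruned == 0.0).mean())                  # ~0.5
print(codebook_compression_ratio(64 * 64, 16))   # ~8x with 4-bit indices
```

With 16 centroids (4-bit indices) the ratio works out to roughly 8x, reflecting that the index width, rather than the original 32-bit floats, dominates the compressed size.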
《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures
… vocabulary[:10] ['', '[UNK]', 'the', 'in', 'of', 'is', 'a', 'and', 'was', 'by'] Notice that the first two elements are an empty string (a reserved token for padding) and a 'UNK' token (a token reserved for words … … of the hash function % vocab_size is an index in [0, vocab_size - 1] and is used to refer to the elements in the embedding table. In the figure, ‘bar’ and ‘hello’ map to the same slot in the embedding … … previous timestep. Another drawback of a sequential architecture is the loss of context between the elements that are far apart in the sequence. In other words, a sequential architecture has inherent limitations …
0 码力 | 53 pages | 3.92 MB | 1 year ago
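As a quick illustration of the hashing trick this snippet describes — mapping tokens into a fixed-size embedding table via hash(token) % vocab_size, with collisions such as 'bar' and 'hello' sharing a slot — here is a small TensorFlow sketch. The vocab_size, embedding_dim, and the choice of tf.strings.to_hash_bucket_fast are assumptions for illustration, not the chapter's exact setup.

```python
import tensorflow as tf

# Hypothetical sizes; the chapter's actual vocabulary and embedding dimensions may differ.
vocab_size = 1024      # number of rows in the hashed embedding table
embedding_dim = 16

embedding_table = tf.Variable(tf.random.normal([vocab_size, embedding_dim]))

def hashed_embedding_lookup(tokens):
    """Map arbitrary string tokens to rows of the table via hash(token) % vocab_size."""
    # Distinct tokens can collide and share a slot (e.g. 'bar' and 'hello' in the figure).
    indices = tf.strings.to_hash_bucket_fast(tokens, num_buckets=vocab_size)
    return tf.nn.embedding_lookup(embedding_table, indices)

vectors = hashed_embedding_lookup(tf.constant(["bar", "hello", "the"]))
print(vectors.shape)  # (3, 16)
```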
PyTorch Brand Guidelines
… PyTorch Symbol Clearspace: While our system encourages a flexible use of elements, it’s important to present the symbol in its entirety, maintaining legibility and clarity. … maintain a clear area surrounding the wordmark. This insulates our wordmark from distracting visual elements such as copy, illustrations or photography. This spacing is determined by the measurements … hex code equivalent. When printing, please use CMYK or the listed Pantone code. For UI button elements, please reference “Color Variations for UI Buttons” to apply the color properly. …
0 码力 | 12 pages | 34.16 MB | 1 year ago
《Efficient Deep Learning Book》[EDL] Chapter 6 - Advanced Learning Techniques - Technical Review
… dataset, a few simple pretext tasks can be to predict the last element (future) from the previous elements (past), or the other way around. Again, to re-emphasize, we are just pretending that the data is missing … … play around with the arrangement of the input, and make the model predict the right order of the elements of …. The next question is where do we get the data for creating these tasks, though? Since for each …
0 码力 | 31 pages | 4.03 MB | 1 year ago
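For context, a minimal sketch of the two kinds of pretext tasks the snippet mentions — predicting the future element from the past, and recovering the original order of shuffled elements — assuming plain Python sequences rather than the chapter's actual data pipeline:

```python
import numpy as np

def make_order_pretext_example(sequence, rng):
    """Self-supervised example: shuffle the elements and ask the model to recover the order."""
    perm = rng.permutation(len(sequence))
    shuffled = [sequence[i] for i in perm]
    # Input: shuffled elements; target: the permutation that restores the original order.
    return shuffled, np.argsort(perm)

def make_next_element_example(sequence):
    """Self-supervised example: predict the last element (future) from the rest (past)."""
    past, future = sequence[:-1], sequence[-1]
    return past, future

rng = np.random.default_rng(0)
tokens = ["the", "cat", "sat", "on", "the", "mat"]
print(make_order_pretext_example(tokens, rng))
print(make_next_element_example(tokens))
```

Both tasks derive their labels from the unlabeled data itself, which is the point of a pretext task.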
《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques
… Note that as … increases, the relative differences between the various elements of … decrease. This happens because if all elements are divided by the same constant, the softmax function would lead to …
0 码力 | 56 pages | 18.93 MB | 1 year ago
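The inline symbol dropped from this snippet is presumably the softmax temperature; under that assumption, the small numpy sketch below shows why dividing all logits by the same constant shrinks their relative differences and pushes the output toward a uniform distribution.

```python
import numpy as np

def softmax_with_temperature(logits, temperature=1.0):
    """Temperature-scaled softmax: higher temperature gives a softer distribution."""
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    scaled -= scaled.max()          # subtract the max for numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()

logits = [2.0, 1.0, 0.1]
print(softmax_with_temperature(logits, temperature=1.0))   # peaked: ~[0.66, 0.24, 0.10]
print(softmax_with_temperature(logits, temperature=10.0))  # much closer to uniform
```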
机器学习课程-温州大学-03机器学习-逻辑回归 (Machine Learning Course, Wenzhou University - 03: Logistic Regression)
… org/course/ml [3] Li Hang. Statistical Learning Methods[M]. Beijing: Tsinghua University Press, 2019. [4] Hastie T., Tibshirani R., Friedman J. The Elements of Statistical Learning[M]. New York: Springer, 2001. [5] Christopher M. Bishop. Pattern Recognition …
0 码力 | 23 pages | 1.20 MB | 1 year ago
机器学习课程-温州大学-06机器学习-KNN算法 (Machine Learning Course, Wenzhou University - 06: The KNN Algorithm)
… classification[J]. IEEE Transactions on Information Theory, 1967, 13(1): 21-27. [5] Hastie T., Tibshirani R., Friedman J. The Elements of Statistical Learning[M]. New York: Springer, 2001. [6] Christopher M. Bishop. Pattern Recognition …
0 码力 | 26 pages | 1.60 MB | 1 year ago
机器学习课程-温州大学-09机器学习-支持向量机 (Machine Learning Course, Wenzhou University - 09: Support Vector Machines)
… org/course/ml [3] Li Hang. Statistical Learning Methods[M]. Beijing: Tsinghua University Press, 2019. [4] Hastie T., Tibshirani R., Friedman J. The Elements of Statistical Learning[M]. New York: Springer, 2001. [5] Christopher M. Bishop. Pattern Recognition …
0 码力 | 29 pages | 1.51 MB | 1 year ago
机器学习课程-温州大学-04机器学习-朴素贝叶斯 (Machine Learning Course, Wenzhou University - 04: Naive Bayes)
… Learning[M]. New York: McGraw-Hill Companies, Inc., 1997. [2] Hastie T., Tibshirani R., Friedman J. The Elements of Statistical Learning[M]. New York: Springer, 2001. [3] Christopher M. Bishop. Pattern Recognition …
0 码力 | 31 pages | 1.13 MB | 1 year ago
机器学习课程-温州大学-05机器学习-机器学习实践 (Machine Learning Course, Wenzhou University - 05: Machine Learning in Practice)
… Learning[M]. New York: McGraw-Hill Companies, Inc., 1997. [4] Hastie T., Tibshirani R., Friedman J. The Elements of Statistical Learning[M]. New York: Springer, 2001. [5] Christopher M. Bishop. Pattern Recognition …
0 码力 | 33 pages | 2.14 MB | 1 year ago
21 results in total













